Welcome to my website!
Hello! I’m Julina Maharjan, a Ph.D. graduate in Computer Science from Kent State University. I specialize in deep learning and foundation models, and in developing end-to-end ML pipelines across multimodal domains such as psychology, public health, and earth observation. I am experienced in optimizing and scaling AI solutions using big data technologies on high-performance supercomputers, and passionate about advancing AI research to solve real-world language challenges.
About Me
I have a strong mathematical foundation in machine and deep learning that enables me to design, develop, and evaluate end-to-end ML models.
When I’m not researching, you’ll often find me reconnecting with nature or staying energized through fitness activities. I believe in balancing intellectual curiosity with personal well-being.
Research Interests
- Advancing Foundation Models (Pretraining/Fine-Tuning)
- LoRA as a Parameter-Efficient Fine-Tuning (PEFT) method
- Representation Learning
- Auto-Labeling (Active Learning, RLHF, Pseudo-Labeling)
- Model Optimization
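To make the LoRA interest above concrete, here is a minimal NumPy sketch of the core idea, a frozen weight matrix plus a trainable low-rank update. It is illustrative only (layer sizes, rank, and initialization are assumptions, not taken from any of my papers):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 16, 8, 2  # hypothetical layer sizes and LoRA rank

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. Their product B @ A has the same shape as W
# but far fewer parameters. B starts at zero, so the adapted layer
# initially behaves exactly like the pretrained one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def forward(x):
    # Adapted layer: (W + B A) x; only A and B would receive gradients.
    return x @ (W + B @ A).T

# Parameter savings relative to updating the full matrix.
full_params = d_out * d_in          # 128
lora_params = r * (d_in + d_out)    # 48
```

With the zero-initialized `B`, `forward(x)` matches the frozen layer `x @ W.T` before any training, which is the property that makes LoRA safe to bolt onto a pretrained model.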
Recent Publications
Psychometric Evaluation of Large Language Model Embeddings for Personality Trait Prediction
Journal of Medical Internet Research, 2025
Recent advancements in Large Language Models (LLMs) have sparked interdisciplinary interest in their ability to assess psychological constructs, particularly personality. While prior machine learning research has focused on evaluating LLMs’ capability to infer personality traits, often via zero-shot or few-shot learning, few studies have systematically examined the applicability of LLM embeddings for personality prediction within a robust psychometric validity framework or explored their correlation with psychological and linguistic features. Addressing this gap, we investigate the performance of LLM embeddings on the well-labeled PANDORA dataset (Big Five personality traits from Reddit).
PLUME: Parameter-Efficient Personalization of Large Language Models via Low-Rank User Modulation in Shared Subspaces
CoLM, 2026 (under review)
Personalizing large language models (LLMs) is essential for delivering AI assistance that aligns with individual users’ styles, intents, and preferences. While per-user fine-tuning can substantially enhance personalization quality, it introduces significant parameter and storage overhead, limiting scalability to large user populations. We propose PLUME (Personalized Low-Rank Adaptation through User Modulation and Shared Subspace), a lightweight framework that achieves efficient and expressive per-user adaptation by leveraging a shared task-specific subspace. Specifically, PLUME first learns a global task subspace from aggregated user data. Personalization is then achieved by training only a lightweight small square matrix within this subspace, enabling each user to obtain a tailored model while keeping shared components fixed. Cross-layer shared parameters and rank-1 residual terms are further introduced to significantly reduce redundancy while maintaining expressiveness. Experiments on multiple personalized text generation benchmarks demonstrate that PLUME achieves comparable or superior performance to strong baselines, while reducing per-user parameters by over 95%. These results establish shared-subspace modulation with minimal residuals as a scalable and semantically grounded approach to LLM personalization.
Large-Scale Deep Learning–Enabled Infodemiological Analysis of Substance Use Patterns on Social Media: Insights From the COVID-19 Pandemic
J Med Internet Res, 2025
The COVID-19 pandemic intensified the challenges associated with mental health and substance use (SU), with societal and economic upheavals leading to heightened stress and increased reliance on drugs as a coping mechanism. Centers for Disease Control and Prevention data from June 2020 showed that 13% of Americans used substances more frequently due to pandemic-related stress, accompanied by an 18% rise in drug overdoses early in the year. Simultaneously, a significant increase in social media engagement provided unique insights into these trends. Our study analyzed social media data from January 2019 to December 2021 to identify changes in SU patterns across the pandemic timeline, aiming to inform effective public health interventions.
Differential Analysis of Age, Gender, Race, Sentiment, and Emotion in Substance Use Discourse on Twitter during the COVID-19 Pandemic: An NLP Approach
JMIR, 2025
User demographics are often hidden in social media data due to privacy concerns. However, demographic information on substance use can provide valuable insights, allowing public health policymakers to focus on specific cohorts and develop efficient prevention strategies, especially during global crises like COVID-19.
Intersection of Big Five Personality Traits and Substance Use on Social Media Discourse: AI-Powered Observational Study
Journal of Medical Internet Research, 2025
Personality traits are known predictors of substance use (SU), but their expression and association with SU in digital discourse remain largely unexamined. During the COVID-19 pandemic, online social engagement heightened and SU rates amplified, creating a unique natural opportunity to investigate these dynamics through large-scale digital discourse data. In our study, we offer insights beyond traditional self-report methods, which are crucial for developing timely and targeted public health interventions.
Benchmarking Personality Inference in Large Language Models Using Real-World Conversations
Journal of Medical Internet Research, 2025
Large language models (LLMs) have transformed natural language processing, enabling contextually coherent text generation at scale. Although conversational language contains signals associated with personality traits, mapping naturalistic conversation to stable personality-related representations remains challenging.
Contact Me
Feel free to connect if you’re interested in collaborating or learning more about my work. You can reach me at [email protected] or connect with me on LinkedIn.
Thank you for visiting my website!
