Work
About
I'm interested in language technology; my work spans AI research, computational modeling, and experimental psycholinguistics.
Featured Projects (Click to expand!)
The model simulates how startups ("innovators") and institutional "investors" interact over time under different policy and economic regimes. It is meant to be a tool for policymakers and analysts to explore the tradeoffs between innovation and economic openness on one hand and national security and international competitiveness on the other.
We model innovators (e.g., startup firms and university-affiliated research centers) and the institutional investors that provide them capital. The simulation captures firm formation, collaboration, funding, learning, and knowledge diffusion. Firms explore a hypothesis space in search of productive directions for innovation and efficient resource allocation, and new firms are more likely to model themselves after successful ones.
The model is calibrated using real-world macroeconomic data and PitchBook data. Policy interventions can be introduced at arbitrary points in simulated time, enabling counterfactual analysis and comparison across alternative policy scenarios.
I led the full port of the model from Python to Julia. This meant identifying Julia as a viable alternative, becoming proficient in the language from zero prior knowledge, and reimplementing the Python model without disrupting the project's timeline. Performance-critical components saw runtime improvements of over 250%, enabling faster iteration during development and the simulation of much larger networks. Much of this gain came from restructuring the code around Julia's type system, which rewards explicit, concrete types, and from its compiled execution.
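To illustrate the kind of restructuring involved (a toy sketch with hypothetical names, not the model's actual code), concretely typed structs let Julia's compiler specialize the hot loops that dominate simulation time:

```julia
# Toy sketch, not the model's actual code: a concretely typed agent lets
# the compiler emit specialized machine code for performance-critical loops.
struct Innovator
    capital::Float64            # concrete field types avoid runtime boxing
    hypothesis::Vector{Float64} # position in the hypothesis space
end

# Type-stable inner loop: every variable's type is known at compile time,
# so this compiles down to a tight numeric kernel.
function total_capital(firms::Vector{Innovator})
    s = 0.0
    for f in firms
        s += f.capital
    end
    return s
end

firms = [Innovator(rand() * 1e6, rand(8)) for _ in 1:10_000]
println(total_capital(firms))
```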
I designed and implemented a modular, customizable visualization interface that displays custom innovation metrics and network statistics over time and allows interventions to be applied at any point, supporting counterfactual comparisons. The visualization was essential for exploratory analysis and for communicating with non-technical stakeholders: with it, we can easily identify emergent behavior and directly compare policy outcomes.
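The underlying idea can be sketched roughly like this (illustrative names only, not the interface's real API): interventions are keyed to simulated time steps, so a run can be repeated with and without them for counterfactual comparison.

```julia
# Illustrative sketch only: interventions scheduled by simulated time step.
# An intervention is just a function that mutates the simulation parameters.
mutable struct SimParams
    funding_rate::Float64
    export_controls::Bool
end

interventions = Dict{Int,Function}(
    50 => p -> (p.export_controls = true),  # tighten policy at t = 50
    80 => p -> (p.funding_rate *= 1.5),     # funding stimulus at t = 80
)

function run_sim(params::SimParams, schedule::Dict{Int,Function}; steps = 100)
    trace = Float64[]
    for t in 1:steps
        haskey(schedule, t) && schedule[t](params)  # apply any scheduled intervention
        push!(trace, params.funding_rate)           # record a metric of interest
    end
    return trace
end

baseline = run_sim(SimParams(1.0, false), Dict{Int,Function}())
treated  = run_sim(SimParams(1.0, false), interventions)
```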
A particularly valuable aspect of this project was the close, iterative collaboration with my PI, Dr. Ron Legere. Rapid feedback cycles made it possible to move quickly from conceptual ideas to implementation and testing, and required me to write flexible, well-structured code while carefully balancing model fidelity, interpretability, and performance.
For this paper, I worked with three graduate students in the linguistics department to investigate whether language models (LMs) can generalize from limited input the way linguists believe humans can, and in fact must, to compensate for underspecified linguistic input. Linguists theorize that humans do this by forming abstract representations that apply across superficially different constructions. We use surprisal to model grammaticality, analogous to reading times in human psycholinguistic experiments. We find that although LMs are sensitive to basic filler-gap dependencies, they lack a shared, abstract representation of those dependencies, which prevents them from generalizing the way humans do. Our results contribute to a body of work suggesting that neural LMs and humans learn and represent language in fundamentally different ways.
My role on this project was predominantly technical: I implemented the context-free grammar (CFG) framework we used to generate controlled sets of sentences with different dependency structures. We then measured surprisal at specified critical regions, where the (un)grammaticality would be recognizable to the model.
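Concretely, the surprisal of a token is the negative log probability the model assigns it given the preceding context. A schematic sketch of the comparison, with made-up probabilities standing in for real LM output:

```julia
# Schematic only: in the real experiments, per-token probabilities come
# from a language model's next-word distribution at each critical region.
surprisal(p) = -log2(p)              # bits; higher = more surprising

# Made-up probabilities for the tokens in a critical region, e.g. the
# material at a gap site in a filler-gap sentence.
grammatical_probs   = [0.21, 0.34]   # P(token | context), grammatical condition
ungrammatical_probs = [0.02, 0.05]   # same region, ungrammatical condition

region_surprisal(ps) = sum(surprisal, ps)

# A grammaticality effect shows up as higher surprisal in the ungrammatical
# condition, analogous to longer reading times in human experiments.
println(region_surprisal(ungrammatical_probs) - region_surprisal(grammatical_probs))
```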
This work was presented as a talk at the 49th Boston University Conference on Language Development (BUCLD 49), and I was fortunate to be able to travel to Boston to attend the conference.
Experimental Semantics
With PhD student Malhaar Shah, I ran experiments investigating how humans interpret quantifiers like 'more' and 'most'. We aimed to understand the "algorithm", or truth-conditional formula, humans use to evaluate propositions. Following this research, we continued to meet to discuss topics in semantics, including logical form, quantifiers & conservativity, comparatives, and type shifting.
Child Language Acquisition Lab
The Acq Lab investigates how children manage to learn language so consistently from limited, noisy input. Using experimental methods from psycholinguistics and cognitive science, we ask and answer questions about the timeline of acquisition. Graduate and undergraduate students work alongside each other in the lab to design, run, and analyze experiments on early language development, with participants ranging from just a few months to four years old.
This past semester, I supported multiple stages of the experimental pipeline: I performed with a bear puppet to create experimental stimuli for studies, and assisted with coding, data organization, and lab logistics to keep the lab running smoothly and support ongoing experimental work.
During weekly lab meetings, I actively engaged in research discussions, providing feedback on experimental design and presentations. In Spring '26, I will continue in the lab and begin collaborating more directly with a graduate student on a specific acquisition project, expanding my role toward more focused research contributions.
Selected Classwork
B.S. Computer Science, B.A. Linguistics — GPA: 4.0/4.0 — Italics indicate Spring ’26 classes
- 2025-26 — Machine Learning, Economics & Computation, Web Development, Semantics, Understanding Language Understanding
- 2024-25 — Natural Language Processing, Compilers, Theory of Computation, Data Science, A-bar Movement, Modals & Conditionals, Philosophy of Language
- 2023-24 — Computer Systems, Programming Languages, Algorithms, Modeling Collective Behavior, Syntax (I & II), Phonology, Acquisition
- 2022-23 — Object Oriented Programming (I & II), Linear Algebra, Discrete Structures, Language and Mind
Archive
Colored entries are clickable and link to a featured project above
- Sep 2025–Present —
Undergraduate Research Assistant, Acquisition Lab, UMD
The Acq Lab (PI: Jeff Lidz) designs and runs experiments on child language acquisition and is staffed by graduate and undergraduate students. As an undergrad, my responsibilities have been predominantly logistics and data organization. During weekly lab meetings, I offer feedback on experimental design and research presentations for conferences.
- Aug 2024–Sep 2025 —
Critical Technology Protection Decision Framework (CTP-DF) Intern, ARLIS
Developed a Julia-based simulation of the innovation and startup ecosystem to inform government policy for protecting critical technology. The model was originally written in Python; I took on the port to Julia, achieving performance improvements of over 250%. The design emphasized modularity, customizability, and reproducibility. Toward the end, I designed and implemented a customizable visualization interface for innovation metrics and network statistics. The role was offered following my RISC internship in Summer '24, and I was repeatedly retained beyond my original contract.
- Jun 2025–Aug 2025 —
NSF/NIST Research Fellow, TRAILS Lab, UMD
Led project direction and design decisions for a team of five undergraduate and graduate students developing a verification system for AI visual question answering (VQA) for blind & low-vision users. Conducted an extensive literature review on model verification and chose sentence embeddings to measure output consistency, and thus validity. Presented findings to other TRAILS researchers at the end of the work period.
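A rough sketch of that consistency measure (toy vectors stand in for real encoder output; all names are illustrative):

```julia
# Toy sketch: random vectors stand in for sentence embeddings of repeated
# VQA answers; the real system would obtain them from a sentence encoder.
using LinearAlgebra, Statistics

cosine(u, v) = dot(u, v) / (norm(u) * norm(v))

# Mean pairwise cosine similarity across repeated answers: high values
# suggest the system answers consistently, our proxy for validity.
function consistency(embeddings::Vector{Vector{Float64}})
    n = length(embeddings)
    sims = [cosine(embeddings[i], embeddings[j]) for i in 1:n for j in i+1:n]
    return mean(sims)
end

answers = [randn(384) for _ in 1:5]  # five repeated answers, toy 384-dim embeddings
println(consistency(answers))
```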
- Jan 2025–May 2025 —
Undergraduate Teaching Assistant, LING311: Phonology I, UMD
Prepared and administered in-class exercises on phonological topics such as rule ordering and distinctive features. Tracked learning and provided feedback for 15 of the 30 students in the class. Gave input on test and assignment design, and provided technical support to the instructor.
- Sep 2024–Dec 2024 —
Undergraduate Research Assistant, Department of Linguistics, UMD
Ran experiments with PhD student Malhaar Shah investigating the interpretation of the quantifiers 'more' and 'most', aiming to understand the "algorithm", or truth-conditional formula, humans use to evaluate propositions. Following this research, continued to meet to discuss topics in semantics, including logical form, quantifiers & conservativity, comparatives, and type shifting.
- Sep 2022–Dec 2024 —
Honors College Citation, University of Maryland Honors College
Completed a sequence of interdisciplinary seminars (Collective Behavior, Arbitrating Body Rights, Politics of Laziness, and Ecology of Poverty) designed to provide breadth across the social and natural sciences and critical engagement with current issues.
- Jun 2024–Aug 2024 —
RISC Intern, ARLIS
Conducted research with an interdisciplinary team to synthesize recommendations on U.S. innovation policy that protects IP while fostering economic growth. Presented findings to ARLIS, government stakeholders, and intelligence community professionals. Policy analysts gave positive feedback on our actionable recommendations and agreed with our assessment of current gaps in U.S. innovation policy.
- Oct 2023–Jun 2024 —
Undergraduate Researcher, Language Science Center, UMD
Led programming efforts on research investigating how language models learn syntactic constraints. Humans learn language by generalizing, which lets us learn from limited input; we investigated whether LMs do the same, using filler-gap dependencies, which linguists hypothesize share an underlying representation, to probe model learning. Actively participated in the writing and editing process. The paper was selected for a talk at the 49th Boston University Conference on Language Development in November.
- Sep 2023–Dec 2023 —
Undergraduate Teaching Assistant, CMSC132: Object Oriented Programming II, UMD
Provided personalized support to CS students during office hours and review sessions: answered general programming questions, guided students to solve projects on their own, and helped them prepare for exams. Gave individual feedback on exams and projects.