SQI scientists, thinkers, and builders share innate curiosity and the capacity to look at research challenges from multiple perspectives.
Greta Tuckute received her Ph.D. from MIT’s Department of Brain and Cognitive Sciences in 2025, advised by Prof. Evelina Fedorenko. She is now a Research Fellow at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, where she works on understanding how the human brain processes language and what that can tell us about AI systems. “My work falls at the intersection of neuroscience, artificial intelligence, and cognitive science,” she says.
Dr. Tuckute’s academic path started at the University of Copenhagen and the Technical University of Denmark, where she earned a B.S. and an M.S. in Molecular Biomedicine with a focus on visual attention. In 2019, while pursuing her M.S., she saw an early OpenAI demo featuring a language model that was given the opening of a whimsical story about people and unicorns living in the Andes and was asked to continue writing it. The model’s storytelling was remarkably fluent and coherent. She remembers the demo as an “a-ha moment” in her research journey. “And then I thought, ‘but wait a second, we do this in our human minds and brains all the time, so we must know how that happens—right?’” As she learned more about how humans seemingly effortlessly understand and produce language — taking in continuous streams of words, inferring meaning, anticipating what comes next, and responding — she started to ask: how, exactly, does the brain pull that off? That question turned out to be an active area of research. Discovering that there was still a genuine gap in our understanding of one of the most fundamental things humans do led her to MIT for her Ph.D.
MIT and Prof. Fedorenko’s lab offered Greta a place where the questions about language and mind could be asked with both rigor and real ambition. The Fedorenko lab (also called the EvLab) houses research in language production, language comprehension, specific syntactic phenomena, and computational modeling. It's a place where, if you want to understand any corner of how language works in the brain, someone nearby is probably already an expert on it (or writing their thesis as you ask the question). During the first years of her Ph.D., Greta also collaborated with Prof. Josh McDermott on questions at the interface of audition, speech, and language.
Work from the EvLab and other MIT researchers has been foundational in the study of language: large-scale neuroimaging studies helped determine where the brain's language areas actually are across hundreds of individuals and how these language areas develop. And while this research was forming the current scientific understanding of language in the human brain, the world began to witness the rise of large language models (LLMs) and their use in AI more broadly.
Working alongside other SQI researchers, Greta and her colleagues began asking how well the internal representations of these language models predict what happens in the human brain when we read or listen to a sentence. Although these models weren’t designed to mimic the brain—they were initially intended solely to generate coherent text—the activation patterns inside LLMs turned out to be remarkably similar to those measured in the human brain.
If LLMs behave like the brain during language processing, how can that behavior be applied? “I study sentences, and there are an infinite number of sentences out there,” Greta says. “My human brain can come up with some sentences, but as an experimenter it’s hard to come up with the most efficient sentences to test. For instance, I was interested in understanding what kinds of sentences the language network responds most and least strongly to. Using LLMs, we could generate targeted sentences, ‘super-stimuli’, that would evoke especially strong responses in language areas in the human brain.” Their study found that the language network responds strongly to unusual but well-formed sentences—and LLMs helped make that kind of stimulus search much more systematic.
Over the past few years, industry-developed LLMs have multiplied and grown more powerful and more capable, driven by engineering goals that go beyond understanding language. Models that can reason, write code, and handle multimodal tasks are impressive, but they become harder to use as precise tools for studying specific functions or regions of the brain. Since completing her doctorate, Greta has continued developing more biologically inspired networks that begin where the brain begins — with raw audio input — and work their way toward language in stages that reflect how the brain is organized. The goal is to create a model system that can be examined from the inside: to understand how continuous, noisy speech is transformed into words. Unlike standard LLMs, which start from text alone, these models make it possible to ask new questions about the interface between perception and language. For example, can we trace the circuits that handle acoustic information, such as speaker identity, and ask whether they resemble the circuits that deal with word meaning?
In addition to building a more precise understanding of how language is processed in the human mind and brain, Greta’s research in the coming years will dig further into the humanlike capabilities of artificial language systems. “We have these intelligent systems,” she says. “What are their similarities and dissimilarities? Will it be possible to build more efficient systems, ones that can operate from one example as opposed to fifty? And what about making them more sensitive and receptive to human speech? What if we want to build a system that can infer the emotional tone of my voice as I’m speaking to you—something that humans are incredibly good at—or know when I’m being ironic or something like that?” This research direction feels, in spirit, like a continuation of what Greta experienced with the Quest — a loop between model-building and neuroimaging, between artificial systems and biological ones, each informing the other.
"Over just a few years, Greta became a leading junior figure in the newly emerged, but quickly growing, area of computational cognitive neuroscience,” says Prof. Fedorenko. “Her paper on driving and suppressing the language system marks a significant advance in human neuroscience, providing the first demonstration of successful non-invasive modulation of neural activity in high-level cognitive areas. And in general, she is pursuing a research program that is innovative, exciting, and potentially transformative. The things I love and value the most about Greta include her drive, independence, rigor, and creativity. I was extremely lucky to have recruited her to my lab and I cannot wait to witness her successes in the years to come."
Thinking across methods, across disciplines, across the boundary between natural and artificial intelligence is still relatively new and rare. As commercial interest in LLMs continues to grow, so does the need both for engineering and for a more precise understanding of language in the brain. When asked about the potential impact of this work, Greta makes an analogy to cardiac surgery: “If somebody wants to replace a heart valve, you better hope that they have a good model of the heart. In the same way, if we want to build intelligent systems—and understand our own—we need quantitatively accurate models.” Research like Greta’s—and that of others in SQI and beyond—is advancing the understanding of the brain, of intelligence, and of models.