Date: 11/19/2024
Time: 1:00 pm - 2:00 pm
Location: Special Collections Seminar Room
Description:
To truly understand human language, we must look at words in the context of the human generating them: who is speaking, where and in what situation, when, and to whom. For example, a person feeling exhilarated on a hike would complete the statement “I am feeling…” differently than when feeling dejected during a break-up.
Factors such as demographics, personality, modes of communication, and emotional states were also shown to play a crucial role in NLP models of the pre-LLM (large language model) era. Advances in language modeling have yielded Transformer-based LLMs as the basis of most current NLP systems. However, traditional language modeling treats words and documents as independent of the aforementioned human context. To address this, we have taken the first steps toward mathematically defining the inclusion of human context in language modeling, and toward empirically comparing how including different types of human context affects performance on downstream tasks.
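As a rough sketch (not necessarily the speaker's exact formulation), the contrast can be written as follows: standard autoregressive language modeling factorizes the probability of a document over its tokens alone, while a human-context-aware variant additionally conditions each prediction on a representation U of the person producing the text:

\[
P(w_1,\dots,w_n) \;=\; \prod_{t=1}^{n} P(w_t \mid w_{<t})
\qquad\text{vs.}\qquad
P(w_1,\dots,w_n \mid U) \;=\; \prod_{t=1}^{n} P(w_t \mid w_{<t}, U)
\]

One simple way to realize this in a Transformer LM, shown below as a minimal, hypothetical PyTorch sketch, is to project user-level features (demographics, personality scores, etc.) into the embedding space and prepend the result as a soft-prompt token. The class and feature names here are illustrative assumptions, not taken from the talk.

```python
import torch
import torch.nn as nn

class HumanContextLM(nn.Module):
    """Toy causal LM that conditions on a per-author feature vector (illustrative only)."""
    def __init__(self, vocab_size=1000, d_model=64, n_user_features=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # Maps the human-context vector U into the token embedding space.
        self.user_proj = nn.Linear(n_user_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, user_features):
        # Prepend the projected user context as a single "soft prompt" position.
        u = self.user_proj(user_features).unsqueeze(1)        # (B, 1, d)
        x = torch.cat([u, self.tok_emb(tokens)], dim=1)       # (B, T+1, d)
        T = x.size(1)
        # Boolean causal mask: True entries are positions a token may NOT attend to.
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.encoder(x, mask=causal)
        # Position t predicts token t, so drop the final position's output.
        return self.lm_head(h[:, :-1])                        # (B, T, vocab)

tokens = torch.randint(0, 1000, (2, 16))   # fake token ids
user = torch.randn(2, 8)                   # fake human-context features
logits = HumanContextLM()(tokens, user)
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), tokens.reshape(-1))
print(logits.shape, loss.item())
```

Conditioning via a prepended soft-prompt token is just one design choice; the same objective could instead be implemented by adding the user representation to every position or by injecting it into the attention layers.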
For more information, please visit SB Engaged.
Clara Tran
Email: clara.tran@stonybrook.edu