New England NLP Meeting Series

NENLP 2024 Meetup Schedule
10:00-10:15 Reception
10:15-10:30 Welcome Remarks (Ellie Pavlick)
10:30-11:30 Morning Spotlights
10:30-10:50 Nikhil Prakash (Northeastern University): Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking
10:50-11:10 Aditya Yedetore (Boston University): Semantic Training Signals Promote Hierarchical Syntactic Generalization in Neural Networks
11:10-11:30 Pratyusha Sharma (MIT): The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
11:30-12:45 Panel: Academia's Role in NLP (Ellie Pavlick, Jacob Andreas, Anna Rumshisky)
12:45-1:45 Lunch
1:45-3:15 Poster Session
3:15-4:15 Keynote: Vlad Lialin (1X Technologies)
4:15-5:00 Afternoon Spotlights
4:15-4:35 Nihal Nayak (Brown University): Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
4:40-5:00 Namrata Shivagunde (UMass Lowell): Deconstructing In-Context Learning: Understanding Prompts via Corruption
5:00-5:15 Closing Remarks
Poster Session Presentations
Aditya Yedetore (Boston University): Semantic Training Signals Promote Hierarchical Syntactic Generalization in Neural Networks
Alex Gu (MIT): CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution
Alexis Ross (MIT): Toward In-Context Teaching
Alyssa Loo (Brown University): On The Heuristics of Transformer Models on Negation Tasks
Andi Peng (MIT): Preference-Conditioned Language-Guided Abstraction
Anton Kovalev (UMass Lowell): Entropy-based LLM Knowledge Probing
Benjamin Lipkin (MIT): LINC: Logical Inference via Neurosymbolic Computation
Catherine Chen (Brown University): Axiomatic Causal Interventions for Reverse Engineering Relevance Computation in Neural Retrieval Models
Hayley Ross (Harvard University): When is a fake concert still a concert? A study of adjective-noun composition in LLMs
Jack Merullo (Brown University): Talking Heads: Communication Across Layers in Transformer Language Models
Koyena Pal (Northeastern University): Model Lakes
Linlu Qiu (MIT): Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement
Lucas Torroba Hennigen (MIT): Towards Verifiable Text Generation with Symbolic References
Megan Wei (Brown University): Do music generation models understand music theory?
Namrata Shivagunde (UMass Lowell): Deconstructing In-Context Learning: Understanding Prompts via Corruption
Nihal Nayak (Brown University): Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
Nikhil Prakash (Northeastern University): Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking
Pratyusha Sharma (MIT): The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Ruochen Zhang (Brown University): Multilingual Large Language Models Are Not (Yet) Code-Switchers
Sanjana Ramprasad (Northeastern University): Examining hallucinations in summarization via model introspection
Shannon Shen (MIT): Learning to Decode Collaboratively with Multiple Language Models
Sherin Muckatira (UMass Lowell): Emergent Abilities in Reduced-Scale Generative Language Models
Simeng Han (Yale University): FOLIO: Natural Language Reasoning with First-Order Logic
Tassallah Amina Abdullahi (Brown University): Improving Zero-Shot Text Classification through Retrieval-based Query Reformulation
Tian Yun (Brown University): mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?
Vijeta Deshpande (UMass Lowell): LocalTweets to LocalHealth: A Mental Health Surveillance Framework Based on Twitter Data
William Merrill (NYU): The Illusion of State in State-Space Models
Yong Zheng-Xin (Brown University): LexC-Gen: Generating Data for Extremely Low-Resource Languages with Large Language Models and Bilingual Lexicons
Zhaofeng Wu (MIT): Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks