2022

[PhD thesis] Laura Aina. A Deep Learning Perspective on Linguistic Ambiguity.

[Paper & poster] Laura Aina, Nikos Voskarides, Roi Blanco. Performance-Efficiency Trade-Offs in Adapting Language Models to Text Classification Tasks. In Proc. AACL 2022 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics.

[Paper] Ionut Sorodoc, Laura Aina, Gemma Boleda. Challenges in including extra-linguistic context in pre-trained language models. In Proc. Workshop on Insights from Negative Results in NLP.

2021

[Paper & poster] Laura Aina, Xixian Liao, Gemma Boleda, Matthijs Westera. Does referent predictability affect the choice of referential form? A computational approach using masked coreference resolution. In Proc. CoNLL 2021 Conference on Computational Natural Language Learning.

[Paper & presentation] Laura Aina, Tal Linzen. The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. In Proc. BlackboxNLP 2021: Analyzing and Interpreting Neural Networks for NLP.

2020

[Paper & poster] Laura Aina, Thomas Brochhagen, Gemma Boleda. Modeling word interpretation with deep language models: The interaction between expectations and lexical information. In Proc. CogSci 2020 42nd Annual Conference of the Cognitive Science Society.

2019

[Paper & poster] Laura Aina, Kristina Gulordava, Gemma Boleda. Putting words in context: LSTM language models and lexical ambiguity. In Proc. ACL 2019 57th Annual Meeting of the Association for Computational Linguistics.

[Paper] Laura Aina, Raffaella Bernardi, Raquel Fernández. Negated Adjectives and Antonyms in Distributional Semantics: not similar? Italian Journal of Computational Linguistics (invited contribution).

[Paper & poster] Laura Aina, Carina Silberer, Ionut-Teodor Sorodoc, Matthijs Westera, Gemma Boleda. What do entity-centric models learn? Insights from entity linking in multi-party dialogue. In Proc. NAACL 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics.

2018

[Poster] Laura Aina. Lexical modulation via distributional semantics: a general framework for modeling word meaning in context. Semantics and Philosophy in Europe colloquium (SPE10), 2018.

[Paper & presentation] Laura Aina, Raffaella Bernardi, Raquel Fernández. A distributional study of negated adjectives and antonyms. In Proc. CLiC-it 2018 5th Italian Conference on Computational Linguistics. [data]

[Paper & poster] Kristina Gulordava, Laura Aina, Gemma Boleda. How to represent a word and predict it, too: improving tied architectures for language modelling. In Proc. EMNLP 2018 Conference on Empirical Methods in Natural Language Processing.

[Presentation] Laura Aina. Lexical modulation via distributional semantics: a general framework for modeling word meaning in context. 26th Annual Meeting of the European Society for Philosophy and Psychology, 2018.

[Paper & presentation] Laura Aina, Carina Silberer, Ionut-Teodor Sorodoc, Matthijs Westera, Gemma Boleda. AMORE-UPF at SemEval-2018 Task 4: BiLSTM with Entity Library. In Proc. SemEval 2018 Workshop on Semantic Evaluation.

2017

[MSc Thesis] Laura Aina. Not logical: A distributional semantic account of negated adjectives. MSc thesis (supervised by Raquel Fernández and Raffaella Bernardi), Universiteit van Amsterdam, 2017.

[Paper & presentation] Laura Aina, Natalia Philippova, Valentin Vogelmann, Raquel Fernández. Referring Expressions and Communicative Success in Task-oriented Dialogues. In Proc. SEMDIAL 2017 (SaarDial) Workshop on the Semantics and Pragmatics of Dialogue.

[Paper] Marianna Bolognesi, Laura Aina. Similarity is closeness: Using distributional semantic spaces to model similarity in visual and linguistic metaphors. Corpus Linguistics and Linguistic Theory, in press.