Fahime Same: Referring expression generation in context
Book
EUR 40.00
- Language Science Press, 06/2024
- Binding: Hardcover, rounded spine, laminated
- Language: English
- ISBN-13: 9783985541003
- Order number: 11884804
- Extent: 276 pages
- Weight: 708 g
- Dimensions: 246 x 175 mm
- Thickness: 23 mm
- Publication date: 3 June 2024
- Series: Topics at the Grammar-Discourse Interface 9
Description
Reference production, often termed Referring Expression Generation (REG) in computational linguistics, encompasses two distinct tasks: (1) one-shot REG and (2) REG-in-context. One-shot REG explores which properties of a referent uniquely describe it. In contrast, REG-in-context asks which (anaphoric) referring expressions are optimal at various points in discourse. This book offers a series of in-depth studies of the REG-in-context task, thoroughly exploring aspects such as corpus selection, computational methods, feature analysis, and evaluation techniques.

A comparative study of different corpora highlights the pivotal role of corpus choice in REG-in-context research, emphasizing its influence on all subsequent model development steps. An experimental analysis of various feature-based machine learning models reveals that those with a concise set of linguistically informed features can rival models with more features. Furthermore, this work highlights the importance of paragraph-related concepts, an area underexplored in Natural Language Generation (NLG). The book offers a thorough evaluation of different approaches to the REG-in-context task (rule-based, feature-based, and neural end-to-end), and demonstrates that well-crafted, non-neural models can match or surpass the performance of neural REG-in-context models. In addition, the book presents post-hoc experiments aimed at improving the explainability of both neural and classical REG-in-context models. It also addresses other critical topics, such as the limitations of accuracy-based evaluation metrics and the essential role of human evaluation in NLG research.

These studies collectively advance our understanding of REG-in-context: they highlight the importance of selecting appropriate corpora and targeted features, and they show the need for context-aware modeling and the value of a comprehensive approach to model evaluation and interpretation.
This detailed analysis of REG-in-context paves the way for developing more sophisticated, linguistically informed, and contextually appropriate NLG systems.