Acquiring Terminological Relations with Neural Models for Multilingual LLOD Resources
Specialized communication strongly benefits from the availability of structured and consistent domain-specific knowledge in LLOD language resources. Manually curating such language resources is cumbersome and time-intensive, so automated approaches for extracting terms, concepts, and their interrelations are required. Recent advances in computational linguistics have enabled the training of highly multilingual neural language models, such as GPT-3 or XLM-R, that can successfully be adapted to various downstream tasks, from sentiment classification and text completion to information extraction. Several approaches exist for extracting and exploring lexico-semantic relations by means of these language models; however, only a few focus on curating, representing, and interchanging domain-specific language resources in the LLOD cloud.