Abstract:
The current state of the art in automatic data-to-text generation, a major task in natural language generation, is dominated by large language models based on the Transformer neural network architecture. These models are capable of producing fluent, natural-sounding texts; however, they are hard to control and often do not adhere to the input data, omitting important content or producing “hallucinated” text that is not grounded in the input. In this talk, I will first outline the basic operating principles of large language models. I will then detail our experiments aimed at improving the accuracy of generated text in two ways: (1) improving the accuracy of the generating language models themselves, and (2) automatically detecting errors in generated texts.
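To make the task concrete, here is a minimal, purely illustrative sketch of data-to-text generation with a Transformer-based model via the Hugging Face transformers library. The model name, prompt format, and toy record are assumptions for illustration only (this is not the speaker's system); in practice such models are fine-tuned on data-to-text corpora, and their outputs still need to be checked against the input data for omissions and hallucinations, which is exactly the problem the talk addresses.

```python
# Illustrative sketch only: verbalizing a structured data record with a
# Transformer-based text-to-text model. Model choice and prompt format are
# assumptions, not the method described in the talk.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

# A toy input record linearized into a prompt (hypothetical format).
record = "name: Blue Spice | eatType: coffee shop | area: riverside"
prompt = f"Describe the following restaurant in one sentence: {record}"

result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])
# The generated sentence may drop attributes or add unsupported ones,
# i.e., exhibit the accuracy problems discussed in the abstract.
```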

Bio:
Ondřej Dušek is an assistant professor at the Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University. His research is in the areas of natural language generation and dialogue…
