12/31/2023

Conversational AI with Rasa

We've experimented with half a dozen different applications of generative models at Rasa and seen some interesting and promising results. This post describes some of our experiences using LLMs to build user simulators that can help us accelerate the development of AI systems.

Simulation for Machine Learning

Chatbots require training data that represents both what users are going to say and what the bot should do in response. At inception, you're faced with a "cold start" problem: you don't have the data to build your system, and you don't have a working system to help you collect data. This is the case for many real-world applications of ML, and one way to pull yourself up by the bootstraps is to use a simulator. This approach is especially mature in autonomous driving, where major players have built products for simulation. What if we could create a simulated user to kick the tires on a chatbot and help us bootstrap conversation-driven development?

No Free Lunch

When building a user simulator, there's a trade-off between faithfulness and variability. You can make a user simulator that's very boring and always follows your instructions to the letter, but produces very little surprising behavior and only tests your bot in ways you already anticipated. Or you can make a simulator that shows a great deal of variability but frequently goes its own way and strays from the instructions you provided. Paraphrasing provides a nice illustration of this trade-off. We've done experiments (as have others) on using paraphrasing to augment NLU training data, but we haven't seen convincing evidence that this meaningfully improves your model.
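The faithfulness/variability trade-off can be sketched as a single knob on the simulator, analogous to sampling temperature in an LLM. The sketch below is purely illustrative: the function and variable names are hypothetical, and the stub `generate_user_turn` stands in for a real LLM call, which is what an actual user simulator would use.

```python
import random

# Illustrative scripted persona and off-script behavior for a
# hypothetical flight-booking user. In a real simulator an LLM would
# generate these turns from a persona prompt.
SCRIPTED_TURNS = [
    "I'd like to book a flight to Berlin.",
    "Leaving next Friday, returning Sunday.",
    "Economy is fine, thanks.",
]
OFF_SCRIPT_TURNS = [
    "Actually, can you also find me a hotel?",
    "Wait, what's your cancellation policy?",
]

def generate_user_turn(step, variability, rng):
    """Stand-in for an LLM call. `variability` plays the role of
    sampling temperature: at 0.0 the simulated user follows the
    script to the letter; higher values make it stray."""
    if rng.random() < variability:
        return rng.choice(OFF_SCRIPT_TURNS)
    return SCRIPTED_TURNS[step % len(SCRIPTED_TURNS)]

def simulate_conversation(bot_respond, turns=3, variability=0.0, seed=0):
    """Run a short simulated dialogue against `bot_respond`, any
    function mapping a user utterance to a bot reply. Returns the
    transcript as (role, text) pairs."""
    rng = random.Random(seed)
    transcript = []
    for step in range(turns):
        user = generate_user_turn(step, variability, rng)
        transcript.append(("user", user))
        transcript.append(("bot", bot_respond(user)))
    return transcript

# A faithful simulator (variability=0.0) only tests the bot in ways
# you already anticipated; raising variability buys surprise at the
# cost of straying from your instructions.
echo_bot = lambda utterance: f"You said: {utterance}"
transcript = simulate_conversation(echo_bot, variability=0.0)
```

With `variability=0.0` every run reproduces the scripted conversation exactly; with `variability` near 1.0 the simulated user mostly ignores the script, which is the "goes its own way" end of the trade-off.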