Evaluations can accelerate LLM app development, but getting started can be challenging. With the rapid pace of AI, developers often face a paradox of choice: which prompt to use, and how to trade off LLM quality against cost? Evaluations provide a structured process for making these decisions. But we've heard that it's hard to get started, so we're launching a series of short videos explaining how to perform evaluations using LangSmith.
This video focuses on Regression Testing, which lets you highlight the particular examples in an evaluation set that improved or regressed across a set of experiments.
Blog: blog.langchain.dev/regression...
LangSmith: smith.langchain.com/
Documentation: docs.smith.langchain.com/eval...
Regression Testing | LangSmith Evaluations - Part 15