With the rapid pace of AI, developers often face a paradox of choice: which prompt to use, and how to trade off LLM quality against cost? Evaluations can accelerate development by providing a structured process for making these decisions. But we've heard that it can be challenging to get started, so we're launching a series of short videos explaining how to perform evaluations using LangSmith.
This video introduces LangSmith's pre-built evaluators for tasks such as RAG (question answering) and grading generations against supplied criteria.
Documentation:
docs.smith.langchain.com/eval...
Code:
github.com/langchain-ai/langs...
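At their core, LangSmith evaluators (pre-built or custom) follow one pattern: a function that receives a model output and a reference, and returns a named score. Below is a minimal, hedged sketch of that pattern as a plain Python function; the `exact_match` name and the sample strings are illustrative, not part of the LangSmith API, and the pre-built evaluators covered in the video package richer versions of this same shape (e.g. LLM-graded criteria and QA correctness).

```python
def exact_match(run_output: str, reference: str) -> dict:
    # A toy evaluator: score 1 if the model output matches the
    # reference answer (ignoring surrounding whitespace), else 0.
    # LangSmith's pre-built evaluators return score dicts like this,
    # keyed by the evaluator's name.
    return {
        "key": "exact_match",
        "score": int(run_output.strip() == reference.strip()),
    }

# Illustrative usage with hypothetical outputs:
print(exact_match("Paris", "Paris "))   # matching answer -> score 1
print(exact_match("Lyon", "Paris"))     # wrong answer -> score 0
```

In practice you would pass evaluators like this (or the pre-built ones) to LangSmith's evaluation run over a dataset of examples; see the documentation link above for the supported configurations.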
Pre-Built Evaluators | LangSmith Evaluations - Part 5