With the rapid pace of AI, developers often face a paradox of choice: which prompt to use, and how to trade off LLM quality against cost. Evaluations provide a structured process for making these decisions and can accelerate development. But we've heard that it is challenging to get started, so we are launching a series of short videos explaining how to perform evaluations using LangSmith.
This video introduces how to use the LangSmith UI to compare experiments (e.g., different prompts or LLMs) across a dataset.
Documentation:
docs.smith.langchain.com/user...
Code:
github.com/langchain-ai/langs...
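
As a rough sketch (not the exact code from the linked repo) of how such comparisons can be set up with the LangSmith Python SDK: each evaluate() call produces one experiment against a shared dataset, and the LangSmith UI can then compare the resulting experiments side by side. The dataset name "qa-examples", the prompts, the model names, and the helper functions below are illustrative assumptions.

# Assumes LANGCHAIN_API_KEY / OPENAI_API_KEY are set and a LangSmith dataset
# named "qa-examples" (question -> answer) already exists.
from langsmith.evaluation import evaluate
from openai import OpenAI

client = OpenAI()

def make_target(model: str, system_prompt: str):
    """Return a target function that answers each dataset example with the given model and prompt."""
    def target(inputs: dict) -> dict:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": inputs["question"]},
            ],
        )
        return {"answer": response.choices[0].message.content}
    return target

def correctness(run, example) -> dict:
    """Toy evaluator: exact match against the reference answer (real setups often use an LLM judge)."""
    predicted = (run.outputs or {}).get("answer", "")
    expected = (example.outputs or {}).get("answer", "")
    return {"key": "correct", "score": int(predicted.strip() == expected.strip())}

# Each evaluate() call creates one experiment; the UI can then compare the
# two experiments row by row across the shared dataset.
for model, prefix in [("gpt-3.5-turbo", "cheap-model"), ("gpt-4o", "strong-model")]:
    evaluate(
        make_target(model, "Answer the question concisely."),
        data="qa-examples",
        evaluators=[correctness],
        experiment_prefix=prefix,
    )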
Eval Comparisons | LangSmith Evaluations - Part 7