Great video, as always! I'm curious how this would work with an external vector DB attached and/or with user feedback, e.g. chain2 changing its opinion based on the feedback of the user. That would be a very interesting use case imo.
@snakesolid6080
A year ago
Summary.

Evaluating yourself is critical when building autonomous agents around large language models. In this video, I'd like to discuss the concept of RCI chains and how to build them using the new LangChain Expression Language. Autonomous agents often have problems when left untested, so we need a way to test them, and surprisingly, one of the best ways is to let the language model check its own output.

The RCI concept comes from a paper on recursive critique and improvement of language models for computer tasks. RCI stands for Recursive Critique and Improvement. It is a simple idea: we start with an initial prompt, asking a question or posing a problem to the language model, and then improve the output by critiquing and inspecting it. The result of each critique-and-improvement round becomes the new prompt for the next round, which allows multiple critiques and improvements to be applied recursively.

In this video, we focus on three prompts and three chains. The first prompt poses the initial question, the second critiques the initial answer, and the third is the improvement prompt. We build the RCI chain by combining these three chains. First we define two chains, an initial-problem chain and a critique chain. Then we define an RCI chain that feeds the output of the initial-problem chain into the critique chain, and the output of the critique chain into the improvement chain. In this way the three chains combine to form the complete RCI chain. In the code, we use the LangChain Expression Language to define and run these chains, connecting prompts and models into a chain structure and running each link in the chain to enable recursive critique and improvement.

Finally, we run the entire RCI chain and get the final output. In this example, we use a chat model, give it a problem, and keep improving the output through rounds of critique and improvement until we get an output that fulfills the requirements.

To summarize, RCI chaining is a method for improving the output of a language model through recursive criticism and improvement. It can be applied to a variety of tasks such as question answering, writing, etc. By building and running RCI chains, we can continually improve the capabilities of language models and achieve more accurate and reliable results. We hope this summary has helped you understand the concept and application of RCI chains. If you have any questions, please leave a comment below. Thanks for reading!
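The three-stage structure described above can be sketched in plain Python, with stub functions standing in for the real LLM calls (the function names and prompt wording here are illustrative assumptions, not the video's actual code):

```python
# Minimal sketch of a Recursive Critique and Improvement (RCI) loop.
# The three stages mirror the three chains described above; the stubs
# below stand in for real LLM calls so the flow is easy to follow.

def initial_chain(question: str) -> str:
    # Stage 1: produce a first answer to the question.
    return f"draft answer to: {question}"

def critique_chain(question: str, answer: str) -> str:
    # Stage 2: critique the current answer.
    return f"critique of ({answer})"

def improve_chain(question: str, answer: str, critique: str) -> str:
    # Stage 3: revise the answer using the critique.
    return f"improved ({answer}) using ({critique})"

def rci(question: str, rounds: int = 2) -> str:
    # Chain the three stages, feeding each round's improved answer
    # back in as the input to the next critique.
    answer = initial_chain(question)
    for _ in range(rounds):
        critique = critique_chain(question, answer)
        answer = improve_chain(question, answer, critique)
    return answer

print(rci("What is RCI?", rounds=1))
```

With real chains, each stub would be a prompt piped into a model; the recursion is just this feedback of improved output into the next critique.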
@Ane-h8r
11 months ago
Thank you very much for the LangChain resources. I'm trying the same approach with LLaMA 2 and it doesn't seem to work with the LangChain agents and tools. Curious how LangChain components like Agents, Output parsers, etc. will work with LLaMA 2.
@blueman333
A year ago
Super neat!! I have one question: can we attach conditional routing using the expression language? I'm asking about something like chain1 | Condition(if_true, chain_true, chain_false) | chain_4
@samwitteveenai
A year ago
This is a good question. I don't think it will work in the style you wrote above (but I could be wrong). You can do it with router chains and probably also with nested functions. I'll look at making a video about this.
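In plain Python terms, the routing the question describes amounts to inspecting one chain's output and picking which downstream chain to run. A hypothetical sketch (the `Condition` wrapper in the question is the commenter's invention; here each "chain" is just a function):

```python
# Hypothetical sketch of conditional routing between chains. Each
# "chain" is modeled as a plain function; a router inspects chain1's
# output and picks which downstream chain to run.

def chain1(x: str) -> str:
    return x.upper()

def chain_true(x: str) -> str:
    return f"true-path({x})"

def chain_false(x: str) -> str:
    return f"false-path({x})"

def condition(x: str) -> bool:
    return "YES" in x

def routed(x: str) -> str:
    # Rough equivalent of: chain1 | Condition(condition, chain_true, chain_false)
    y = chain1(x)
    return chain_true(y) if condition(y) else chain_false(y)

print(routed("yes please"))
print(routed("no thanks"))
```

In real LCEL the same shape can be expressed with its branching/routing primitives, but the decision logic is the same: evaluate a predicate on the upstream output and dispatch.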
@MariuszWoloszyn
A year ago
Unfortunately there's a subtle bug in your code. When you create the final chain (chain3) it uses chain1 as well as critique_chain, which also uses the same chain1. The caveat is that chain1 is called twice and returns a different answer each time. I realized that when I created the initial model like this: `model = ChatOpenAI(model_name="gpt-4", temperature=0, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])` and used the streaming callback for debugging rather than langchain.debug = True. There's an additional issue with the original code: ChatOpenAI does not have a model argument but rather model_name, but since it accepts all **kwargs it's easy to miss that, especially in the Colab without a proper linter.
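The double-call caveat can be reproduced with a plain-Python mock, where a call counter makes the extra invocation visible (names are illustrative, not the Colab's actual code):

```python
# Illustrative reproduction of the double-invocation caveat: if the
# final chain references chain1 directly AND through the critique
# step, chain1 runs twice, and a non-deterministic model can return
# two different answers. A call counter exposes the extra call.

calls = {"chain1": 0}

def chain1(question: str) -> str:
    calls["chain1"] += 1
    # Each call yields a distinct answer, standing in for sampling noise.
    return f"answer#{calls['chain1']}"

def critique(answer: str) -> str:
    return f"critique({answer})"

def buggy(question: str) -> str:
    # chain1 is invoked here AND inside the critique input: two calls.
    return f"{chain1(question)} :: {critique(chain1(question))}"

def fixed(question: str) -> str:
    # Compute chain1 once and pass the same output to both places.
    answer = chain1(question)
    return f"{answer} :: {critique(answer)}"

print(buggy("q"))   # the two embedded answers differ
calls["chain1"] = 0
print(fixed("q"))   # both places see the same answer
```

The fix in LCEL terms is to run chain1 once and pass its output through to both consumers, rather than embedding chain1 in two places.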
@samwitteveenai
A year ago
Thanks for pointing this out. I thought this might be the case after I published, and I've been traveling since. I will check it out and update the Colab in the next day or two.
@anthanh1921
A year ago
Thinking this could be improved if we found a way to pass the chain3 output back to the evaluator and iterate until the evaluator approves, to make it a true RCI agent; otherwise it's still a linear pipeline that enhances the initial output one time.
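That loop-until-approved variant can be sketched with a while-style loop around the evaluator. The stubs below stand in for LLM calls (here the fake evaluator approves after two improvement rounds), and a round cap guards against an evaluator that never approves:

```python
# Sketch of an iterative RCI agent: keep critiquing and improving
# until the evaluator approves the output, with a round cap as a
# safety valve against infinite loops.

def evaluate(answer: str) -> tuple:
    # Stub evaluator: approve once the answer has been improved twice.
    approved = answer.count("improved") >= 2
    return approved, f"critique({answer})"

def improve(answer: str, critique: str) -> str:
    # Stub improver: wrap the answer to mark one improvement round.
    return f"improved({answer})"

def rci_until_approved(initial_answer: str, max_rounds: int = 5) -> str:
    answer = initial_answer
    for _ in range(max_rounds):
        approved, critique = evaluate(answer)
        if approved:
            break
        answer = improve(answer, critique)
    return answer

print(rci_until_approved("draft"))
```

The key difference from the video's linear pipeline is the feedback edge: the improved output re-enters the evaluator instead of terminating the chain.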
@hiranga
A year ago
Awesome! @Sam Witteveen, great video 👌🏾👌🏾👌🏾 Have you got the OpenAI Functions Agent streaming working in any of your experiments? I'm trying to set this up for a FastAPI POST request but finding streaming quite a challenge, and was wondering if you may have already crossed that hurdle before?
@alvintsoiwc1908
A year ago
Instead of OpenAI, can you build an RCI example with LLaMA 2?
@dhrumil5977
A year ago
Can you make a video on how to use LLaMA 2 with Petals and LangChain?
@fkltan
A year ago
Awesome video as always. BTW, can you do a tutorial video on LangChain using ChromaDB as a server to store embeddings (with ChromaDB running as a server in a Docker container)? There's almost nothing available online about it apart from that single page on the LangChain website; almost all code examples cover only the basic saving-to-disk/persistent use case. I've been trying to get that setup working with a different embedding function (Instructor) but am having a lot of issues getting it to work.
@aliattieh9216
A year ago
Hey Sam, love your videos. I was wondering if there is a way to create a csv_agent using the initialize_agent function and passing tools to it. The reason is that I need to pass multiple tools to the agent other than CSV. Is this possible?
@samwitteveenai
A year ago
I think you could do it by having the CSV retrieval be a tool. I have done something similar with vectorstore retrieval as a tool in the past, but not directly for CSV, etc.
@rafaeldelrey9239
A year ago
This LCEL doesn't seem too straightforward or intuitive at all.
@JOHNSMITH-ve3rq
A year ago
Pretty good but also very verbose and amateur.
@Market-MOJOE
A year ago
That's funny. Months back I put together a few bots comparing and contrasting themselves against other, functionally similar ones. One that comes to mind was Phind vs. Perplexity, where I then used GPT-4 as an objective third seat, but I thought I was spinning my wheels. Never got into the OS models, but I'm pretty sure I have done (Bard vs. Claude) + GPT and a few others. Interesting though, thanks for the content.
A year ago
Excellent work Sam. Thank you very much!
@winxalex1
A year ago
You break the idea of the pipes by executing chain1 and then passing the result. You could simply have used a DictionaryOutParser/JsonOutParser, or a custom method in the chain, and so on...
@AdrienSales
A year ago
Just in time, as I was asking myself about best practices around using multiple chains... including the DOs and the DON'Ts ;-p
@ShlomiSchwartz
A year ago
Great video! Quick question: Can we direct the chain to use an alternate chain based on the critique_chain outcome? Like A->B->C to A->B->D? Thanks! 😊
@cbusse7842
A year ago
Wouldn't it be better to combine MPT and Falcon models, or LLaMA 2, rather than OpenAI, to prove the value of the RCI chain compared to GPT-4?
@siddharthvij9087
A year ago
This is useful. I'm stuck on a similar issue and will try this approach. Thank you for sharing!
@mi7
A year ago
Oh thanks mate, as usual your content is the best. Cheers from Chile :)
@umeshtiwari9249
A year ago
Good video, thanks. Please make a playlist on this chain and cover some use cases as well. Thanks again!
@aiexplainai2
A year ago
Very cool as always. I like that LangChain is taking a step to simplify the process, but this piping still seems to have quite a few limitations and only works well with relatively simple chains.
@jawadmansoor6064
A year ago
How about using a small model like 7B LLaMA? Or even the small Pythia models?
@micbab-vg2mu
A year ago
Great video - thank you
@rajivmehtapy
A year ago
Great content you have generated.
@attilavass6935
A year ago
Has anyone tried this concept for code generation/fixing/improvement? Where it generates, e.g., Python code, runs it in a sandbox env, uses error/warning/log messages as feedback/evaluation for the LLM, etc.
@samwitteveenai
A year ago
Yeah, kind of; this is what the GPT-4 Code Interpreter does a lot.
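A minimal sketch of that generate-run-feedback loop, using a child process as the "sandbox" (a real sandbox needs much stronger isolation than a subprocess, and the fix-up step here is a stub standing in for an LLM call):

```python
# Sketch of a run-and-repair loop: execute candidate code in a child
# process, capture stderr, and feed it back to a (stubbed) fixer.
# WARNING: a plain subprocess is NOT a security sandbox.
import subprocess
import sys

def run_candidate(code: str):
    # Run the code in a fresh interpreter and capture any error output.
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=10,
    )
    return result.returncode == 0, result.stderr

def fix_code(code: str, error: str) -> str:
    # Stub for the LLM fix-up step: here we just repair one known typo.
    # A real agent would send `code` and `error` back to the model.
    return code.replace("prnt", "print")

def repair_loop(code: str, max_attempts: int = 3):
    for _ in range(max_attempts):
        ok, err = run_candidate(code)
        if ok:
            return True, code
        code = fix_code(code, err)
    return False, code

ok, final = repair_loop("prnt('hello')")
print(ok, final)
```

The error messages captured from stderr play the role of the critique in an RCI chain; the fixer plays the role of the improvement step.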
@klammer75
A year ago
Can't wait for the series. Keep up the great work! 🥳😎🦾
Comments: 32