In this demo, I combine several agentic patterns - reflection, planning, and multi-agent workflows - to replace a single complex prompt. By breaking the task into multiple steps, I was able to match GPT-4's results using only GPT-3.5 and Claude Haiku.
This video was inspired by Andrew Ng's recent work on agentic workflows, in which he demonstrates that agentic workflows can exceed the performance of a single state-of-the-art prompt. Ng showed that non-SOTA models, like GPT-3.5, can outperform even GPT-4 when used within an agentic framework.
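The core loop behind this kind of workflow is simple: a planner breaks the task into steps, a worker executes them, and a reflector critiques the draft until it passes. Here is a minimal sketch of that plan/execute/reflect cycle. The `call_llm` function is a hypothetical stub standing in for real API calls; in the actual demo, those calls go to GPT-3.5 and Claude Haiku (see the GitHub repo below for the real implementation).

```python
# Minimal sketch of a plan -> execute -> reflect loop.
# NOTE: call_llm is a stub for illustration; swap in real API calls
# to cheap models (e.g., GPT-3.5, Claude Haiku) to use it for real.

def call_llm(role: str, prompt: str) -> str:
    """Stub that stands in for an API call to a small model."""
    canned = {
        "planner": "1. Draft an answer\n2. Review the draft",
        "worker": f"Draft based on: {prompt}",
        "reflector": "APPROVED",
    }
    return canned[role]

def agentic_workflow(task: str, max_rounds: int = 3) -> str:
    # Planner agent: decompose the task into steps.
    plan = call_llm("planner", f"Break this task into steps: {task}")
    # Worker agent: produce a first draft following the plan.
    draft = call_llm("worker", f"Follow this plan:\n{plan}\nTask: {task}")
    # Reflection loop: critique and revise until approved or out of rounds.
    for _ in range(max_rounds):
        critique = call_llm("reflector", f"Critique this draft:\n{draft}")
        if "APPROVED" in critique:
            break
        draft = call_llm("worker", f"Revise using this feedback:\n{critique}")
    return draft

print(agentic_workflow("Summarize the benefits of agentic workflows"))
```

The point of the structure is that each role gets a narrow, simple prompt, which is exactly what lets smaller models punch above their weight.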
I recommend you watch Andrew's talk ( • What's next for AI age... ) or read his article (www.deeplearning.ai/the-batch...); they're both excellent.
This demo builds on a previous demo I shared, where I explored creating an agent to extract long-term memories. You can view that demo here: • Build an Agent with Lo...
Interested in talking about a project? Reach out!
Email: christian@botany-ai.com
LinkedIn: linkedin.com/in/christianerice
Follow along with the code on GitHub:
github.com/christianrice/ai-d...
Timestamps:
0:00 - Intro
0:27 - Basic Demo
1:16 - Why Add Agentic Reasoning?
2:43 - Agentic Reasoning Design Patterns
4:41 - Improvements from Agentic Reasoning
5:42 - System Design
7:58 - Demo
11:22 - View the Prompts
13:30 - Considerations
14:04 - Code Explanation
15:10 - Closing Thoughts
Building Agents: Visualize a Multi-Agent Workflow that Outperforms a Single SOTA Prompt