I’ve come across multiple videos of this channel and I am not even a subscriber. The way I’ve realized it’s always this channel is because the guy says “here” a lot 🤣 great content tho!
@SpenceDuke
11 months ago
Very thankful for these videos, please continue with the series
@kenchang3456
11 months ago
Thanks Sam, this title caught my eye as something I could use in my POC for better search results. Really appreciate you sharing.
@sup5356
11 months ago
concise, interesting and useful content as always. Super series this, very interesting. Many thanks!
@flipper71100
11 months ago
This was awesome and quite informative. I heard about BM25 a couple of days back, and now I know where it fits. Also, I would request you to do a video on RAG Fusion if possible.
@samwitteveenai
11 months ago
Yeah, I'm certainly going to do a RAG Fusion video
@micbab-vg2mu
11 months ago
Great, thank you!!! This hybrid approach is quite interesting.
@carlosperezcpe
11 months ago
Hey man, don't forget to use night mode. If watching on a big screen it's hard. Thanks for the video 👊
@samwitteveenai
11 months ago
Thanks for the tip
@muhammadhasnain8177
11 months ago
Thanks for this video, please continue this series
@K-Djoon
11 months ago
Thank you so much for your sharing. This is so amazing!!
@toddnedd2138
11 months ago
While evaluating vector DBs for production, I came across Weaviate, which supports hybrid search out of the box, including weighting of the search results. Maybe it depends on your use case whether you go with LangChain or a built-in DB solution.
@sup5356
11 months ago
yes! same, it's excellent
@morespinach9832
7 months ago
How is Weaviate different from Pinecone or the functionality inside Neo4J?
@KeithBourne
4 months ago
Weaviate lets you pick the ranking algorithm you want to use, which is a step up from using LangChain directly. But not everyone uses Weaviate, for various important reasons, and there are only two algorithms currently available. I imagine that is something LangChain will add directly eventually, and Weaviate will continue to build out. The Reciprocal Rank Fusion algorithm that LangChain uses is probably good enough for most use cases, so you can probably live without Weaviate if you are already committed to something else. But it is definitely worth considering Weaviate if hybrid search is important to you, plus a whole bunch of other reasons. As far as this demo goes, it shows the default way to do hybrid search with LangChain, so it's very useful for anyone looking into this approach. Then you just build your knowledge from there! For example, you may want to write a function that adds a third retriever to the rankings, weighted in a way that benefits your specific use case. Start with this demo, then replace the ensemble retriever with your own function within the LangChain chain.
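To make that last suggestion concrete, here is a minimal from-scratch sketch of weighted Reciprocal Rank Fusion in plain Python. The function name, the weights, and the third "recency" retriever are hypothetical illustrations; LangChain's EnsembleRetriever performs this kind of fusion internally.

```python
def weighted_rrf(ranked_lists, weights, k=60):
    """Weighted Reciprocal Rank Fusion: merge several ranked lists of
    document ids into a single ranking. Each list contributes
    weight / (k + rank) per document; k=60 is the common default."""
    scores = {}
    for ranking, w in zip(ranked_lists, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings from three retrievers (ids are illustrative):
bm25 = ["d1", "d2", "d3"]      # sparse / keyword retriever
dense = ["d3", "d1", "d4"]     # dense / embedding retriever
recency = ["d4", "d3", "d1"]   # a custom third retriever

print(weighted_rrf([bm25, dense, recency], weights=[0.4, 0.4, 0.2]))
# → ['d1', 'd3', 'd4', 'd2']
```

A document ranked highly by several retrievers accumulates score from each, so it rises above documents that only one retriever liked.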
@arindamdas70
11 months ago
@sam thanks for the content, will you please explain how we can use hybrid search on the document you used for self-querying retrieval?
@cjp3288
11 months ago
Hey Sam! These videos are great, thank you for taking the time to make them. In your opinion, what is the best managed, large-scale RAG solutions provider? I'm helping a company that has around 2000-plus documents.
@akash_a_desai
11 months ago
Thanks a lot, please add agents with vectordb & rag video
@pavanpraneeth4659
11 months ago
Awesome
@ChairmanHehe
11 months ago
how is bm25 able to retrieve documents that do not contain any verbatim words from the query?
@SachinChavan13
3 months ago
This is a very important question to understand. There are a lot of videos on this topic, but many of them just don't explain things in depth. They explain what's working and ignore what's not working. I am facing tons of problems while implementing RAG in actual projects.
@henkhbit5748
11 months ago
Interesting add-on feature from LangChain. Can you search with the BM25Retriever in source documents (PDFs etc.)? Does it search directly in the source document or in the embeddings? I suppose ensembling both searches will affect performance when querying a lot of documents/embeddings... Thanks for the update!👍
@samwitteveenai
11 months ago
The ensemble allows you to do both. Yes, you can use the BM25 alone if you just want that.
@ninonazgaidze1360
11 months ago
Which model would you use in production for search over documents? Thank you
@hqcart1
11 months ago
How do you do this if you have millions of text rows and you want your search to be sub-100 ms?
@dontknowmyname5973
10 months ago
Very useful, thank you! Is it possible to update it with the OpenSearch vector store instead of FAISS?
@KeithBourne
4 months ago
That's what makes LangChain great: you just swap out vector stores/retrievers. For example, for ChromaDB:

vectorstore = Chroma.from_documents(
    documents=documents,
    embedding=embedding_function,
    collection_name=collection_name,
    client=chroma_client,
)
dense_retriever = vectorstore.as_retriever()
@perpetuallearner8257
11 months ago
Hey Sam, can you please make a video on RAG using knowledge graphs?? Thanks
@samwitteveenai
11 months ago
Yeah, I really want to do this. One challenge is finding a way for people to easily run Neo4j. I have been looking at alternatives; if you have any suggestions please let me know.
@tinyentropy
10 months ago
How do I use this with more than a single keyword?
@stephenthumb2912
10 months ago
This is a nice idea, but the actual implementation is way different from the example. There's no way you can implement this at all like the example, which is disappointing. Keyword-style searches are IMO very poorly supported in LangChain. There are tons of gotchas, e.g. in the Elasticsearch classes, which seem to me to be one of the only realistic ways to implement this.
Comments: 33