Hi, Jesús and Alexander! Nice workshop! Just some doubts: 1. I didn't understand how the retrieval of the most relevant part of the KG is done. For a vector-based RAG approach it's clear that cosine similarity is used, but how do we compute the similarity between our query/question and the nodes of the KG? 2. Are there any other videos where you explain how to build a KG from our own documents? 3. Can I reproduce the experiments using my own model instead of OpenAI's? Many thanks in advance!
@neo4j
8 months ago
1. The vectors we search over are actually properties of nodes. So when you use cosine/euclidean similarity to retrieve the most similar vectors, you're also getting the nodes that have those vectors as properties. That's your entry point to the graph, and from there you can do the graph exploration. 2. We go into this in a few other Going Meta episodes. Outside of that, there's a tutorial on how to build one: kzitem.info/news/bejne/x6qiwGSlnoqKhXo or kzitem.infolBiFiqkhUdc or check out our GenAI playlist from NODES: kzitem.info/door/PL9Hl4pk2FsvUOTavg2q_n7vLZmvUCNmS5 3. Of course. An easy way to work with open-source models run locally is using Ollama
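A minimal sketch of that retrieval entry point, in pure Python (all node names and embeddings below are made up for illustration): cosine similarity ranks embeddings stored as node properties, and the best-matching node becomes the starting point for graph exploration.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical in-memory stand-in for nodes whose 'embedding' property
# would be covered by a vector index in the database.
nodes = {
    "clause_1": {"embedding": [0.9, 0.1, 0.0], "text": "Definition of airworthiness"},
    "clause_2": {"embedding": [0.1, 0.8, 0.3], "text": "Certification procedure"},
}

def entry_point(query_embedding):
    # Rank nodes by similarity to the query embedding; the top node is
    # the entry point from which graph exploration starts.
    return max(nodes, key=lambda n: cosine(nodes[n]["embedding"], query_embedding))

print(entry_point([1.0, 0.0, 0.0]))  # → clause_1
```

In Neo4j itself this ranking is done by the vector index rather than in application code, but the principle is the same: the search returns nodes, not bare vectors, so graph traversal can continue from the match.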
@ayantanulaha8629
3 months ago
What if I also need the section and sub-section of the document from which the answer was extracted?
@chris_pingel
9 months ago
I love this. Did something similar with European aerospace regulations (Part 21 - Airworthiness and Environmental Certification) several years ago, importing the regulations into a graph for structured access, but now this graph has suddenly become even more valuable more or less overnight. Thanks for the deep insights.
@neo4j
9 months ago
Thank you!
@Yoyo-sy5kl
8 months ago
Wonderful, wonderful video on cutting-edge and unique technology. The explanation was easily digestible, especially for someone like me without software engineering experience who could benefit from this application of RAG in my field (bioinformatics). Thanks for this informative series, keep up the good work. Another question: does the graph query "cypher magic", as you called it, incorporate various graph traversal algorithms, or could it take advantage of key properties of a graph, like finding complete sub-graphs? I guess there may already be a video available about Neo4j Cypher querying; if so, I'll check that out.
@neo4j
8 months ago
Thank you very much! The cypher magic refers to a few things we explained earlier in this little RAG series of Going Meta. Did you watch episodes 21 & 22 for context?
@marketlyfe4414
6 months ago
Guys, I noticed that in this and your last video, you hard-code the additional Cypher query that fetches results based on the original vector query. In reality, given a graph database (which could serve many query types), we couldn't know upfront that we would need 'the related clauses', or which specific related node types we might need (clauses only work for 'definitions'). Is it possible to make a video on text-to-Cypher, detailing how we can make this work with Neo4j (given its current limitations)?
@jbarrasa4649
6 months ago
That’s exactly what we cover in the next episode: how to dynamically generate the Cypher queries. Check it out and let us know if that addresses your question.
@Neri-xg5cm
9 months ago
Kudos to the team for putting all of this together, a very interesting and entertaining playlist :) One question here, on the side-by-side results for No-RAG, Basic-RAG and Contextualized-RAG: for Basic-RAG, are you extracting the top-1 best match? If so, you could have extracted the top-10 nearest matches and used them as context for the LLM, right? Would we see the same different responses? Would Basic-RAG be as good as Contextualized-RAG?
@jbarrasa4649
9 months ago
You're absolutely right. While I did use the k=1 limit when invoking the vector_index.similarity_search method to show the results in the notebook, I did not set that limit when I created the retriever with vector_index.as_retriever(). What this means is that behind the scenes, the retrieval step in the basic-RAG case is effectively returning a few results (whatever the default set by LangChain is). But to your question: the problem I see with just extending the number of neighbours returned (k=10, for instance) is the risk of passing low-relevance context to the LLM while still missing key bits of information (the clauses in the definition, in the example), because they are unlikely to be in close vector proximity to the question. Thanks for your interest, I hope this clarifies things!
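The trade-off described above can be sketched in a few lines of Python (the chunk names and embeddings are invented for illustration): a chunk that is relevant because the graph links it to the matched definition may sit far away in vector space, so raising k mostly admits other, less relevant chunks before it.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical chunk embeddings: the 'definition' chunk is close to the
# query, but the clause it references is not, even though the graph says
# it is essential context.
chunks = {
    "definition": [0.95, 0.05],
    "related_clause": [0.1, 0.99],  # relevant via a graph link, far in vector space
    "unrelated": [0.6, 0.55],
}

def top_k(query_embedding, k):
    # Pure vector retrieval: rank all chunks by similarity, keep the top k.
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_embedding), reverse=True)
    return ranked[:k]

query = [1.0, 0.0]
print(top_k(query, 1))  # → ['definition']
print(top_k(query, 3))  # the linked clause still ranks last, behind 'unrelated'
```

This is why following graph relationships from the matched node (rather than just increasing k) can surface context that vector similarity alone would rank poorly.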
@rickyS-D76
1 month ago
@@jbarrasa4649 Thanks for the great presentation, it helped a lot. In the GitHub repo for Ep 23, you haven't included the Streamlit app. Do you have it anywhere else? Thanks
@DreamsAPI
9 months ago
Hi guys, wonderful conversation. I'm just at the very beginning of my experience with Neo4j. A few days ago in the JSON Schema Slack channel, I heard from Jason and Greg how ontologies, along with JSONata, could help with extracting values from one JSON schema (source.json) and replacing the values of a different JSON schema (target.json). Both source.json and target.json will have different JSON structures, key names, etc. Really excited that you guys are focusing on using these new technologies to be more dynamic.
@beginnerscode5684
7 months ago
The video is excellent, but Jesús Barrasa states that Michael is responsible for creating the knowledge graph. Where can I locate the link to the video demonstrating the creation of the knowledge graph by Michael?
@neo4j
7 months ago
Isn't that in the repo? github.com/jbarrasa/goingmeta
@beginnerscode5684
7 months ago
@@neo4j I could not find the link to the creation of the knowledge graph that Jesús Barrasa mentioned.
Comments: 18