Hi! How do you read and save a JSON document in a vector DB?
@DevXplaining
3 months ago
Hmm, perhaps a topic for another video. Short answer is... depends.
@Chrisbees
5 months ago
This is really cool. Thank you for making this video, now I have to look more into Spring AI. But what would you say about code safety when using AI? Doesn't that give it access to code that shouldn't be public?
@DevXplaining
5 months ago
Yeah, it's a good question. AI takes many forms, so the rule of thumb is: anything you would not like to (or be permitted to) publish on the front page of a popular magazine tomorrow, you should not send to AI - unless you know how the data is handled, stored, etc. Also check whether your input is used to train the models (which could cause it to pop up as an answer for future users). So for example, using ChatGPT like I do here, I'd limit it to general advice. GitHub Copilot Enterprise, on the other hand, is my tool for daily work, and there are some rules in place to stop my code from leaking. So either know how your AI handles input data, or be safe and just use it for general questions. But as always, keep your secrets secret! :)
@Chrisbees
5 months ago
Very insightful, thank you @DevXplaining
@SuperVinayaka
4 months ago
Great video. I had a question: let's say we are building a chatbot and we need to make sure that the bot answers questions that are available in a document, as well as personalized questions for a specific customer (which can be retrieved using some APIs that have been exposed). How would you do it?
@DevXplaining
4 months ago
Hmm, interesting question. If I got it right, some customers would get answers based on the document, while others would get answers based on some APIs? If so, in both cases you want to inject some context into the system-level instruction, either from the vector database/document or from an API - so that part remains the same. When you need to answer from an API, you would call the API, get the context for the question, then include that in the system-level instruction. No need for a vector database, as long as what the API returns a) has all the context info needed, b) is relevant, and c) is not too massive to fit in the available tokens.

So in both cases, grab the context information and feed it into the system-level instruction like in my code. On top of that, decide which context is used for which user.

One thing I'm not doing in this video is keeping conversational memory: I'm just asking a question, activating context + ChatGPT to handle it, and then forgetting the whole conversation. With conversational memory you would need to be a bit more careful, because the conversational context is typically also injected into the calls, and you would have to watch the size, depending on your model's token limits. You would also need to keep the contexts separate, so each user sees only their own personal context (instead of the same shared one for everyone). I hope this helps and that I got the question right!
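The per-user routing described above can be sketched in a few lines of plain Java. This is a minimal, hypothetical illustration (the class, method names, and data are mine, not from the video, and the real calls to Spring AI or the customer APIs are left out): pick the personalized API context for a user when one exists, fall back to the shared document-derived context otherwise, and embed whichever context was chosen into the system-level instruction.

```java
import java.util.Map;

// Hypothetical sketch: choose a context source per user, then inject it
// into the system-level instruction before calling the chat model.
public class ContextInjector {

    // Prefer the customer's personalized API-derived context when one exists;
    // otherwise fall back to the shared document/vector-store context.
    static String chooseContext(String userId,
                                Map<String, String> apiContexts,
                                String sharedDocumentContext) {
        return apiContexts.getOrDefault(userId, sharedDocumentContext);
    }

    // Build the system-level instruction with the retrieved context embedded,
    // mirroring the "grab the context, feed it into the system message" idea.
    static String buildSystemInstruction(String context) {
        return "Answer the user's question using only the context below.\n"
                + "--- CONTEXT ---\n"
                + context + "\n"
                + "--- END CONTEXT ---";
    }
}
```

In a real Spring AI setup, the string returned by `buildSystemInstruction` would become the system message of the chat request; the point is only that the selection and injection steps stay the same regardless of whether the context came from a vector store or an API.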
Comments: 9