I thought Apple solved this by adding a pre-prompt: "don't hallucinate." Sounds fair?
@mehdinazari8896
2 days ago
Hallucination is a very interesting topic in LLMs. It reminds me of the saying that "the LLM model is just as good as its data". Thank you for the content.
@lagrz
2 days ago
2 posts in 1 day nice, thanks for bringing good knowledge to the community
@tecnopadre
2 days ago
I know the answer to "How do you build the best RAG?" would be "It depends." It would be great, Matt, to create a video on the best RAG architecture depending on... thank you for your videos, as usual.
@KyleMaxwell
2 days ago
One thing I have tried is to ask the model to identify the gaps in its knowledge in its response. That’s very much still an experimental approach, however.
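That technique can be sketched as a tiny prompt wrapper (the helper name here is hypothetical; the appended instruction is the whole trick, and as the comment says it's experimental — the model can hallucinate its gap list too):

```python
def with_gap_check(question: str) -> str:
    """Wrap a question so the model is asked to flag its own knowledge gaps."""
    return (
        f"{question}\n\n"
        "After answering, list any gaps or uncertainties in your knowledge "
        "that are relevant to this answer, or state that you have none."
    )

# The wrapped text is sent as the user message to any chat model:
print(with_gap_check("What happened in Berlin in 1976?"))
```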
@deniszdorovtsov8195
2 days ago
I like the suggested approach, thanks! Hallucination is a real problem with tiny models that are supposed to use tools based on user input. If it fails once in 20-30 requests, it becomes really frustrating for the user. Running the deep layers multiple times in a row and comparing the results in the form of embeddings should be really fast with a small model like llama 3.2 3b.
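A minimal sketch of that comparison step, assuming the response embeddings have already been fetched (e.g. by sampling the same prompt several times and embedding each response; the vectors below are toy stand-ins, and the 0.9 threshold is an arbitrary choice):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def consistent(embeddings, threshold=0.9):
    """True if every pair of response embeddings agrees above the threshold.

    Divergent samples for the same prompt suggest the model is making
    something up rather than recalling stable knowledge.
    """
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine_sim(embeddings[i], embeddings[j]) < threshold:
                return False
    return True

# Toy vectors standing in for real response embeddings:
agreeing = [[1.0, 0.0, 0.1], [0.9, 0.05, 0.1]]
divergent = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.0]]
print(consistent(agreeing))   # similar responses
print(consistent(divergent))  # one response went off the rails
```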
@aurielklasovsky1435
2 days ago
Part of the issue is that there are tons of scenarios where making stuff up is the desired behavior. Speaking only the truth is something we humans have to learn, and we still round out sharp corners when we talk, because we also care a lot about making statistically likely sentences. The other day I made a joke about hating Python; it was good fun and kept the flow of the conversation going. But I love Python, actually. I said it because it felt like it would fit nicely, which it did, not because it was true.
@mahakleung6992
2 days ago
Matt, actually I didn't learn anything, but was watching your delivery. It is very good. Is this just natural talent or did you take some film or acting classes? I have seen some of your other videos where (for me) there was much to learn. Thanks!!!
@technovangelist
2 days ago
I have done a lot of public speaking as an evangelist at different software companies and done a lot of speaker training. And I was a trainer for 10 years and then started the training group at Datadog. So yeah I have a bit of experience.
@tomwawer5714
1 day ago
Hallucination is inherent in our society. Politicians, writers, teachers, journalists, businessmen, criminals, doctors... all people hallucinate, confabulate, lie... why should AI be different?
@mikeym00zz
1 day ago
I have enough on my plate taming my own hallucinations XD
@morningraaga1424
2 days ago
Firstly, thank you for the Ollama videos; I have learned many concepts from you. Secondly, regarding hallucination: I recently trained the Llama 3.2 model (4-bit, 8-bit, and 16-bit versions), using the Alpaca format. For the instruction "who are you?" I put my profile in the output text. But when I pushed the model to Ollama I found a lot of hallucination. The hallucinations were very funny and sometimes scary: it said I had already died in 2017. Do you think the Alpaca format is the right format for training the Llama 3.2 model? I tried everything, including setting the temperature to zero, but nothing worked.
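For reference, one place to pin the sampling settings is an Ollama Modelfile (a sketch, not a fix: as the comment above found, temperature 0 makes decoding greedy and deterministic, but it does not remove hallucinations learned during training — the system prompt here is likewise just an illustrative mitigation):

```
FROM llama3.2
PARAMETER temperature 0
SYSTEM """If you are not confident in an answer, say that you don't know."""
```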
@conceptrat
2 days ago
Hallucinations will continue in models trained on generalised data like public internet feeds.
@hawa7264
2 days ago
ChatGPT, when prompted about what happened in 1976 in Berlin, came up with a nuclear explosion. When asked, it invented a backstory about the owner of the power plant (there is no nuclear power plant in Berlin), and it confidently told me about the political consequences of the explosion. Just wild. Oh, and Gemini, when asked, came up with multiple recipes for frying mice and claimed those were popular recipes.
@tollington9414
1 day ago
I had it making up tracks for several albums that it knew about, and when it was pointed out it was wrong, it just apologised and made up some new ones. Love the cocky confidence these things have 😂😂😂
@startingoverpodcast
2 days ago
My problem with hallucinations is models not pulling the answer from my knowledge base and then making up an answer. I've been trying to train a model to run an RPG I'm designing from scratch. I can get 5 prompts in and it starts hallucinating. First it will say it talked to other people online about the game, which isn't true, because I'm the only one with the build. It will make up races, classes, and abilities based off D&D.
@JNET_Reloaded
2 days ago
This is why we need factual, fact-based trained models! No more of this training models with made-up data, or asking one model to make up data to train another! The hallucinations clearly came from random training data!
@technovangelist
2 days ago
Not necessarily. Some of it is from that, but not all. And when all the real data has been used, where do you go to find more, especially when the big sources are saying no, you can't use our stores?
@NLPprompter
2 days ago
There was a talk about open-ended AI, where hallucination will be a thing of the past.
@MuhanadAbulHusn
2 days ago
You wanna see crazy stuff? Give Claude 3.5 Sonnet instructions to use a library that isn't in its training data.
@DallanLoomis
2 days ago
Great video, this is definitely a topic i was hoping had a more solid answer… but dang 😭
@tangobayus
2 days ago
I think AI hallucinations are overrated. I work with the big AIs every day and don't see anything that I consider to be hallucinations. If you mess with the settings on a model, you can probably get strange results. In that case, you are the problem, not the model.
@nyyotam4057
2 days ago
One class of hallucinations comes out of the model bridging the gaps in its partial personality model. This cannot be fixed by temperature -> 0, so that cannot fix all hallucinations. Sure, it cannot fix hallucinations that come out of bad data either.
@AndrewSmith99
2 days ago
It's not a perfect solution either, but better than telling the model "don't make stuff up" is "when you don't know something, ask the user for more information".
@technovangelist
2 days ago
The problem is that it doesn’t make anything up and it doesn’t know what it knows. So that won’t work either.
@destinedanimedestined7049
2 days ago
Please, where do the models download to on my PC? I just downloaded the llama-3.2-1b model, and I want to know where the model files are.
@technovangelist
2 days ago
The docs show where they go for your platform. It’s usually a .ollama directory off your user directory
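As a sketch, the default location (per the Ollama docs, overridable with the OLLAMA_MODELS environment variable) can be resolved like this:

```python
import os
from pathlib import Path

# Default Ollama model store, unless overridden by OLLAMA_MODELS:
#   macOS/Linux: ~/.ollama/models    Windows: C:\Users\<you>\.ollama\models
models_dir = Path(os.environ.get("OLLAMA_MODELS",
                                 str(Path.home() / ".ollama" / "models")))
print(models_dir)
```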
@conceptrat
2 days ago
I'm sorry, I never hallucinate! Well, maybe just a bit when I'm codin' 😢
@nguyenanhnguyen7658
19 hours ago
One shot, at most 5 shots, and any LLM becomes comical. Tech is not there yet.
@TheDiverJim
1 day ago
Oh i wish you could have left politics out of this video. Your representation of the other sides opinion of the election is a hallucination. And it’s no different than the 2016 election denial. It’s just been grotesquely twisted into a straw-man with rotten fruit in place of straw. Still like your content, but I would really appreciate if you could just keep political analogies out of it. The presumably unintentional offensive joining of that misunderstood topic to flat earth nonsense distracted me the whole video.
@technovangelist
1 day ago
There are politics, and then there are facts. I didn't touch the politics, just what factually happened. 2020 was the fairest, most accurate election to date, as confirmed by every single court case that tried to flip it the other way. Flat earthers are really into that too... they don't think it's nonsense...
Comments: 35