That was a great video. Good idea to update your video explaining the concepts with the latest GPT-4 models
@MatthewatCourseCareers
3 months ago
Hey Abdul - We sent you an email about a paid partnership. Let me know what you think.
@purnamaheshimmandi1212
3 months ago
I'm getting timeout errors while writing data into a Cosmos DB for MongoDB collection using the Databricks approach. How can I handle these through exceptions, or by changing the provisioned throughput?
@julianomoraisbarbosa
4 months ago
# til
@adiltofiq221
4 months ago
Thanks a lot man
@abdulzedan
4 months ago
Happy to help!
@Andrew-ks7zj
4 months ago
Do you have a course we can purchase?
@abdulzedan
4 months ago
Unfortunately, I do not, but I may have one in the future... stay tuned!
@Brian-to1du
4 months ago
How did you learn this by yourself?
@abdulzedan
4 months ago
It was a mix of my education and my work experience! :)
@ethernalspirit4559
4 months ago
Your knowledge and explanations are truly next level
@abdulzedan
4 months ago
Really appreciate it - Cheers!
@matthew1667
5 months ago
Great video, ty! But did you mean "data" instead of "query" at 25:54?
@abdulzedan
5 months ago
Thanks for the correction! I did indeed mean data sources!
@jackbeats1350
5 months ago
Excellent video - concise and articulate. Many thanks for this :)
@abdulzedan
5 months ago
Glad it was helpful!
@MaltzLadd
5 months ago
Embedding is a technique in natural language processing (NLP) for converting text into numeric vectors. These vectors can be used to represent the meaning of words, phrases, or sentences.
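To make the comment above concrete, here is a toy sketch: once text is mapped to vectors, "meaning" can be compared numerically with cosine similarity. The 3-d vectors below are invented for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
# Toy illustration: comparing hand-made "embeddings" with cosine similarity.
# The 3-d vectors are made up for demonstration only.
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

king  = [0.9, 0.8, 0.1]    # pretend embedding of "king"
queen = [0.85, 0.75, 0.2]  # pretend embedding of "queen"
apple = [0.1, 0.2, 0.9]    # pretend embedding of "apple"

print(cosine_similarity(king, queen))  # close to 1.0 -> similar meaning
print(cosine_similarity(king, apple))  # much smaller -> less related
```

This is exactly why a vector database can retrieve "semantically similar" documents: it ranks stored vectors by this kind of similarity to the query vector.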
@nandkishorpandey8961
5 months ago
1
@KehrJoris
5 months ago
Embedding and vectorization are important tools for LLMs. They enable large language models to understand and process natural language more effectively.
@samuelfeder9764
5 months ago
That was superb, thank you! 😁
@abdulzedan
5 months ago
Really happy to hear that, thank you!
@maxtriplex7397
5 months ago
Thanks a lot man! Very informative and clear ✌🏼
@abdulzedan
5 months ago
Thank you! I appreciate it ☺️
@Canna_Science_and_Technology
5 months ago
Long-winded comment alert. In a RAG-based Q&A system, the efficiency of query processing and the quality of the results are paramount. One key challenge is the system's ability to handle vague or context-lacking user queries, which often leads to inaccurate results. To address this, we've implemented a fine-tuned LLM to reformat and enrich user queries with contextual information, ensuring more relevant results from the vector database. However, this adds complexity, latency, and cost, especially in systems without high-end GPUs.

Improving algorithmic efficiency is crucial. Integrating techniques like LoRA into the LLM can streamline the process, allowing it to handle both context-aware query reformulation and vector searches. This could significantly reduce the need for separate embedding models, enhancing system responsiveness and user experience.

Furthermore, incorporating a feedback mechanism for continuous learning is vital. This would enable the system to adapt and improve over time based on user interactions, leading to progressively more accurate and reliable results. Such a system not only becomes more efficient but also more attuned to the evolving needs and patterns of its users.
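A minimal sketch of the reformulate-then-retrieve flow this comment describes, with invented names and toy 2-d "embeddings": a vague query is first enriched with conversation context, then matched against a tiny in-memory stand-in for a vector database. A real system would call an actual embedding model and a vector store.

```python
# Hedged sketch: context-aware query enrichment followed by vector retrieval.
# DOCS, embed(), and the 2-d vectors are all invented for illustration.
import math

DOCS = {
    "returns policy": [0.9, 0.1],
    "shipping times": [0.2, 0.9],
}

def embed(text):
    # Stand-in for a real embedding model: crude keyword scoring.
    return [float("return" in text.lower()), float("ship" in text.lower())]

def enrich_query(query, context):
    # The comment's idea: reformulate a vague query using prior context.
    return f"{context}: {query}"

def retrieve(query_vec, k=1):
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1.0
        nb = math.sqrt(sum(x * x for x in b)) or 1.0
        return dot / (na * nb)
    return sorted(DOCS, key=lambda d: cos(query_vec, DOCS[d]), reverse=True)[:k]

# "how long does it take?" is too vague on its own; context disambiguates it.
enriched = enrich_query("how long does it take?", "user asked about shipping")
print(retrieve(embed(enriched)))  # -> ['shipping times']
```

The enrichment step is where the comment's fine-tuned LLM would sit; everything downstream of it is an ordinary nearest-neighbor search.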
@abdulzedan
5 months ago
Very good insight, thanks for sharing. I think many in research are working to address the bottlenecks you mentioned above ^^ with things like quantum-inspired networks for multi-dimensional data representations. Really appreciate your thoughts - thank you :)
@Canna_Science_and_Technology
5 months ago
I’m deeply grateful for this detailed exploration of embeddings and vector databases. As someone who has built several RAG systems and transitioned to newer embedding models like BGE for performance, I’ve often worked with these tools without fully grasping their underlying mechanics. Your video has been a revelation, explaining not just the ‘how’ but the ‘why’, especially enlightening me on why the dimensions of my indexes need to match the embedding model I use. It’s incredibly satisfying to understand these concepts that are so crucial to my work. Thank you for shedding light on these complex topics and aiding my professional growth.
@abdulzedan
5 months ago
Thank you for the kind words, feels truly humbling to hear this and I am glad that this has helped you in some way!
@abdullahnaji1964
5 months ago
Very informative, thank you for posting
@abdulzedan
5 months ago
Glad it was helpful!😊
@texturalbard
5 months ago
Could you please make a video about attention mechanisms?
@abdulzedan
5 months ago
No promises! But I’ll see what I can do 😊
@Ridz149
5 months ago
I don't know how I got here, but this video seems very full of information. I want to ask: what university level is this, and why are you teaching it on YouTube?
@abdulzedan
5 months ago
Thank you! This is something you would probably find in a graduate course. That said, I condensed a lot of information to better drive the foundational understanding. :)
@RealROI
6 months ago
This was excellent bro, thanks for putting this together
@user-iy2mf1ld3k
6 months ago
You really did it!!! Thanks man!!!
@parkerrex
6 months ago
Finally someone I can understand 😂
@duanerobinson9013
7 months ago
Awesome video! Low key, sober, and packed with information.
@chandrachoodR
8 months ago
Really fantastic video on using Azure OpenAI to create smart apps and services, thanks for sharing!
@Shoaibkhan-oj3oe
8 months ago
Is there any way to create an API for this same bot and use it in a custom website?
@sakibali1265
8 months ago
Hi, great video! I wanted to ask: can't we directly give the contents of the file in fine-tuning? If yes, what effort would that take, and what would its limitations be if the PDFs need to be used long-term? Also, what happens if I want to add additional PDF content after fine-tuning a GPT model - will I be able to do it? Note: I have also viewed your "Azure OpenAI 101: Powering ChatGPT with your Data - A Deep Dive" video, but I want to know from this perspective. Thanks
@CrispinCourtenay
9 months ago
Curious - could you connect Kafka, Redpanda, etc. to Azure Blob and then have ACS in a near-real-time streaming pipeline?
@parkerrex
6 months ago
That's what I'm looking for. Any luck?
@haodeng9639
9 months ago
This costs a lot of tokens. I'll think twice before deploying any OpenAI apps
@COLD17
9 months ago
So how about you show how the indexes are created, instead of just clicking through the portal of a finished setup?
@user-ep3dv7sr5m
9 months ago
Is there a way to integrate the web app's functionality, like a REST API, into my own app? 🤔
@abdulzedan
9 months ago
That's precisely what we're doing in this demonstration. By deploying to the web app, it is already leveraging the APIs behind the scenes :)
@user-ep3dv7sr5m
9 months ago
@@abdulzedan Thanks for responding! What I was referring to is whether there is any point where I can make calls from my application to the same chat, receiving responses about my own data, including the references
@AshPrasad-fz7mr
9 months ago
Assuming we feed it a CSV of last month's sales data, can it answer 'give me the top sellers'?
@abdulzedan
9 months ago
No, the supported formats can be found here: learn.microsoft.com/en-us/azure/ai-services/openai/concepts/use-your-data#data-formats-and-file-types
@czarahlvic4290
9 months ago
Hi, I just followed your video and I'm experiencing this: "The requested information is not found in the retrieved data. Please try another query or topic." Kindly help
@abdulzedan
9 months ago
This happens when the "retrieval" part of RAG in Cognitive Search isn't fully optimized/configured. If you enable semantic search/vector search (now supported in Cognitive Search), this should go away. Sometimes, asking the question in a different way can also alleviate the issue!
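For anyone wondering what the vector/semantic side of this looks like, below is a rough sketch of a Cognitive Search query body with both a vector query and semantic ranking enabled. The field names (`contentVector`, `my-semantic-config`) are placeholders, the embedding is truncated, and the exact payload shape varies by api-version - check the Azure AI Search REST docs for your version.

```python
# Hedged sketch of a hybrid (keyword + vector + semantic) search request body.
# Field names and values are placeholders, not a tested configuration.
import json

query_body = {
    "search": "what is our returns policy?",  # keyword part of a hybrid query
    "vectorQueries": [{
        "kind": "vector",
        "vector": [0.012, -0.034],            # truncated; real embeddings are ~1536-d
        "fields": "contentVector",            # must match the index's vector field
        "k": 5,                               # number of nearest neighbors to return
    }],
    "queryType": "semantic",
    "semanticConfiguration": "my-semantic-config",
}
print(json.dumps(query_body, indent=2))
```

The point of the answer above in this shape: if the index has no vector field or no semantic configuration, retrieval falls back to plain keyword matching, which is when "not found in the retrieved data" responses tend to appear.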
@rmehdi5871
9 months ago
Great video, Abdul! In your scheme at 8:36, do you know the format of the information exchanged between ACS and ChatGPT? Namely: 1. between the user prompt/question and ACS; 2. from ACS to ChatGPT (the "augmented prompt"?); and 3. the answer provided by ChatGPT. Are those JSON structures, or a different format? How can I find and view those exchanges in the system? Thanks!
@abdulzedan
9 months ago
Thank you! I'm not sure I get your question - I would reach out to Azure Support for clarity on this :)
@aliakseitsyuliou5681
9 months ago
Nice video. But I still have no clue how we should resolve the problem of stateless data. Any tips? Uploading the dataset to the current session isn't really the point at all - just playing around for fun (IMHO).
@abdulzedan
9 months ago
Thank you for sharing your thoughts. In the video, this is why we deploy what we build in the playground to an Azure Web App. This way, it is no longer "stateless" and you can leverage it on an ongoing basis. Hope this helps :)
@aliakseitsyuliou5681
9 months ago
@@abdulzedan thank you for the fast response. I'll check this possibility!
@simmo2337
10 months ago
promo sm 👇
@yazardari4237
10 months ago
Excellent video, exhaustive and comprehensive as always
@abdulzedan
10 months ago
Appreciate the kind words, thank you!
@jay.hiraya
10 months ago
How do you customise a model using your own data? In my case I have a 500-page PDF file. Thanks!
@abdulzedan
10 months ago
kzitem.info/news/bejne/u2yc0WWdk5x1gZw
@syedbabarali1
10 months ago
Nice tutorial @Abdul Zedan. I did the same thing, but I'm stuck on an 'Authentication Not Configured' error while trying to run my app on Azure App Service from the Chat playground. Could you please guide me on how to skip or bypass this page?
@abdulzedan
10 months ago
Thanks Babar! This can be due to invalid/missing permissions. Please visit: learn.microsoft.com/en-us/azure/ai-services/openai/concepts/use-your-data#important-considerations and configure the identity provider in the app!
@syedbabarali1
10 months ago
@@abdulzedan Thank you, problem resolved!
@maxtriplex7397
10 months ago
Been waiting for this video! Thanks a lot!
@abdulzedan
10 months ago
Glad you found it helpful!
@ahmedt3069
10 months ago
Thank you Abdul, great work
@abdulzedan
10 months ago
Thank you! I am glad you found it useful
@andypejman
10 months ago
Thank you! So in your example, what happens if you ask GPT something it doesn't have the answer to in your own dataset? Does it try to make something up, or does it just tell you it doesn't have the answer? Or is this something you have to prompt for, so that it behaves a certain way?
@abdulzedan
10 months ago
Thank you for the comment! If you head to 25:52 in the video, I make a note of the "Limit responses to your data content" option. If this is selected, the model will tell you it cannot retrieve or does not have the answer you are looking for. Additionally, you can reinforce this by adding a system message that states how you would like the model to respond in situations where it doesn't know the answer. Hope this helps :)
@dannyparkins
10 months ago
@@abdulzedan I thought there is a "Temperature" setting that kind of determines how "creative" the model is allowed to be.
@abdulzedan
10 months ago
@@dannyparkins This is also true. On the right of the Chat playground, under Configuration, you will see the "Parameters" option. There you can adjust the model's temperature value!
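A small illustration of what that temperature parameter does under the hood: the model's logits are divided by the temperature before the softmax, so higher values flatten the distribution (more "creative" sampling) and lower values sharpen it. The logits below are invented for demonstration.

```python
# Toy demonstration of temperature scaling in softmax sampling.
import math

def softmax_with_temperature(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens
print(softmax_with_temperature(logits, temperature=0.5))  # peaked: top token dominates
print(softmax_with_temperature(logits, temperature=2.0))  # flatter: more randomness
```

So "Limit responses to your data content" constrains *what* the model may say, while temperature controls *how deterministically* it picks among its options.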
@FlameOfAnor9
10 months ago
Thanks Abdul. Great work!
@abdulzedan
10 months ago
Thank you! I’m glad you enjoyed it ☺️
@sine_wave_tech
11 months ago
How much money are we talking about for fine-tuning?
@abdulzedan
10 months ago
I can't really give you a range, as it depends on a lot of factors. Here is a good article to check out: medium.com/devrain/calculating-azure-openai-service-usage-costs-a-comprehensive-guide-40b0880660f9
@pallavigampala
A year ago
The video is very informative, but can we fine-tune the Codex models to teach the chatbot a new language?
@abdulzedan
A year ago
Thank you for the kind words - you can definitely do this, but it's prudent to first explore the available capabilities (i.e. using the existing models with your data; I'll be coming out with a video on that soon)! Cheers
@hero11520
A year ago
Awesome
@KiranSuman-od2vt
A year ago
Felt great watching this
@user-ue7mk5dk6k
A year ago
Great
@noureddinezekaoui4546
A year ago
Hello, thank you for this amazing video. However, when I tried the same script you presented from the Microsoft documentation, I wasn't able to fine-tune my custom model and ended up with the message below:
"Job not in terminal status: notRunning. Waiting.
Status: notRunning
Status: failed
Checking other fine-tune jobs in the subscription.
Found 2 fine-tune jobs."
Do you have any idea how to fix this issue?
@abdulzedan
A year ago
I'm glad you found this useful! Currently, fine-tuning is disabled for new customers in all regions, except for those who already had fine-tuning deployments. I would reach out to Azure Support to confirm that this is the cause of your job failure!
@aipujols
A year ago
Nice video, Abdul! Very thorough and detailed explanation of how to train a custom OpenAI model. Thanks!!!
@abdulzedan
A year ago
Really happy to hear that it’s helped you! Thank you for the kind words ☺️