In this tutorial and presentation, we'll dive into transformer-based embeddings for long-form text, covering some of the theory behind why BERT-style models outperform recurrent neural networks (RNNs). From there, we'll walk through examples using the sentence-transformers library, showcasing its use in a variety of tasks such as sentiment classification, clustering, and retrieval-augmented generation (RAG).
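As a taste of what the examples look like, here is a minimal sketch of generating and comparing embeddings with sentence-transformers. The model name and sentences are illustrative placeholders; the full walkthrough lives in the tutorial notebook linked below.

```python
from sentence_transformers import SentenceTransformer, util

# Load a pretrained embedding model (any checkpoint from the MTEB
# leaderboard can be swapped in here).
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "Milvus is a vector database built for scalable similarity search.",
    "Transformers produce contextual embeddings for long-form text.",
    "I really enjoyed this movie!",
]

# Encode each sentence into a dense vector.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarity between the embeddings.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```

The same embeddings can then feed a classifier, a clustering algorithm, or a vector database for RAG.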
What you'll learn:
- The theory behind long-form text embeddings
- Using sentence-transformers in an actual application
Helpful resources:
- Slides: resource.zilliz.com/hubfs/Emb...
- Blog: zilliz.com/learn/Neural-Netwo...
- Text embedding models MTEB leaderboard: huggingface.co/spaces/mteb/le...
- Tutorial notebook: github.com/milvus-io/bootcamp...