Great presentation. Very useful, thanks for putting in the effort and sharing it!
@枕头不软睡不香
A year ago
Great presentation!
@Aldar198706
A year ago
Thanks! Great content
@francescocariaggi1145
3 years ago
Very interesting talk!
@nomanshahid9771
7 months ago
How do we use triplets with multiple negatives ranking loss? Doesn't this loss only take positive pairs?
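For reference, a minimal pure-Python sketch of the multiple negatives ranking loss, assuming embeddings are plain lists of floats (the real sentence-transformers implementation works on tensors and normalized similarities). Each anchor is scored against every positive in the batch, so the other positives act as in-batch negatives; explicit hard negatives from triplets can simply be appended to the candidate pool, which is how the loss accommodates triplet data:

```python
import math

def mnr_loss(anchors, positives, hard_negatives=None, scale=20.0):
    """Multiple negatives ranking loss over a batch of embedding pairs.

    anchors[i] and positives[i] are a matching pair; every other positive
    in the batch serves as an in-batch negative. Optional hard negatives
    (e.g. from triplets) are appended to the candidate pool. The loss is
    cross-entropy with the matching positive as the correct class.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    candidates = list(positives) + list(hard_negatives or [])
    total = 0.0
    for i, a in enumerate(anchors):
        scores = [scale * dot(a, c) for c in candidates]
        log_z = math.log(sum(math.exp(s) for s in scores))
        total += log_z - scores[i]  # -log softmax of the true positive
    return total / len(anchors)
```

With well-separated matching pairs the loss is near zero; adding a hard negative that is very similar to the anchor drives it up, which is exactly the signal triplets contribute.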
@高长宽
3 years ago
Awesome!
@boscojay1381
2 years ago
16:00, I have heard you talk about the multiple negatives ranking loss quite often. Is it the same as the SupCon loss or the contrastive loss? I also noticed that in your SimCSE implementation with the sentence-transformers library, you again use the MNR loss. Is it a special case of contrastive loss? I'd appreciate your response to this question. Thanks
@tianshuwang7543
2 years ago
Hello, thank you for the awesome talk. I have a question: why might contrastive / triplet loss only optimize the local structure? Don't these losses increase the distance between negative cases, which usually include random pairs in a batch?
@NilsReimersTalks
2 years ago
They increase the distance only for the pairs you provide. If you provide poor pairs, they might only optimize some local structures.
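A minimal sketch of that point, assuming embeddings as plain lists of floats: triplet loss only produces a gradient for the three vectors that appear in the given triplet, so everything not covered by the supplied pairs is left untouched.

```python
def triplet_loss(anchor, positive, negative, margin=0.5):
    """Triplet loss on single embeddings. Only the three provided vectors
    contribute to the loss; all other embeddings in the space are untouched,
    which is why poor pair coverage only optimizes local structure."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)
```

A triplet whose negative is already far from the anchor contributes zero loss, so no learning signal at all comes from it.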
@tianshuwang7543
2 years ago
@NilsReimersTalks Thanks, I got your point.
@zurechtweiser
2 years ago
There are some pretrained models that are optimized for cosine similarity, others for the dot product. How does using the respective other one affect the results?
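For context, cosine similarity is just the dot product of length-normalized vectors, so the two scores only differ when vector lengths vary. A minimal sketch (plain lists of floats, not tied to any specific model) showing how the ranking of two candidates can flip between the two scoring functions:

```python
import math

def dot(a, b):
    """Unnormalized dot-product similarity."""
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    """Cosine similarity: dot product of the length-normalized vectors."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

query = [1.0, 0.0]
d_long = [2.0, 2.0]   # long vector, partially aligned with the query
d_short = [1.0, 0.0]  # unit vector, perfectly aligned with the query
```

Here `dot` ranks `d_long` first (its length dominates), while `cosine` ranks `d_short` first (direction alone counts). A model trained for one score learns to exploit or suppress vector length accordingly, so scoring its embeddings with the other function can reorder results.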
@muhammadhammadkhan1289
3 years ago
Can you share the slides?
@NilsReimersTalks
3 years ago
Slides are here: nils-reimers.de/talks/2021-09-State-of-the-art-Bi-Encoders.zip
Comments: 14