Shallow Pooling for Sparse Labels: the shortcomings of MS MARCO

Episode duration: 1:07:17
 

In this first episode of Neural Information Retrieval Talks, Andrew Yates and Sergi Castellà discuss the paper "Shallow Pooling for Sparse Labels" by Negar Arabzadeh, Alexandra Vtyurina, Xinyi Yan, and Charles L. A. Clarke from the University of Waterloo, Canada.

This paper puts the spotlight on the popular IR benchmark MS MARCO and investigates whether modern neural retrieval models retrieve documents that are even more relevant than the originally annotated ones. The results have important implications and raise the question of how informative this benchmark still is as a north star for the field.
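To see why sparse labels matter for the headline metric, here is a minimal sketch, using hypothetical passage IDs and a toy MRR@10 implementation rather than the official MS MARCO evaluation code. With roughly one judged passage per query, a system that puts a genuinely relevant but unjudged passage at rank 1 scores worse than one that simply reproduces the single labeled passage:

```python
def mrr_at_10(run, qrels):
    """Mean Reciprocal Rank@10 against sparse qrels {qid: {relevant pids}}."""
    total = 0.0
    for qid, ranked_pids in run.items():
        for rank, pid in enumerate(ranked_pids[:10], start=1):
            if pid in qrels.get(qid, set()):
                total += 1.0 / rank
                break
    return total / len(run)

# One judged passage per query, as is typical in MS MARCO (hypothetical IDs).
qrels = {"q1": {"p7"}}

# System A ranks an unjudged-but-relevant passage p42 first, the labeled p7 second.
run_a = {"q1": ["p42", "p7", "p3"]}
# System B simply puts the single labeled passage first.
run_b = {"q1": ["p7", "p42", "p3"]}

print(mrr_at_10(run_a, qrels))  # 0.5, penalized despite a possibly better top hit
print(mrr_at_10(run_b, qrels))  # 1.0
```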

Contact: castella@zeta-alpha.com

Timestamps:

00:00 — Introduction.

01:52 — Overview and motivation of the paper.

04:00 — Origins of MS MARCO.

07:30 — Modern approaches to IR: keyword-based, dense retrieval, rerankers and learned sparse representations.

13:40 — What is "better than perfect" performance on MS MARCO?

17:15 — Results and discussion: how often are neural rankers preferred over original annotations on MS MARCO? How should we interpret these results?

26:55 — The authors' proposal to "fix" MS MARCO: shallow pooling (see the sketch after these timestamps).

32:40 — How does TREC Deep Learning compare?

38:30 — How do models compare after re-annotating MS MARCO passages?

45:00 — Figure 5 audio description.

47:00 — Discussion on models' performance after re-annotations.

51:50 — Exciting directions in the space of IR benchmarking.

1:06:20 — Outro.
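As a rough illustration of the shallow pooling idea discussed at 26:55, here is a minimal sketch, with hypothetical runs and function names rather than the authors' code: keep only the top-ranked passages (a very shallow pool, here depth 1) from each participating system, union them per query, and send just that small pool to human assessors.

```python
def shallow_pool(runs, k=1):
    """Build a judging pool from the top-k passages of each system's ranking.

    runs: {system_name: {qid: [pid, ...] ranked best-first}}
    Returns {qid: {pid, ...}}, the small set of passages to be judged.
    """
    pool = {}
    for ranking_by_qid in runs.values():
        for qid, ranked_pids in ranking_by_qid.items():
            pool.setdefault(qid, set()).update(ranked_pids[:k])
    return pool

# Hypothetical rankings from three systems for one query.
runs = {
    "bm25":      {"q1": ["p7", "p3", "p9"]},
    "dense":     {"q1": ["p42", "p7", "p1"]},
    "cross-enc": {"q1": ["p42", "p9", "p7"]},
}

# With k=1 the pool for q1 is just {p7, p42}: tiny, hence cheap to judge.
print(shallow_pool(runs, k=1))
```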

Related material:

- Leo Boytsov's blog post critiquing IR leaderboards: http://searchivarius.org/blog/ir-leaderboards-never-tell-full-story-they-are-still-useful-and-what-can-be-done-make-them-even

- "MS MARCO Chameleons: Challenging the MS MARCO Leaderboard with Extremely Obstinate Queries" https://dl.acm.org/doi/abs/10.1145/3459637.3482011
