Machine Learning-Powered Search Ranking of Airbnb Experiences
Post on how the #AirBnB DS team built custom search, including notes on how they approached the problem and what business results they achieved.
Link: https://medium.com/airbnb-engineering/machine-learning-powered-search-ranking-of-airbnb-experiences-110b4b1a0789
#ranking #search #reallifeds #production
Ranking Items With Star Ratings and How Not To Sort By Average Rating
Two absolute must-read articles on handling sorting properly. Sorting items by a plain average score is wrong, and these posts give a solid classical-statistics explanation of why.
Link: https://www.evanmiller.org/ranking-items-with-star-ratings.html
Link2: https://www.evanmiller.org/how-not-to-sort-by-average-rating.html
#Statistics #rating #scoring #ranking
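The fix the second article recommends for up/down votes is to sort by the lower bound of the Wilson score confidence interval rather than by the raw average. A minimal sketch of that bound (the exact constants and API here are illustrative, not taken from the article's code):

```python
import math

def wilson_lower_bound(pos: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a Bernoulli
    parameter: 'given these ratings, what is the lowest "true"
    positive fraction we can be ~95% confident in?' (z=1.96)."""
    if n == 0:
        return 0.0
    phat = pos / n
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# 9 upvotes out of 10 ranks below 90 out of 100, even though both
# have the same 90% average:
wilson_lower_bound(9, 10)    # ~0.60
wilson_lower_bound(90, 100)  # ~0.83
```

The penalty for a small sample size is exactly what a plain average throws away.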
LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval
Pre-training transformers jointly on text and images works well across multiple tasks, but such models usually have low inference speed due to cross-modal attention. As a result, in practice, these models can hardly be used when low latency is required.
The authors of the paper offer a solution to this problem:
- pre-training on three new learning objectives
- extracting feature indexes offline
- using dot-product matching
- further re-ranking with a separate model
LightningDOT outperforms the previous state-of-the-art while speeding up inference by 600–2000× on the Flickr30K and COCO image-text retrieval benchmarks.
Paper: https://arxiv.org/abs/2103.08784
Code and checkpoints will be available here:
https://github.com/intersun/LightningDOT
A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-lightningdot
#pretraining #realtime #ranking #deeplearning
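The retrieve-then-rerank pattern behind the speedup can be sketched as follows. This is not LightningDOT's actual code; the index, function names, and the stubbed second-stage scorer are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_images, k = 128, 10_000, 20

# Offline stage: pre-compute and L2-normalise image embeddings once,
# so no cross-modal attention is needed at query time.
image_index = rng.standard_normal((n_images, dim)).astype(np.float32)
image_index /= np.linalg.norm(image_index, axis=1, keepdims=True)

def retrieve(query_vec: np.ndarray, k: int) -> np.ndarray:
    """Cheap first stage: dot-product similarity against the whole
    index is a single matrix-vector product."""
    scores = image_index @ query_vec
    return np.argpartition(-scores, k)[:k]  # top-k candidate indices

def rerank(query_vec: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Expensive second stage, applied to only k items. Here it is a
    stub that re-uses dot products; a real system would call a slower
    cross-attention model on just these candidates."""
    scores = image_index[candidates] @ query_vec
    return candidates[np.argsort(-scores)]

query = rng.standard_normal(dim).astype(np.float32)
query /= np.linalg.norm(query)
top = rerank(query, retrieve(query, k))
```

The expensive model thus scores k items instead of n_images, which is where the orders-of-magnitude latency win comes from.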