At the heart of RAGSys lies a dynamic, self-improving knowledge base.
RAGSys transcends traditional RAG limitations.
Redefining in-context learning for enterprise LLM deployment.
At Crossing Minds, we redefine user experiences through the power of personalization and generative AI. As experts in the field, we lead on this transformative journey, supporting enterprises in crafting exceptional information retrieval processes.
With an average of 10 years of experience building cutting-edge machine learning pipelines and leveraging the latest advancements in AI-powered personalization, we are here to reshape the way content is discovered.
Our approach to Discovery Ops is revolutionizing user experiences, maximizing revenue, engagement, and retention.
Increases factual accuracy and stylistic consistency for domain-specific tasks.
Supplements the LLM with task-specific data that are not part of the LLM’s training data.
Common methods include In-Context Learning (ICL), Retrieval-Augmented Generation (RAG), and Fine-Tuning.
RAGSys brings the best of both worlds:
Fine-Tuning and RAG
Optimizing the Retrieval System for In-Context Learning
Integrates techniques from Recommender Systems into the RAG retrieval process
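To make the retrieval step concrete, here is a minimal sketch (not the RAGSys implementation) of retrieval-based ICL: candidate demonstrations are scored against the incoming query and the top-k are prepended to the prompt as few-shot examples. The `embed` stub and the cosine-similarity scoring are illustrative assumptions; a recommender-style retriever would replace raw similarity with a learned utility score.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding function; swap in any sentence-embedding model."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))

def retrieve_icl_examples(query: str, pool: list[dict], k: int = 3) -> list[dict]:
    """Rank candidate (input, output) demonstrations by cosine similarity to the query."""
    query_vec = embed([query])[0]
    pool_vecs = embed([ex["input"] for ex in pool])
    sims = pool_vecs @ query_vec / (
        np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    top = np.argsort(-sims)[:k]
    return [pool[int(i)] for i in top]

def build_prompt(query: str, examples: list[dict]) -> str:
    """Prepend the retrieved demonstrations to the query as few-shot context."""
    shots = "\n\n".join(f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"
```

The design question highlighted above is precisely this scoring step: which examples most improve the downstream answer, not merely which look most similar to the query.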
Context: B2B Marketplace Catalog
One of our clients is a leading B2B marketplace in its industry
5000+ merchants, 1M+ items
All merchants create items manually, which produces many duplicates
They need to consolidate the item catalog
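As a rough illustration of the consolidation problem (not the client's actual pipeline), near-duplicate candidates can be surfaced by comparing item embeddings. The similarity threshold and the all-pairs comparison below are simplifying assumptions; at 1M+ items an approximate nearest-neighbor index would replace the quadratic scan.

```python
import numpy as np

def find_duplicate_pairs(item_vecs: np.ndarray, threshold: float = 0.92) -> list[tuple[int, int]]:
    """Flag item pairs whose embedding cosine similarity exceeds a threshold.

    item_vecs: (n_items, dim) array of item embeddings (e.g. titles + descriptions).
    The 0.92 threshold is illustrative and would be tuned on labeled duplicates.
    """
    normed = item_vecs / (np.linalg.norm(item_vecs, axis=1, keepdims=True) + 1e-9)
    sims = normed @ normed.T
    n = len(item_vecs)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if sims[i, j] >= threshold]
```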
In-Context Learning Embedding and Reranker Benchmark (ICLERB) is a benchmark to evaluate embedding and reranking models used to retrieve examples for In-Context Learning (ICL).
In contrast to the commonly used MTEB, which evaluates embeddings by their ability to retrieve relevant documents, ICLERB evaluates embedding and reranking models by their impact on downstream task performance when they are used to retrieve examples for ICL.
On this page, you'll find the leaderboard of embedding and reranking models evaluated on ICLERB. You can also find the leaderboard on Hugging Face.
To learn more about the methodology of ICLERB, you can read our white paper.
The table below shows the average NDCG@K for various embedding and reranking models when used to retrieve examples for In-Context Learning, averaged across a range of tasks.
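For reference, NDCG@K compares a model's ranking of retrieved examples against the ideal ordering of those same examples. The sketch below shows the standard computation; in ICLERB the per-example gains reflect measured downstream task impact rather than human relevance labels, consistent with the framing above.

```python
import numpy as np

def dcg_at_k(relevances: list[float], k: int) -> float:
    """Discounted cumulative gain over the top-k retrieved items."""
    rels = np.asarray(relevances[:k], dtype=float)
    discounts = np.log2(np.arange(2, rels.size + 2))
    return float(np.sum(rels / discounts))

def ndcg_at_k(relevances: list[float], k: int) -> float:
    """NDCG@K: DCG of the retrieved ranking, normalized by the ideal ranking's DCG."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: each relevance here would be the measured downstream utility of a
# retrieved demonstration, in the order the retriever ranked them.
print(ndcg_at_k([0.8, 0.2, 0.6, 0.0], k=3))  # ~0.96
```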