Join the waitlist for the
Crossing Minds RAGSys API

We'll send you an invitation when the RAGSys API is available.


Key Innovations

Adaptive Knowledge Repository

At the heart of RAGSys lies a dynamic, self-improving knowledge base:

  • Custom Retrieval Database: Create and maintain a tailored database specific to your use cases and domain expertise. This allows your ML team to build a proprietary knowledge base that continuously enhances your LLM's performance in your unique business context.

  • Model-Agnostic Design: Our adaptive retrieval database is engineered to be compatible across various LLM architectures. This flexibility allows you to switch between different LLM providers or versions without losing your accumulated knowledge and optimizations.

  • Continuous Learning: The system features an automated feedback loop that refines and expands its knowledge base in real-time, ensuring your AI capabilities evolve alongside your business needs.
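The continuous-learning idea above can be illustrated with a minimal sketch: an example store that records feedback rewards and lets recent signals reshape which examples rank highest. This is a hypothetical toy (the class name, the moving-average weighting, and the reward scale are all our own assumptions), not the RAGSys implementation.

```python
from collections import defaultdict

class AdaptiveExampleStore:
    """Toy store of (query, answer) examples with feedback-weighted quality."""

    def __init__(self):
        self.examples = []                 # list of (query, answer) pairs
        self.quality = defaultdict(float)  # example index -> running quality score

    def add(self, query, answer):
        self.examples.append((query, answer))
        return len(self.examples) - 1

    def record_feedback(self, idx, reward):
        # Exponential moving average: scores keep adapting to recent feedback.
        self.quality[idx] = 0.8 * self.quality[idx] + 0.2 * reward

    def top_k(self, k=3):
        ranked = sorted(range(len(self.examples)),
                        key=lambda i: self.quality[i], reverse=True)
        return [self.examples[i] for i in ranked[:k]]
```

Because the store holds plain text pairs rather than model weights, the same accumulated knowledge can be reused when switching LLM providers.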

Advanced RAG

RAGSys transcends traditional RAG limitations:


  • Entropy-Maximizing Selection: Our proprietary algorithms ensure LLMs receive a diverse, information-rich input, improving response quality and reducing redundancy.

  • Quality-Weighted Retrieval: Multi-factor scoring system prioritizes high-quality, relevant information, significantly reducing hallucinations and improving factual accuracy.

  • Domain-Specific Customization: Flexible rule engine allows seamless integration of business logic and regulatory requirements into the retrieval process.

Efficient In-Context Learning

Redefining in-context learning for enterprise LLM deployment:


  • Optimal Example Selection: Leveraging advanced information theory, RAGSys identifies the most informative examples for in-context learning, dramatically improving task performance.

  • Accelerated Fine-Tuning: By optimizing the retrieval model instead of the entire LLM, RAGSys achieves fine-tuning speeds up to 300x faster than traditional methods.

  • Transfer Learning Across Models: Retrieval engines trained on one LLM can be efficiently transferred to another, allowing you to leverage your optimizations across different models and providers.
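One reason retrieval optimizations transfer across models, as noted above, is that the retrieved examples are assembled into a plain-text few-shot prompt that any LLM can consume. A minimal sketch of that assembly step (the prompt format here is an assumption; providers and products vary):

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt from retrieved (input, output) examples.

    The prompt is plain text, so a tuned retriever's selections can be
    reused unchanged when switching LLM providers.
    """
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)
```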

Introduction

At Crossing Minds, we redefine user experiences through the power of personalization and generative AI. As experts in the field, we lead this transformative journey, supporting enterprises in crafting exceptional information retrieval processes.

With an average of 10 years of experience creating cutting-edge machine learning pipelines and leveraging the latest advancements in AI-powered personalization, we are here to reshape the way content is discovered.

Our approach to Discovery Ops is revolutionizing user experiences, maximizing revenue, engagement, and retention.

LLMs and Knowledge Injection

Increases factual accuracy and stylistic consistency for domain-specific tasks.

Supplements the LLM with task-specific data that are not part of the LLM’s training data.

Some methods are In-Context Learning (ICL), Retrieval Augmented Generation (RAG), and Fine-Tuning.
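Of the methods listed above, RAG is the most direct form of knowledge injection: retrieve task-specific documents and prepend them to the prompt as context. A minimal sketch, using naive token overlap as a stand-in for a real embedding-based retriever (function names and prompt format are illustrative assumptions):

```python
def retrieve(query, docs, k=1):
    """Rank documents by token overlap with the query.

    A stand-in for an embedding-based retriever.
    """
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def inject_knowledge(query, docs):
    """Supplement the prompt with retrieved task-specific context."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The retrieved text never touches the model's weights, which is what makes this approach complementary to fine-tuning.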

Introducing RAGSys

RAGSys brings the best of both worlds: Fine-Tuning and RAG.

Optimizing the Retrieval System for In-Context Learning

Integrates techniques from Recommender Systems into the RAG retrieval process

Real-World Applications

Context: B2B Marketplace Catalog

One of our clients is a leading B2B marketplace in their industry:

  • 5,000+ merchants, 1M+ items
  • Merchants create items manually, producing many duplicates
  • They need to consolidate the item catalog
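For a catalog-consolidation scenario like this, a first pass often flags near-duplicate item titles before any LLM is involved. The sketch below compares character n-gram ("shingle") sets, which tolerate the small spelling and casing variations that manual entry produces. Thresholds and function names are illustrative assumptions, not the client's actual pipeline.

```python
def shingles(text, n=3):
    """Character n-grams; robust to small spelling variations."""
    t = text.lower()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def find_duplicates(items, threshold=0.6):
    """Flag item pairs whose shingle sets overlap above a Jaccard threshold."""
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = shingles(items[i]), shingles(items[j])
            if a and b and len(a & b) / len(a | b) >= threshold:
                pairs.append((i, j))
    return pairs
```

The O(n²) pairwise loop is only viable for small batches; at 1M+ items a real system would block candidates first (e.g. by category or via approximate nearest neighbors).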

Watch the RAGSys Webinar

What is ICLERB?

In-Context Learning Embedding and Reranker Benchmark (ICLERB) is a benchmark to evaluate embedding and reranking models used to retrieve examples for In-Context Learning (ICL).

In contrast to the commonly used MTEB, which evaluates embeddings based on their ability to retrieve relevant documents, ICLERB evaluates the performance impact on downstream tasks when using these embedding or reranking models for ICL.

Link to Paper

Getting started

On this page, you'll find the leaderboard of embedding and reranking models evaluated on ICLERB. You can also find the leaderboard on Huggingface.

To learn more about the methodology of ICLERB, you can read our white paper.

ICLERB Leaderboard

How to Read

The table below shows the average NDCG@K for various embedding and reranker models when used to retrieve examples for In-Context Learning for a number of different tasks.
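For readers unfamiliar with the metric, NDCG@K rewards placing the most useful examples near the top of the retrieved list. A minimal sketch of the standard computation (here the relevance of each retrieved example would come from its measured impact on the downstream task, per the ICLERB methodology):

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@K: discounted cumulative gain of the ranking,
    normalized by the gain of the ideal (sorted) ranking."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```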

Empower your enterprise with Crossing Minds' AI-powered platform, engineered to redefine intelligent information retrieval at scale.

  • Fully customized personalization engine aligns with your complex enterprise needs
  • Advanced A/B testing elevates decision-making and drives continuous innovation
  • Seamless business rules easily integrate with existing systems
  • Dedicated, expert guidance tailored for the challenges faced by enterprise-level IT
  • Continuous oversight ensures your personalization strategy performs at peak