Most RAG applications consist of two pieces: the vector database and the embedding model that generates the vectors. A scalable vector database is pretty much a solved problem, with providers like Cloudflare, Supabase, Pinecone, and many more.
Embedding models, on the other hand, seem pretty limited compared to their LLM counterparts. OpenAI has one of the best LLMs in the world right now, with multimodal support for images and documents, yet its embedding models accept only text input, support a handful of languages, and sit well behind open-source models on the MTEB leaderboard: https://huggingface.co/spaces/mteb/leaderboard
The closest multimodal model I found was OpenAI's clip-vit-large-patch14, which supports only text and images. It hasn't been updated in years, has limited language coverage, and its retrieval quality is only okay for small applications.
Most RAG applications I have worked on had extensive requirements for image and PDF embeddings in multiple languages.
Enterprise RAG is a common use case, spanning millions of documents across different formats, languages, and verticals like law and medicine.
So, we at JigsawStack launched an embedding model that generates 1024-dimensional vectors for images, PDFs, audio, and text in the same shared vector space, with support for 80+ languages.
- Supports 80+ languages
- Supports multimodality: text, image, PDF, audio
- Average MRR@10: 70.5
- Built-in chunking of large documents into multiple embeddings
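
Because every modality lands in the same 1024-dimensional space, cross-modal retrieval reduces to a single nearest-neighbor search. Here's a minimal sketch of that idea in TypeScript; the `EmbeddedItem` type and in-memory search are stand-ins for whatever storage you actually use, not part of our API.

```ts
// A stored item: the 1024-dim vector plus a pointer back to the source,
// which can be text, an image, a PDF chunk, or an audio clip.
interface EmbeddedItem {
  id: string;
  modality: "text" | "image" | "pdf" | "audio";
  vector: number[]; // length 1024
}

// Cosine similarity between two vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank every stored item against a query vector, regardless of modality.
function search(queryVector: number[], items: EmbeddedItem[], topK = 5): EmbeddedItem[] {
  return [...items]
    .sort(
      (x, y) =>
        cosineSimilarity(queryVector, y.vector) - cosineSimilarity(queryVector, x.vector)
    )
    .slice(0, topK);
}
```

In practice the ranking happens inside your vector database (Pinecone, Supabase pgvector, Cloudflare Vectorize, etc.), but the point stands: one index serves text, image, PDF, and audio queries alike.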
Today, we launched the embedding model in a closed alpha and put together simple documentation to get you started. Drop me an email at [email protected] or DM me on X with your use case and I would be happy to give you free and unlimited access in exchange for feedback!
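
To give a feel for the getting-started flow, here's a rough sketch of what a request could look like. The endpoint path, parameter names (`type`, `url`), and response shape here are illustrative assumptions, not the final API; the alpha documentation is the source of truth.

```ts
// Hypothetical request to the embedding endpoint. The path, parameters,
// and response shape are assumptions for illustration only.
async function embedPdf(fileUrl: string, apiKey: string): Promise<number[][]> {
  const res = await fetch("https://api.jigsawstack.com/v1/embedding", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey,
    },
    body: JSON.stringify({
      type: "pdf",  // or "text", "image", "audio"
      url: fileUrl, // large documents are chunked server-side
    }),
  });

  if (!res.ok) {
    throw new Error(`Embedding request failed: ${res.status}`);
  }

  // Built-in chunking means a single document can come back as
  // multiple 1024-dimensional vectors, one per chunk.
  const { embeddings } = (await res.json()) as { embeddings: number[][] };
  return embeddings;
}
```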
Some limitations:
- While our model does support video, video embedding is expensive to run, even for a 10-second clip. We're finding ways to reduce the cost before launching it, but you can embed the audio track of a video in the meantime.
- Text embedding returns the fastest, while other modalities can take a few seconds, which we expected since most of them require preprocessing.