I understand the model is, like other commercial ones, available exclusively through their API, right?
There was a part here about multilingualism but that was wrong! Sorry!
FWIW: Voyage also has separate `law`, `code`, and `finance` models. See [1]
Really cool results, anyway.
voyage-multimodal-3 is multilingual as well, supporting the same set of languages as voyage-3.
It is interesting that you're not as up front about multilingualism as Cohere is. They seem to mention it a lot, which is what led to my confusion.
Words like 'you' and 'apple' will typically map to a single token. Rarer terms like 'pikachu' may be split into sub-word pieces such as pik-a-chu.
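To illustrate, here's a minimal sketch using the open-source tiktoken BPE tokenizer (chosen purely for illustration; an embedding provider's own tokenizer may split these words differently):

```python
# Minimal sketch: inspecting how a BPE tokenizer splits words into tokens.
# tiktoken is used purely for illustration; the exact splits depend on the
# tokenizer's vocabulary, so other tokenizers may break words up differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["you", "apple", "pikachu"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{word!r} -> {len(token_ids)} token(s): {pieces}")

# Common words usually map to a single token; rarer words tend to be
# split into several sub-word pieces.
```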
Until now, the standard approach to creating multimodal models involved
training separate components for different modalities and then stitching them
together to roughly mimic some of this functionality. These models can
sometimes be good at performing certain tasks, like describing images, but
struggle with more conceptual and complex reasoning.
We designed Gemini to be natively multimodal, pre-trained from the start on
different modalities. Then we fine-tuned it with additional multimodal data to
further refine its effectiveness. This helps Gemini seamlessly understand and
reason about all kinds of inputs from the ground up, far better than existing
multimodal models — and its capabilities are state of the art in nearly every
domain.
One distinction to make here is that token embeddings and the embeddings/vectors output by embedding models are related but separate concepts. There are numerous token embeddings (one per token), which become contextualized as they propagate through the transformer, while an embedding model outputs a single vector/embedding per input (such as a long text, a photo, or a document screenshot).
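A minimal sketch of that distinction, using Hugging Face transformers with a BERT-style encoder and simple mean pooling (both are illustrative assumptions; real embedding models may pool differently):

```python
# Minimal sketch: many contextualized token embeddings come out of the
# transformer, but an embedding model returns ONE vector per input,
# typically by pooling over the token embeddings.
# The model choice and mean pooling here are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

text = "Embedding models map a whole document to a single vector."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state   # shape: (1, num_tokens, hidden_dim)
mask = inputs["attention_mask"].unsqueeze(-1)  # ignore padding positions
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1)  # mean pooling

print(token_embeddings.shape)    # one embedding per token
print(sentence_embedding.shape)  # one embedding per input text
```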
LLMs, including multimodal LLMs, do have embeddings, but those embeddings are learned through generating text rather than through finding similar documents.
Then, in my case it was more "sad" from a commercial point of view, because it means that despite their models potentially being better, almost no one uses them and they are not well known. And that will probably not change, since there is a high barrier to entry: you have to trust them enough to suddenly start using their models through their API out of the blue. Not many people will test, benchmark, and then recommend the models.
Also, it's sad in one last respect, which is not inconsistent with paying their employees:
- If you only offer an API and no way to self-host the commercial models, you are really limiting the pool of potential customers who are looking for alternatives to OpenAI. This is the same kind of somewhat shitty move as Adobe forcing full "cloud" solutions.
I can see why this may be unclear/confusing -- we will correct it. Thank you for the feedback!
0.4 cosine similarity is pretty good for real-world data that isn't a near-identical duplicate.
https://i0.wp.com/blog.voyageai.com/wp-content/uploads/2024/...
why does it pop up at the end?
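For reference, cosine similarity is just the dot product of the two L2-normalized embedding vectors; a quick sketch with NumPy, using made-up vectors:

```python
# Cosine similarity between two embedding vectors: dot product of the
# L2-normalized vectors. The example vectors are made up for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([0.1, 0.3, -0.2, 0.7])
b = np.array([0.2, 0.1, -0.1, 0.5])
print(cosine_similarity(a, b))
```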
>All CLIP-like models perform poorly on mixed-modality search due to a phenomenon known as the modality gap. As illustrated in the figure below, the closest vector to the snippet “I address you, members of the Seventy-Seventh Congress…” is not its screenshot, but other texts. This leads to search results that are skewed towards items of the same modality; in other words, text vectors will be closer to irrelevant texts than relevant images in the embedding space.
> ... the vectors truly capture the semantic content contained in the screenshots. This robustness is due to the model’s unique approach of processing all input modalities through the same backbone.
With that said, I think this benchmark is a pretty narrow way of thinking about multi-modal embedding. Having text embed close to images of related text is cool and convenient, but doesn't necessarily extend to other notions of related visual expression (e.g. "rabbit" vs a photo of a rabbit). And on the narrow goal of indexing document images, I suspect there are other techniques that could also work quite well.
This seems like a great opportunity for a new benchmark dataset with multi-modal concept representations beyond media-of-text.
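On the modality gap described in the quoted passage, here's a minimal sketch of how one might observe it with an off-the-shelf CLIP model via Hugging Face transformers (the model choice and the screenshot path are illustrative assumptions, not VoyageAI's setup):

```python
# Minimal sketch: probing the "modality gap" in a CLIP-style model.
# If the gap is present, a text query tends to sit closer to other texts
# (even irrelevant ones) than to the image that actually matches it.
# Model choice and the local image path are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

query = "I address you, members of the Seventy-Seventh Congress..."
other_text = "A completely unrelated sentence about cooking pasta."
screenshot = Image.open("speech_screenshot.png")  # hypothetical screenshot of the same speech

text_inputs = processor(text=[query, other_text], return_tensors="pt", padding=True)
image_inputs = processor(images=screenshot, return_tensors="pt")

with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)
    image_emb = model.get_image_features(**image_inputs)

# Normalize so dot products are cosine similarities.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)

print("query vs unrelated text:", float(text_emb[0] @ text_emb[1]))
print("query vs its screenshot:", float(text_emb[0] @ image_emb[0]))
# With a modality gap, the first similarity is often higher than the second,
# which skews mixed-modality search toward same-modality results.
```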
FWIW, there are other deployment options besides the API as well: AWS (https://docs.voyageai.com/docs/aws-marketplace-model-package), Azure (https://docs.voyageai.com/docs/azure-marketplace-managed-app...), Snowflake (https://docs.voyageai.com/docs/snowflake), and vector database integrations (https://docs.voyageai.com/docs/integrations-and-other-librar..., https://milvus.io/docs/integrate_with_voyageai.md, https://docs.pinecone.io/integrations/voyage, https://weaviate.io/developers/weaviate/model-providers/voya..., https://qdrant.tech/documentation/embeddings/voyage/, etc).
https://github.com/tjmlabs/ColiVara
The main benchmark for this is the Vidore leaderboard, where we would love to see how VoyageAI performs compared to the open-source implementations.
Quantitative benchmarks are great, but sparse.
And just to be clear: I don't think delivering strong embeddings for different domains is an easy task. However, it's 2024, not 2016.