How to use self-hosted Weaviate (1.25.4) with Gemini

I am self-hosting Weaviate using docker-compose, and I use LlamaIndex with Gemini to answer simple questions, yet Gemini always replies with "The provided context does not mention X". After some research I found out that I need to set up Weaviate in a certain way to use it with Gemini, but the available resources are very limited. In particular, when I use the docker-compose configurator, I only found support for PaLM models.

I use Weaviate 1.25.4, and my application runs in a Python 3.11 environment.

I would really appreciate an example. Thank you in advance.

Hi @Omar_Khalil ! Welcome to our community :hugs:

Usually, LlamaIndex will take care of the vectorization for you, unless you create the collection beforehand, specify a vectorizer, and do not provide vectors while ingesting your data.
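To make that concrete, here is a sketch of a collection definition that pins the vectorizer on the Weaviate side, so the server (not LlamaIndex) embeds your data. The class name, property, and the `projectId`/`modelId` values below are illustrative placeholders, not your actual setup:

```python
# Sketch of a Weaviate class schema (a /v1/schema REST payload) that pins the
# vectorizer to text2vec-palm, so Gemini / Vertex AI embeddings are used
# server-side. The projectId and modelId values are placeholders.
collection_schema = {
    "class": "Document",
    "vectorizer": "text2vec-palm",
    "moduleConfig": {
        "text2vec-palm": {
            "projectId": "your-gcp-project-id",  # placeholder
            "modelId": "textembedding-gecko",    # placeholder model name
        },
        "generative-palm": {
            "projectId": "your-gcp-project-id",  # placeholder
        },
    },
    "properties": [
        {"name": "content", "dataType": ["text"]},
    ],
}
```

If you create this collection up front and then ingest documents through LlamaIndex without supplying your own vectors, the Weaviate module does the embedding; otherwise LlamaIndex embeds client-side with its own model.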

Can you share some reproducible code so we can better understand and help you?

Thanks!

Hi @Omar_Khalil,

The text2vec-palm and generative-palm modules handle Gemini and Vertex AI models from Google, so you can use these modules for Gemini :wink:

We are working to rename these modules to text2vec-google and generative-google, as this is confusing for a lot of people.
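For reference, a minimal docker-compose sketch that enables both modules on a self-hosted 1.25.4 instance might look like the following (ports, image tag, and the anonymous-access setting are assumptions; adapt them to your deployment):

```yaml
# Minimal sketch: self-hosted Weaviate 1.25.4 with the PaLM (Google) modules
# enabled, so Gemini / Vertex AI can be used for vectorization and generation.
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.25.4
    ports:
      - "8080:8080"
      - "50051:50051"
    environment:
      QUERY_DEFAULTS_LIMIT: "25"
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"  # assumption: dev setup
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
      DEFAULT_VECTORIZER_MODULE: "text2vec-palm"
      ENABLE_MODULES: "text2vec-palm,generative-palm"
      CLUSTER_HOSTNAME: "node1"
```

With the modules enabled here, you can pass your Google API key per request via the client headers rather than baking it into the compose file.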


By the way, check our recipes for generative search.

You can find both examples here: