I am self-hosting Weaviate using docker-compose. I use LlamaIndex and Gemini to answer simple questions, yet Gemini always replies with “The provided context does not mention X”. After some research I found out that I need to set up Weaviate in a certain way for it to work with Gemini, but the available resources are very limited, especially for the docker-compose configurator, where I only found support for PaLM models.
I use Weaviate 1.25.4 and my application runs in a Python 3.11 environment.
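For reference, here is roughly the relevant part of my docker-compose.yml (trimmed; I enabled the PaLM modules since that is all the configurator offers, and the values are placeholders for what I actually use):

```yaml
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.25.4
    ports:
      - "8080:8080"
      - "50051:50051"
    environment:
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
      DEFAULT_VECTORIZER_MODULE: "none"
      ENABLE_MODULES: "text2vec-palm,generative-palm"
      CLUSTER_HOSTNAME: "node1"
```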
I would really appreciate an example. Thank you in advance.
Usually, LlamaIndex takes care of the vectorization for you, unless you create the collection beforehand, specify a vectorizer, and do not provide the vectors when ingesting your data.
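In other words, if you let LlamaIndex do the embedding with Gemini on the client side, you don't need any vectorizer module enabled in Weaviate at all. Here is a minimal, untested sketch of what I mean, assuming you have the llama-index-vector-stores-weaviate, llama-index-llms-gemini, and llama-index-embeddings-gemini packages installed and a GOOGLE_API_KEY environment variable set (collection name and model IDs are just examples):

```python
import weaviate
from llama_index.core import Document, Settings, StorageContext, VectorStoreIndex
from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.llms.gemini import Gemini
from llama_index.vector_stores.weaviate import WeaviateVectorStore

# Embed and generate on the LlamaIndex side, so Weaviate only stores the vectors
Settings.llm = Gemini(model="models/gemini-1.5-flash")
Settings.embed_model = GeminiEmbedding(model_name="models/embedding-001")

# v4 Python client, default local ports 8080 (HTTP) and 50051 (gRPC)
client = weaviate.connect_to_local()

# LlamaIndex creates the "Docs" collection and supplies vectors at ingest time
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="Docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="Weaviate is an open-source vector database.")],
    storage_context=storage_context,
)

print(index.as_query_engine().query("What is Weaviate?"))
client.close()
```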
Can you share some reproducible code so we can better understand and help you?