Querying on llama-index Weaviate Vector Store

Hey everyone!
I have a question about the llama-index loader for Weaviate; can anyone help me with it?

from llama_index import VectorStoreIndex, StorageContext
from llama_index.vector_stores import WeaviateVectorStore

vector_store = WeaviateVectorStore(weaviate_client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(data, storage_context=storage_context)
query_engine = index.as_query_engine()
response = query_engine.query("xyz")

print(response)

I am loading data into the Weaviate vector store using llama-index in the code above. The problem is that every time I run a query, the data is loaded into Weaviate all over again. What I want is that once the data is indexed and stored in Weaviate, running a query accesses the already loaded data instead of re-indexing everything. How can I solve this issue?

Hi @Vruti_Dobariya - I don’t have any experience with LlamaIndex, so I’m not sure exactly what’s going on here. But typically, when something like this happens, it’s because the data ingestion process is being repeated.

Can you see in your code if you might be doing that multiple times?

Also - hey @erika-cardenas - would you know what’s going on here?

So, here in the code, I load the data and index it in the first three lines. To answer a query, the index is retrieved and the query is answered accordingly. llama-index provides a feature to persist this index to local storage, so it can be accessed again without reloading the data. But what I want is that once the data is stored and indexed in Weaviate, I can run queries directly against Weaviate, without persisting the index on my local machine and without re-uploading the data.

For example, in the documentation below:

They showed how, if the data is already loaded in Pinecone, it can be accessed directly without needing to store anything locally. Do we have the same for Weaviate? If we do, could you please let me know how to implement it?
