I am using the RetrieverQueryEngine from llama_index to build a custom query engine that combines a vector index and a knowledge graph index. Previously, when I only needed a vector index, I stored it in Weaviate. Now I am using both. What would you suggest as a good, production-ready way to remotely store both my vector index AND my knowledge graph index? Is it enough to create two stores, like below, or is it more complicated?
from llama_index.vector_stores import WeaviateVectorStore

# `client` is my already-connected weaviate.Client; `nodes` already have embeddings attached

# construct the vector store
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="content_vec", text_key="content")
vector_store.add(nodes)

# construct the knowledge graph store
kg_store = WeaviateVectorStore(weaviate_client=client, index_name="content_kg", text_key="content")
kg_store.add(nodes)
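
For context, this is roughly how I was planning to build the two indexes on top of those stores. It is only a sketch of my intent, not something I have verified; in particular, I am not sure whether pointing the knowledge graph side at a second WeaviateVectorStore is valid at all, or whether it needs a dedicated graph_store instead.

from llama_index import StorageContext, VectorStoreIndex, KnowledgeGraphIndex

# vector index on top of the already-populated Weaviate store
vector_index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

# knowledge graph index -- this is the part I am unsure about: I pass the
# second Weaviate store here, but it may need a proper graph_store instead
kg_storage = StorageContext.from_defaults(vector_store=kg_store)
kg_index = KnowledgeGraphIndex(nodes, storage_context=kg_storage)

Both indexes would then back the retrievers I combine in the RetrieverQueryEngine.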