As a novice to LlamaIndex and Weaviate, I am following the code in Structured Hierarchical Retrieval - LlamaIndex 🦙 v0.10.14.
Here Weaviate uses the default OpenAI embedding model. How can I customize it to use mistral-embed? From my searches, if I am not mistaken, I need to define a schema specifying which text2vec module to use, but I can't figure out how to do this in the code.
How should I adjust the code below so that it creates the embeddings with a custom embedding model?
import weaviate

auth_config = weaviate.AuthApiKey(
    api_key="XRa15cDIkYRT7AkrpqT6jLfE4wropK1c1TGk"
)
client = weaviate.Client(
    "https://llama-index-test-v0oggsoz.weaviate.network",
    auth_client_secret=auth_config,
)

class_name = "LlamaIndex_docs"

from llama_index.vector_stores.weaviate import WeaviateVectorStore
from llama_index.core import VectorStoreIndex, StorageContext

vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name=class_name
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
doc_index = VectorStoreIndex.from_documents(
    docs, storage_context=storage_context
)