Mistral-embed as custom vectorizer

As a novice to LlamaIndex and Weaviate, I am following the code in Structured Hierarchical Retrieval - LlamaIndex 🦙 v0.10.14.

Here Weaviate uses the default OpenAI embedding models. How can I customize it to use mistral-embed? From my searches, if I am not mistaken, I need to create a schema that specifies which text2vec module to use, but I keep failing to work out how to do this in the code.

How should I adjust the code below so that it creates the embeddings based on a custom embedding model?

import weaviate

auth_config = weaviate.AuthApiKey(
    api_key="XRa15cDIkYRT7AkrpqT6jLfE4wropK1c1TGk"
)
client = weaviate.Client(
    "https://llama-index-test-v0oggsoz.weaviate.network",
    auth_client_secret=auth_config,
)

class_name = "LlamaIndex_docs"

from llama_index.vector_stores.weaviate import WeaviateVectorStore
from llama_index.core import VectorStoreIndex, StorageContext

vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name=class_name
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# `docs` is the list of Documents loaded earlier in the tutorial
doc_index = VectorStoreIndex.from_documents(
    docs, storage_context=storage_context
)

Hi!

When using those integrations, it is a good idea to create the class beforehand and have the correct vectorizer configured.
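Here is a minimal sketch with the v3 Python client. Since mistral-embed is served by Mistral's own API rather than a Weaviate module, the vectorizer module and the model name below are placeholder assumptions you would swap for whatever you end up using:

import weaviate

auth_config = weaviate.AuthApiKey(api_key="YOUR_WEAVIATE_KEY")
client = weaviate.Client(
    "https://your-cluster.weaviate.network",  # hypothetical endpoint
    auth_client_secret=auth_config,
)

# Create the class up front so the vectorizer is under your control;
# "text2vec-huggingface" and the model below are placeholders, not the
# only choices.
class_obj = {
    "class": "LlamaIndex_docs",
    "vectorizer": "text2vec-huggingface",
    "moduleConfig": {
        "text2vec-huggingface": {
            "model": "sentence-transformers/all-MiniLM-L6-v2",
        }
    },
}
client.schema.create_class(class_obj)

After that, you can point WeaviateVectorStore at this class exactly as in your snippet, and the class keeps the vectorizer you configured.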

I have not played with LlamaIndex yet, only with LangChain, but I did exactly that in one of the integrations in our recipes.

This will allow you to run queries through LlamaIndex as well as directly in Weaviate.
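For the direct route, a nearText query against the class would look roughly like this (the query text and limit are arbitrary examples, and it reuses the `client` from the sketch above); it relies on the class's server-side vectorizer:

result = (
    client.query
    .get("LlamaIndex_docs", ["text"])
    .with_near_text({"concepts": ["structured hierarchical retrieval"]})
    .with_limit(3)
    .do()
)
print(result)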

I believe you can run this model through Hugging Face? text2vec-huggingface | Weaviate - Vector Database
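If you go that route, the text2vec-huggingface module needs your Hugging Face API key, which the Python client can pass as an extra header; a sketch (the key is a placeholder):

client = weaviate.Client(
    "https://your-cluster.weaviate.network",  # hypothetical endpoint
    auth_client_secret=auth_config,
    additional_headers={
        "X-HuggingFace-Api-Key": "YOUR_HUGGINGFACE_API_KEY",
    },
)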

Another option is using text2vec-transformers, where you can customize a container with the model you want.
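On the schema side that only changes the vectorizer name; the model itself is chosen when you build and run the inference container alongside Weaviate (a sketch, reusing the class name from above):

class_obj = {
    "class": "LlamaIndex_docs",
    # The model is selected in the text2vec-transformers inference
    # container you deploy, not in the schema itself.
    "vectorizer": "text2vec-transformers",
}
client.schema.create_class(class_obj)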

Let me know if that helps.