We are currently using PyTorch and Transformers in a Vectoriser class that uses the model VECTOR_MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2" to vectorise SQL, Python, and YAML files before they are sent to Weaviate Cloud. Roughly, the class does something like the sketch below.
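A simplified sketch (names here are illustrative, not our exact code):

```python
from sentence_transformers import SentenceTransformer  # pulls in PyTorch + Transformers

VECTOR_MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"

class Vectoriser:
    def __init__(self) -> None:
        # Loading the model brings in the full PyTorch/Transformers stack
        self.model = SentenceTransformer(VECTOR_MODEL_NAME)

    def vectorise(self, texts: list[str]) -> list[list[float]]:
        # One 384-dimensional vector per input text; these are then sent to Weaviate Cloud
        return self.model.encode(texts).tolist()
```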
Can this be set up in Weaviate Cloud? It currently requires the heavy PyTorch and Transformers libraries to manage this process.
Has anyone found a better alternative?
hi @Analitiq !!
One thing you can do is to point your text2vec_openai vectorizer to a different base_url.
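For example, with the v4 Python client the collection configuration could look roughly like this (a sketch, not a full setup; the base_url and model are placeholders for whatever OpenAI-compatible embedding service you run):

```python
import weaviate
from weaviate.classes.config import Configure

# Connection details omitted; connect_to_local() is just for illustration
client = weaviate.connect_to_local()

# Point the text2vec-openai module at a self-hosted, OpenAI-compatible
# embedding endpoint instead of api.openai.com.
client.collections.create(
    "CodeFiles",
    vectorizer_config=Configure.Vectorizer.text2vec_openai(
        model="sentence-transformers/all-MiniLM-L6-v2",  # whatever model your endpoint serves
        base_url="http://my-embedding-service.local:8000",  # placeholder URL
    ),
)

client.close()
```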
Here is an example of how to accomplish this using Weaviate + KubeAI:
Let me know if this helps!
Thanks!
By the way, you will soon be able to run some models from within Weaviate.
Check this out: Weaviate Cloud | Embedding
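When that lands, the collection could use the built-in embedding service instead of an external endpoint. A rough sketch, assuming the text2vec_weaviate vectorizer in the v4 Python client (vectorizer and model names here are assumptions based on the Weaviate Embeddings docs, not a final API):

```python
import weaviate
from weaviate.classes.config import Configure
from weaviate.classes.init import Auth

# Cluster URL and API key are placeholders
client = weaviate.connect_to_weaviate_cloud(
    cluster_url="https://your-cluster.weaviate.cloud",
    auth_credentials=Auth.api_key("YOUR_WEAVIATE_API_KEY"),
)

# Weaviate Embeddings generates vectors inside Weaviate Cloud,
# so no PyTorch/Transformers is needed on the client side.
client.collections.create(
    "CodeFiles",
    vectorizer_config=Configure.Vectorizer.text2vec_weaviate(
        model="Snowflake/snowflake-arctic-embed-m-v1.5",  # example model
    ),
)

client.close()
```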
@DudaNogueira thank you very much.
I have read this, thank you. Definitely worth a closer look.