Multiple Verba Weaviate databases on a single machine (Apple Silicon) - no Docker (update)


I am new to LLMs and have never tried to set up a RAG server, so I must congratulate you on how easy and fast the Verba process was! My quick test case already shows how useful this tool will be.

What I would like to do is create multiple local LLM small-domain experts, each with its own local weaviate database. I can’t use Docker, as it has no Apple Silicon acceleration. Right now I’m running Ollama locally with Llama3 as the LLM. I see that the running Verba instance points to ~/.local/share/weaviate, so every instance would put its domain-specific documents in that one location. Would it be possible to allow putting the weaviate database in the directory from which Verba is started (or perhaps to add a configuration option that points to the weaviate database)?
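To illustrate the kind of configuration option I have in mind, here is a minimal sketch. The env var name `VERBA_WEAVIATE_DATA` and the helper `resolve_data_path` are hypothetical, not an existing Verba feature; the fallback path is the default location I observed:

```python
import os
from pathlib import Path

def resolve_data_path() -> Path:
    """Return the directory the embedded weaviate database should use.

    VERBA_WEAVIATE_DATA is a *hypothetical* override env var; if unset,
    fall back to the current default under the user's home directory.
    """
    override = os.environ.get("VERBA_WEAVIATE_DATA")
    if override:
        return Path(override).expanduser().resolve()
    return Path.home() / ".local" / "share" / "weaviate"

# Each domain expert could then be started from its own folder, e.g.:
#   VERBA_WEAVIATE_DATA=./weaviate-data verba start
print(resolve_data_path())
```

With something like this, each instance started from a different directory would keep its own database, while existing setups that don't set the variable keep working unchanged.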

Server Setup Information

  • Weaviate Server Version: 1.0.1
  • Deployment Method: pip install goldenverba
  • Multi Node? Number of Running Nodes: 1
  • Client Language and Version: Llama3
  • Multitenancy?:

Any additional Information

Ollama version is 0.1.48.

As a workaround hack, I’ve just tried to make ~/.local/share/weaviate a symlink to a local folder containing the weaviate database. Unfortunately that fails on ‘verba start’ with an error about weaviate already existing. The Python code appears to detect that it’s a link rather than a directory.
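For what it’s worth, a symlink to a directory still behaves as a directory for most purposes; it only trips checks that explicitly test for links. A small stdlib demonstration (I don’t know which check Verba actually performs, so this is just to show the distinction):

```python
import os
import tempfile

# A symlinked directory is both a directory and a link: os.path.isdir()
# follows symlinks, while os.path.islink() reports the link itself.
with tempfile.TemporaryDirectory() as tmp:
    real = os.path.join(tmp, "real_weaviate")
    link = os.path.join(tmp, "weaviate")
    os.mkdir(real)
    os.symlink(real, link)
    print(os.path.isdir(link))    # True - usable as a directory
    print(os.path.islink(link))   # True - but detectably a symlink
```

So if the startup check used `os.path.isdir()` alone, the symlink trick would work; an explicit `islink()` (or an exclusive-create of the path) would reject it, which matches what I’m seeing.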