[Verba] On sending message: "Couldn't connect to Weaviate, check your URL/API KEY"

Description

I am running into intermittent errors about not being able to connect to Weaviate while attempting to run locally with Ollama. The error appears only some of the time, and regenerating the message is often enough to get past it, but it happens frequently enough to be quite annoying:

INFO:     127.0.0.1:61907 - "POST /api/get_suggestions HTTP/1.1" 200 OK
✔ Received query: somequery
✔ Connecting new Client
ℹ Connecting to Weaviate Embedded
✘ Couldn't connect to Weaviate, check your URL/API KEY: Embedded DB did
not start because processes are already listening on ports http:8079 and
grpc:50050use weaviate.connect_to_local(port=8079, grpc_port=50050) to connect
to the existing instance
⚠ Query failed: Couldn't connect to Weaviate, check your URL/API KEY:
Embedded DB did not start because processes are already listening on ports
http:8079 and grpc:50050use weaviate.connect_to_local(port=8079,
grpc_port=50050) to connect to the existing instance
INFO:     127.0.0.1:61898 - "POST /api/query HTTP/1.1" 200 OK
✔ Succesfully Connected to Weaviate
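
For reference, the call the error message points at, i.e. attaching to the Weaviate instance that is already listening rather than starting a new embedded one, would look roughly like this with the v4 Python client. I have not patched Verba to do this; it is just a sketch of what the message suggests:

import weaviate  # weaviate-client v4

# Attach to the Weaviate instance already listening on the embedded ports
# instead of starting a second embedded instance. Ports are the ones from
# the error text above.
client = weaviate.connect_to_local(port=8079, grpc_port=50050)
try:
    print(client.is_ready())  # True if the existing instance answers
finally:
    client.close()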

(I can’t embed multiple things in a post here so here’s a link to the github issue I am mostly crossposting this from: "Couldn't connect to Weaviate, check your URL/API KEY" · Issue #335 · weaviate/Verba · GitHub)

So I don’t know why this is failing sometimes and succeeding others:

[Screenshot: two failed tries and one success, the only difference between them being the time of the submission]

Installation

  • [x] pip install goldenverba
  • pip install from source
  • Docker installation

If you installed via pip, please specify the version:

Python version? 3.11
package version? Whatever’s latest on pip

Weaviate Deployment

  • [x] Local Deployment
  • Docker Deployment
  • Cloud Deployment

Configuration

Reader: ?
Chunker: ?
Embedder: Ollama (nomic-embed-text)
Retriever: Advanced
Generator: Ollama (custom finetuned llama 3)

Steps to Reproduce

  • pip install goldenverba
  • Run both the embedder and the generator with Ollama
  • Upload 47 .md documents
  • Chat a few times
  • Experience the issue

Additional context

For some reason Verba does not seem to work when offline either, even though I assumed a local deployment would be just that: local. What’s going on, and how do I force this to 1) run locally and 2) connect to the server correctly so I don’t get this issue? I’ll make a separate post for the true offline running issues if need be. I put the full traceback for that second issue (not being able to run locally while offline) in a gist so that it doesn’t clutter this post up needlessly: offline_run_attempt · GitHub

Hi @e-p-armstrong!! Welcome to our community :hugs:

Considering the error message, it seems that you already have another Weaviate Embedded instance running before starting Verba.

If you run:

ps aux | grep weaviate

you should see a running Weaviate server. You need to kill that PID before starting Verba, otherwise you will run into this issue.
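
If you want to double-check which ports are occupied before killing anything, a small Python sketch like this works as a diagnostic (port numbers taken from the error message above; this is not part of Verba):

import socket

# Ports used by Weaviate Embedded, taken from the error message above.
ports = {"http": 8079, "grpc": 50050}

for name, port in ports.items():
    try:
        # If the connection succeeds, something is already listening there.
        with socket.create_connection(("127.0.0.1", port), timeout=1):
            print(f"{name} port {port}: in use, kill the old process first")
    except OSError:
        print(f"{name} port {port}: free")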

Let me know if I can assist you further on this.

Thanks!