'NoneType' object not iterable

Description

I installed Verba using the Docker method from the GitHub repo, but when I deploy it and type something into the chat on the local site, I get the error “Something went wrong: ‘NoneType’ object not iterable” and it stays on “retrieving chunks” unless I refresh the page. I saw a previous post here where they fixed this by deleting Verba and redownloading it; I tried that, but it did not fix the issue for me. I am wondering if something else could be wrong with my setup/installation. Everything else seems to be running fine as far as I can tell: all the environment variables appear, and I can upload documents successfully.
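For context, “‘NoneType’ object is not iterable” is Python’s standard TypeError when code tries to loop over a value that is None, typically because some function returned nothing instead of a list. A minimal illustration (generic Python, not Verba’s actual code; `fetch_chunks` is a hypothetical stand-in for the retriever):

```python
def fetch_chunks():
    # Hypothetical retriever that returns None instead of an
    # empty list when nothing is found -- a common source of
    # this exact error message.
    return None

try:
    # Looping over the None result raises the error seen in the UI.
    for chunk in fetch_chunks():
        print(chunk)
except TypeError as exc:
    print(exc)  # 'NoneType' object is not iterable
```

So the message usually points at the retrieval step handing back no result, which matches the chat hanging on “retrieving chunks”.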

Server Setup Information

  • Weaviate Server Version:
  • Deployment Method: Docker/Ollama
  • Multi Node? Number of Running Nodes:
  • Client Language and Version:
  • Multitenancy?:

Any additional Information

There is nothing in my .env file, because changing it did not seem to affect how Verba was running at all. Previously I had OLLAMA_URL and OLLAMA_MODEL set.

hi @mchu249 !

I am not familiar with this error.

Can you check Weaviate logs and get some tracebacks?
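If you deployed with Docker Compose, the container logs can usually be pulled with something like the following (the service names `weaviate` and `verba` are assumptions; check the names in your docker-compose.yml):

```shell
# Follow the logs of each service; Ctrl+C to stop.
docker compose logs -f weaviate
docker compose logs -f verba
```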

Otherwise, a way to reproduce it could also help.

Thanks!

This is what I see in the logs after I send something in the chat and get the NoneType error; let me know if this is what you are looking for:

[screenshot of Weaviate logs]

What I did exactly: I installed Git and Docker Desktop, then followed the steps on the GitHub repo to install Verba with Docker. I created the .env file in the Verba directory that was created after cloning and added OLLAMA_MODEL = llama3 and OLLAMA_URL = http://host.docker.internal:11434. In docker-compose.yml, I modified those two keys too, but that was it. After that, I ran the docker commands to start up Verba.
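For reference, based on the values in your post, the .env would normally look like this (one caveat, offered as an assumption: many .env parsers do not strip spaces around `=`, so the no-spaces form is the safe one):

```
OLLAMA_URL=http://host.docker.internal:11434
OLLAMA_MODEL=llama3
```

If your file had spaces around the `=`, that alone could explain why changing it seemed to have no effect.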

Hi,
I had the same issue; it was a config issue in the .env file I had created. Using Ollama, I accidentally applied all of the Ollama params, and that caused the confusion.

Wrong:
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
OLLAMA_EMBED_MODEL=mxbai-embed-large

Correct:
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3

…and everything worked smoothly.

hi @Andy_G !! Welcome to our community :hugs:

Can you check this forum thread?

Also, where are you running this?

I believe there is an issue if running on Docker Desktop on Windows.

I will close this thread in favor of the linked one that has more details on the troubleshooting.

Thanks!