Successfully installed Verba at port 8000, proxy 80 and 443 to 8000, but comes up blank white page?

Description

Successfully installed Verba at port 8000, proxy 80 and 443 to 8000, but comes up blank white page?

Server Setup Information

  • Weaviate Server Version: Debian Linux
  • Deployment Method:
  • Multi Node? Number of Running Nodes: 1
  • Client Language and Version: English
  • Multitenancy?: No

Any additional Information

(verbavenv) getonthis@nginx-ai-2-vm:~/weaviate$ curl http://localhost:8000 gets me the html for the start page.

Logs show:

(verbavenv) getonthis@nginx-ai-2-vm:~/weaviate/Verba$ verba start
ℹ No Ollama Model detected
ℹ No Ollama Model detected
INFO: Will watch for changes in these directories: ['/home/getonthis/weaviate/Verba']
WARNING: "workers" flag is ignored when reloading is enabled.
INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
INFO: Started reloader process [10909] using WatchFiles
ℹ No Ollama Model detected
ℹ No Ollama Model detected
INFO: Started server process [10916]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:37698 - "GET / HTTP/1.1" 200 OK
INFO: 104.63.129.184:0 - "GET / HTTP/1.1" 200 OK
INFO: 104.63.129.184:0 - "GET /static/icon.ico HTTP/1.1" 200 OK
INFO: 104.63.129.184:0 - "GET / HTTP/1.1" 200 OK
INFO: 104.63.129.184:0 - "GET / HTTP/1.1" 200 OK
INFO: 104.63.129.184:0 - "GET / HTTP/1.1" 200 OK
So I know the server is running.
And here is the docker-compose:

      - WEAVIATE_URL_VERBA=localhost:8000
      - OLLAMA_URL=localhost:11434
      - OLLAMA_MODEL=$OLLAMA_MODEL
#      - OLLAMA_EMBED_MODEL=$OLLAMA_EMBED_MODEL
      - UNSTRUCTURED_API_KEY=$UNSTRUCTURED_API_KEY
      - UNSTRUCTURED_API_URL=$UNSTRUCTURED_API_URL

However, when I go to the URL, it's a blank white screen with just the Verba/Weaviate icon, so I know I am hitting the server.
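
(To narrow down whether nginx or Verba is at fault, the direct and proxied responses can be compared, e.g. as follows, where your-public-host is a placeholder for the real domain:)

curl -sI http://localhost:8000/
curl -sI http://localhost:8000/static/icon.ico
curl -sI https://your-public-host/
curl -sI https://your-public-host/static/icon.ico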

Hi!

Can you share the entire docker compose?

Here is a working example I can suggest:

---

services:
  verba:
    # build:
    #   context: ./
    #   dockerfile: Dockerfile
    image: semitechnologies/verba:latest
    ports:
      - 8000:8000
    environment:
      - WEAVIATE_URL_VERBA=http://weaviate:8080
      - OPENAI_API_KEY=$OPENAI_API_KEY
      - COHERE_API_KEY=$COHERE_API_KEY
      - OLLAMA_URL=http://ollama:11434
      #- OLLAMA_URL=http://host.docker.internal:11434
      - OLLAMA_MODEL=deepseek-r1:1.5b
      - OLLAMA_EMBED_MODEL=mxbai-embed-large:latest
      #- UNSTRUCTURED_API_KEY=$UNSTRUCTURED_API_KEY
      #- UNSTRUCTURED_API_URL=$UNSTRUCTURED_API_URL
      #- GITHUB_TOKEN=$GITHUB_TOKEN

    volumes:
      - ./data:/data/
    depends_on:
      weaviate:
        condition: service_healthy
    healthcheck:
      test: wget --no-verbose --tries=3 --spider http://localhost:8000 || exit 1
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 10s
    networks:
      - ollama-docker

  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: semitechnologies/weaviate:1.28.4
    ports:
      - 8080:8080
      - 3000:8080
    volumes:
      - weaviate_data:/var/lib/weaviate
    restart: on-failure:0
    healthcheck:
      test: wget --no-verbose --tries=3 --spider http://localhost:8080/v1/.well-known/ready || exit 1
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 10s
    environment:
      OPENAI_APIKEY: $OPENAI_API_KEY
      COHERE_APIKEY: $COHERE_API_KEY
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      ENABLE_MODULES: 'e'
      CLUSTER_HOSTNAME: 'node1'
    networks:
      - ollama-docker

  # Ollama running within the same docker compose

  ollama:
    image: ollama/ollama:latest
    ports:
      - 7869:11434
    volumes:
      - .:/code
      - ./ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - ollama-docker

volumes:
  weaviate_data: {}


networks:
  ollama-docker:
    external: false
...
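
The $VARIABLE references above are read from your shell environment or from a .env file next to the compose file; a minimal .env would look something like this (values are placeholders):

OPENAI_API_KEY=sk-your-key-here
COHERE_API_KEY=your-key-here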

after running:

docker compose up -d

you should also pull the models to ollama:

docker compose exec -ti ollama ollama pull mxbai-embed-large:latest
docker compose exec -ti ollama ollama pull deepseek-r1:1.5b
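
You can double-check that the pulls worked with:

docker compose exec -ti ollama ollama list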

then restart verba:

docker compose restart verba
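
To confirm everything is wired up, you can check that the containers are healthy and that Weaviate reports ready (the same endpoint the healthcheck uses):

docker compose ps
curl http://localhost:8080/v1/.well-known/ready
curl -I http://localhost:8000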

PS: Make sure that Ollama has plenty of resources. If you run Ollama under Docker, make sure there aren't any resource constraints, otherwise it can fail at ingestion.
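
A quick way to check whether the Ollama container is being throttled, and whether any limits are set at all (0 means unlimited), is:

docker stats --no-stream ollama
docker inspect ollama --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'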

Let me know if this helps!

Thanks!

By the way, just found out that those env vars were being ignored: