Spinning up Docker Containers Using Different Ports

Description

I am trying to run multiple instances of the same Docker image but switching ports. The default ports of 8080 and 50051 for the t2v module work, but I wanted to emulate dev/qa with the same image running just on different ports. How would I do this?

Server Setup Information

---
version: '3.4'
services:
  weaviate:
    command:
    - --host
    - 0.0.0.0
    - --port
    - '8080'
    - --scheme
    - http
    image: cr.weaviate.io/semitechnologies/weaviate:1.24.6
    ports:
    - 8080:8080
    - 50051:50051
   ...

      TRANSFORMERS_INFERENCE_API: 'http://t2v-transformers:8080'
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-transformers'
      ENABLE_MODULES: 'text2vec-transformers'
      CLUSTER_HOSTNAME: 'node1'
  t2v-transformers:
    image: cr.weaviate.io/semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
    environment:
      ENABLE_CUDA: '0'
volumes:
  weaviate_data:
...

Any additional Information

Hi @tstra !

Welcome to our community :hugs:

In Docker, whenever you map ports, the first number is the port on your host, and the second is the port inside the container.

So the first number must be unique, meaning you cannot have two containers, even from different projects, mapped to the same host port, for example 8080.
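A quick way to see why: a Docker port mapping is just a listener bound on the host, and only one listener can bind a given host address/port at a time. This is a plain-socket sketch of the same conflict, with no Docker involved:

```python
import socket

# First "container": bind a host port. Port 0 lets the OS pick a free one,
# standing in for whatever host port your first mapping uses.
first = socket.socket()
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]
first.listen()

# Second "container": try to map the very same host port.
second = socket.socket()
try:
    second.bind(("127.0.0.1", port))  # same host port -> "address already in use"
    conflict = False
except OSError:
    conflict = True
finally:
    second.close()
    first.close()

print(conflict)  # True: the second bind fails, just like a second 8080:8080 mapping would
```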

If I understand correctly, you want multiple Weaviate servers, mapped to different ports, all using the same t2v model, right?

In that case, I believe the best approach is to run your t2v model on its own network and attach your Weaviate servers to it.

Something along these lines:

First, create an attachable network for your model:

docker network create --attachable t2v

Now, create a separate Docker Compose file containing only your t2v transformer, and explicitly attach it to the network you just created, something like this:

version: '3.4'
services:
  t2v-transformers:
    image: cr.weaviate.io/semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
    environment:
      ENABLE_CUDA: '0'
    networks:
        - t2v
        - default        
networks:
  default:
    external: false
  t2v:
    external: true
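With the network in place, you can bring the model up from its own folder and confirm it joined the shared network. (The folder name here is an assumption; use wherever you saved this compose file.)

```shell
cd t2v-transformers/
docker compose up -d
docker network inspect t2v   # the t2v-transformers container should be listed
```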

Now you have a working model that can be attached to any other Weaviate server.

To spin up a new Weaviate server, you can create, for example, a folder with a docker-compose file:

weaviate-prod/docker-compose.yaml

with this content:

version: '3.4'
services:
  weaviate:
    command:
    - --host
    - 0.0.0.0
    - --port
    - '8080'
    - --scheme
    - http
    image: cr.weaviate.io/semitechnologies/weaviate:1.24.6
    ports:
    - 8080:8080
    - 50051:50051
    volumes:
    - weaviate_data:/var/lib/weaviate
    restart: on-failure:0
    networks:
        - t2v
        - default       
    environment:
      TRANSFORMERS_INFERENCE_API: 'http://t2v-transformers:8080'
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-transformers'
      ENABLE_MODULES: 'text2vec-transformers'
      CLUSTER_HOSTNAME: 'node1'     
networks:
  default:
    external: false
  t2v:
    external: true
volumes:
  weaviate_data:

Now, for creating your QA Weaviate server, you can copy this very same docker-compose file and change only the folder name and the mapped ports.
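For instance, a `weaviate-qa/docker-compose.yaml` could be identical except for the host side of the port mappings (8081 and 50052 here are arbitrary free host ports, not anything Weaviate requires):

```yaml
# weaviate-qa/docker-compose.yaml — same as the prod file, only the host ports differ
services:
  weaviate:
    ports:
    - 8081:8080     # host 8081 -> container 8080 (HTTP)
    - 50052:50051   # host 50052 -> container 50051 (gRPC)
```

The container-side ports (8080 and 50051) stay the same, since inside each Compose project the container still listens on its defaults.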

So now you can run both weaviate-prod and weaviate-qa side by side, on the same machine or Docker Swarm, using the same t2v model instance.
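On the client side, you then just point at the right host ports per environment. A small Python sketch, assuming the v4 weaviate-client's `connect_to_local(port=..., grpc_port=...)` call and the QA ports chosen above (adjust to whatever host ports you actually mapped):

```python
# Host ports per environment — these values are assumptions that must match
# the "ports:" section of each compose file.
PORTS = {
    "prod": {"http": 8080, "grpc": 50051},
    "qa":   {"http": 8081, "grpc": 50052},
}

def connect_args(env: str) -> dict:
    """Return kwargs for weaviate.connect_to_local() for the given environment."""
    p = PORTS[env]
    return {"port": p["http"], "grpc_port": p["grpc"]}

# e.g. client = weaviate.connect_to_local(**connect_args("qa"))
print(connect_args("qa"))  # {'port': 8081, 'grpc_port': 50052}
```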

Let me know if this is helpful!

Thanks!