Curl localhost:8080/vectorize doesn't work

Hi @Mindaugas !!! Welcome to our community!

Nice catch!

I figured out that this service requires the Content-Type header.

So this will work:

curl localhost:9090/vectorize -d '{"texts": ["hello"], "images":[]}'  --header 'Content-Type: application/json'

Note: this assumes the inference container is exposed on port 9090 by adding this port mapping:

  multi2vec-clip:  # Set the name of the inference container
    ports:
      - 9090:8080
    image: cr.weaviate.io/semitechnologies/multi2vec-clip:sentence-transformers-clip-ViT-B-32-multilingual-v1
    environment:
      ENABLE_CUDA: 0 # set to 1 to enable
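
If you prefer Python, here is a minimal sketch of the same /vectorize call using the requests library, assuming the container is mapped to localhost:9090 as in the snippet above:

# Minimal sketch: same /vectorize request in Python, assuming the
# multi2vec-clip container is mapped to localhost:9090 as above.
import requests

payload = {"texts": ["hello"], "images": []}

# Passing the payload via json= makes requests send it as JSON and
# set the Content-Type: application/json header automatically.
response = requests.post("http://localhost:9090/vectorize", json=payload)
response.raise_for_status()

print(response.json())  # vectors for the submitted texts/images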

I have also updated the docs:

Also, a nice place to check our inference model tests is here:

Let me know if this helps :slight_smile:
