How to configure and verify which qna-openai model is set for a WCS instance


I would like to know the Weaviate Client 4.x code to set a specific model for question answering, for example with qna-openai, on a WCS instance. Also, what code can you use to display the model configured for a collection? When I fetch the collection's config with a get, I am not seeing a qna model name. Lastly, my understanding is that with WCS you can only set a vectorizer model for embeddings and then a QnA model, but not a generative one too; is that true? If so, what does Weaviate use when it summarizes results on search? Would that be the model set for QnA?

Server Setup Information

  • Weaviate Server Version:
  • Deployment Method: WCS
  • Multi Node? Number of Running Nodes: 1
  • Client Language and Version: Python and Weaviate Client 4.x
  • Multitenancy?: No

Any additional Information

I would like to set the qna-openai module to OpenAI's Instruct model, and set the vectorizer to a different OpenAI model, their Large one.

hi @pj812 ! Welcome to our community! :hugs:

Sorry for the delay here! Missed this one :grimacing:

Ok, so first, you can check which modules/integrations are installed on a Weaviate instance with:
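In the Python v4 client, that check can be done with `client.get_meta()`. A minimal sketch, assuming a WCS cluster (the cluster URL and API key below are placeholders; on older 4.x releases the connection helper is named `connect_to_wcs` instead):

```python
import weaviate
from weaviate.classes.init import Auth

# Connect to a WCS instance; URL and key are placeholders
client = weaviate.connect_to_weaviate_cloud(
    cluster_url="https://your-cluster.weaviate.network",
    auth_credentials=Auth.api_key("YOUR_WCS_API_KEY"),
)

meta = client.get_meta()
print(meta["modules"])  # the enabled modules/integrations

client.close()
```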


you should see something like:

'modules': {'generative-aws': {'documentationHref': '',
   'name': 'Generative Search - AWS'},
  'generative-cohere': {'documentationHref': '',
   'name': 'Generative Search - Cohere'},
  'generative-ollama': {'documentationHref': '',
   'name': 'Generative Search - Ollama'},

this is how you know if a module/integration is installed.

Now, you need to create a collection, and make use of some of those modules/integrations.
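For example, with the Python v4 client you can create a collection that uses OpenAI for both the vectorizer and the generative integration. A sketch, where the collection name, property name, and embedding model are placeholders/examples, not required values:

```python
from weaviate.classes.config import Configure, DataType, Property

# "Articles" and "body" are placeholder names; the model argument is an
# example (one of OpenAI's large embedding models)
client.collections.create(
    name="Articles",
    properties=[Property(name="body", data_type=DataType.TEXT)],
    vectorizer_config=Configure.Vectorizer.text2vec_openai(
        model="text-embedding-3-large"
    ),
    generative_config=Configure.Generative.openai(),
)
```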

Assuming you have a collection and want to inspect the vectorizer or the generative modules/integrations that are configured for it, you can do:
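With the v4 client, the collection config carries this information; a sketch, assuming a collection named `Articles` (placeholder):

```python
# Fetch the collection configuration and print its vectorizer settings
collection = client.collections.get("Articles")  # placeholder name
config = collection.config.get()
print(config.vectorizer_config)
```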


output example:

_VectorizerConfig(vectorizer=<Vectorizers.TEXT2VEC_OLLAMA: 'text2vec-ollama'>, model={'apiEndpoint': 'http://host.docker.internal:11434', 'model': 'all-minilm'}, vectorize_collection_name=True)



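The generative integration is on the same config object (again, `Articles` is a placeholder name):

```python
# The same config object also carries the generative module settings
config = client.collections.get("Articles").config.get()
print(config.generative_config)
```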
output example:

_GenerativeConfig(generative=<GenerativeSearches.OLLAMA: 'generative-ollama'>, model={'apiEndpoint': 'http://host.docker.internal:11434', 'model': 'llama3'})

For the qna-openai module/integration: as long as you have the module/integration enabled and its corresponding configuration in place (an OpenAI API key, for example), it will expose the ask operator and be able to consume those services.
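As far as I know the v4 Python client does not have a fluent helper for ask, but you can send the GraphQL ask operator through a raw query. A sketch, with the collection name and question as placeholders:

```python
# qna-openai is consumed via the GraphQL `ask` operator; the v4 Python
# client can send it as a raw GraphQL query
result = client.graphql_raw_query(
    """
    {
      Get {
        Articles(
          ask: { question: "Who wrote the article?" }
          limit: 1
        ) {
          body
          _additional { answer { result hasAnswer } }
        }
      }
    }
    """
)
print(result.get)
```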

So whenever you use generate you will be using generative-openai, and only when you use ask will you be using qna-openai.
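For comparison, a generate call with the v4 client looks like this (collection name, query, and prompt are placeholders):

```python
# Generative search: this goes through generative-openai, not qna-openai
articles = client.collections.get("Articles")
response = articles.generate.near_text(
    query="machine learning",
    limit=2,
    grouped_task="Summarize these articles in one sentence.",
)
print(response.generated)
```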

Let me know if this helps :slight_smile: