Cannot use vectorizer parameters with an OpenAI API key via GPT assistant YAML/JSON schema

Hi there!

Challenge: paradoxically, I cannot supply the OpenAI API key in the YAML schema that the GPT assistant uses to query the Weaviate cluster. As a result, I can only run ‘simple’ queries, not ones using ‘ask’ or ‘concepts’ (vectorizer parameters).

GPT assistants let you connect to an external source via a YAML or JSON schema (I chose YAML). Connecting to the Weaviate cluster works (you can provide the Weaviate API key under ‘Authentication’ on the GPT configuration page). Via this YAML I can issue Get{} requests and query the Weaviate cluster directly.

However, if I want to use query prompts that rely on a text2vec module (you can only select a text2vec module that connects via an API, such as OpenAI’s or Hugging Face’s), I need to supply that API key in the request headers. But you don’t have access to those headers when everything is configured through the YAML schema.
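For context, here is a minimal sketch of the kind of OpenAPI (YAML) action schema involved. The server URL, operationId, and property names are illustrative, and note this is not a working workaround: even declaring the header as a parameter doesn’t give you a second place to store a key, since the assistant’s single Authentication setting is already used for the Weaviate API key.

```yaml
openapi: 3.0.0
info:
  title: Weaviate GraphQL
  version: "1.0"
servers:
  - url: https://my-cluster.weaviate.network   # hypothetical cluster URL
paths:
  /v1/graphql:
    post:
      operationId: graphqlQuery
      # The vectorizer needs an X-OpenAI-Api-Key header, but the GPT
      # assistant's Authentication field already holds the Weaviate key,
      # so there is nowhere to attach this second secret.
      parameters:
        - in: header
          name: X-OpenAI-Api-Key   # header Weaviate forwards to OpenAI
          schema:
            type: string
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                query:
                  type: string
```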

This is the error message being returned (via the gpt interface):
“message”: “explorer: get class: vectorize params: vectorize params: vectorize params: vectorize keywords: remote client vectorize: API Key: no api key found neither in request header: X-Openai-Api-Key nor in environment variable under OPENAI_APIKEY”,

Pushing ‘normal’ GraphQL queries through the YAML schema works, but the vectorizer queries do not.
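To illustrate the difference (class and property names here are made up for the example):

```graphql
# A "simple" query like this works through the YAML schema:
{
  Get {
    Article(limit: 5) {
      title
    }
  }
}

# A vectorizer query like this fails, because Weaviate needs the
# OpenAI key (X-OpenAI-Api-Key header) to vectorize the concepts:
{
  Get {
    Article(nearText: { concepts: ["climate change"] }) {
      title
    }
  }
}
```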

What could be a possible solution for this? Is there a way to pass the API key as a header entirely via a YAML or JSON schema, as configurable in the GPT assistant’s schema?

Thanks in advance!

Hi @ThomasB91 !! Welcome to our community :slight_smile:

That’s interesting.

By default, WCS - our cloud offering - will not add OpenAI (or any other) API keys or a default vectorizer.

As you cannot add headers to your GraphQL query, one way around this is to set up your own Weaviate server and add the OpenAI API key as an environment variable.
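For example, a minimal docker-compose sketch of that setup (the image tag, allowed key, and user are placeholders to fill in yourself). With `OPENAI_APIKEY` set server-side, the `X-Openai-Api-Key` request header is no longer required:

```yaml
version: '3.4'
services:
  weaviate:
    image: semitechnologies/weaviate:1.24.1   # pick a current version
    ports:
      - "8080:8080"
    environment:
      OPENAI_APIKEY: 'sk-...'                  # read server-side instead of from the header
      ENABLE_MODULES: 'text2vec-openai'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-openai'
      AUTHENTICATION_APIKEY_ENABLED: 'true'    # keep an API key for the GPT assistant
      AUTHENTICATION_APIKEY_ALLOWED_KEYS: 'my-weaviate-key'
      AUTHENTICATION_APIKEY_USERS: 'me@example.com'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
```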

Let me know if that helps!
Thanks!

Hi @DudaNogueira, thanks for your reply!

If by Weaviate Server you mean a local server running from (e.g.) a Docker file, that won’t do, as I cannot connect to it from my GPT assistant instance. But please correct me here in case I’m wrong.

Would it be possible to host a Weaviate server on a public IP?

Cheers

Yes, that’s the case!

You can run your Weaviate Server on a public IP and point your GPT assistant there :slight_smile:

Let me know if that helps :slight_smile: