Thanks for all your help; I am at the final step of getting generative search working. Going through the document at Generative Search - OpenAI | Weaviate - Vector Database, I understand that Weaviate will work only with OpenAI and an OPENAI_API_KEY. Is that right?
We have a locally hosted LLM (within the firewall) fronted by a REST API with basic authentication.
For example, the following API call works:
prompt='{"inputs": "What is Docker?"}'
curl --silent -X POST "$URL" -H "Authorization: Basic $BASIC_AUTH" -H "Content-Type: application/json" --data "$prompt"
where $URL is https:///gpt/api/v1/models/Llama-2-70b-chat-hf/generate
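For clarity, the same call can be sketched in Python using only the standard library. The endpoint URL and credentials below are placeholders, not our real values; only the request shape (JSON body plus an HTTP Basic Authorization header) matches the curl example:

```python
import base64
import json
import urllib.request

# Placeholder endpoint and credentials -- illustrative only.
URL = "https://llm.internal.example/gpt/api/v1/models/Llama-2-70b-chat-hf/generate"
USER, PASSWORD = "user", "secret"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request equivalent to the curl call above:
    JSON body plus an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    body = json.dumps({"inputs": prompt}).encode()
    return urllib.request.Request(
        URL,
        data=body,
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending would be: urllib.request.urlopen(build_request("What is Docker?"))
req = build_request("What is Docker?")
```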
Is there a way in the Weaviate configuration to set this internal REST API URL and the basic-auth credentials when using generative search?
So far, I have successfully hosted Weaviate on OpenShift, loaded data using text2vec-contextionary, and queried the data. Now I am trying to call the internally hosted REST API (LLM) with basic auth to send it the prompt along with the Weaviate query result.
Please let me know how to accomplish this last step with Weaviate.
Thanks.