I have created a Weaviate index with Ollama and stored the documents. When I perform a generative search, I run into this issue.
Can you please help me fix it? What could be causing it? Is there any configuration I can add to increase the execution time?
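Regarding the execution time: I understand the v4 Python client accepts timeout settings when connecting. Is this the right place to raise them? A minimal sketch of what I mean (the values are just examples, not my real settings):

```python
import weaviate
from weaviate.classes.init import AdditionalConfig, Timeout

# Sketch: raise the client-side timeouts when connecting to Weaviate.
# The values below are example numbers of seconds, not my actual settings;
# host/port details are omitted here.
client = weaviate.connect_to_local(
    additional_config=AdditionalConfig(
        timeout=Timeout(init=30, query=120, insert=120)
    )
)
```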
This command works as expected:
!curl myserverurl:11434/api/generate -d '{"model": "llama3", "prompt": "What is a vector database?", "stream": false}'
I updated my Docker configuration to include the following port mappings:
ports:
- 11434:8080
- 11435:50051
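With that mapping, I assume the Python client has to connect through the remapped host ports, roughly like this (the host name is a placeholder):

```python
import weaviate

# Sketch: connect through the remapped host ports from the Docker config above.
# "myserverurl" is a placeholder for the actual host.
client = weaviate.connect_to_local(
    host="myserverurl",
    port=11434,       # mapped to Weaviate's HTTP port 8080 in the container
    grpc_port=11435,  # mapped to Weaviate's gRPC port 50051 in the container
)
```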
However, I am still encountering the same issue when running the following code:
result = client.collections.get(collection_name)
response = result.generate.near_text(
    query=prompt,
    limit=1,
    # one grouped answer over the retrieved chunk(s)
    grouped_task=f"Answer the question: {prompt}? only using the given context in {{chunk}}"
)
print(response.generated)
I suspect the problem might be related to the models I am using. Is that possible? The vectorizer model is 'mxbai-embed-large' and the generative model is 'llama3'.
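For reference, the collection is configured roughly like the sketch below (not my exact code; the api_endpoint value is a placeholder for the address at which the Weaviate container reaches the Ollama server):

```python
import weaviate
from weaviate.classes.config import Configure

client = weaviate.connect_to_local()

# Sketch of the collection definition: Ollama vectorizer + Ollama generative
# module. "http://myserverurl:11434" is a placeholder endpoint.
client.collections.create(
    name=collection_name,
    vectorizer_config=Configure.Vectorizer.text2vec_ollama(
        api_endpoint="http://myserverurl:11434",
        model="mxbai-embed-large",
    ),
    generative_config=Configure.Generative.ollama(
        api_endpoint="http://myserverurl:11434",
        model="llama3",
    ),
)
```

@DudaNogueira Thank you for your prompt response and support.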