Query call with protocol GRPC batch failed with message Deadline Exceeded

Description

I’m frequently running into a “Deadline Exceeded” error.

On batch insert:
“weaviate.exceptions.WeaviateBatchError: Query call with protocol GRPC batch failed with message Deadline Exceeded.”

On near_text query:
“weaviate.exceptions.WeaviateQueryError: Query call with protocol GRPC search failed with message Deadline Exceeded.”
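
For context, the failing operations are roughly of this shape in the v4 Python client (the “Article” collection, its properties, and the environment variable for the OpenAI key are placeholders, not my actual code):

```python
import os
import weaviate

# Connect to the local instance described in the docker-compose config below.
client = weaviate.connect_to_local(
    headers={"X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"]}  # example variable name
)

articles = client.collections.get("Article")  # placeholder collection name

# Batch insert: this is where the "GRPC batch ... Deadline Exceeded" error appears.
articles.data.insert_many([{"title": "example", "body": "example text"}])

# near_text query: this is where the "GRPC search ... Deadline Exceeded" error appears.
result = articles.query.near_text(query="example search", limit=5)

client.close()
```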

How can I solve this issue?

Server Setup Information

  • Weaviate Server Version: 1.23.10
  • Deployment Method: docker
  • Multi Node? Number of Running Nodes: 1
  • Client Language and Version: Python, weaviate client 4.4.4

Any additional Information

  • Python: 3.10
  • weaviate Python client: 4.4.4
  • OS: Debian 12 (bookworm)

docker-compose config:

```yaml
version: '3.4'
services:
  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: semitechnologies/weaviate:1.23.10
    ports:
      - 8080:8080
      - 50051:50051
    volumes:
      - weaviate_data:/var/lib/weaviate
    restart: on-failure:0
    environment:
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-openai'
      ENABLE_MODULES: 'text2vec-openai,generative-openai,qna-openai'
      ASYNC_INDEXING: 'true'
      LOG_LEVEL: 'debug'
      LOG_FORMAT: 'text'
      CLUSTER_HOSTNAME: 'node1'
volumes:
  weaviate_data: {}
```

Hi @rlima! Welcome to our community :hugs:

Can you try initializing the client using skip_init_checks=True?

Like so:

```python
client = weaviate.connect_to_local(skip_init_checks=True)
```
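
Since your compose file enables the text2vec-openai module, the OpenAI key still needs to be passed as a header when connecting. A minimal sketch combining both (the environment variable name is just an example):

```python
import os
import weaviate

# skip_init_checks=True skips the client's start-up readiness checks,
# which can themselves hit the deadline on a slow or busy server.
client = weaviate.connect_to_local(
    skip_init_checks=True,
    headers={"X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"]},  # example variable name
)
```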

Also, do you see anything noteworthy in the Weaviate server logs?

This may be a symptom of an under-resourced server taking too long to answer your reads and writes.

How big is your dataset, and how much memory and CPU do you have for this server?

The logs on batch insert:

```
2024-02-21 16:22:40 time="2024-02-21T19:22:40Z" level=debug msg="received HTTP request" action=restapi_request method=GET url="/v1/nodes?output=verbose"
2024-02-21 16:25:48 time="2024-02-21T19:25:48Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/.well-known/openid-configuration
2024-02-21 16:25:48 time="2024-02-21T19:25:48Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/meta
2024-02-21 16:25:51 time="2024-02-21T19:25:51Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/.well-known/openid-configuration
2024-02-21 16:25:52 time="2024-02-21T19:25:52Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/meta
2024-02-21 16:25:52 time="2024-02-21T19:25:52Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/.well-known/openid-configuration
2024-02-21 16:25:53 time="2024-02-21T19:25:53Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/meta
```

The logs on near_text:

```
2024-02-21 16:28:46 time="2024-02-21T19:28:46Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/.well-known/openid-configuration
2024-02-21 16:28:47 time="2024-02-21T19:28:47Z" level=debug msg="received HTTP request" action=restapi_request method=GET url=/v1/meta
```

It is a small dataset, ~10 MB. It is running via Docker Compose with 7.61 GB of memory and all CPUs available.

Following your suggestion, I noticed that the timeout was set too low. I changed it to allow a longer period:

```python
from weaviate import WeaviateClient
from weaviate.config import AdditionalConfig, ConnectionConfig
from weaviate.connect import ConnectionParams

client = WeaviateClient(
    connection_params=ConnectionParams.from_params(
        http_host=,
        http_port=8080,
        grpc_host=,
        grpc_port=50051,
    ),
    additional_headers={"X-OpenAI-Api-Key": OPENAI_ACCESS_KEY},
    additional_config=AdditionalConfig(
        connection=ConnectionConfig(
            session_pool_connections=30,
            session_pool_maxsize=200,
            session_pool_max_retries=3,
        ),
        timeout=(60, 180),  # increased timeouts, in seconds
    ),
)
```
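
For completeness: depending on the exact client version, a WeaviateClient constructed directly like this (rather than via connect_to_local) may need an explicit connect before use, and closing it when done releases the pooled connections. A minimal sketch, assuming that behaviour:

```python
client.connect()  # may be required when not using the connect_to_* helpers
try:
    # ... batch inserts and near_text queries go here ...
    pass
finally:
    client.close()  # release the underlying HTTP/gRPC connections
```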

Thank you for the support.
