How to support concurrent near_text search queries

Description

I did load testing on my API today and found it handles only about 40 requests per second (RPS). This is surprising to me, considering how fast each individual Weaviate query is.

It turns out the Weaviate search queries are not running concurrently - is anyone willing to share how you made APIs that issue Weaviate queries more performant?
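For reference, one common client-side pattern is to fan blocking queries out over a thread pool, since the v4 Python client's query calls are synchronous. Below is a minimal sketch of that pattern; the stub function, query strings, and timings are illustrative stand-ins for a real `collection.query.near_text(...)` call, not taken from the original post:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def near_text_query(query: str) -> str:
    """Stub for a blocking Weaviate call, e.g.
    collection.query.near_text(query=query, limit=5)."""
    time.sleep(0.05)  # simulate ~50 ms of network + search latency
    return f"results for {query!r}"

queries = [f"question {i}" for i in range(20)]

# Sequential: latencies add up (~20 x 50 ms here)
start = time.perf_counter()
sequential = [near_text_query(q) for q in queries]
sequential_s = time.perf_counter() - start

# Concurrent: the waits overlap, so wall time drops sharply
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    concurrent = list(pool.map(near_text_query, queries))
concurrent_s = time.perf_counter() - start

print(f"sequential: {sequential_s:.2f}s, concurrent: {concurrent_s:.2f}s")
```

The same idea applies inside an API server: if the framework handles requests in a single thread (or a single worker process), queries serialize no matter how fast Weaviate is, so raising the worker/thread count is usually the first thing to check.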

Server Setup Information

  • Weaviate Server Version:
  • Deployment Method: docker
  • Multi Node? Number of Running Nodes: 1
  • Client Language and Version: python v4

Any additional Information

I read about multi-node setups in Weaviate. Would that help with this issue? And what about QUERY_DEFAULTS_LIMIT: 25?

Hi!
Indeed, multi-node is the way to go to increase your read throughput.
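For anyone landing here later, a two-node Docker Compose setup looks roughly like the sketch below. Treat it as illustrative: the image tag, ports, and service names are placeholders, and reads only spread across nodes once the collection's replication factor is greater than 1.

```yaml
services:
  weaviate-node-1:
    image: semitechnologies/weaviate:1.25.0   # example tag, use a current release
    environment:
      CLUSTER_HOSTNAME: "node1"
      CLUSTER_GOSSIP_BIND_PORT: "7100"
      CLUSTER_DATA_BIND_PORT: "7101"
    ports:
      - "8080:8080"

  weaviate-node-2:
    image: semitechnologies/weaviate:1.25.0
    environment:
      CLUSTER_HOSTNAME: "node2"
      CLUSTER_GOSSIP_BIND_PORT: "7102"
      CLUSTER_DATA_BIND_PORT: "7103"
      CLUSTER_JOIN: "weaviate-node-1:7100"    # join the first node's cluster
    ports:
      - "8081:8080"
```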

Have you seen this doc?

QUERY_DEFAULTS_LIMIT, AFAIK, sets the number of objects a query returns when no limit is specified, so if you pass an explicit limit (like limit=100) it will have no effect.
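In other words, it is purely a server-side default and has nothing to do with throughput. In a Docker Compose file it would appear as an environment variable like this (values are illustrative):

```yaml
services:
  weaviate:
    environment:
      # Applies only when a query omits `limit`; an explicit
      # limit (e.g. limit=100) in the client overrides it.
      QUERY_DEFAULTS_LIMIT: "25"
```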

Let me know if this helps.

Thanks!