I am currently facing an issue while attempting to execute two batch operations in parallel, each with a batch size of 256. I’ve set the consistency level to “ONE” in the hopes of achieving parallel insertion. However, the response time for these parallel operations is identical to that of executing two consecutive batch operations.
Additionally, I tried explicitly setting the replicas using the following while creating the class:
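It was roughly along these lines (a minimal sketch; the `Article` class name and factor value are illustrative, using the v3 client's schema format):

```python
# Class definition with an explicit replication factor.
# "replicationConfig" is the Weaviate schema key for replica settings;
# the class name and factor here are illustrative assumptions.
article_class = {
    "class": "Article",
    "replicationConfig": {
        "factor": 2,  # number of replicas per shard
    },
}

# With the v3 Python client, the class would then be created with:
# client.schema.create_class(article_class)
```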
As per the documentation:
If the write consistency level is not set to ALL (possible from v1.18), writing data is asynchronous from the user’s perspective.
I’m uncertain if I am overlooking something in my approach. Any guidance or suggestions you can provide would be greatly appreciated.
Hi @moaazzaki! Welcome to our community!
What version of Weaviate are you running?
Are you running a single batch with 2 workers or running two batches separately?
Increasing the replication factor will not improve import times:
What you can do to improve performance in import is:
- Use the new Python v4 client, as it leverages a gRPC connection (best used with the latest Weaviate server version)
- Implement error handling, so you know which objects failed to import
- Opt for fewer large machines rather than more small ones to minimize network latency.
- Experiment with ASYNC INDEXING (experimental)
- While importing with batches, increase the batch size and number of workers incrementally. Also monitor your client’s CPU usage.
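The first two points can be sketched together, assuming the v4 Python client's collection batching API (the collection name "Article" and the properties are illustrative):

```python
def import_with_error_handling(objects, collection_name="Article"):
    """Batch-import a list of property dicts into Weaviate; return failures.

    Assumes the v4 Python client (weaviate-client >= 4). The collection
    name "Article" is an illustrative assumption.
    """
    import weaviate  # imported here so the sketch stays self-contained

    client = weaviate.connect_to_local()  # gRPC-backed local connection
    try:
        collection = client.collections.get(collection_name)
        # Dynamic batching lets the client pick and adjust the batch size.
        with collection.batch.dynamic() as batch:
            for obj in objects:
                batch.add_object(properties=obj)
        # Error handling: collect the objects that failed to import.
        return collection.batch.failed_objects
    finally:
        client.close()

# Example payload (not sent here, since it needs a running server):
objects = [{"title": f"Doc {i}"} for i in range(256)]
```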
Let me know if this helps!
Hi @DudaNogueira, thanks for the welcome!
I tried the above on two versions of Weaviate:
More details about your questions:
My implementation is based on async aiohttp calls, since the Python client is currently synchronous, as far as I know from the discussion here.
I send two batches (two API requests) in parallel separately.
I tried increasing the batch size incrementally from 64 to 1024, but didn’t notice much gain or loss.
I didn’t try the async indexing option yet; it looks promising, so I’ll give it a try and hopefully that solves the issue. Thanks for your help!
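For context, my parallel two-batch setup looks roughly like this (a sketch against Weaviate’s REST `/v1/batch/objects` endpoint; the URL, class name, and payload contents are illustrative assumptions):

```python
import asyncio

WEAVIATE_URL = "http://localhost:8080/v1/batch/objects"  # assumed local instance

def make_batch(start, size=256, class_name="Article"):
    """Build a /v1/batch/objects payload; class and properties are illustrative."""
    return {
        "objects": [
            {"class": class_name, "properties": {"title": f"Doc {i}"}}
            for i in range(start, start + size)
        ]
    }

async def send_batch(session, payload):
    # consistency_level=ONE: only one replica must acknowledge the write.
    async with session.post(
        WEAVIATE_URL, json=payload, params={"consistency_level": "ONE"}
    ) as resp:
        return await resp.json()

async def main():
    import aiohttp  # imported here so the payload helper runs standalone

    async with aiohttp.ClientSession() as session:
        # Fire both batches concurrently and wait for both responses.
        return await asyncio.gather(
            send_batch(session, make_batch(0)),
            send_batch(session, make_batch(256)),
        )

# asyncio.run(main())  # needs a running Weaviate server
```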