Description
Hi,
Our current weaviate cluster is deployed on Docker and the cluster contains approximately 35 million documents with 768d vectors.
According to this resource:
Memory usage = 2 × (35e6 vectors × 768 dims × 4 B) ≈ 215 GB
which is close to what our current system is actually using (186 GB), so it makes sense [at rest state (no imports)].
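For reference, here is the estimate above as a short Python sketch (the function name is my own; the 2× overhead factor is the rule of thumb from the linked resource, covering the HNSW graph and runtime overhead on top of the raw float32 vectors):

```python
def estimated_vector_memory_gb(n_vectors: int,
                               dims: int,
                               bytes_per_dim: int = 4,
                               overhead_factor: float = 2.0) -> float:
    """Rough resident-memory estimate (GB) for an HNSW vector index:
    overhead_factor x raw vector bytes."""
    raw_bytes = n_vectors * dims * bytes_per_dim
    return overhead_factor * raw_bytes / 1e9

# 35 million documents, 768-dimensional float32 vectors
print(estimated_vector_memory_gb(35_000_000, 768))  # ~215.04 GB
```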
Now, when we deploy the same cluster on K8s, it uses ~285 GB (~95 GB per pod) [at rest state (no imports)].
The class schema and cluster configuration are unchanged. Why is the memory requirement higher in the Kubernetes deployment?
Any help is appreciated. Thank you.
Server Setup Information
- Weaviate Version: 1.23.7
- Deployment Method: Docker and K8s
- Multi Node? Number of Running Nodes: 3
- Used Client Language and Version: python v4