Memory Requirement



Our current Weaviate cluster is deployed with Docker and contains approximately 35 million documents with 768-dimensional vectors.

According to this resource:
Memory usage = 2 × (35e6 × 768 × 4 B) ≈ 215 GB, which is close to what our current system is actually using (186 GB) and makes sense [at rest, no imports].

Now, when we deploy the same cluster on K8s, it uses ~285 GB (~95 GB per pod) [at rest, no imports].
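For reference, a quick sanity check of the arithmetic above (the 2× factor is the rule of thumb from the linked resource, not an exact Weaviate figure):

```python
# Rough memory estimate for the vector index: ~2x the raw
# float32 vector bytes, per the rule of thumb cited above.
num_objects = 35_000_000       # documents in the cluster
dimensions = 768               # vector dimensionality
bytes_per_float = 4            # float32

raw_vector_bytes = num_objects * dimensions * bytes_per_float
estimate_bytes = 2 * raw_vector_bytes

print(f"raw vectors: {raw_vector_bytes / 1e9:.0f} GB")     # ~108 GB
print(f"2x estimate: {estimate_bytes / 1e9:.0f} GB")       # ~215 GB

# Observed usage, at rest (no imports):
docker_gb = 186
k8s_gb = 285                   # ~95 GB per pod across 3 pods
print(f"K8s overhead vs Docker: {k8s_gb - docker_gb} GB")  # 99 GB
```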

The class schema and cluster configuration are unchanged. Why does the Kubernetes deployment have a higher memory requirement?

Any help is appreciated. Thank you.

Server Setup Information

  • Weaviate Version: 1.23.7
  • Deployment Method: docker and K8s
  • Multi Node? Number of Running Nodes: 3
  • Used Client Language and Version: python v4

Any additional Information

Hi @vamsi

That is interesting.

Just checking, you have the same number of weaviate nodes in both deployments, right?

Are those metrics consistent over time, and for how long? In other words: right after a clean start, both deployments will load data from disk into memory (a CPU-hungry phase). After loading finishes, do they both settle at those same readings?

What is your K8s cluster version? Are the nodes in both deployments running the same Linux OS and version?