Darshan Hiranandani : Scaling StatefulSet in Weaviate on Kubernetes – Need Help with Replica Adjustment

Hello everyone,

I’m Darshan Hiranandani. We’re facing an issue with our Weaviate deployment on Kubernetes. The deployment was initially scheduled on the system node pool before we applied the necessary taints, and as a result we ended up with 5 replicas: 2 pods on the system node pool and 3 on the user node pool.

We then added taints to the system node pool and tried to evict the Weaviate pods, but the weaviate-0 pod remains scheduled on the system node pool and doesn’t move. We now want to run only 3 replicas, all on the user node pool.
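For context, adding a `NoSchedule` taint does not evict pods that are already running; the pod has to be deleted so the StatefulSet controller recreates it on an allowed node. A common way to keep the pods off the system pool is a `nodeAffinity` rule in the StatefulSet pod template (or the equivalent Helm value). A minimal sketch, assuming a hypothetical node label `agentpool: user` on the user pool — the label key and value are assumptions, so check your own pool’s labels:

```yaml
# Sketch: pin Weaviate pods to the user node pool.
# The label key/value (agentpool: user) is an assumption; verify with
# `kubectl get nodes --show-labels`.
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: agentpool
                    operator: In
                    values:
                      - user
```

After this is applied, deleting weaviate-0 should cause the controller to recreate it on a node that matches the affinity rule.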

What are our options for scaling the StatefulSet down to just 3 replicas on the user node pool without disrupting the deployment?

Server Setup Information

  • Weaviate Server Version: [Insert Version]
  • Deployment Method: Kubernetes
  • Number of Running Nodes: 5
  • Client Language and Version: [Insert Version]
  • Multitenancy: [Yes/No]

Any guidance or suggestions on how to resolve this would be greatly appreciated!

Thanks in advance!
Regards
Darshan Hiranandani

hi @darshanhiranandani23 !!

Sorry for the delay here.

Do you still face this problem?

If I understood it correctly, you now have data sharded across those 5 nodes, and need to consolidate it onto only 3 nodes, right?

I believe that the easiest solution for this scenario is to reindex all data.
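If re-indexing is acceptable, the rough sequence might look like the sketch below. All names here (namespace `weaviate`, StatefulSet `weaviate`, PVC naming pattern) are assumptions, not from the original post; scaling down and deleting PVCs discards the data on those nodes, so export the data or confirm you can re-import it from the source of truth first.

```shell
# Sketch only -- namespace, StatefulSet, and PVC names are assumptions.
# 1. Scale the StatefulSet down; the controller removes the highest ordinals
#    (weaviate-4, then weaviate-3) first.
kubectl scale statefulset weaviate --replicas=3 -n weaviate

# 2. PVCs of removed pods are retained by default; delete them only if the
#    data will be re-imported from scratch.
kubectl delete pvc weaviate-data-weaviate-3 weaviate-data-weaviate-4 -n weaviate

# 3. Delete weaviate-0 so the controller recreates it on a node in the
#    user pool (once the taints/affinity rules are in place).
kubectl delete pod weaviate-0 -n weaviate
```

The ordinal behavior is the key detail: a StatefulSet always removes the highest-numbered pods on scale-down, so weaviate-0 can only be moved by deleting and recreating it, never by changing the replica count.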

We do have Dynamic Scaling in the oven, which will help here, but it is not yet available.

Let me know if that helps!

Thanks!