RAM Not Freed After Deleting ~600K Objects – Is Restarting Weaviate the Only Option?

Description

Hi everyone,

We’re running a Weaviate cluster on Kubernetes and recently deleted ~600K stale objects from our internal accounts to free up space.
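For reference, a bulk deletion like this with weaviate-client 3.x would look roughly as follows. This is only a sketch: the class name, the timestamp property (which requires `indexTimestamps` enabled on the class), and the cutoff date are illustrative, not our actual setup.

```python
# Sketch: bulk-delete stale objects with weaviate-client 3.x.
# All names below (class, property, cutoff) are illustrative.

def stale_object_filter(cutoff_rfc3339: str) -> dict:
    """Build a where-filter matching objects last updated before the cutoff.

    Filtering on _lastUpdateTimeUnix requires indexTimestamps to be
    enabled in the class's invertedIndexConfig.
    """
    return {
        "path": ["_lastUpdateTimeUnix"],
        "operator": "LessThan",
        "valueDate": cutoff_rfc3339,
    }

# Usage (requires a running Weaviate instance):
# import weaviate
# client = weaviate.Client("http://localhost:8080")
# result = client.batch.delete_objects(
#     class_name="Account",                       # illustrative class name
#     where=stale_object_filter("2024-01-01T00:00:00Z"),
#     output="minimal",
#     dry_run=True,  # check the match count before deleting for real
# )
```

Running with `dry_run=True` first is a cheap way to confirm the filter matches the expected ~600K objects before committing to the delete.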

Here’s a comparison before and after the deletion:

Volume

  • Before: 66.3 GiB used (22.5%)
  • After: 59.7 GiB used (20.2%)
    ~6.6 GiB of disk space freed

Inodes

  • Before: 13.24M used (67.4%)
  • After: 13.00M used (66.2%)
    So, inode usage also dropped as expected.

Memory (RAM)

  • Before: 47.37 GiB
  • After: 46.72 GiB
    Only a ~650 MiB reduction despite freeing over 6.6 GiB from disk.

From online discussions, it seems that while deleted objects are removed from disk fairly promptly, memory might not be reclaimed right away due to caching or asynchronous background cleanup.

I’ve come across two options:

  1. Wait – memory might be freed eventually.
  2. Restart the Weaviate pods to trigger memory cleanup.

My Questions:

  • Is it expected that Weaviate holds on to RAM even after significant deletions?
  • Is restarting the cluster the only reliable way to force memory to be freed?
  • Should we see a RAM drop roughly equal to deleted data (~6.6 GiB)? Or is memory management more nuanced?

Thanks in advance for your help!

Server Setup Information

  • Weaviate Server Version: 1.22.5
  • Deployment Method: k8s
  • Client Language and Version: Python, weaviate-client 3.25.3
  • Multitenancy: Yes

Hi @akhilsharma!

1.22.5 is quite an old version, and a lot has improved since then, especially tombstone/delete management from 1.24 onward.

Can you check, using metrics, whether there are any dangling tombstones?

You should look for the vector_index_tombstones metric.
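For example, if Prometheus metrics are enabled (PROMETHEUS_MONITORING_ENABLED=true, metrics served on port 2112 by default), you can scan the metrics endpoint for that series with a few lines of Python. The host/port here are assumptions for a port-forwarded pod:

```python
# Sketch: pull Weaviate's Prometheus metrics and extract any
# vector_index_tombstones series (one per class/shard).

def parse_tombstone_metrics(metrics_text: str) -> dict:
    """Map each vector_index_tombstones series to its current value."""
    counts = {}
    for line in metrics_text.splitlines():
        # Skip comments (# HELP / # TYPE) and unrelated metrics.
        if line.startswith("vector_index_tombstones"):
            series, _, value = line.rpartition(" ")
            counts[series] = float(value)
    return counts

# Usage (assumes e.g. `kubectl port-forward <pod> 2112` is running):
# import urllib.request
# with urllib.request.urlopen("http://localhost:2112/metrics") as resp:
#     for series, value in parse_tombstone_metrics(resp.read().decode()).items():
#         print(series, value)
```

A persistently non-zero value after the cleanup cycle would point at dangling tombstones still pinning memory.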

Let me know if this helps!

Thanks!