WAL folder grows without limit

Description

We have a cluster with 3 nodes (and 3 replicas).
Every day we have ~150k vector delete/insert operations (NOT updates).
And the WAL folder grows without limit… 100 GB+ per day…
Because of this, a pod's launch takes several hours.

How can we limit the size (file count) or retention of the append logs?

Server Setup Information

  • Weaviate Server Version: 1.25.6
  • Deployment Method: k8s
  • Multi Node? Number of Running Nodes: Yes, 3
  • Client Language and Version: Python3, Python Client v4
  • Multitenancy?: No

Any additional Information

PERSISTENCE_HNSW_MAX_LOG_SIZE: 4GiB
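(This is set as an environment variable on the Weaviate StatefulSet. For reference, it can be checked with something like the following; the namespace and StatefulSet name are ours and may differ in your setup:)

kubectl -n weaviate get statefulset weaviate -o jsonpath='{.spec.template.spec.containers[0].env}'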

Hi @wvuser,

Thanks for the report. Could you specify what you mean by “WAL’s folder” and show a list of entries with file sizes, timestamps, etc. (basically, ls -lah of the folder you mentioned)?
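If it is the HNSW commit log folder you mean, something like this should locate those folders and show their sizes (assuming the default data path /var/lib/weaviate; adjust to your PERSISTENCE_DATA_PATH):

find /var/lib/weaviate -type d -name '*.commitlog.d' -exec du -sh {} +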

Thanks,
Etienne

Folder: main.hnsw.commitlog.d

The object (vector) count is relatively constant, averaging around 46 million.

With no PERSISTENCE_HNSW_MAX_LOG_SIZE variable defined (default value?), the logs grew to 1 TB+ of disk space over 2 weeks (compaction not working effectively?).
After setting PERSISTENCE_HNSW_MAX_LOG_SIZE: 4GiB (on August 5, 2024) and restarting, files are compacted better, but the folder still grows anyway…

/main.hnsw.commitlog.d # ls -lah
total 374G
56.0K Aug 12 07:38 .
122 Aug 12 07:41 ..
5.8G Aug 5 10:32 1721304967.condensed
5.7G Aug 5 10:42 1721616213.condensed
5.7G Aug 5 11:03 1721701799.condensed
5.7G Aug 5 11:12 1721768839.condensed
5.6G Aug 5 11:24 1721812184.condensed
5.6G Aug 5 11:47 1721816899.condensed
5.7G Aug 5 12:01 1721821425.condensed
5.6G Aug 5 12:14 1721826313.condensed
5.7G Aug 5 12:28 1721833078.condensed
5.7G Aug 5 12:42 1721857267.condensed
5.7G Aug 5 12:54 1721889914.condensed
5.6G Aug 5 13:04 1721892801.condensed
5.6G Aug 5 13:14 1721895375.condensed
5.7G Aug 5 13:25 1721897847.condensed
5.6G Aug 5 13:37 1721900266.condensed
5.6G Aug 5 13:46 1721902840.condensed
5.7G Aug 5 13:55 1721904982.condensed
5.7G Aug 5 14:07 1721907333.condensed
5.7G Aug 5 14:17 1721929267.condensed
5.6G Aug 5 14:28 1721977300.condensed
5.6G Aug 5 14:37 1721991570.condensed
5.7G Aug 5 14:48 1722003586.condensed
5.6G Aug 5 14:59 1722060942.condensed
5.6G Aug 5 15:10 1722080272.condensed
5.6G Aug 5 15:21 1722140928.condensed
5.6G Aug 5 15:33 1722165809.condensed
5.7G Aug 5 15:45 1722223985.condensed
5.7G Aug 5 15:57 1722239502.condensed
5.7G Aug 5 16:09 1722255331.condensed
5.7G Aug 5 16:21 1722305960.condensed
5.7G Aug 5 16:34 1722325728.condensed
5.7G Aug 5 16:46 1722340285.condensed
5.7G Aug 5 16:59 1722369381.condensed
5.7G Aug 5 17:12 1722410655.condensed
5.6G Aug 5 17:23 1722425059.condensed
5.6G Aug 5 17:35 1722444431.condensed
5.6G Aug 5 17:46 1722493142.condensed
5.7G Aug 5 17:57 1722506115.condensed
5.6G Aug 5 18:08 1722521950.condensed
5.6G Aug 5 18:18 1722575516.condensed
5.7G Aug 5 18:30 1722589113.condensed
5.7G Aug 5 18:40 1722603175.condensed
5.6G Aug 5 18:51 1722793765.condensed
5.6G Aug 5 19:01 1722810983.condensed
6.5G Aug 5 20:10 1722834472.condensed
6.5G Aug 6 00:43 1722886138.condensed
6.5G Aug 6 08:06 1722904983.condensed
6.7G Aug 6 13:57 1722931570.condensed
6.6G Aug 6 17:37 1722952621.condensed
6.7G Aug 7 03:03 1722965804.condensed
6.6G Aug 7 08:08 1722999771.condensed
6.7G Aug 7 14:02 1723018055.condensed
6.5G Aug 8 06:25 1723039265.condensed
6.4G Aug 8 11:23 1723098273.condensed
6.6G Aug 9 02:56 1723116143.condensed
6.6G Aug 9 07:05 1723172167.condensed
6.6G Aug 9 10:56 1723187096.condensed
6.7G Aug 9 17:32 1723200973.condensed
6.5G Aug 10 08:22 1723224692.condensed
6.7G Aug 10 16:37 1723278109.condensed
6.6G Aug 11 10:20 1723307819.condensed
6.7G Aug 12 04:34 1723371566.condensed
5.6G Aug 12 07:38 1723437244.condensed
782.3M Aug 12 07:41 1723448296

Check. Yeah, what's happening here is that your individual chunks are already larger than the threshold to compact further: with PERSISTENCE_HNSW_MAX_LOG_SIZE: 4GiB, a file larger than 4GiB will not be considered for compaction.

However, the HNSW commit logs are delta logs. If log 2 deletes something that was created in log 1, but log 1+2 are too big to be compacted, then the information from log 1 cannot be removed effectively.

Limiting the max size is essentially a memory vs disk space trade-off. It sounds like in your case, you are suffering from a lot of disk growth, so it might be worth considering allowing some more memory, so files can still be compacted effectively. I can’t say what the ideal values are in this case, but if you still have memory available, I would try increasing the value.
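As a rough check of how much of the folder is already past that threshold, something like this works (GNU coreutils syntax; 4G matches the current 4GiB setting, adjust it if you raise the limit):

find main.hnsw.commitlog.d -name '*.condensed' -size +4G | wc -l
du -sch main.hnsw.commitlog.d/*.condensed | tail -n 1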

Note on all of the above: This mainly describes the current implementation and doesn’t mean this can’t be improved. We’ve discussed two options internally:

  1. Some sort of in-place compaction (where you remove redundant info from log 1 based on the fact that you know it will be present in log 2). This isn’t trivial because it’s not always clear which information has to persist and which is fully overridden. There are some commit types where it’s pretty clear (e.g. replace_links_at_level means any link set at that level previously is no longer needed)
  2. A full graph-to-disk dump (possibly at shutdown). If the deltas on disk have grown considerably larger than the actual graph, it might make sense to discard all logs and flush a perfect representation of the graph to disk to replace all historic logs.

Regardless of the PERSISTENCE_HNSW_MAX_LOG_SIZE value, the condensed files always end up larger than the configured limit (see the listing above).

Why? Does the compaction task ignore them in the next iteration?

With a larger PERSISTENCE_HNSW_MAX_LOG_SIZE it got better, but we will still run out of disk space… just a little later :(

Maybe we can manually (and safely) clear the logs? To avoid READONLY mode and the long pod startup.
Or optionally disable (not use) them at startup entirely (if the index data are valid/complete)?

Interesting idea

Thanks for coming back with the details.

Since doubling once already showed a significant improvement, can you just keep doubling it until the logs become manageable?
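For example, going from 4GiB to 8GiB could look like this (namespace and StatefulSet name are placeholders, adjust to your deployment; note that this triggers a rolling restart):

kubectl -n weaviate set env statefulset/weaviate PERSISTENCE_HNSW_MAX_LOG_SIZE=8GiB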

Maybe we can manually (and safely) clear the logs? To avoid READONLY mode and the long pod startup.

Do you have an environment where you could test that? There is a good chance that the first n logs have information that is entirely redundant. In this case, they would indeed not be needed. However, there is also a chance that some data in there is vital for good graph integrity. In that case, you might make your search results worse.

Or optionally disable (not use) them at startup entirely (if the index data are valid/complete)?

If you can accept the temporary downtime to experiment, here is what you could do (a command sketch follows the list):

  • mv main.hnsw.commitlog.d main.hnsw.commitlog.d.bak
  • mkdir main.hnsw.commitlog.d
  • Then manually copy only the last n files (for example, the last 10) from the backup into the new folder
  • Then start up; if you are still happy with the search quality, you can be sure that any files prior to the cutoff point can safely be deleted. If the quality suffers, you might want to retry with more files.
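Roughly, as shell commands (run in the directory that contains main.hnsw.commitlog.d on the affected node, with Weaviate stopped on that node; this keeps the 10 newest files, adjust n to taste):

mv main.hnsw.commitlog.d main.hnsw.commitlog.d.bak
mkdir main.hnsw.commitlog.d
# file names are unix timestamps, so numeric sort is chronological
ls main.hnsw.commitlog.d.bak | sort -n | tail -n 10 | while read f; do
  cp "main.hnsw.commitlog.d.bak/$f" main.hnsw.commitlog.d/
done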