Weaviate k8s backup restore out of disk space

Hey all, I currently have all of my data backed up to S3 (confirmed ~140 GB for each of my two Weaviate nodes, holding around 85.8M records when previously queried). I want to manually load that data back into my Weaviate instance (pods running on Amazon EKS), but the restore doesn't seem to bring back any data, just the schema.
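I kicked off the restore from the Python client, roughly like this (a minimal sketch assuming the v3 weaviate-client; the endpoint URL is just a placeholder for my EKS service):

```python
import weaviate

# Placeholder URL for the Weaviate service exposed from the EKS cluster
client = weaviate.Client("http://weaviate.example.internal:8080")

# Start the restore from the S3 backend without blocking,
# so progress can be polled separately
client.backup.restore(
    backup_id="backup858",
    backend="s3",
    wait_for_completion=False,
)
```

While the restore runs, I get the following output when logging the pod: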

{"action":"restoration_status","backend":"s3","backup_id":"backup858","level":"info","msg":"","time":"2024-01-28T23:43:40Z","took":6138}
{"action":"restoration_status","backend":"s3","backup_id":"backup858","level":"info","msg":"","time":"2024-01-28T23:53:41Z","took":6925}
{"action":"restoration_status","backend":"s3","backup_id":"backup858","level":"info","msg":"","time":"2024-01-29T00:03:41Z","took":7064}
{"action":"restoration_status","backend":"s3","backup_id":"backup858","level":"info","msg":"","time":"2024-01-29T00:13:41Z","took":6884}
{"action":"restoration_status","backend":"s3","backup_id":"backup858","level":"info","msg":"","time":"2024-01-29T00:23:42Z","took":6351}
{"action":"restoration_status","backend":"s3","backup_id":"backup858","level":"info","msg":"","time":"2024-01-29T00:33:42Z","took":7507}
{"action":"read_disk_use","level":"warning","msg":"disk usage currently at 80.01%, threshold set to 80.00%","path":"/var/lib/weaviate","time":"2024-01-29T00:35:02Z"}
{"action":"read_disk_use","level":"warning","msg":"disk usage currently at 80.68%, threshold set to 80.00%","path":"/var/lib/weaviate","time":"2024-01-29T00:35:33Z"}
{"action":"read_disk_use","level":"warning","msg":"disk usage currently at 83.34%, threshold set to 80.00%","path":"/var/lib/weaviate","time":"2024-01-29T00:37:33Z"}
{"level":"info","msg":"Created shard papermetadata_xVwwxf162NWy in 1.187678ms","time":"2024-01-29T00:40:28Z"}
{"action":"restore","backup_id":"backup858","class":"PaperMetadata","level":"info","msg":"successfully restored","time":"2024-01-29T00:40:28Z"}
{"action":"hnsw_vector_cache_prefill","count":1000,"index_id":"main","level":"info","limit":1000000000000,"msg":"prefilled vector cache","time":"2024-01-29T00:40:28Z","took":77284}
{"action":"restoration_status","backend":"s3","backup_id":"backup858","level":"info","msg":"","time":"2024-01-29T00:43:42Z","took":5604}
{"action":"read_disk_use","level":"warning","msg":"disk usage currently at 87.16%, threshold set to 80.00%","path":"/var/lib/weaviate","time":"2024-01-29T00:47:33Z"}

From my local machine, I’ve been checking the status of the restore, which claims to have succeeded, but the class never has any records:

{'backend': 's3', 'id': 'backup858', 'path': 's3://[my s3 path]', 'status': 'SUCCESS'}
paper metadata has 0 objects
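Those two lines come from roughly this check (again a sketch with the v3 Python client; the URL is a placeholder):

```python
import weaviate

client = weaviate.Client("http://weaviate.example.internal:8080")  # placeholder URL

# Poll the restore status reported by Weaviate
print(client.backup.get_restore_status(backup_id="backup858", backend="s3"))

# Count objects in the restored class to confirm whether any data came back
result = client.query.aggregate("PaperMetadata").with_meta_count().do()
count = result["data"]["Aggregate"]["PaperMetadata"][0]["meta"]["count"]
print(f"paper metadata has {count} objects")
```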

I have 200 GB volumes mounted on the nodes, so I’m not sure why disk usage is becoming a problem or why the records never get added.

What’s the issue?

It looks like backups require the Weaviate version to be the same at creation and restore time. I had upgraded the instance from v1.22 to v1.23 after creating the backup; reverting to v1.22 fixed the issue.
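For anyone else hitting this: you can confirm the running server version before starting the restore and compare it against the version that created the backup (a minimal sketch with the v3 Python client; the URL is a placeholder):

```python
import weaviate

client = weaviate.Client("http://weaviate.example.internal:8080")  # placeholder URL

# The meta endpoint reports the server version; the restore should run against
# the same Weaviate version that created the backup (v1.22 in my case)
meta = client.get_meta()
print("Weaviate server version:", meta["version"])
```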

Hi @Lakshya_Bakshi, sorry for the delay here :frowning:

We are planning to write a special blog post on backups soon.

Thanks for sharing and glad you were able to fix this :slight_smile: