Error: not enough memory mappings

Description

Hello, we were running into the following issue when trying to read from one of our classes:

{
    "error": [
        {
            "message": "msg:search index documenttextchunk256 code:500 err:local shard object search documenttextchunk256_CHVDD95kgRmJ: memory pressure: cannot load shard: not enough memory mappings"
        }
    ]
}

It’s not clear what was causing this issue. We ended up deleting the class and that seemed to resolve the issue short term, but we obviously wouldn’t want this to reoccur in the future.

Any insight on what caused the issue above?

Server Setup Information

  • Weaviate Server Version: 1.21.8
  • Deployment Method: k8s
  • Multi Node? Number of Running Nodes:
  • Client Language and Version:
  • Multitenancy?:

hi @mmoya !!

What are the resource metrics reading for this cluster?

It may need more memory. :thinking:

Also, note that 1.21.8 is a fairly old version and a lot has changed since then.

We strongly suggest upgrading to the latest version.

Thanks!

Thank you for the reply. This wasn’t a cluster- or version-related issue, because we were able to read/write other classes in this Weaviate instance. The issue was isolated to this specific class, and I’d get that error even if I did GET https://[OUR_SERVER_NAME]/v1/objects?class=DocumentTextChunk256&limit=1

I noticed that

Weaviate also uses memory-mapped files for data stored on disks. Memory-mapped files are efficient, but disk storage is much slower than in-memory storage.

Is there a way to alter this? We’re currently using EFS for storage, and if I had to guess from the error itself, it’s most likely tied to Weaviate’s memory mapping.

Is there a way to implement a vector cache based on some type of recency, or incorporate some type of TTL related to Weaviate’s in-memory usage?

Alternatively, is there a way to autoscale memory for an HNSW index dynamically?

Oh, I see.

Looks like Weaviate is having a hard time loading this specific collection shard into memory.

AFAIK you cannot disable it, as that’s how Weaviate loads data from disk into memory.

While this may not be tied to a version, newer versions certainly have a lot of improvements that may solve this.

Is there a way to implement a vector cache based on some type of recency, or incorporate some type of TTL related to Weaviate’s in-memory usage?

You can try reducing vectorCacheMaxObjects as described here:

Let me know if this helps.

Thanks!

How could I autoscale memory allocated to an index? Alternatively, is there a way to incorporate some type of TTL for data stored in memory?

I see. How does vectorCacheMaxObjects work? Is it based on recency? That is, if I set it to 100,000, will it store the most recent 100,000 vectors, with anything beyond that needing to be read from disk?
Then, once it reaches 100,000, does it restart the cache?

hi!

Autoscaling would be something done at the k8s or Docker level :thinking:

As for TTL, if you are using multitenancy, there is a new feature where you can load and offload tenants from memory on a per-tenant basis:

For now, you can activate and deactivate tenants manually. So if this fits your use case, say a user logs out of your system, you can offload that tenant.

Future versions will allow setting a time to auto-offload the tenant based on its last activity.

So, whenever a new query comes in for a deactivated tenant, Weaviate will load it on demand.
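Roughly, offloading and reactivating a tenant with the v3 Python client looks like the sketch below. The class name, tenant name, and endpoint are placeholders, and this assumes a weaviate-client version that ships Tenant and TenantActivityStatus:

    # Sketch only: per-tenant activate/deactivate, assuming a multi-tenant class.
    import weaviate
    from weaviate import Tenant, TenantActivityStatus

    client = weaviate.Client("https://[OUR_SERVER_NAME]")  # placeholder endpoint

    # Offload the tenant from memory, e.g. when the user logs out.
    client.schema.update_class_tenants(
        class_name="DocumentTextChunk256",
        tenants=[Tenant(name="tenant-a", activity_status=TenantActivityStatus.COLD)],
    )

    # Bring it back into memory before the user returns
    # (or let Weaviate load it on demand when a query arrives).
    client.schema.update_class_tenants(
        class_name="DocumentTextChunk256",
        tenants=[Tenant(name="tenant-a", activity_status=TenantActivityStatus.HOT)],
    )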

Let me know if this helps.

Thanks!

Thank you, I think I’ll give setting vectorCacheMaxObjects a try. I’m assuming this is something that is set upon class creation rather than something I change via the manifest/server, correct? If so, I’m assuming it’s defined via client.schema.create_class(some_dict) where some_dict["vectorIndexConfig"]["vectorCacheMaxObjects"] = 100000?
https://weaviate-python-client.readthedocs.io/en/v3.2.2/weaviate.schema.html#module-weaviate.schema
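In other words, something like the sketch below is what I had in mind (just a rough example with the v3 client; the property list and endpoint are placeholders, and I haven’t verified this against our setup):

    # Sketch only: setting vectorCacheMaxObjects at class creation time (v3 client).
    import weaviate

    client = weaviate.Client("https://[OUR_SERVER_NAME]")  # placeholder endpoint

    some_dict = {
        "class": "DocumentTextChunk256",
        "vectorIndexType": "hnsw",
        "vectorIndexConfig": {
            "vectorCacheMaxObjects": 100000,
        },
        "properties": [
            {"name": "text", "dataType": ["text"]},  # placeholder property
        ],
    }
    client.schema.create_class(some_dict)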

This error tells you that you are hitting the operating system’s open files limit.

This is an OS-specific setting and can be increased. Our Helm charts already update this setting (look here). If you already have this OS setting applied, you can instead switch to PERSISTENCE_LSM_ACCESS_STRATEGY=pread; with this Weaviate setting the error should go away.
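On k8s, that environment variable would typically go into the chart values, for example like the excerpt below (assuming the official Weaviate Helm chart, which exposes an env map in values.yaml; adjust to however your manifests pass environment variables):

    # Sketch only: values.yaml excerpt for the Weaviate Helm chart.
    env:
      PERSISTENCE_LSM_ACCESS_STRATEGY: pread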


Thank you for the reply. We tried setting PERSISTENCE_LSM_ACCESS_STRATEGY=pread, but now we’re getting

{"error":"init shard \"documenttextchunk256_SsFs8E1E78j2\": init shard \"documenttextchunk256_SsFs8E1E78j2\": init per property indices: init properties on shard 'documenttextchunk256_SsFs8E1E78j2': create property 'embedding' value index on shard 'documenttextchunk256_SsFs8E1E78j2': init disk segments: init segment segment-1724189418098378082.db: mmap file: invalid argument","level":"error","msg":"Unable to load shard SsFs8E1E78j2: init shard \"documenttextchunk256_SsFs8E1E78j2\": init shard \"documenttextchunk256_SsFs8E1E78j2\": init per property indices: init properties on shard 'documenttextchunk256_SsFs8E1E78j2': create property 'embedding' value index on shard 'documenttextchunk256_SsFs8E1E78j2': init disk segments: init segment segment-1724189418098378082.db: mmap file: invalid argument","time":"2024-09-06T14:06:11Z"}