I am using a standard Docker setup, for text, that stores large scripts of text in an array. I have about 8 GB RAM, 2 vCPUs, and 1/8th of an A16 (2 GB GPU RAM). It seems to run okay for the first few objects being uploaded, but then it pretty much times out, taking around 5 minutes per update…
I probably have 1000 objects, with scripts for only 100 of them.
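To give a sense of the shape of the upload side, here is a simplified stand-in for my loop (no real client calls here; `send_batch` is just a placeholder for the actual Weaviate insert):

```python
# Simplified stand-in for my uploader: chunk ~1000 objects into
# fixed-size batches so each request carries many objects.
# send_batch is a placeholder for the real Weaviate insert call.
def chunked(items, batch_size=100):
    """Yield successive fixed-size slices of items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

objects = [{"script": f"text {n}"} for n in range(1000)]

sent = 0
for batch in chunked(objects):
    # send_batch(batch)  # real insert would happen here
    sent += len(batch)

print(sent)  # 1000
```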
Is there a way to better understand the possible throughput for this data? I don't think it's a small machine, but perhaps I am wrong? Why would this slowly stop working? I tried rebooting and bringing Docker down and back up, but no help. Is there a better way to log issues to find the problem?
```yaml
version: '3.4'
services:
  weaviate:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
    image: semitechnologies/weaviate:1.20.3
    ports:
      - 8080:8080
    restart: unless-stopped
    volumes:
      - /mnt/data-storage/weaviate:/var/lib/weaviate
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    environment:
      TRANSFORMERS_INFERENCE_API: 'http://t2v-transformers:8080'
      SPELLCHECK_INFERENCE_API: 'http://text-spellcheck:8080'
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'text2vec-transformers'
      ENABLE_MODULES: 'text2vec-transformers,text-spellcheck'
      CLUSTER_HOSTNAME: 'node1'
      LOG_LEVEL: 'debug'
  t2v-transformers:
    image: semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
    environment:
      ENABLE_CUDA: '1'
      NVIDIA_VISIBLE_DEVICES: 'all'
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - 'gpu'
```
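So far the only way I know to inspect things is tailing the container logs and watching resource usage with the standard Docker/NVIDIA CLI (nothing Weaviate-specific), roughly:

```shell
# Tail logs from each service (use `docker-compose` if on Compose v1)
docker compose logs -f weaviate
docker compose logs -f t2v-transformers

# Live CPU/RAM usage per container
docker stats

# GPU utilization and memory (the 2 GB slice fills up fast)
nvidia-smi
```

Is there a more useful signal than this, given LOG_LEVEL is already 'debug'?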