"multi2vec-clip" performance vs "text2vec-openai"

We are beginning to try out multimodal search, as we will soon need to upload images. For now we only have text. We simply replaced the text vectorizer with the multi one, but performance is very slow: text2vec-openai takes around 2-3 seconds for our test data, while multi2vec-clip takes about 20 seconds. Should this be the case?

Hi @systemz - and welcome!

The one big difference is that text2vec-openai sends data to OpenAI's API to get vectors, whereas multi2vec-clip uses a local inference container (i.e. it generates vectors on your own hardware). That is most likely why you're seeing such a big difference in performance.
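For illustration, here is a minimal Docker Compose sketch of how the two modules are typically wired up (image tags and the OPENAI_APIKEY placeholder are just examples; check the module docs for current versions). The point is that text2vec-openai only needs an API key, while multi2vec-clip runs a second container that does the inference work itself:

```yaml
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:1.24.1
    environment:
      ENABLE_MODULES: 'text2vec-openai,multi2vec-clip'
      OPENAI_APIKEY: ${OPENAI_APIKEY}                    # text2vec-openai: vectors come back from OpenAI's API
      CLIP_INFERENCE_API: 'http://multi2vec-clip:8080'   # multi2vec-clip: vectors are computed by the container below

  multi2vec-clip:
    image: cr.weaviate.io/semitechnologies/multi2vec-clip:sentence-transformers-clip-ViT-B-32-multilingual-v1
    environment:
      ENABLE_CUDA: '0'   # CPU-only by default, which is usually the bottleneck
```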

If you have an NVIDIA GPU, you should be able to get a pretty sizeable boost with multi2vec-clip by enabling CUDA (see the multi2vec-clip page in the Weaviate docs).
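If your deployment uses Docker Compose, enabling CUDA on the inference container might look something like the sketch below (this assumes an NVIDIA GPU and the NVIDIA container toolkit are available on the host; the image tag is again just an example):

```yaml
  multi2vec-clip:
    image: cr.weaviate.io/semitechnologies/multi2vec-clip:sentence-transformers-clip-ViT-B-32-multilingual-v1
    environment:
      ENABLE_CUDA: '1'                 # tell the inference container to run on the GPU
      NVIDIA_VISIBLE_DEVICES: 'all'
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]      # reserve a GPU for this service
```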

Thank you @jphwang

We have a managed instance - how can I tell if it's GPU-enabled?

Z

Hey @systemz, sorry I didn't see this sooner.

I suspect by "managed instance" you mean that someone else is running your Weaviate cluster, since WCS doesn't support multi2vec-clip. Are you able to ask whoever is running that cluster whether it has a GPU?