Bachelor thesis on on-prem RAG

Hello everyone,

I am currently working on my bachelor thesis, which involves implementing an on-premise solution for Retrieval-Augmented Generation (RAG), particularly for handling confidential government documents.

I successfully installed Verba and it functions well with the OpenAI API. However, I am now exploring the possibility of using Verba in a completely local environment, disconnected from any external Large Language Models like those provided by OpenAI.

Is there a way to host and operate Verba locally without relying on external LLMs? Any guidance or suggestions for achieving this would be greatly appreciated, as it is crucial for maintaining the confidentiality of the documents in our government context.

Thank you for your time and assistance.

Best,
Lewin

Hi @Lewiin!

Welcome to our community :hugs:

That’s really interesting :slight_smile: Congrats on your thesis.

You can use Weaviate with locally hosted models through modules such as text2vec-transformers and qna-transformers, which run entirely inside your own infrastructure.

However, I don’t think Verba supports these local transformers yet. It wouldn’t be too complex to add, especially now that Verba has a modular design.

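To illustrate what the local path looks like, here is a minimal sketch using the Weaviate Python client, assuming a Weaviate instance running locally with the text2vec-transformers module enabled. The class name `Document` and the property are just examples, not anything Verba-specific:

```python
# Minimal sketch (assumptions: Weaviate runs at localhost:8080 with the
# text2vec-transformers module enabled; the "Document" class is made up
# for this example and is not part of Verba).
import weaviate

client = weaviate.Client("http://localhost:8080")

# Create a class whose vectors come from the local
# text2vec-transformers module instead of an external API.
client.schema.create_class({
    "class": "Document",
    "vectorizer": "text2vec-transformers",
    "properties": [
        {"name": "content", "dataType": ["text"]},
    ],
})

# Vectorization now happens on your own machines,
# so the text never leaves your infrastructure.
client.data_object.create(
    {"content": "A confidential paragraph to index."},
    "Document",
)
```
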
Let me know if that helps.

Hi Duda,

Thank you very much for your response.

Just to clarify:

As it currently stands, there isn’t an out-of-the-box solution to connect Verba with, for example, a locally hosted Llama 2, right?

So, I would need to dive into the source code and modify it accordingly.
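
To make sure I understand what that modification would involve, here is roughly the generation step I have in mind, assuming the model is served locally over HTTP, for example through an Ollama server. This is only my sketch of the idea, not actual Verba code, and the endpoint and payload follow Ollama’s REST API:

```python
# Rough sketch (assumptions: Llama 2 is served locally by Ollama on its
# default port 11434; this is not Verba code, just the kind of call a
# local generator component would make).
import requests

def generate_locally(prompt: str) -> str:
    # Ollama's /api/generate endpoint returns the full completion as
    # one JSON object when streaming is disabled.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

print(generate_locally("Summarize the attached policy document."))
```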

Do you happen to know of any projects where this issue has already been addressed? I’m relatively new to this field and am not sure if I could manage this on my own.

Once again, thank you for your response and best regards,
Lewin