I am currently working on my bachelor thesis, which involves implementing an on-premises solution for Retrieval-Augmented Generation (RAG), particularly for handling confidential government documents.
I successfully installed Verba and it works well with the OpenAI API. However, I am now exploring whether Verba can run in a completely local environment, disconnected from any external Large Language Models such as those provided by OpenAI.
Is there a way to host and operate Verba locally without relying on external LLMs? Any guidance or suggestions for achieving this would be greatly appreciated, as it is crucial for maintaining the confidentiality of the documents in our government context.
As it currently stands, there isn’t an out-of-the-box way to connect Verba to, for example, a locally hosted Llama 2, right?
So, I would need to dive into the source code and modify it accordingly.
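To illustrate the kind of setup I have in mind: a local inference server such as Ollama or llama.cpp's server can expose an OpenAI-compatible chat completions endpoint, so the modification might amount to redirecting Verba's completion calls to such an endpoint. The sketch below is only an assumption of how that could look (the endpoint URL, port, and model name are illustrative, not anything Verba provides today):

```python
import json
from urllib import request

# Assumed local server exposing an OpenAI-compatible API
# (e.g. Ollama's default port; purely illustrative).
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "llama2") -> dict:
    """Build an OpenAI-style chat completion payload for a local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_local_llm(prompt: str, model: str = "llama2") -> str:
    """Send the payload to the local endpoint; no external API is contacted."""
    payload = json.dumps(build_chat_request(prompt, model)).encode("utf-8")
    req = request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

If something like this is roughly the right direction, I would try swapping it in wherever Verba currently calls out to OpenAI.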
Do you happen to know of any projects where this has already been done? I am relatively new to this field and not sure whether I could manage it on my own.
Once again, thank you for your response and best regards,
Lewin