We are currently looking to integrate an LLM-powered chatbot into our SaaS platform, which is built with the following stack:
- React + TypeScript front end
- .NET 6 backend with a MySQL DB
I did a basic search and understand that Verba also uses React for the UI on top of FastAPI. As we don't have much experience with FastAPI, would it be better to go with Verba, or with something that provides a REST API on top of FastAPI?
Hi! Welcome to our community!
You can use a different backend instead of FastAPI.
Assuming you have your data ingested, Weaviate gives you a GraphQL endpoint where you can run your queries and generate the answers.
Your backend would then just receive the query, build the RAG query using Weaviate's GraphQL, and forward the answer back to your frontend.
Here is a simple recipe in Python on how to do RAG (generative search):
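A minimal sketch using only the standard library, so it translates directly to any backend (including .NET) that can POST to Weaviate's GraphQL endpoint. The class name `Document`, the `content` property, the prompt text, and the local URL are assumptions — adapt them to your own schema and instance:

```python
import json
import urllib.request


def build_rag_query(question: str) -> str:
    """Build a Weaviate generative-search (RAG) GraphQL query.

    The "Document" class and "content" property are assumptions --
    replace them with your own collection and fields.
    """
    prompt = "Answer the question using only this context: {content}"
    return f'''
    {{
      Get {{
        Document(
          nearText: {{concepts: ["{question}"]}}
          limit: 3
        ) {{
          content
          _additional {{
            generate(singleResult: {{prompt: "{prompt}"}}) {{
              singleResult
              error
            }}
          }}
        }}
      }}
    }}'''


def ask(question: str, weaviate_url: str = "http://localhost:8080") -> dict:
    # POST the query to Weaviate's /v1/graphql endpoint and return the JSON body.
    payload = json.dumps({"query": build_rag_query(question)}).encode()
    req = urllib.request.Request(
        f"{weaviate_url}/v1/graphql",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Your .NET backend can send the same GraphQL string with `HttpClient`, so you don't need FastAPI at all — the generation happens inside Weaviate (via whichever generative module you have enabled), and your backend just relays the `singleResult` field to the frontend.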
Let me know if this helps!