Hello! When I log in using the local deployment method, it shows "Failed to connect to Weaviate Couldn't connect to Weaviate, check your URL/API KEY: [Errno 30] Read-only file system: '/home/mjzheng/.cache/weaviate-embedded'." I know this is because the '/home/mjzheng/.cache' path is read-only, since I am using the school's server. I want to change the cache path to '/data2/mjzheng/.cache'. What should I do, specifically?
hi @nan_dou !!
Welcome to our community
Can you try setting the XDG_DATA_HOME environment variable to a different path
right before running Verba?
According to the docs here: Embedded Weaviate | Weaviate
it should define the persistence path as {XDG_DATA_HOME}/weaviate/
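For example, something along these lines right before launching Verba (a sketch assuming /data2/mjzheng is writable for you; "verba start" stands in for however you normally launch it). Since your error mentions the binary cache directory, redirecting XDG_CACHE_HOME as well shouldn't hurt:

mkdir -p /data2/mjzheng/.local/share /data2/mjzheng/.cache
export XDG_DATA_HOME=/data2/mjzheng/.local/share
export XDG_CACHE_HOME=/data2/mjzheng/.cache
verba start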
Thanks!
Oh, thanks. I can now log in on the Verba front-end page. I started ollama serve and ollama run llama3 in the background, but I couldn't get any response from the model. Like this:
"Query failed: 500, message='Internal Server Error',
url=URL('http://localhost:11434/api/embed')"
Can you paste the entire error stack?
This seems to be a 500 error in Ollama.
Do you see any errors that stand out in ~/.ollama/logs/server.log?
I still haven't solved the problem. The output of ollama serve looks like this:
INFO [main] model loaded | tid="139740393578496" timestamp=1730437064
time=2024-11-01T12:57:44.375+08:00 level=INFO source=server.go:626 msg="llama runner started in 5.02 seconds"
time=2024-11-01T12:57:44.424+08:00 level=INFO source=server.go:992 msg="llm encode error: 500 Internal Server Error\nn_Map_base::at",
And uploading documents to Verba failed. Verba log:
Succesfully retrieved document: 0 documents
INFO: 127.0.0.1:41544 - “POST /api/get_all_documents HTTP/1.1” 200 OK
INFO: 127.0.0.1:41534 - “POST /api/get_labels HTTP/1.1” 200 OK
Succesfully retrieved document: 0 documents
What should I do to fix this?
That is strange.
Could we do a screen share session? Feel free to join us on our Slack so we can sort this out.
One thing we could try is entering the Weaviate container and vectorizing something with Ollama from there, to get more detail out of this error message.
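A rough way to reproduce the call Verba makes from any shell is to hit the Ollama embedding endpoint directly (a sketch assuming Ollama is listening on the default port 11434 and that llama3 is the model Verba is configured to embed with):

curl http://localhost:11434/api/embed -d '{"model": "llama3", "input": "hello world"}'

If that also returns a 500, the problem is on the Ollama side rather than in Weaviate or Verba.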
By the way, you got this message from the ollama logs, right?
This seems to be an Ollama issue. Are you running the latest Ollama version?
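You can check what you have installed with:

ollama --version

and compare it against the latest release on the Ollama releases page.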
Can you run Docker on that computer? That could make things easier…
Unfortunately, my server does not have permission to use Docker. I plan to use "Start an Embedded Weaviate instance" (Python Client v3). However, after executing the Python code, the error below occurred. My confusion is that even after using the XDG variable for the persistence path, the problem still isn't solved. Is there an error in the way I declare the environment variables? My other question: how can I deploy Weaviate on my local server? Is there a tutorial? By the way, I am using the local deployment method.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data_temp/mjzheng/env/temp_rag/lib/python3.10/site-packages/weaviate/client.py", line 268, in __init__
    url, embedded_db = self.__parse_url_and_embedded_db(url, embedded_options)
  File "/data_temp/mjzheng/env/temp_rag/lib/python3.10/site-packages/weaviate/client.py", line 302, in __parse_url_and_embedded_db
    embedded_db = EmbeddedV3(options=embedded_options)
  File "/data_temp/mjzheng/env/temp_rag/lib/python3.10/site-packages/weaviate/embedded.py", line 60, in __init__
    self.ensure_paths_exist()
  File "/data_temp/mjzheng/env/temp_rag/lib/python3.10/site-packages/weaviate/embedded.py", line 129, in ensure_paths_exist
    Path(self.options.binary_path).mkdir(parents=True, exist_ok=True)
  File "/data_temp/mjzheng/env/temp_rag/lib/python3.10/pathlib.py", line 1175, in mkdir
    self._accessor.mkdir(self, mode)
OSError: [Errno 30] Read-only file system: '/home/mjzheng/.cache/weaviate-embedded'
This has worked for me.
try this:
mkdir -p /tmp/nan-temp/data
cd /tmp/nan-temp
export XDG_DATA_HOME=/tmp/nan-temp/data
echo "import weaviate
client = weaviate.connect_to_embedded()
client.close()" > app.py
python3 app.py
ls data/
this should be the output:
{"action":"startup","build_git_commit":"ab0312d5d","build_go_version":"go1.23.1","build_image_tag":"localhost","build_wv_version":"1.26.6","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2024-11-01T11:22:15-03:00"}
…
some logs here…
…
{"build_git_commit":"ab0312d5d","build_go_version":"go1.23.1","build_image_tag":"localhost","build_wv_version":"1.26.6","level":"info","msg":"closing raft-rpc server …","time":"2024-11-01T11:22:19-03:00"}
classifications.db migration1.22.fs.hierarchy schema.db
migration1.19.filter2search.skip.flag modules.db
migration1.19.filter2search.state raft
Let me know if this works.
This basically creates the path, points the data path at it, runs the embedded server locally, closes the connection, and then lists the contents of the provided data path.
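If the environment variables still don't take effect for some reason, you can also pass the paths explicitly when connecting. This is a sketch based on the v4 Python client, where connect_to_embedded() accepts persistence_data_path and binary_path keyword arguments (on the v3 client the equivalent is EmbeddedOptions), reusing your writable /data2/mjzheng directory; double-check the parameter names against the client version you have installed:

mkdir -p /data2/mjzheng/.cache /data2/mjzheng/weaviate-data
echo "import weaviate
client = weaviate.connect_to_embedded(
    persistence_data_path='/data2/mjzheng/weaviate-data',
    binary_path='/data2/mjzheng/.cache/weaviate-embedded',
)
print(client.get_meta().get('version'))
client.close()" > app.py
python3 app.py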
hi @nan_dou !!
Let me know if this worked for you.
Also, if necessary, happy to jump in a call
Have a great day!
Thank you very much for your help. I have successfully used Weaviate Cloud, and now I want to deploy Embedded Weaviate locally, but I don't see a download command in Embedded Weaviate | Weaviate.
hi @nan_dou ! Glad to hear that!
You run Embedded mode from "within" the client.
It will then download the Go binary and run it for you.
This is all you need for running with python:
import weaviate

client = weaviate.connect_to_embedded()
print(client.get_meta().get("version"))
client.close()
Now your content will be stored, by default, at ~/.local/share/weaviate/
(unless you specify otherwise)
Let me know if that helps!
Thanks!