Cannot bind to existing PVC

For testing purposes, I’m trying to deploy Weaviate onto a Docker Desktop instance of Kubernetes.

I’ve set up a PVC and confirmed that it works with other pods before resetting my cluster and trying again.

The PV file (with the directory edited)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /run/desktop/mnt/host/.../weaviate_pv

the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
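In stock Kubernetes, one way to guarantee that a specific PV and PVC bind to each other (rather than the claim being satisfied by dynamic provisioning) is to cross-reference them: `volumeName` on the PVC and, optionally, `claimRef` on the PV. A sketch reusing the names above; note that PVs are cluster-scoped, so only the claim side takes a namespace (the `weaviate` namespace here is an assumption):

```yaml
# In the PV: reserve this volume for one specific claim.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: hostpath
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /run/desktop/mnt/host/.../weaviate_pv
  claimRef:
    namespace: weaviate
    name: task-pv-claim
---
# In the PVC: request this exact volume by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  namespace: weaviate
spec:
  storageClassName: hostpath
  volumeName: task-pv-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
```

With both fields set, the PV cannot be claimed by anything else and the PVC cannot bind to anything else.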

and the storage section in values.yaml

storage:
  fullnameOverride: task-pv-claim

I’ve tried deploying the PV and the PVC to both the default namespace and the weaviate namespace, but I can’t seem to get Weaviate to use the existing PVC instead of creating a new claim.

EDIT:

these are the standard commands I use to get things going:

kubectl create namespace weaviate
kubectl apply -f pv.yaml --namespace "weaviate"
kubectl apply -f pvc.yaml --namespace "weaviate"
helm upgrade --install "weaviate" weaviate/weaviate --namespace "weaviate" --values ./values.yaml

Also, running kubectl get pvc --namespace "weaviate" gives me:

NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim              Bound    task-pv-volume                             32Gi       RWO            hostpath       2s
weaviate-data-weaviate-0   Bound    pvc-d5690f50-9822-4369-a4b0-eb584140a7f6   32Gi       RWO            hostpath       2s
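The second claim in that output, `weaviate-data-weaviate-0`, is generated by the chart’s StatefulSet from its `volumeClaimTemplates` (the naming pattern is `<template>-<statefulset>-<ordinal>`). The StatefulSet controller only creates such a claim when no PVC with that exact name exists yet, so one possible workaround (a sketch, not tested against the chart) is to pre-create the claim under the generated name, bound to the static PV, before running `helm install`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # The exact name the StatefulSet's volumeClaimTemplate would generate
  # for pod ordinal 0; taken from the kubectl output above.
  name: weaviate-data-weaviate-0
  namespace: weaviate
spec:
  storageClassName: hostpath
  volumeName: task-pv-volume   # bind statically to the existing PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
```

If a claim with this name already exists when the pod is created, the controller uses it as-is rather than provisioning a new one.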

Hi @chirag-phlo !

Sorry for the delay here. Missed this one :frowning:

Were you able to solve this? This seems more a K8s question instead of a Weaviate one.

I am no K8s expert (still learning it) but I may be able to proxy this to some more experienced colleagues.

Thanks!

@chirag-phlo Did you manage to get it to work? I am also trying this locally.

macOS + docker + minikube + helm (very similar to Achieve Zero-Downtime Upgrades with Weaviate’s Multi-Node Setup | Weaviate), where I want to try replication = 3. For each node, I want its /var/lib/weaviate to map to an external disk, with folders named like /Volumes/My_Disk/weaviate-node-0, …/weaviate-node-1, …/weaviate-node-2.

here’s the section I changed for values.yaml:

# The Persistent Volume Claim settings for Weaviate. If there's a
# storage.fullnameOverride field set, then the default pvc will not be
# created, instead the one defined in fullnameOverride will be used
storage:
  size: 50Gi
  storageClassName: ""
  existingClaim: true
  fullnameOverride: "weaviate-pvc"  # Explicitly disable the default PVC creation

# Add the extraVolumes and extraVolumeMounts sections to use your manually created PVCs
extraVolumes:
  - name: weaviate-data-0
    persistentVolumeClaim:
      claimName: weaviate-pvc-0
  - name: weaviate-data-1
    persistentVolumeClaim:
      claimName: weaviate-pvc-1
  - name: weaviate-data-2
    persistentVolumeClaim:
      claimName: weaviate-pvc-2

extraVolumeMounts:
  - name: weaviate-data-0
    mountPath: /var/lib/weaviate
    subPath: node-0
  - name: weaviate-data-1
    mountPath: /var/lib/weaviate
    subPath: node-1
  - name: weaviate-data-2
    mountPath: /var/lib/weaviate
    subPath: node-2

and my pvs.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: weaviate-pv-0
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/My_Disk/weaviate-node-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: weaviate-pv-1
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/My_Disk/weaviate-node-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: weaviate-pv-2
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/My_Disk/weaviate-node-2"

and pvcs.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: weaviate-pvc-0
  namespace: weaviate
spec:
  storageClassName: ""  # Add this line
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: weaviate-pv-0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: weaviate-pvc-1
  namespace: weaviate
spec:
  storageClassName: ""  # Add this line
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: weaviate-pv-1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: weaviate-pvc-2
  namespace: weaviate
spec:
  storageClassName: ""  # Add this line
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: weaviate-pv-2

note that the only diff is that I added the namespace in pvcs.yaml (I also tried without it, and other variations in other parts). And like @chirag-phlo, it still resulted in the weaviate-data-weaviate-* claims getting created and bound instead, which isn’t right.

kubectl get pvc -n weaviate

NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
weaviate-data-weaviate-0   Bound    pvc-f42cdf50-e227-4339-90e9-0afc929ac2b2   50Gi       RWO            standard       <unset>                 4m5s
weaviate-data-weaviate-1   Bound    pvc-a556ed20-9455-4712-a155-a06cc25d9fdd   50Gi       RWO            standard       <unset>                 4m5s
weaviate-data-weaviate-2   Bound    pvc-2b88f11e-fc9c-49c1-8719-473fdc673879   50Gi       RWO            standard       <unset>                 4m5s
weaviate-pvc-0             Bound    weaviate-pv-0                              50Gi       RWO                           <unset>                 5m48s
weaviate-pvc-1             Bound    weaviate-pv-1                              50Gi       RWO                           <unset>                 5m48s
weaviate-pvc-2             Bound    weaviate-pv-2                              50Gi       RWO                           <unset>                 5m48s

I think something is still wrong in my values.yaml, especially around fullnameOverride, where I have low confidence that I did the right thing.
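Following the `volumeClaimTemplates` naming pattern visible in the generated claims above (`<template>-<statefulset>-<ordinal>`), one alternative sketch (untested against the chart) is to drop the extraVolumes/extraVolumeMounts/fullnameOverride indirection and instead pre-create the claims under the names the StatefulSet expects, each statically bound to one of the hostPath PVs. The StatefulSet controller should adopt an existing claim with the right name rather than provisioning a new one:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: weaviate-data-weaviate-0   # the name the StatefulSet will look for
  namespace: weaviate
spec:
  storageClassName: ""             # "" disables dynamic provisioning
  volumeName: weaviate-pv-0        # bind statically to the hostPath PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
# ...and likewise for ordinals 1 and 2, bound to weaviate-pv-1 / weaviate-pv-2.
```

This keeps the existing pvs.yaml as-is; only the claim names change from weaviate-pvc-N to the StatefulSet-generated ones.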

if you have any information, please do share. Once everything is figured out and working, I can help put together a quick blog post if it would help others.