Increasing number of replicas on K8s: nodes are not aware of each other

I was testing increasing the number of Pod replicas from 1 to 2 on an existing deployment. The existing deployment already had data populated, with multiple class schemas created. When I port-forward to the new pod and try getting the schema definitions, it says they do not exist. Wondering why this is?

One possibility I am wondering about is whether it is because the original schema had replicationConfig set to 1. Although I would still expect the schema to persist between nodes.

The other possibility I am wondering about is that maybe I have to connect to the first pod and not the second, if the first pod acts as the controller node and the schema is only kept on the controller.

Would appreciate any further info people can provide.

Further details: I tried port-forwarding to the first replica (the one with the historic data) and updating the existing classes' replication config to two.
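The update I attempted was roughly this change to the class definition (shown as YAML for readability; the class name and the exact field shape are placeholders from my setup, so adjust to whatever your schema API expects):

```yaml
# Fragment of the class definition update I sent.
# "Article" is a placeholder class name.
class: Article
replicationConfig:
  factor: 2   # scaling the class from 1 replica to 2
```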

I get the following message:
replication config: cannot scale to 2 replicas, cluster has only 1 nodes

There are clearly two pods in the StatefulSet, so I'm not sure what this is about.

Note I have also tried adjusting the CLUSTER_JOIN env variable as described in the solution here: Cluster nodes not aware of eachother - #3 by Lewiky, but it also did not resolve the issue.
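For reference, the env change I tried looked roughly like this (the service and namespace names below are placeholders from my setup and will differ in yours):

```yaml
# StatefulSet pod template env: point CLUSTER_JOIN at the headless
# service so each pod can discover the others via its DNS name.
env:
  - name: CLUSTER_JOIN
    value: db-headless.default.svc.cluster.local
```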

It appears to be the same type of problem, though: the nodes are not aware of each other. When I use the nodes endpoint, it clearly shows only one node each time.

I wonder if it has something to do with the cluster's Istio setup. Can anyone speak to configurations I might need to add to make the nodes aware of each other in Istio?

I’ve tried adding the headless service as a ServiceEntry, similar to what is described here: kubernetes - Istio ingress not working with headless service - Stack Overflow. But the nodes still don’t seem to be aware of each other.
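For concreteness, the ServiceEntry I tried was along these lines (the host, service name, and gossip port number are placeholders modeled on that Stack Overflow answer, not a verified working config):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: db-headless
spec:
  hosts:
    - "*.db-headless.default.svc.cluster.local"  # per-pod DNS names
  location: MESH_INTERNAL
  ports:
    - number: 7000        # placeholder gossip port
      name: tcp-gossip
      protocol: TCP
  resolution: NONE
```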

I can’t even seem to find anything in the logs regarding this issue.

Hi @Landon_Edwards !

Were you able to solve this?

It appears it had something to do with the Istio sidecar proxy. By disabling the proxy, the distributed functionality started working.
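In case it helps anyone else: "disabling the proxy" here just means turning off sidecar injection for these pods via the standard Istio annotation on the pod template:

```yaml
# Pod template metadata: opt these pods out of sidecar injection.
metadata:
  annotations:
    sidecar.istio.io/inject: "false"
```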


Excluding the ports from the istio mesh does indeed fix this issue. However, since this is not really a permanent solution, I was wondering if you were able to find any other way to solve this?
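For reference, the exclusion I used was via the standard Istio traffic annotations on the pod template; the port numbers below are placeholders for whatever gossip/data ports your deployment actually uses:

```yaml
# Keep the sidecar, but route these ports around it in both directions.
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeInboundPorts: "7000,7001"
    traffic.sidecar.istio.io/excludeOutboundPorts: "7000,7001"
```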

Unfortunately I was never able to find a solution to get it to work with the Istio proxy.

My understanding is that the nodes try to communicate with each other through the gossip ports on the created headless service.
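That headless service looks roughly like this (clusterIP: None is what makes it headless, so DNS lookups return the individual pod IPs; the names and port are placeholders from my setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None        # headless: DNS returns per-pod IPs directly
  selector:
    app: db
  ports:
    - name: gossip
      port: 7000
      targetPort: 7000
```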

However, the headless service isn't being included in the proxy config for some reason, so when a node tries to use that name, it doesn't get resolved to the stable IPs associated with each of the nodes through the headless service.

Somehow the proxy config needs to be adjusted so that, in addition to the regular service, an entry or config mapping is also included that allows the headless service name to be resolved to its IP values.

I’m really not sure how to accomplish this, though.

Gotcha! Yeah, it seems that there is an issue with headless services and Istio: Istio not working with headless service · Issue #7495 · istio/istio · GitHub