Is this a request for help?: No. I am not sure whether this is a bug report or a feature request (I am new to Redis).
Version of Helm and Kubernetes: v1.11.8
Which chart: stable/redis-ha
What happened: In my code I connect to the Redis service "redis-ha:6379". After the initial setup it works fine.
But after the first failover, when the master moved to another instance, I started receiving the error READONLY You can't write against a read only replica on every new connection from the application to Redis (I received it only once per new connection).
What you expected to happen: I expected that when I connect to the main redis-ha service (in my case the redis-ha-qa ClusterIP service, see the listing below) it would always connect me to the instance that belongs to the master pod.
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
redis-ha-qa              ClusterIP   None            <none>        6379/TCP,26379/TCP   14m
redis-ha-qa-announce-0   ClusterIP   100.64.213.98   <none>        6379/TCP,26379/TCP   14m
redis-ha-qa-announce-1   ClusterIP   100.67.194.44   <none>        6379/TCP,26379/TCP   14m
redis-ha-qa-announce-2   ClusterIP   100.67.88.115   <none>        6379/TCP,26379/TCP   14m
Anything else we need to know: Could you confirm that redis-ha is not supposed to work this way, and that I should instead follow this guide: https://redis.io/topics/sentinel-clients? Or is there a chance it should work as I expected?
Same for me. k8s version 1.12.5
After 4 retries, it's ok.
/data $ redis-cli -h khaki-sasquatch-redis-ha.default.svc.cluster.local set v888 6555555
(error) READONLY You can't write against a read only replica.
/data $ redis-cli -h khaki-sasquatch-redis-ha.default.svc.cluster.local set v888 6555555
(error) READONLY You can't write against a read only replica.
/data $ redis-cli -h khaki-sasquatch-redis-ha.default.svc.cluster.local set v888 6555555
(error) READONLY You can't write against a read only replica.
/data $ redis-cli -h khaki-sasquatch-redis-ha.default.svc.cluster.local set v888 6555555
OK
AFAIK that's not how Sentinel works. You should use the redis-ha-qa service to connect to one Sentinel instance to get the IP of the current master, and then connect to that active master.
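For example (an illustrative session, not output from this thread), from a pod you can ask any Sentinel on port 26379 for the current master and then connect to the address it returns. "mymaster" is the chart's default master group name, and the IP shown is just the announce-0 IP from the listing above:
/data $ redis-cli -h redis-ha-qa -p 26379 sentinel get-master-addr-by-name mymaster
1) "100.64.213.98"
2) "6379"
/data $ redis-cli -h 100.64.213.98 set v888 6555555
OK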
Looks like a duplicate of https://github.com/helm/charts/issues/8988
From what I understood, the proposed solution is to use something like https://github.com/prodriguezdefino/charts/tree/master/incubator/haproxy-redis
Is that still the way to go, or is there something else available?
@mmack what do you mean by "redis-ha-qa service" ?
The only service I can find in the stable chart exposes all Redis nodes, whether they are master or slave.
Thanks for your help
What would be the correct way to connect through redis-rb? Their documentation on Sentinel support is here: https://github.com/redis/redis-rb#sentinel-support
When we get a good answer here, let's add it to the readme as well
I am having the same issue. Is HAProxy the best way to go? My cluster hasn't even failed over (AFAIK), but I still get "READONLY You can't write against a read only replica." about 80% of the time.
Clearly there is something I am not understanding, but is putting HAProxy in front of it the recommended solution?
And FWIW, I am running a standard "helm install stable/redis-ha" and attempting to connect over the service name (redis-redis-ha:6379).
Figured it out; the configuration should look something like this (note this is Terraform syntax):
client = {
  url       = "redis://mymaster"
  sentinels = [
    {
      host = "redis-${var.staging ? "staging" : "production"}-redis-ha-announce-0"
      port = "26379"
    },
    {
      host = "redis-${var.staging ? "staging" : "production"}-redis-ha-announce-1"
      port = "26379"
    },
    {
      host = "redis-${var.staging ? "staging" : "production"}-redis-ha-announce-2"
      port = "26379"
    },
  ]
}
I'm seeing a lot of suggested solutions here, with very little information or steps on how they fixed it. Can someone open a PR if they have fixed it?
I'm experiencing this too. In my case we're using an LB service, which basically iterates over all pods assigned to the service, so with 3 nodes I'm getting a READONLY slave/replica error 66% of the time.
I'm taking a look at what might solve this, I'll get back with some details if/when I figure it out.
If anyone above can offer some more steps on how they fixed this though, that would be perfect.
From my perspective, either:
- the client handles Sentinel discovery itself, or
- the chart routes traffic to the master on its own.
I don't know which of these two is more feasible / more native. There are probably more options as well, like a proxy for the latter.
I've looked a little more at this. I have a feeling that this chart is only meant to support access from inside the Kubernetes cluster by default.
For example, in this chart we have 3x Sentinels and 3x Redis servers.
What's expected from the client is: you use the Sentinel module/package to determine the master, then you connect to the server using the master address given by the Sentinel instance.
The workflow for the client would look something like: Service (26379) -> Fetch Master Creds -> Connect to Master. This workflow assumes your client is on the same k8s cluster as the Redis server. See the sketch below.
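As an illustration, here is a minimal sketch of that workflow with redis-py; it assumes the chart's default master set name mymaster and reuses the announce service names from this thread (both are assumptions, adjust for your release):
# Sketch only: the hostnames and the master set name below are
# assumptions for this thread, not values confirmed by the chart.
from redis.sentinel import Sentinel

sentinel = Sentinel(
    [
        ("redis-ha-qa-announce-0", 26379),
        ("redis-ha-qa-announce-1", 26379),
        ("redis-ha-qa-announce-2", 26379),
    ],
    socket_timeout=0.5,
)

# Step 1: ask Sentinel for the current master's (host, port).
print(sentinel.discover_master("mymaster"))

# Step 2: or let the library follow the current master for you,
# so writes keep working across failovers.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.set("v888", "6555555")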
If you want to support this externally via an addressable IP (LoadBalancer, NodePort), we'd need an extra proxy layer in order to achieve this. I'm checking what's actually available.
Feel free to comment/tell me I'm wrong, everyone.
Okay, so a follow-up: basically the only way to do this is with HAProxy. I've added HAProxy support to the chart, but the healthchecks need to be better. I'm almost done; when it is done and the PR is merged, you can enable HAProxy with as many replicas as you want, and only the delegated master will be used by the HAProxy service.
Annnnd hopefully the above PR will fix this for everyone. It tries to mimic some of the legwork a client would perform against Sentinel. I've had 100% success with it so far.
If you enable it, it deploys an HAProxy in place of the existing service, basically replacing the usual client-side workflow of establishing which Redis server you connect against.
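For reference, the usual shape of that healthcheck (a sketch of the general technique, not the exact config from the PR) is a TCP check that asks each Redis node for its replication role and only keeps the node answering role:master in rotation; the backend hostnames below are the announce services from this thread:
# haproxy.cfg sketch; server names/addresses are assumptions for this thread
defaults
    mode tcp
    timeout connect 4s
    timeout client  30s
    timeout server  30s

listen redis-master
    bind *:6379
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    server redis-0 redis-ha-qa-announce-0:6379 check inter 1s
    server redis-1 redis-ha-qa-announce-1:6379 check inter 1s
    server redis-2 redis-ha-qa-announce-2:6379 check inter 1s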
I was getting the same error when writing to my new master after the failover. My issue was fixed when I replaced replicaof localhost 6379 with replicaof 127.0.0.1 6379 in the original replica configuration.
Hi everyone, I have developed a controller that basically watches failover events and updates the k8s Service accordingly: https://github.com/WoodProgrammer/redis-sentinel-k8s-extension. The repo also contains a Sentinel Helm chart for external access.
In my case I had set REDIS_HOST to the redis-headless service. I reconfigured it to the k8s cluster svc redis and it worked.