Before you begin
This tutorial assumes that you already have a working OVHcloud Managed Kubernetes cluster, and that you have deployed an application on it using the OVHcloud Managed Kubernetes LoadBalancer. To learn more about these topics, please refer to the Using the OVHcloud Managed Kubernetes LoadBalancer documentation.
The problem
When you deploy your HTTP services in NodePort mode, you can directly retrieve the request's remote address on the server (for example using $_SERVER['REMOTE_ADDR'] in PHP or $ENV{'REMOTE_ADDR'} in Perl). This address (usually in IP:port format) corresponds to the original requester, or to the last proxy between them and your cluster.
When deploying services in LoadBalancer mode, things are a bit different: our Load Balancer acts as a proxy, so the remote address will give you the IP address of the Load Balancer. How can you get the source IP of the request in this case?
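To make the difference concrete, here is a minimal shell sketch (all addresses are placeholders) of what an application behind a proxy sees: the socket's remote address belongs to the Load Balancer, while the real client IP has to arrive in a forwarded header such as x-real-ip, which is exactly what the rest of this tutorial sets up.

```shell
# Placeholder values: what a request handler typically sees behind a proxy.
remote_addr="10.3.81.157:43210"   # socket peer: the Load Balancer, not the client
x_real_ip="203.0.113.7"           # real client IP, forwarded by the proxy in a header

# Prefer the forwarded header when present, else fall back to the socket peer
client_ip="${x_real_ip:-${remote_addr%%:*}}"
echo "$client_ip"                 # prints 203.0.113.7
```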
This tutorial describes how to deploy a LoadBalancer service on OVHcloud Managed Kubernetes while preserving the source IP.
Getting the request’s source IP behind the LoadBalancer
The easiest way to deploy services behind the Load Balancer while keeping the source IP is to place your services behind an Ingress, itself behind the LoadBalancer.
The Ingress is exposed outside the cluster via the LoadBalancer, and it routes incoming traffic to your services according to configured rules. An additional advantage of this setup is the cost: you can have many services behind a single LoadBalancer.
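As a sketch of that cost advantage, a single Ingress can route different paths to several backends behind the one LoadBalancer. The snippet below is illustrative only: app1-service and app2-service are hypothetical service names, and it uses the same Ingress API version as the manifest later in this tutorial.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: multi-service-ingress
spec:
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: app1-service
              servicePort: 80
          - path: /app2
            backend:
              serviceName: app2-service
              servicePort: 80
```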
In this tutorial we are using the most basic Ingress Controller: the NGINX Ingress Controller, where an NGINX server takes the role of reverse proxy.
1. Installing the NGINX Ingress Controller
The official way to install the NGINX Ingress Controller is to apply its manifest file:
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
It creates the namespace, serviceaccount, role and all the other Kubernetes objects needed for the Ingress Controller, and then it deploys the controller:
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
You can use kubectl to get the state of the service and retrieve the Load Balancer's IP:
# kubectl get service ingress-nginx-controller -n ingress-nginx
You should see your newly created Ingress service:
# kubectl get service ingress-nginx-controller -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.3.81.157 xxx.xxx.xxx.xxx 80:xxxxx/TCP,443:xxxxx/TCP 4m32s
As the LoadBalancer creation is asynchronous, and the provisioning of the Load Balancer can take several minutes, you may get a <pending> EXTERNAL-IP while the Load Balancer is being set up. In this case, please wait a few minutes and try again.
2. Patching the Ingress Controller
Now you need to patch the Ingress controller to support the proxy protocol.
Get the list of the egress load balancer IPs:
# kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath="{.metadata.annotations.lb\.k8s\.ovh\.net/egress-ips}"
You should see something like this:
# kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath="{.metadata.annotations.lb\.k8s\.ovh\.net/egress-ips}"
aaa.aaa.aaa.aaa/32,bbb.bbb.bbb.bbb/32,ccc.ccc.ccc.ccc/32,ddd.ddd.ddd.ddd/32,eee.eee.eee.eee/32,fff.fff.fff.fff/32
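The annotation's value is already in the comma-separated CIDR format that the proxy-real-ip-cidr parameter expects in the next step, so it can be pasted in as-is. The snippet below (using documentation-range placeholder IPs) simply splits the list for inspection, for example to check how many egress IPs your Load Balancer uses:

```shell
# Placeholder egress CIDR list, in the same format the annotation returns
egress_ips="203.0.113.1/32,203.0.113.2/32,203.0.113.3/32"

# One CIDR per line, e.g. to count them or feed them to another tool
echo "$egress_ips" | tr ',' '\n'
```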
Copy the next YAML snippet into a patch-ingress-controller-service.yml file:
metadata:
  annotations:
    service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v1"
And apply it in your cluster:
# kubectl -n ingress-nginx patch service ingress-nginx-controller -p "$(cat patch-ingress-controller-service.yml)"
Copy the next YAML snippet into a patch-ingress-controller-configmap.yml file and modify the proxy-real-ip-cidr parameter accordingly:
data:
  use-proxy-protocol: "true"
  real-ip-header: "proxy_protocol"
  proxy-real-ip-cidr: "aaa.aaa.aaa.aaa/32,bbb.bbb.bbb.bbb/32,ccc.ccc.ccc.ccc/32,ddd.ddd.ddd.ddd/32,eee.eee.eee.eee/32,fff.fff.fff.fff/32"
And apply it in your cluster:
# kubectl -n ingress-nginx patch configmap ingress-nginx-controller -p "$(cat patch-ingress-controller-configmap.yml)"
After applying the patches, you need to restart the Ingress Controller:
# kubectl rollout restart deploy/ingress-nginx-controller -n ingress-nginx
You should see the configuration being patched and the controller pod deleted (and recreated):
# kubectl -n ingress-nginx patch service ingress-nginx-controller -p "$(cat patch-ingress-controller-service.yml)"
service/ingress-nginx-controller patched
# kubectl -n ingress-nginx patch configmap ingress-nginx-controller -p "$(cat patch-ingress-controller-configmap.yml)"
configmap/ingress-nginx-controller patched
# kubectl rollout restart deploy/ingress-nginx-controller -n ingress-nginx
deployment.apps/ingress-nginx-controller restarted
3. Testing
We can now deploy a simple echo service to verify that everything is working. The service uses the mendhak/http-https-echo image, a very useful HTTP/HTTPS echo Docker container for web debugging.
First, copy the next manifest into an echo.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  namespace: echo
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          ports:
            - containerPort: 80
            - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: echo
spec:
  selector:
    app: echo
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: echo
spec:
  backend:
    serviceName: echo-service
    servicePort: 80
And deploy it on your cluster:
# kubectl apply -f echo.yaml
# kubectl apply -f echo.yaml
namespace/echo created
deployment.apps/echo-deployment created
service/echo-service created
ingress.extensions/echo-ingress created
Now you can test it using the LoadBalancer IP:
# curl xxx.xxx.xxx.xxx
And you should get the HTTP parameters of your request, including the right source IP in the x-real-ip header:
{
  "path": "/",
  "headers": {
    "host": "xxx.xxx.xxx.xxx",
    "x-request-id": "2126b343bc837ecbd07eca904c33daa3",
    "x-real-ip": "XXX.XXX.XXX.XXX",
    "x-forwarded-for": "XXX.XXX.XXX.XXX",
    "x-forwarded-host": "xxx.xxx.xxx.xxx",
    "x-forwarded-port": "80",
    "x-forwarded-proto": "http",
    "x-original-uri": "/",
    "x-scheme": "http",
    "user-agent": "curl/7.58.0",
    "accept": "*/*"
  },
  "method": "GET",
  "body": "",
  "fresh": false,
  "hostname": "xxx.xxx.xxx.xxx",
  "ip": "::ffff:10.2.1.2",
  "ips": [],
  "protocol": "http",
  "query": {},
  "subdomains": [
    "k8s",
    "gra",
    "c1",
    "lb",
    "6d6rslnrn8"
  ],
  "xhr": false,
  "os": {
    "hostname": "echo-deployment-6b6fdc96cf-hwqw6"
  }
}
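If you prefer to check the header from a script rather than by eye, you can extract x-real-ip from the echo service's JSON response. The response below is a trimmed placeholder with a documentation-range IP; in practice you would pipe `curl -s xxx.xxx.xxx.xxx` into the same filter.

```shell
# Trimmed placeholder for the echo service's JSON response
response='{"path":"/","headers":{"x-real-ip":"203.0.113.7","x-forwarded-proto":"http"}}'

# Pull the x-real-ip value out with sed (avoids needing jq)
echo "$response" | sed -n 's/.*"x-real-ip":"\([^"]*\)".*/\1/p'
# prints 203.0.113.7
```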
What if I want to use another Ingress Controller?
The preceding method should work in a similar way for any Ingress Controller. We will soon update this tutorial with more detailed information on other Ingress Controllers, specifically Traefik.