I'm trying to change the client_max_body_size value so that my nginx ingress stops returning 413 (Request Entity Too Large) errors.
I've tested a few solutions.
Here is my test c
To set it globally, the configmap.md documentation might be helpful. It turns out the ConfigMap key to set is proxy-body-size, not client-max-body-size.
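For illustration, here is a minimal sketch of that global ConfigMap. The name nginx-configuration and the namespace ingress-nginx are assumptions; they must match the --configmap argument passed to your controller Deployment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumption: must match the controller's --configmap flag
  namespace: ingress-nginx    # assumption: wherever your controller runs
data:
  proxy-body-size: "4m"       # raises the limit from the default 1m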
When you deploy the Helm chart, you can set --set-string controller.config.proxy-body-size="4m".
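As a sketch of the full command, assuming the community ingress-nginx chart and a release named nginx (the repo, release name, and namespace are placeholders for your own):

helm upgrade --install nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set-string controller.config.proxy-body-size="4m"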
You can use the annotation nginx.ingress.kubernetes.io/proxy-body-size to set the max body size directly on your Ingress object instead of changing the base ConfigMap.
Here is an example of its usage:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
...
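Once the annotation is applied, one way to confirm the new limit is to POST a payload larger than the old 1m default and check the status code. The host, path, and test file here are hypothetical; substitute your own:

# Create an ~8 MB test file, then POST it; expect something other than 413 once the limit is raised.
dd if=/dev/zero of=/tmp/payload.bin bs=1M count=8
curl -s -o /dev/null -w "%{http_code}\n" -X POST --data-binary @/tmp/payload.bin https://my-app.example.com/upload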
I have tried both proxy-body-size and client-max-body-size in the ConfigMap and did a rolling restart of the nginx controller pods, yet when I grep the nginx.conf file in the pod it still returns the default 1m. I am trying to do this within Azure Kubernetes Service (AKS), and I'm working with someone from their support. They said it's not on their end, since it appears to be an nginx configuration issue.
The weird thing is that we had other clusters in Azure where this wasn't an issue, until we discovered it with some of the newer deployments. The initial fix they came up with is the one in this thread, but the value just refuses to change.
Below is my configmap:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  client-max-body-size: 0m
  proxy-connect-timeout: 10s
  proxy-read-timeout: 10s
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-nginx-ingress-controller-7b9bff87b8-vxv8q","leaseDurationSeconds":30,"acquireTime":"2020-03-10T20:52:06Z","renewTime":"2020-03-10T20:53:21Z","leaderTransitions":1}'
  creationTimestamp: "2020-03-10T18:34:01Z"
  name: ingress-controller-leader-nginx
  namespace: ingress-nginx
  resourceVersion: "23928"
  selfLink: /api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx
  uid: b68a2143-62fd-11ea-ab45-d67902848a80
After issuing a rolling restart:
kubectl rollout restart deployment/nginx-nginx-ingress-controller -n ingress-nginx
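To be sure the restart actually finished before re-checking the rendered config, you can watch the rollout (same deployment name and namespace as above):

kubectl rollout status deployment/nginx-nginx-ingress-controller -n ingress-nginx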
Grepping the nginx ingress controller pod to query the value now reveals:
kubectl exec -n ingress-nginx nginx-nginx-ingress-controller-7b9bff87b8-p4ppw -- cat nginx.conf | grep client_max_body_size
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 21m;
It doesn't matter where I try to change it, globally in the ConfigMap or on the specific Ingress route: the value above never changes.
I am able to fix it by forcing a redeploy of the nginx-ingress controller after modifying the ConfigMap:
kubectl patch deployment your_deployment -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$(date +%s)\"}}}}}"
Stamping the current timestamp into a pod-template label changes the pod spec, which forces the Deployment to roll out new pods; the fresh controller pods then render nginx.conf from the updated ConfigMap.
Update:
I had been experiencing the same problem, and none of the suggested solutions worked. After reading through countless blogs and docs that all offered the same fix, I found the likely source of the confusion: there are two different controllers with similar names. The link below is the documentation for the NGINX Inc. controller (nginxinc/kubernetes-ingress), whose ConfigMap key is "client-max-body-size", while the community controller (kubernetes/ingress-nginx) uses "proxy-body-size". Make sure the docs you are reading match the controller you are actually running:
https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/configmap-resource/
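If you're not sure which controller a cluster is running, one quick check is the controller image. This is a sketch; the namespace is an assumption for your install:

kubectl get pods -n ingress-nginx -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}'
# kubernetes/ingress-nginx images look like quay.io/kubernetes-ingress-controller/nginx-ingress-controller
# or registry.k8s.io/ingress-nginx/controller; NGINX Inc. images look like nginx/nginx-ingress.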