Server-sent events stopped working after enabling SSL on proxy

Submitted by 旧街凉风 on 2021-02-06 09:27:09

Question


I made a web project based on Tomcat, with Nginx in front of it.
It took a lot of work to get it running without errors.
However, when I added SSL to nginx, server-sent events stopped working.
If I access the backend server directly it works, so the problem is somewhere in nginx.
Has anyone run into this problem?
Here are the relevant parts of the configuration.

My nginx.conf (I'm not using sites-enabled yet, and my app is configured right here as well; the basic settings are at the end of the conf). /SecurConfig/api/tutorial/listen is the source of the events.

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    root /data/;

    server {
        listen 80;
        #   server_name ajaxdemo.in.ua;
        #   proxy_set_header Host ajaxdemo.in.ua;
        location / {
            rewrite ^(.*)$ https://ajaxdemo.in.ua$1 permanent;
        }
    }

    server {
        #listen 80;
        listen 443 default ssl;

        #ssl on;
        ssl_certificate     /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        location / {
            root /data/www;

            add_header 'Access-Control-Allow-Origin' *;
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET';

            if ($http_cookie ~* "jsessionid=([^;]+)(?:;|$)") {
                set $co "jsessionid=$1";
            }
            #proxy_set_header Cookie "$co";

            proxy_pass http://127.0.0.1:1666/SecurConfig/;
            #proxy_pass http://88.81.229.142:1666/SecurConfig/;
            add_before_body /header.html;
            add_after_body /footer.html;
        }

        location /SecurConfig/api/tutorial/listen {

            add_header 'Access-Control-Allow-Origin' *;
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET';

            ## Server sent events settings
            proxy_set_header Connection '';
            proxy_http_version 1.1;
            chunked_transfer_encoding off;

            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;

            proxy_buffering on;
            proxy_buffer_size 8k;
            #proxy_cache off;
            ##

            if ($http_cookie ~* "jsessionid=([^;]+)(?:;|$)") {
                set $co "jsessionid=$1";
            }
            #proxy_set_header Cookie "$co";
            proxy_pass http://127.0.0.1:1666/;
            #proxy_pass http://88.81.229.142:1666/;
        }

        location /SecurConfig/ {
            root /data/www;

            add_header 'Access-Control-Allow-Origin' *;
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET';

            if ($http_cookie ~* "jsessionid=([^;]+)(?:;|$)") {
                set $co "jsessionid=$1";
            }
            #proxy_set_header Cookie "$co";

            proxy_pass http://127.0.0.1:1666/;
            #proxy_pass http://88.81.229.142:1666/;
            add_before_body /header.html;
            add_after_body /footer.html;
        }

        location ~ \.css$ {
            root /data/css/;
        }

        location /header.html {
            root /data/www;
        }
        location /footer.html {
            root /data/www;
        }

        location ~ \.(gif|jpg|png|jpeg)$ {
            root /data/images;
        }
    }

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 100m;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    #   include /etc/nginx/sites-enabled/*;
}

There are no error entries in the nginx error log, but the access log does record requests to /SecurConfig/api/tutorial/listen. Code 200 means "all is ok":

"GET /SecurConfig/api/tutorial/listen HTTP/1.1" 200 187 "https://ajaxdemo.in.ua/SecurConfig/api/tutorial/map/11111111" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0"

The Tomcat log shows access to /SecurConfig/api/tutorial/listen as usual (e.g. it checks security access, accepts it, and forwards the request to the controller).
If I open my page with the Chrome developer tools, I see this error:

GET https://ajaxdemo.in.ua/SecurConfig/api/tutorial/listen net::ERR_EMPTY_RESPONSE


UPD


Okay. While I was searching for information on the internet, I left my page with the SSE connection open, and after about 10 minutes I saw that my data had appeared just as it should. I commented out all the buffering-related settings
    #proxy_buffering on;
    #proxy_buffer_size 8k;
    #proxy_cache off;

And I also commented out these parameters

    #proxy_connect_timeout 300;
    #proxy_send_timeout 300;
    #proxy_read_timeout 300;

So these parameters went back to their defaults (about 20 seconds), and all my data appeared after roughly 20 seconds. So I set

    proxy_connect_timeout 2;
    proxy_send_timeout 2;
    proxy_read_timeout 2;

And the data appeared faster, but it still arrives in one piece (3-4 events at once); before enabling SSL the events showed up one by one.
I still need your help, and an explanation of where I went wrong.


UPD


Here is the configuration from my server when I "turn off" SSL, and SSE works:
SSL off - SSE works
port 80 redirecting to 443 (SSL) - SSE does not work

Answer 1:


For SSE to work properly, you must make sure nothing is getting cached or buffered: not in your script (e.g. in PHP we have the ob_flush();flush() idiom), not at your web server, and not at any intermediate proxies or firewalls.
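On a Tomcat backend the counterpart of that PHP idiom is flushing the response after every event, so nothing sits in the container's output buffer. A minimal sketch, assuming a plain servlet endpoint (the class name ListenServlet and the event payload are made up for illustration; this is not the OP's actual controller):

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical SSE endpoint: the important part is the flush after each event.
public class ListenServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");

        PrintWriter out = resp.getWriter();
        out.write("data: hello\n\n"); // one complete SSE event
        out.flush();                  // push it to the client immediately
    }
}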

You say you commented out all the nginx commands to do with buffering, but commenting out means it will use the defaults. E.g. the default for proxy_buffering is on. I would suggest explicitly specifying them to make sure all buffering and caching is switched off.

proxy_buffering off;
proxy_buffer_size 0;
proxy_cache off;

I would also consider explicitly setting the timeouts high, rather than commenting them out. The definition of "high" depends on your application. E.g. if it is always sending data every couple of seconds, the defaults will be fine. But if you are using SSE for irregular data, and there might sometimes be half an hour between messages, make sure the timeouts are more than half an hour.

UPDATE: Apparently (see the comments) adding response.addHeader("X-Accel-Buffering", "no"); (i.e. to the server-side process, not to the proxy config) fixes the problem. This makes sense, as that was added specifically for SSE and similar HTTP streaming, see the nginx documentation. It does imply the above Nginx configuration should also work (the OP has reported that it does not). But, on the other hand, using a header to disable buffering on a per-connection basis feels like a better solution anyway.
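In servlet terms that fix is one extra header on the hypothetical endpoint sketched above, set inside doGet before any event is written:

    // Per-connection switch telling nginx not to buffer this response,
    // independent of the proxy_buffering setting in the nginx config.
    resp.setContentType("text/event-stream");
    resp.addHeader("X-Accel-Buffering", "no");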




Answer 2:


UPDATE: this was my answer before the OP gathered more information (in particular that if he waited long enough the data arrived, i.e. that it was a buffering issue: see my other answer). I've decided to leave it here as it might offer useful troubleshooting ideas for someone else. (But if you disagree, leave a comment or flag it for deletion.)


When planning "Data Push Apps with HTML5 SSE", proxy servers unfortunately fell on the other side of where we drew the line; if you've read chapter 9 you'll know that a seemingly simple standard can still get very complicated. So I'm very interested to hear if and how you get this working.

The first thing that comes to mind is that you are using self-signed SSL certs. They won't work with SSE in Chrome; Ajax won't either. (See the bug report, but it was opened in 2011, so don't hold your breath.) However, you said it works with Firefox, so this is unlikely to be it.

The next thought is that you are using Access-Control-Allow-Origin:* and Access-Control-Allow-Credentials:true, so I assume you need CORS (i.e. your HTML page's origin and your SSE script's origin differ in some way) and that cookies are involved. Are you setting { withCredentials: true } as the 2nd parameter to your EventSource constructor in your JavaScript? Even if so, be aware that Access-Control-Allow-Credentials:true does not work with Access-Control-Allow-Origin:*: you cannot specify *, and instead have to say explicitly which origin is allowed.

If that is the problem, you can use your server-side script to generate the header dynamically, based on the client's origin. (The book shows code to do this in PHP.) If nginx can use variables from the user's request, you should be able to do it there too.
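For a Tomcat backend, a hedged sketch of doing it in the application rather than in nginx: a servlet filter that echoes the caller's Origin back when it is on a whitelist (the class name and the origin list are made up for illustration):

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical CORS filter: returns the request's own Origin instead of "*",
// which is required whenever Access-Control-Allow-Credentials is true.
public class CorsOriginFilter implements Filter {

    private static final Set<String> ALLOWED = new HashSet<>(Arrays.asList(
            "https://ajaxdemo.in.ua", "http://ajaxdemo.in.ua")); // example origins

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String origin = request.getHeader("Origin");
        if (origin != null && ALLOWED.contains(origin)) {
            response.setHeader("Access-Control-Allow-Origin", origin);
            response.setHeader("Access-Control-Allow-Credentials", "true");
        }
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}

If you emit these headers from Tomcat, drop the matching add_header lines from nginx so the browser does not see the same header twice.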

However, my understanding was that this would trip up with http too. I don't think it should just be an https problem. (If I'm wrong on that, let me know.)

(Oh, if your working http SSE requests were coming from http://example.com, and are still coming from http://example.com, even though the SSE request is now going to https://example.com, then everything makes sense - you did not get CORS failures before because the origin was the same; now you do have a CORS problem, and you are not handling it correctly.)

My third guess is that the browser is sending a preflight OPTIONS request, but only doing it for the https requests. (Preflight behaviour varies wildly from browser to browser, but it is not impossible that current versions of Chrome and Firefox behave the same way.) When you get an OPTIONS request you need to send back Access-Control-Allow-Headers: Last-Event-ID, Origin, X-Requested-With, Content-Type, Accept, Authorization. You also need to send back the Access-Control-Allow-Origin:* header.
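If you want to handle that preflight in the Tomcat application, a rough sketch of an OPTIONS handler to add to whatever servlet serves /SecurConfig/api/tutorial/listen (method only, inside an HttpServlet subclass):

    // Hypothetical preflight handler: advertises the headers the SSE request may carry.
    @Override
    protected void doOptions(HttpServletRequest req, HttpServletResponse resp) {
        resp.setHeader("Access-Control-Allow-Origin", "*"); // or echo the Origin when credentials are used
        resp.setHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
        resp.setHeader("Access-Control-Allow-Headers",
                "Last-Event-ID, Origin, X-Requested-With, Content-Type, Accept, Authorization");
        resp.setStatus(HttpServletResponse.SC_OK);
    }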

You can confirm or refute this third guess by packet sniffing to see what exactly is being sent back and forth. And if all the above ideas turned up nothing, you should do that anyway.




Answer 3:


Briefly: nginx checks whether the connection to the backend application is still alive, and that detection may not work correctly in the case of SSL, as described here:

http://mailman.nginx.org/pipermail/nginx/2013-March/038120.html



Source: https://stackoverflow.com/questions/27898622/server-sent-events-stopped-work-after-enabling-ssl-on-proxy
