I have run into a bug in my webapp, which had been working fine for more than a year, right after I switched to uWSGI on a new instance to speed things up a bit.
My app has a "quick add" modal window which allows the user to add a new customer to the database and go straight to the shopping cart for that user. The modal makes a POST request to /customers/quick_create/, which redirects to /cart/10000, where 10000 is the ID of the newly created customer. Then the fun starts.
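For context, this is roughly how the quick-create view works; the actual code is not shown here, so the Flask-SQLAlchemy setup, the Customer columns, and the "name" form field are assumptions for illustration only:

from flask import Flask, request, redirect
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/example.db'  # placeholder DB
db = SQLAlchemy(app)

class Customer(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120))  # hypothetical column

@app.route('/customers/quick_create/', methods=['POST'])
def quick_create():
    # create the customer and commit so the new ID gets assigned
    customer = Customer(name=request.form.get('name', ''))
    db.session.add(customer)
    db.session.commit()
    # 302 redirect to the cart of the freshly created customer
    return redirect('/cart/%d/' % customer.id)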
There is a check on that /cart route to verify that a customer with the given ID exists, and I noticed that the check gets triggered when that request is made: the user is redirected to the fallback link, not to the actual cart. This is the code that performs the check:
q = Customer.query.filter_by(id=cust).first()
if q is None:
    return redirect('/customers/')
The check is there because someone might reach that stage by other routes, not only via that modal. Sometimes the user ends up at the fallback URL, sometimes at /cart. In all cases the customer is actually created; I can open it later and see it in the database, so for some reason this SQL query does not find a customer with that ID.
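Continuing the sketch above, the check sits inside the cart view roughly like this (the route decorator, variable names, and template are assumptions; only the three lines of the check are taken from the snippet above):

from flask import render_template

@app.route('/cart/<int:cust>/')
def cart(cust):
    q = Customer.query.filter_by(id=cust).first()
    if q is None:
        # fallback when no customer with that ID is found
        return redirect('/customers/')
    return render_template('cart.html', customer=q)  # hypothetical template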
I have checked the uWSGI logs, and this is a short excerpt:
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py:324: Warning: Data truncated for column 'refill_date' at row 1
cursor.execute(statement, parameters)
[pid: 5197|app: 0|req: 1/1] 123.123.123.123 () {54 vars in 1285 bytes} [Tue Feb 3 14:34:59 2015] POST /customers/quick_create/ => generated 237 bytes in 43 msecs (HTTP/1.1 302) 4 headers in 421 bytes (2 switches on core 0)
Tue Feb 3 14:34:59 2015 - ...The work of process 5197 is done. Seeya!
[pid: 5200|app: 0|req: 1/2] 123.123.123.123 () {48 vars in 1118 bytes} [Tue Feb 3 14:35:00 2015] GET /cart/16198/ => generated 229 bytes in 42 msecs (HTTP/1.1 302) 4 headers in 417 bytes (1 switches on core 0)
Tue Feb 3 14:35:00 2015 - ...The work of process 5200 is done. Seeya!
Tue Feb 3 14:35:00 2015 - worker 1 killed successfully (pid: 5197)
Tue Feb 3 14:35:00 2015 - Respawned uWSGI worker 1 (new pid: 5218)
Tue Feb 3 14:35:00 2015 - worker 4 killed successfully (pid: 5200)
Tue Feb 3 14:35:00 2015 - Respawned uWSGI worker 4 (new pid: 5219)
Tue Feb 3 14:35:00 2015 - mapping worker 4 to CPUs: 0
Tue Feb 3 14:35:00 2015 - mapping worker 1 to CPUs: 0
Tue Feb 3 14:35:03 2015 - WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x1dd1630 pid: 5219 (default app)
Tue Feb 3 14:35:03 2015 - mounting uwsgi on /
Tue Feb 3 14:35:03 2015 - WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x1dd1630 pid: 5218 (default app)
Tue Feb 3 14:35:03 2015 - mounting uwsgi on /
[pid: 5199|app: 0|req: 1/3] 123.123.123.123 () {48 vars in 1110 bytes} [Tue Feb 3 14:35:00 2015] GET /customers/ => generated 3704543 bytes in 3402 msecs (HTTP/1.1 200) 3 headers in 370 bytes (18 switches on core 0)
Tue Feb 3 14:35:03 2015 - ...The work of process 5199 is done. Seeya!
Tue Feb 3 14:35:04 2015 - worker 3 killed successfully (pid: 5199)
Tue Feb 3 14:35:04 2015 - Respawned uWSGI worker 3 (new pid: 5226)
Tue Feb 3 14:35:04 2015 - mapping worker 3 to CPUs: 0
Tue Feb 3 14:35:05 2015 - WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x1dd1630 pid: 5226 (default app)
Tue Feb 3 14:35:05 2015 - mounting uwsgi on /
This is my uWSGI config:
<uwsgi>
<plugin>python</plugin>
<socket>/run/uwsgi/app/example.com/example.com.socket</socket>
<pythonpath>/srv/www/example.com/application/</pythonpath>
<app mountpoint="/">
<script>uwsgi</script>
</app>
<master/>
<callable>app</callable>
<module>app</module>
<processes>4</processes>
<harakiri>60</harakiri>
<reload-mercy>8</reload-mercy>
<cpu-affinity>1</cpu-affinity>
<stats>/tmp/stats.socket</stats>
<max-requests>2000</max-requests>
<limit-as>512</limit-as>
<reload-on-as>256</reload-on-as>
<reload-on-rss>192</reload-on-rss>
<no-orphans/>
<vacuum/>
<lazy-apps/>
</uwsgi>
It really seems strange to me that uWSGI kills the worker right after the request. Even when the request is only for some static files, there is a new PID for each request.
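Since the config enables a stats server on /tmp/stats.socket, one way to watch the worker table while reproducing this is to poll that socket. A minimal sketch, assuming it speaks the standard uWSGI JSON stats format:

import json
import socket

def read_uwsgi_stats(path='/tmp/stats.socket'):
    # connect to the uWSGI stats socket and read the JSON document it pushes
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    return json.loads(b''.join(chunks).decode('utf-8'))

if __name__ == '__main__':
    for worker in read_uwsgi_stats().get('workers', []):
        # the pid changes on every respawn, so respawns are easy to spot here
        print('worker %s pid=%s requests=%s status=%s' % (
            worker['id'], worker['pid'], worker['requests'], worker['status']))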
This is not happening on the mod_wsgi/Apache instance. The CPU spikes to 100% from time to time, but the average load is fine (0.25, 0.22, 0.15 at the time of writing), and RAM usage is about 300 of 900 MB.
Can anyone point me in the right direction? Thanks.
Ran into this issue: "uwsgi cheaper killing workers processing requests". The bug was fixed and merged in August 2016:
https://github.com/unbit/uwsgi/issues/1288
https://github.com/unbit/uwsgi/commit/f5dd0b855d0d21be62534dc98375fae2cd7c13f0
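If it helps anyone else, a quick way to check whether the deployed uWSGI already contains that fix is to print the version string from inside a worker (the uwsgi module is only importable when running under uWSGI) and compare it against a release made after August 2016:

import uwsgi  # only importable inside a uWSGI process
print(uwsgi.version)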
Source: https://stackoverflow.com/questions/28307401/uwsgi-killing-workers-too-fast