I'd like to send a local REST request in a flask app, like this:

from flask import Flask, url_for, request
import requests

app = Flask(__name__)

@app.route('/')
def index():
    # make a second request back into this same app
    return requests.post(url_for('index', _external=True)).text
I have had the same issue with a POST method. In my case, the POST handler wasn't actually reading the request body, which is why the request failed with:

    return _socket.socket.send(self._sock, data, flags)
urllib3.exceptions.ProtocolError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))

Adding this print to the view did the trick:

if request.method == 'POST':
    print(len(request.data))
    return 'dummy'
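For reference, that fix can be packaged into a minimal, self-contained app (the route name and response text here are illustrative, not from the question) and exercised with Flask's built-in test client:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    # Reading request.data consumes the request body, so the client
    # can finish sending it instead of hitting a broken pipe.
    return 'received %d bytes' % len(request.data)

# The test client dispatches to the view in-process, no server needed.
client = app.test_client()
resp = client.post('/upload', data=b'hello')
print(resp.get_data(as_text=True))  # → received 5 bytes
```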
There are several things at play here, and I'll try to address them one at a time.
First, you're probably using the toy development server. This server has many limitations; chief among them is that it can only handle one request at a time. When you make a second request while the first is still being handled, you lock up your application: the requests.post() call is waiting for Flask to respond, but Flask itself is waiting for post() to return! The solution to this particular problem is to run your WSGI application in a multithreaded or multiprocess environment. I prefer http://twistedmatrix.com/trac/wiki/TwistedWeb for this, but there are several other options.
With that out of the way... This is an antipattern. You almost certainly don't want to incur all of the overhead of an HTTP request just to share some functionality between two views. The correct thing to do is to refactor so that the shared work lives in a separate function that both views call. I can't really refactor your particular example, because what you have is very simple and doesn't really even merit two views. What did you want to build, exactly?
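To illustrate the refactoring (all names here are invented for the sketch, since the original example is too bare to refactor): put the shared work in a plain function and have both views call it directly, instead of one view issuing an HTTP request to the other.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Shared logic lives in a plain function, not behind an HTTP request.
def compute_greeting(name):
    return {'greeting': 'Hello, %s!' % name}

@app.route('/api/greet/<name>')
def api_greet(name):
    # The API view is a thin wrapper around the shared function.
    return jsonify(compute_greeting(name))

@app.route('/page/greet/<name>')
def page_greet(name):
    # The second view calls the same function directly -- no
    # requests.post() back into our own server, so no deadlock.
    data = compute_greeting(name)
    return '<h1>%s</h1>' % data['greeting']

client = app.test_client()
print(client.get('/api/greet/World').get_json())
print(client.get('/page/greet/World').get_data(as_text=True))
```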
Edit: A comment asks whether multithreaded mode in the toy stdlib server would be sufficient to keep the deadlock from occurring. I'm going to say "maybe." Yes, if there aren't any dependencies keeping both threads from making progress, and both threads make sufficient progress to finish their networking tasks, then the requests will complete correctly. However, determining whether two threads will deadlock each other is undecidable (proof omitted as obtuse) and I'm not willing to say for sure that the stdlib server can do it right.
Run your flask app under a proper WSGI server capable of handling concurrent requests (perhaps gunicorn or uWSGI) and it'll work. While developing, enable threads in the Flask-supplied server with:
app.run(threaded=True)
but note that the Flask server is not recommended for production use. As of Flask 1.0, threaded is enabled by default, and you'd want to use the flask command on the command line to run your app anyway.
What happens is that by using requests you are making a second request to your Flask app, but since the app is still busy processing the first request, it won't respond to the second until it is done with the first.
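One way around this while developing, sketched here with a made-up /ping route: if you only need to exercise your own endpoints, Flask's test client calls the view function in-process, so no second HTTP request ever competes for the single server thread.

```python
from flask import Flask

app = Flask(__name__)

@app.route('/ping')
def ping():
    return 'pong'

# test_client() dispatches to the view directly -- no socket, no
# second request stuck behind the first in the dev server's queue.
with app.test_client() as client:
    print(client.get('/ping').get_data(as_text=True))  # → pong
```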
Incidentally, under Python 3 the socketserver implementation handles the disconnect more gracefully and continues to serve rather than crash.
The bug that caused the crash was fixed in version 0.12, released on December 21st, 2016. Yay! This is an important fix that many have been waiting for.
From the Flask changelog:
- Revert a behavior change that made the dev server crash instead of returning an Internal Server Error (pull request #2006).