Question
I need to expose an async REST API for C++ clients that internally uses boost::beast for sending REST requests / receiving responses.
The starting point is the http_client_async.cpp example.
The client will pass a callback function into this async API, which needs to be called at the end of the REST operation from the on_read() handler (http_client_async.cpp), passing the full response back to the caller.
How can I achieve this?
Answer 1:
But is there any way to invoke the _callback through Asio's io_context? I would like to call this callback asynchronously, since the callback, which is provided by the user, could block and thus block the io_context's thread as well, similar to the way the other handlers like on_read(), on_write() etc. are scheduled on the io_context?
Yes. What you're after is the async_result protocol. I have some examples of that in other answers (e.g. How can I get a future from boost::asio::post?).
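To make the protocol concrete before applying it to Beast, here is a minimal sketch of the same idea on a trivial operation. The async_add operation is made up purely for illustration, and the sketch relies on the legacy async_result/completion_handler_type form of the protocol, which the Boost versions this answer targets still support:

    #include <boost/asio.hpp>
    #include <iostream>
    #include <type_traits>
    #include <utility>

    namespace net = boost::asio;

    // Toy initiating function: completes with the signature void(error_code, int)
    template <typename Token>
    auto async_add(net::io_context& ioc, int a, int b, Token&& token) {
        using result_type  = net::async_result<std::decay_t<Token>, void(boost::system::error_code, int)>;
        using handler_type = typename result_type::completion_handler_type;

        handler_type handler(std::forward<Token>(token));
        result_type  result(handler);

        // "Do the work" and complete through the io_context, never inline
        net::post(ioc, [h = std::move(handler), a, b]() mutable {
            h(boost::system::error_code{}, a + b);
        });

        return result.get(); // std::future<int>, int, or void, depending on the token
    }

    int main() {
        net::io_context ioc;

        auto fut = async_add(ioc, 1, 2, net::use_future); // returns std::future<int>
        async_add(ioc, 3, 4, [](boost::system::error_code, int sum) {
            std::cout << "callback: " << sum << "\n";
        });

        ioc.run();
        std::cout << "future: " << fut.get() << "\n";
    }

The async_http_request initiating function below has exactly this shape.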
Here are the building blocks:
Store a handler
In your "session" (let's rename it http_request_op
and hide it in some detail namespace), you want to remember a completion handler.
Don't worry, nobody has to come up with the such a handler. We will add an initiaing function async_http_request
that will make it for you.
The end-user might use a future or a coroutine (yield_context). Of course, they can supply a plain vanilla callback if they prefer.
using Response = http::response<http::string_body>;

template <typename Handler>
class http_request_op : public std::enable_shared_from_this<http_request_op<Handler> > {
    // ...
    Response res_;
    Handler  handler_;
    // ...

  public:
    template <typename Executor>
    explicit http_request_op(Executor ex, Handler handler)
        : resolver_(ex),
          stream_(ex),
          handler_(std::move(handler))
    { }
Now in your final step you invoke that handler_. To keep it simple I made the fail helper into a member function and called it complete:
void complete(beast::error_code ec, char const* what) {
    if (ec && what) {
        // TODO: A better idea would be to make a custom `Response` type that
        // has room for a "fail stage"
        res_.reason(what);
    }

    post(stream_.get_executor(), [this, ec, self = this->shared_from_this()] {
        handler_(ec, std::move(res_));
    });
}
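If the user-supplied callback could block (the concern raised in the quoted comment), one possible refinement, not part of the original answer, is to complete through the handler's associated executor and only fall back to the I/O executor. A sketch of that variation of complete():

    void complete(beast::error_code ec, char const* what) {
        if (ec && what)
            res_.reason(what);

        // Prefer the executor associated with the user's handler; the stream's
        // executor is only the fallback. A blocking handler bound to e.g. a
        // thread_pool then no longer ties up the I/O strand.
        auto ex = net::get_associated_executor(handler_, stream_.get_executor());
        net::post(ex, [this, ec, self = this->shared_from_this()] {
            handler_(ec, std::move(res_));
        });
    }

A caller would opt in by passing something like net::bind_executor(pool.get_executor(), callback) as the completion token, for a hypothetical net::thread_pool pool.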
All the places that check ec and used fail before now call complete with the same ec. In addition, in on_read we add an unconditional completion:
void on_read(beast::error_code ec, size_t /*bytes_transferred*/) {
    if (ec)
        return complete(ec, "read");

    stream_.socket().shutdown(tcp::socket::shutdown_both, ec);

    // unconditional complete here
    return complete(ec, "shutdown");
}
Initiating function (async_http_request)
template <typename Context, typename Token>
auto async_http_request(Context& ctx, beast::string_view host, beast::string_view port,
                        beast::string_view target, int version, Token&& token) {
    using result_type  = typename net::async_result<std::decay_t<Token>, void(beast::error_code, Response)>;
    using handler_type = typename result_type::completion_handler_type;

    handler_type handler(std::forward<Token>(token));
    result_type  result(handler);

    std::make_shared<detail::http_request_op<handler_type> >
        (make_strand(ctx), std::move(handler))
        ->start(host, port, target, version);

    return result.get();
}
You see this creates an async result, which crafts a "handler" from the token passed, kicks off the http_request_op and returns the async result. What is returned depends on what token is passed. See the usages:
Usage
I'll show various ways in which end-users can choose to use this async_http_request initiating function:
Using a future
auto future = async_http_request(ioc.get_executor(), host, port, target, version, net::use_future);
ioc.run();
std::cout << future.get() << "\n";
The return type is std::future<Response>. The creation of the promise and setting the return value/exception information is magically handled by Asio.
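On the caller's side, a failed request then surfaces as an exception from future.get(), the same pattern used in the full demo further down:

    try {
        Response res = future.get(); // rethrows boost::system::system_error if an error was reported
        std::cout << res.reason() << "\n";
    } catch (boost::system::system_error const& se) {
        std::cout << "request failed: " << se.code().message() << "\n";
    }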
Using a coroutine/yield context:
net::spawn(ioc, [&ioc, args](net::yield_context yield) {
    try {
        auto host   = args[0];
        auto port   = args[1];
        auto target = args[2];
        int version = args[3] == "1.0" ? 10 : 11;

        Response res = async_http_request(
            ioc,
            host, port, target, version,
            yield);

        std::cout << res << std::endl;
    } catch (boost::system::system_error const& se) {
        // no way to get at response here
        std::cout << "There was an error: " << se.code().message() << std::endl;
    }
});

ioc.run();
The return type is just Response here. Note that exceptions are raised if an error condition is reported. Alternatively, pass an error_code variable:
beast::error_code ec;
Response res = async_http_request(
    ioc,
    host, port, target, version,
    yield[ec]);

std::cout << ec.message() << "\n" << res << std::endl;
Still using a callback
/*void*/ async_http_request(ioc, host, port, target, version,
    [](beast::error_code ec, Response const& res) {
        std::cout << ec.message() << "\n" << res << "\n";
    });
The return value ends up being simply void.
Full Demo Code
No live demo, because no online compiler supports network requests and the code also exceeds compilation limits (e.g. here).
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/use_future.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/version.hpp>
#include <iostream>
#include <memory>
namespace beast = boost::beast;
namespace http = beast::http;
namespace net = boost::asio;
using tcp = boost::asio::ip::tcp;
using Response = http::response<http::string_body>;
namespace detail {
    template <typename Handler>
    class http_request_op : public std::enable_shared_from_this<http_request_op<Handler> > {
        tcp::resolver resolver_;
        beast::tcp_stream stream_;
        beast::flat_buffer buffer_;
        http::request<http::empty_body> req_;
        Response res_;
        Handler handler_;

        template <typename F>
        auto bind(F ptmf) { return beast::bind_front_handler(ptmf, this->shared_from_this()); }

        void complete(beast::error_code ec, char const* what) {
            if (ec && what) {
                // TODO: A better idea would be to make a custom `Response` type that
                // has room for a "fail stage"
                res_.reason(what);
            }

            post(stream_.get_executor(), [this, ec, self = this->shared_from_this()] {
                handler_(ec, std::move(res_));
            });
        }

      public:
        template <typename Executor>
        explicit http_request_op(Executor ex, Handler handler)
            : resolver_(ex),
              stream_(ex),
              handler_(std::move(handler))
        { }

        void start(beast::string_view host, beast::string_view port, beast::string_view target, int version) {
            req_.version(version);
            req_.method(http::verb::get);
            req_.target(target);
            req_.set(http::field::host, host);
            req_.set(http::field::user_agent, BOOST_BEAST_VERSION_STRING);

            resolver_.async_resolve(host.to_string(), port.to_string(),
                bind_executor(stream_.get_executor(), bind(&http_request_op::on_resolve)));
        }

      private:
        void on_resolve(beast::error_code ec, tcp::resolver::results_type results) {
            if (ec)
                return complete(ec, "resolve");

            stream_.expires_after(std::chrono::seconds(30));
            stream_.async_connect(results, bind(&http_request_op::on_connect));
        }

        void on_connect(beast::error_code const& ec, tcp::endpoint const&) {
            if (ec)
                return complete(ec, "connect");

            stream_.expires_after(std::chrono::seconds(30));
            http::async_write(stream_, req_, bind(&http_request_op::on_write));
        }

        void on_read(beast::error_code ec, size_t /*bytes_transferred*/) {
            if (ec)
                return complete(ec, "read");

            stream_.socket().shutdown(tcp::socket::shutdown_both, ec);

            // unconditional complete here
            return complete(ec, "shutdown");
        }

        void on_write(beast::error_code ec, size_t /*bytes_transferred*/) {
            if (ec)
                return complete(ec, "write");

            http::async_read(stream_, buffer_, res_, bind(&http_request_op::on_read));
        }
    };
}
template <typename Context, typename Token>
auto async_http_request(Context& ctx, beast::string_view host, beast::string_view port,
                        beast::string_view target, int version, Token&& token) {
    using result_type  = typename net::async_result<std::decay_t<Token>, void(beast::error_code, Response)>;
    using handler_type = typename result_type::completion_handler_type;

    handler_type handler(std::forward<Token>(token));
    result_type  result(handler);

    std::make_shared<detail::http_request_op<handler_type> >
        (make_strand(ctx), std::move(handler))
        ->start(host, port, target, version);

    return result.get();
}
int main(int argc, char** argv) {
    std::vector<beast::string_view> args{argv + 1, argv + argc};
    if (args.size() == 3) args.push_back("1.1");
    if (args.size() != 4) {
        std::cerr << "Usage: http-client-async <host> <port> <target> [<HTTP "
                     "version: 1.0 or 1.1(default)>]\n"
                  << "Example:\n"
                  << " http-client-async www.example.com 80 /\n"
                  << " http-client-async www.example.com 80 / 1.0\n";
        return 255;
    }
    auto host = args[0];
    auto port = args[1];
    auto target = args[2];
    int version = args[3] == "1.0" ? 10 : 11;

    net::io_context ioc;

    net::spawn(ioc, [=, &ioc](net::yield_context yield) {
        try {
            Response res = async_http_request(
                ioc,
                host, port, target, version,
                yield);
            std::cout << "From coro (try/catch): " << res.reason() << std::endl;
        } catch (boost::system::system_error const& se) {
            // no way to get at response here
            std::cout << "coro exception: " << se.code().message() << std::endl;
        }
    });

    net::spawn(ioc, [=, &ioc](net::yield_context yield) {
        beast::error_code ec;
        Response res = async_http_request(
            ioc,
            host, port, target, version,
            yield[ec]);
        std::cout << "From coro: " << ec.message() << ", " << res.reason() << "\n";
    });

    /*void*/ async_http_request(ioc, host, port, target, version,
        [](beast::error_code ec, Response const& res) {
            std::cout << "From callback: " << ec.message() << ", " << res.reason() << "\n";
        });

    auto future = async_http_request(ioc, host, port, target, version, net::use_future);

    ioc.run();

    try {
        std::cout << "From future: " << future.get().reason() << "\n";
    } catch (boost::system::system_error const& se) {
        std::cout << "future exception: " << se.code().message() << std::endl;
    }
}
Output for successful and failing requests:
$ ./sotest www.example.com 80 / 1.1
From callback: Success, OK
From coro: Success, OK
From coro (try/catch): OK
From future: OK
$ ./sotest www.example.com 81 / 1.1
From callback: The socket was closed due to a timeout, connect
coro exception: The socket was closed due to a timeout
From coro: The socket was closed due to a timeout, connect
From future: future exception: The socket was closed due to a timeout
$ ./sotest www.example.cough 80 / 1.1
From callback: Host not found (authoritative), resolve
coro exception: Host not found (authoritative)
From coro: Host not found (authoritative), resolve
From future: future exception: Host not found (authoritative)
$ ./sotest www.example.com rhubarb / 1.1
From callback: Service not found, resolve
coro exception: Service not found
From coro: Service not found, resolve
From future: future exception: Service not found
Note that the timeout example of course runs in ~30s total, because everything runs asynchronously.
Answer 2:
Referencing the http_client_async.cpp example you call out:
Modify the session constructor to take a callback that accepts an HTTP status integer and a body string.
typedef std::function<void(unsigned int, const std::string&)> CALLBACK;

CALLBACK callback_;

explicit
session(net::io_context& ioc, CALLBACK& callback)
    : resolver_(net::make_strand(ioc))
    , stream_(net::make_strand(ioc))
    , callback_(callback)
{
}
Modify session::on_read to invoke the callback.
void
on_read(
    beast::error_code ec,
    std::size_t bytes_transferred)
{
    if(ec)
    {
        callback_(0, "");
    }
    else
    {
        callback_(res_.result_int(), res_.body());
    }
}
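For completeness, a minimal usage sketch, assuming the rest of http_client_async.cpp (includes, namespace aliases, session::run, plus <functional> for std::function) is left unchanged; the lambda body and the example.com request are illustrative only:

    int main() {
        net::io_context ioc;

        // User-supplied callback that receives the final status and body.
        CALLBACK cb = [](unsigned int status, const std::string& body) {
            std::cout << "status: " << status << "\n" << body << "\n";
        };

        // The callback is stored by the session and invoked from on_read().
        std::make_shared<session>(ioc, cb)->run("www.example.com", "80", "/", 11);

        ioc.run();
    }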
Source: https://stackoverflow.com/questions/62169701/how-do-i-return-the-response-back-to-caller-asynchronously-using-a-final-callbac