The goal is to introduce a transport- and application-layer protocol that offers better latency and network throughput. Currently, the applic…
gRPC is not faster than REST over HTTP/2 by default, but it gives you the tools to make it faster. There are also things that would be difficult or impossible to do with REST, such as first-class bidirectional streaming.
As nfirvine said, you will see most of your performance improvement just from using Protobuf. While you could use proto with REST, protobuf is very nicely integrated with gRPC. Technically, you could use JSON with gRPC, but most people don't want to pay the performance cost after getting used to protos.
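To make that integration concrete, here is a minimal sketch of a .proto file (all the names in it are made up for illustration); protoc compiles this into client/server stubs plus the binary serialization code for your language:

```proto
// greeter.proto -- hypothetical example
syntax = "proto3";

package demo;

// Field numbers (the "= 1"), not field names, are what go on the
// wire, which is a big part of why the encoding is so compact.
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

service Greeter {
  // A unary RPC. gRPC also supports client-, server-, and
  // bidirectional-streaming methods.
  rpc SayHello (HelloRequest) returns (HelloReply);
}
```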
I am not an expert on this by any means and I have no data to back any of this up.
The "binary feature" you're talking about is the binary representation of HTTP/2 frames. The content itself (a JSON payload) will still be UTF-8. You can compress that JSON and set Content-Encoding: gzip, just like in HTTP/1.
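A minimal sketch of that HTTP/1-style approach in Go (the endpoint and payload are made up, and production code should also check the request's Accept-Encoding before compressing):

```go
package main

import (
	"compress/gzip"
	"encoding/json"
	"net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
	// The body is ordinary UTF-8 JSON; only the transfer encoding changes.
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Content-Encoding", "gzip")

	gz := gzip.NewWriter(w)
	defer gz.Close()
	json.NewEncoder(gz).Encode(map[string]string{"greeting": "hello"})
}

func main() {
	http.HandleFunc("/hello", hello)
	http.ListenAndServe(":8080", nil)
}
```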
But gRPC does gzip compression as well. So really, we're talking about the difference between gzip-compressed JSON vs gzip-compressed protobufs.
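In grpc-go, for instance, turning gzip on is a blank import plus a call option; a minimal sketch (the package name and address are placeholders):

```go
package example

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// The blank import registers gRPC's gzip compressor under the name "gzip".
	_ "google.golang.org/grpc/encoding/gzip"
)

// dial opens a connection that gzip-compresses every request sent on it.
func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Servers that support gzip will compress their responses in kind.
		grpc.WithDefaultCallOptions(grpc.UseCompressor("gzip")),
	)
}
```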
As you can imagine, compressed protobufs should still beat compressed JSON on size; if they didn't, protobufs would have failed at their goal.
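On raw size the gap is easy to demonstrate. A toy sketch in Go, using the canonical example from the protobuf encoding docs (the JSON shape is my own stand-in):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// gzipLen returns the gzip-compressed size of b.
func gzipLen(b []byte) int {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write(b)
	zw.Close()
	return buf.Len()
}

func main() {
	// Classic example from the protobuf docs: a message whose field 1
	// (a varint) is set to 150. On the wire that is three bytes:
	// 0x08 = tag (field 1, wire type 0), 0x96 0x01 = varint 150.
	pb := []byte{0x08, 0x96, 0x01}
	js := []byte(`{"id":150}`)

	fmt.Printf("raw:    json=%d bytes, proto=%d bytes\n", len(js), len(pb))
	// gzip adds ~18 bytes of header/trailer, so payloads this tiny
	// actually grow when compressed; the proto still stays smaller.
	fmt.Printf("gzip'd: json=%d bytes, proto=%d bytes\n", gzipLen(js), gzipLen(pb))
}
```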
Besides the ubiquity of JSON vs protobufs, the only downside I can see to using protobufs is that you need the .proto file to decode them, say in a tcpdump situation (although protoc --decode_raw can at least recover the raw field numbers and values without the schema).