Biggest differences of Thrift vs Protocol Buffers?

無奈伤痛 2020-11-30 15:59

What are the biggest pros and cons of Apache Thrift vs Google's Protocol Buffers?

15 answers
  • 2020-11-30 16:32

    Protocol Buffers seems to have a more compact representation, but that's only an impression I get from reading the Thrift whitepaper. In their own words:

    We decided against some extreme storage optimizations (i.e. packing small integers into ASCII or using a 7-bit continuation format) for the sake of simplicity and clarity in the code. These alterations can easily be made if and when we encounter a performance-critical use case that demands them.

    Also, it may just be my impression, but Protocol Buffers seems to have richer abstractions around struct versioning. Thrift does have some versioning support, but it takes a bit of effort to make it happen.
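    To make the versioning point concrete, here is a minimal sketch of how Protocol Buffers handles schema evolution through numbered field tags. The message name and fields are invented for illustration, and the two definitions stand for successive revisions of the same .proto file, not one file:

        // Revision 1 of the schema.
        message UserProfile {
          string name = 1;
          int32  id   = 2;
        }

        // Revision 2 adds a field. Old readers skip the unknown tag 3;
        // new readers see the default value when the field is absent.
        // Tags 1 and 2 keep their meaning and are never reused.
        message UserProfile {
          string name  = 1;
          int32  id    = 2;
          string email = 3;
        }

    Thrift's numeric field IDs work the same way in principle, which matches the point above that versioning is possible there but takes a bit more care.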

  • 2020-11-30 16:33

    I think most of these points miss the basic fact that Thrift is an RPC framework, which happens to be able to serialize data using a variety of methods (binary, XML, etc.).

    Protocol Buffers is designed purely for serialization; it's not a full framework like Thrift.
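    To illustrate the distinction, here is a minimal sketch of a Thrift IDL file (the struct, service, and method names are invented). The same file defines both the wire format and the RPC interface, and the Thrift compiler emits client and server stubs for each target language; a plain .proto file, by contrast, only gets you the message definitions.

        // Data type and RPC interface live together in one Thrift IDL file.
        struct Ping {
          1: string message
        }

        service PingService {
          // The Thrift compiler generates client and server stubs for this call.
          Ping echo(1: Ping request)
        }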

  • 2020-11-30 16:37

    There are some excellent points here, and I'm going to add another one in case someone's path crosses here.

    Thrift gives you the option to choose between the thrift-binary and thrift-compact (de)serializers. Thrift-binary has excellent performance but a bigger packet size, while thrift-compact gives good compression but needs more processing power. This is handy because you can switch between the two modes as easily as changing a line of code (you can even make it configurable). So if you are not sure how far your application should be optimized for packet size versus processing power, Thrift can be an interesting choice.
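    As a rough sketch of what that one-line switch looks like with the Python thrift library (the LogEntry class and the my_gen_py module are placeholders for whatever the Thrift compiler generated from your own IDL):

        from thrift.protocol import TBinaryProtocol, TCompactProtocol
        from thrift.transport import TTransport

        # Placeholder for a struct class generated by the Thrift compiler.
        from my_gen_py.ttypes import LogEntry

        entry = LogEntry(message="hello")
        transport = TTransport.TMemoryBuffer()

        # Swap this single line to trade packet size against CPU time:
        protocol = TBinaryProtocol.TBinaryProtocol(transport)      # larger payload, cheaper to encode
        # protocol = TCompactProtocol.TCompactProtocol(transport)  # smaller payload, more CPU

        entry.write(protocol)
        payload = transport.getvalue()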

    PS: See this excellent benchmark project by thekvs, which compares many serializers including thrift-binary, thrift-compact, and protobuf: https://github.com/thekvs/cpp-serializers

    PPS: There is another serializer named YAS which offers this option too, but it is schema-less; see the link above.

  • 2020-11-30 16:43

    Another important difference is the set of languages supported by default.

    • Protocol Buffers: Java, Android Java, C++, Python, Ruby, C#, Go, Objective-C, Node.js
    • Thrift: Java, C++, Python, Ruby, C#, Go, Objective-C, JavaScript, Node.js, Erlang, PHP, Perl, Haskell, Smalltalk, OCaml, Delphi, D, Haxe

    Both could be extended to other platforms, but these are the language bindings available out of the box.

  • 2020-11-30 16:44

    For one, protobuf isn't a full RPC implementation. It requires something like gRPC to go with it.
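    As a rough illustration (service and message names invented): a .proto file can declare a service, but protoc only turns that into client/server code once a gRPC (or other RPC) plugin is added; plain protobuf just gives you the messages.

        syntax = "proto3";

        // Plain Protocol Buffers generates code for these messages only.
        message EchoRequest { string text = 1; }
        message EchoReply   { string text = 1; }

        // Stubs for this service come from the gRPC plugin, not from protobuf itself.
        service EchoService {
          rpc Echo (EchoRequest) returns (EchoReply);
        }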

    gRPC is very slow compared to Thrift, at least in this benchmark:

    http://szelei.me/rpc-benchmark-part1/

  • 2020-11-30 16:45

    Protocol Buffers is FASTER.
    There is a nice benchmark here:
    http://code.google.com/p/thrift-protobuf-compare/wiki/Benchmarking

    You might also want to look into Avro, as Avro is even faster.
    Microsoft has a package here:
    http://www.nuget.org/packages/Microsoft.Hadoop.Avro

    By the way, the fastest I've ever seen is Cap'n Proto;
    a C# implementation can be found in Marc Gravell's GitHub repository.
