Question
We are using Google Cloud Bigtable, accessing it from GCE instances via the Go client library. For some ReadRow queries we get the following error:
rpc error: code = 13 desc = "server closed the stream without sending trailers"
Notably, these failures are consistent: if we retry the same query (waiting ~15 minutes between attempts) we (almost?) always get the same error again. So it does not appear to be a transient error; it is probably related somehow to the data being fetched. Here is the specific query we are running:
row, err := tbl.ReadRow(ctx, <my-row-id>,
    bigtable.RowFilter(bigtable.ChainFilters(
        bigtable.FamilyFilter(<my-column-family>),
        bigtable.LatestNFilter(1))))
Could this just mean "you are trying to fetch too much"?
Answer 1:
From Engineering, regarding "too much": yes, in theory it could. Currently, if the client tries to read more than 256MB from a row at a time, the server kills the read with an error. That error should get passed through to the client, though it is possible the Go side (either the bigtable Go client or the grpc-go library) is dropping those error messages.
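If you want to see exactly what the client surfaces, one option is to inspect the gRPC status attached to the returned error. This is only an illustrative sketch, assuming a reasonably recent bigtable client and grpc-go; logReadError and rowKey are hypothetical names, not part of any library API.

    import (
        "context"
        "log"

        "cloud.google.com/go/bigtable"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // logReadError surfaces the gRPC status carried by a ReadRow error so
    // you can check whether the server-side detail survives the client stack.
    func logReadError(ctx context.Context, tbl *bigtable.Table, rowKey string) error {
        _, err := tbl.ReadRow(ctx, rowKey,
            bigtable.RowFilter(bigtable.LatestNFilter(1)))
        if err != nil {
            // Code 13 is codes.Internal, matching the error in the question.
            // Whether Message() includes the server's explanation depends on
            // the bigtable client and grpc-go versions in use.
            if st, ok := status.FromError(err); ok && st.Code() == codes.Internal {
                log.Printf("ReadRow failed: code=%s msg=%q", st.Code(), st.Message())
            }
        }
        return err
    }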
The current workaround for too much data in a single row is to read a few columns (or column families) at a time, so that the total amount read from the row in any single request stays under 256MB; a sketch follows below. We are working on relaxing that limit, but that fix is still at least a few weeks out.
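As a concrete illustration of that workaround, the sketch below reads one column family per request and merges the results into a single bigtable.Row. The function name and the idea of passing an explicit list of family names are my own scaffolding, not something from the answer; it assumes you know your schema's families up front.

    import (
        "context"

        "cloud.google.com/go/bigtable"
    )

    // readRowByFamily fetches a large row one column family at a time, so
    // that each individual ReadRow response stays under the per-read limit.
    // Note that FamilyFilter takes a regular expression: literal family
    // names work as long as they contain no regex metacharacters.
    func readRowByFamily(ctx context.Context, tbl *bigtable.Table, rowKey string, families []string) (bigtable.Row, error) {
        merged := make(bigtable.Row)
        for _, fam := range families {
            part, err := tbl.ReadRow(ctx, rowKey,
                bigtable.RowFilter(bigtable.ChainFilters(
                    bigtable.FamilyFilter(fam),
                    bigtable.LatestNFilter(1))))
            if err != nil {
                return nil, err
            }
            // bigtable.Row is a map from family name to the cells read,
            // so partial reads can simply be merged key by key.
            for f, items := range part {
                merged[f] = append(merged[f], items...)
            }
        }
        return merged, nil
    }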
Answer 2:
For anyone following along at home: there was actually a Bigtable bug causing this error. To be clear, trying to read too much (> 256MB) can also cause it, but that was not the only error condition; I was able to reproduce the error on rows well under 256MB. With this information, the Bigtable team identified the bug and recently (~Feb 12) rolled the fix out to production. After the release I confirmed that these errors disappeared from my application logs.
Source: https://stackoverflow.com/questions/34001199/bigtable-from-go-returns-server-closed-the-stream-without-sending-trailers