Issue with Azure chunked upload to fileshare via Azure.Storage.Files.Shares library

Submitted by 不想你离开 on 2020-05-24 04:53:05

Question


I'm trying to upload files to an Azure file share using the Azure.Storage.Files.Shares library.

If I don't chunk the file (by making a single UploadRange call) it works fine, but for files over 4 MB I haven't been able to get the chunking working. The downloaded file is the same size as the original, but it won't open in a viewer.

I can't set smaller HttpRanges on a large file, as I get a 'request body is too large' error, so I'm splitting the file stream into multiple mini streams and uploading the entire HttpRange of each of these:

        ShareClient share = new ShareClient(Common.Settings.AppSettings.AzureStorageConnectionString, ShareName());
        ShareDirectoryClient directory = share.GetDirectoryClient(directoryName);

        ShareFileClient file = directory.GetFileClient(fileKey);
        using(FileStream stream = fileInfo.OpenRead())
        {
            file.Create(stream.Length);

            //file.UploadRange(new HttpRange(0, stream.Length), stream);

            int blockSize = 128 * 1024;

            BinaryReader reader = new BinaryReader(stream);
            while(true)
            {
                byte[] buffer = reader.ReadBytes(blockSize);
                if (buffer.Length == 0)
                    break;

                MemoryStream uploadChunk = new MemoryStream();
                uploadChunk.Write(buffer, 0, buffer.Length);
                uploadChunk.Position = 0;

                file.UploadRange(new HttpRange(0, uploadChunk.Length), uploadChunk);
            }

            reader.Close();
        }

The code above uploads without error, but when the file is downloaded from Azure it is corrupt.

Does anyone have any ideas? Thanks for any help you can provide.

cheers

Steve


Answer 1:


I was able to reproduce the issue. Basically the problem is with the following line of code:

new HttpRange(0, uploadChunk.Length)

Essentially, you're always writing each chunk to the same range (starting at offset 0), and that's why the file is getting corrupted.
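
To make the fix concrete, here is a minimal sketch of the difference, reusing the variable names from the code above (offset is the running position introduced in the corrected version below):

        // Original: every chunk is written to the range starting at byte 0,
        // so each UploadRange call overwrites the previous chunk.
        file.UploadRange(new HttpRange(0, uploadChunk.Length), uploadChunk);

        // Corrected: keep a running offset and advance it after each chunk,
        // so the ranges line up back to back in the target file.
        file.UploadRange(new HttpRange(offset, uploadChunk.Length), uploadChunk);
        offset += uploadChunk.Length;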

Please try the code below. It should work. What I did here is define an HTTP range offset and keep advancing it by the number of bytes already written to the file:

        using (FileStream stream = fileInfo.OpenRead())
        {
            file.Create(stream.Length);

            //file.UploadRange(new HttpRange(0, stream.Length), stream);

            int blockSize = 1 * 1024;
            long offset = 0;//Define http range offset
            BinaryReader reader = new BinaryReader(stream);
            while (true)
            {
                byte[] buffer = reader.ReadBytes(blockSize);
                if (buffer.Length == 0)
                    break;

                MemoryStream uploadChunk = new MemoryStream();
                uploadChunk.Write(buffer, 0, buffer.Length);
                uploadChunk.Position = 0;

                HttpRange httpRange = new HttpRange(offset, buffer.Length);
                var resp = file.UploadRange(httpRange, uploadChunk);
                offset += buffer.Length;//Shift the offset by number of bytes already written
            }

            reader.Close();
        }
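
As a side note (my assumption, not stated in the original answer): the 1 KB block size above is just for demonstration. To my knowledge, each UploadRange call accepts a range of up to 4 MiB, so a larger block size stays within the limit while greatly reducing the number of round trips, for example:

        // Hypothetical tweak: any block size up to the ~4 MiB per-range limit should work;
        // a larger value means fewer UploadRange round trips for big files.
        int blockSize = 4 * 1024 * 1024;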


Source: https://stackoverflow.com/questions/61001985/issue-with-azure-chunked-upload-to-fileshare-via-azure-storage-files-shares-libr
