decoding

Reading double to platform endianness with union and bit shift, is it safe?

末鹿安然 submitted on 2019-12-05 20:13:39
All the examples I've seen of reading a double of known endianness from a buffer into the platform's endianness involve detecting the current platform's endianness and byte-swapping when necessary. On the other hand, I've seen another way of doing the same thing for integers that uses bit shifting (one such example). This got me thinking that it might be possible to use a union and the bit-shift technique to read doubles (and floats) from buffers, and a quick test implementation seemed to work (at least with clang on x86_64): #include <stdio.h> #include <stdint.h> #include
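For comparison, Java sidesteps the union question entirely: the same bit-shift technique can assemble the 8 bytes of a big-endian buffer into a long, and Double.longBitsToDouble reinterprets the bit pattern as a double, independent of host endianness. A minimal sketch (the method name and test bytes are illustrative, not from the original question):

```java
public class DoubleDecode {
    // Assemble 8 big-endian bytes into the IEEE-754 bit pattern, then reinterpret.
    public static double readBigEndianDouble(byte[] b) {
        long bits = 0;
        for (int i = 0; i < 8; i++) {
            bits = (bits << 8) | (b[i] & 0xFFL); // mask to avoid sign extension
        }
        return Double.longBitsToDouble(bits);
    }

    public static void main(String[] args) {
        // 1.0 encodes as 3F F0 00 00 00 00 00 00 in big-endian IEEE-754.
        byte[] one = {0x3F, (byte) 0xF0, 0, 0, 0, 0, 0, 0};
        System.out.println(readBigEndianDouble(one)); // prints 1.0
    }
}
```

Because the shifts operate on values rather than memory layout, this avoids the aliasing concerns that the C union approach raises.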

Why are the lengths different when converting a byte array to a String and then back to a byte array?

穿精又带淫゛_ submitted on 2019-12-05 12:31:49
I have the following Java code: byte[] signatureBytes = getSignature(); String signatureString = new String(signatureBytes, "UTF8"); byte[] signatureStringBytes = signatureString.getBytes("UTF8"); System.out.println(signatureBytes.length == signatureStringBytes.length); // prints false Q: I'm probably misunderstanding this, but I thought that new String(byte[] bytes, String charset) and String.getBytes(charset) were inverse operations? Q: As a follow-up, what is a safe way to transport a byte[] array as a String? Not every byte[] is valid UTF-8. By default, invalid sequences get replaced by a
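The excerpt already names the cause: not every byte[] is valid UTF-8, and invalid sequences are replaced during decoding, so the round trip is lossy. Base64 is the usual safe transport. A sketch demonstrating both (the sample bytes are made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class SafeTransport {
    public static void main(String[] args) {
        // 0xFF can never appear in valid UTF-8, so this array is not decodable.
        byte[] signature = {(byte) 0xFF, 0x00, (byte) 0x80};

        // UTF-8 round trip corrupts it: 0xFF becomes U+FFFD (3 bytes when re-encoded).
        String lossy = new String(signature, StandardCharsets.UTF_8);
        byte[] back = lossy.getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.equals(signature, back)); // false

        // Base64 round trip is exact for arbitrary bytes.
        String encoded = Base64.getEncoder().encodeToString(signature);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(Arrays.equals(signature, decoded)); // true
    }
}
```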

Buffer error in avcodec_decode_audio4()

时光毁灭记忆、已成空白 submitted on 2019-12-05 10:51:07
I have upgraded my ffmpeg version to the latest commit, and now I can see that the audio decoding function avcodec_decode_audio3 has been deprecated. When I use the new function avcodec_decode_audio4, with the changes it requires, I get the error [amrnb @ 003a5000] get_buffer() failed. I am not able to find what causes this error. Does anyone have a sample example of using this new function: avcodec_decode_audio4(AVCodecContext *avctx, AVFrame *frame, int *got_frame_ptr, AVPacket *avpkt); Check the decoding_encoding.c example from the ffmpeg source. It uses the function avcodec_decode_audio4. Source:

Base64 Encoding: Illegal base64 character 3c

左心房为你撑大大i submitted on 2019-12-05 08:50:20
I am trying to decode Base64 data embedded in XML into bytes, and I am having an issue. My method is in Java; it takes a String of data and converts it into bytes as below. String data = "......"; // string of data in XML format byte[] dataBytes = Base64.getDecoder().decode(data); This fails with the exception below. java.lang.IllegalArgumentException: Illegal base64 character 3c at java.util.Base64$Decoder.decode0(Base64.java:714) at java.util.Base64$Decoder.decode(Base64.java:526) at java.util.Base64$Decoder.decode(Base64.java:549) at XmlReader.main(XmlReader.java:61) Is the
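The hex code 3c is ASCII '<', which means the decoder is being handed the raw XML document rather than the Base64 payload inside it; the element's text content has to be extracted first. A sketch, using a hypothetical <data> element (the real tag name is not shown in the excerpt):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class XmlBase64 {
    public static void main(String[] args) {
        String xml = "<data>SGVsbG8=</data>"; // hypothetical wrapper element

        // Feeding the whole document fails: the first character is '<' (0x3c).
        try {
            Base64.getDecoder().decode(xml);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Illegal base64 character 3c
        }

        // Decode only the element's text content (a real program would use an XML parser).
        String payload = xml.substring(xml.indexOf('>') + 1, xml.lastIndexOf('<'));
        System.out.println(new String(Base64.getDecoder().decode(payload),
                                      StandardCharsets.UTF_8)); // Hello
    }
}
```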

Trying to understand before marking as duplicate: InvalidKeyException: Illegal key size [duplicate]

て烟熏妆下的殇ゞ submitted on 2019-12-05 07:18:52
Question: This question already has answers here: InvalidKeyException Illegal key size (5 answers) Closed 2 years ago. I am getting InvalidKeyException: Illegal key size, but the same code works in production. When I run this code locally, I hit the key size issue while decrypting at the line below: cipher.init(2, new SecretKeySpec(secretKey, "AES"), new IvParameterSpec(initVector)); At the above line I get the following exception: public byte[] getPageByteStream(String fileName)
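A local-versus-production difference like this usually comes down to the legacy JCE policy files: older JREs cap AES at 128 bits unless the unlimited-strength policy is installed (it became the default around Java 8u161/9). The limit can be checked directly, as this sketch shows (note that the literal 2 in cipher.init above is Cipher.DECRYPT_MODE):

```java
import javax.crypto.Cipher;

public class KeyLimitCheck {
    public static void main(String[] args) throws Exception {
        // On a JRE restricted by the legacy JCE policy this prints 128;
        // with the unlimited policy it is Integer.MAX_VALUE.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println(max);
        if (max < 256) {
            System.out.println("256-bit AES keys will throw InvalidKeyException here");
        }
    }
}
```

Running this on both machines would confirm whether the local JRE is the restricted one.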

Java - How to decode a Base64 encoded Certificate

狂风中的少年 submitted on 2019-12-05 06:53:03
Below is my requirement: the program takes an XML file as input with 3 tags: , and . All of this data is Base64 encoded. Note: the program uses BC (Bouncy Castle) jars. The program needs to decode them and verify the data's authenticity using the signature and certificate. The verified data should be Base64 decoded and written into another file. Below is my code, which tries to decode the certificate: public void executeTask(InputStream arg0, OutputStream arg1) throws SomeException{ try{ BufferedReader br = null; br = new BufferedReader(new InputStreamReader(arg0)); String orgContent = "", splitData = "",
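The Base64 step of decoding a certificate is usually PEM-to-DER conversion: strip the BEGIN/END armor lines, then decode the body with the MIME decoder, which tolerates embedded line breaks. A sketch of that step only; the payload here is a dummy stand-in, not a real certificate, and the final parsing call is shown as a comment:

```java
import java.util.Base64;

public class PemDecode {
    // Strip PEM armor and Base64-decode the body (the MIME decoder skips newlines).
    public static byte[] pemToDer(String pem) {
        String body = pem
            .replace("-----BEGIN CERTIFICATE-----", "")
            .replace("-----END CERTIFICATE-----", "")
            .replaceAll("\\s", "");
        return Base64.getMimeDecoder().decode(body);
    }

    public static void main(String[] args) {
        // Dummy payload standing in for real DER bytes.
        String pem = "-----BEGIN CERTIFICATE-----\nAQIDBA==\n-----END CERTIFICATE-----";
        byte[] der = pemToDer(pem);
        System.out.println(der.length); // 4 bytes: 01 02 03 04
        // A real certificate would then be parsed with:
        // CertificateFactory.getInstance("X.509")
        //     .generateCertificate(new ByteArrayInputStream(der));
    }
}
```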

Decoding mJPEG with libavcodec

回眸只為那壹抹淺笑 submitted on 2019-12-05 06:29:08
Question: I am creating a video conferencing application. I have discovered that webcams (at least the 3 I have) provide higher resolutions and frame rates for the mJPEG output format. So far I was using YUY2, converted to I420 for compression with x264. To transcode mJPEG to I420, I need to decode it first. I am trying to decode images from the webcam with libavcodec. This is my code. Initialization: // mJPEG to I420 conversion AVCodecContext * _transcoder = nullptr; AVFrame * _outputFrame; AVPacket _inputPacket;

Encode base64 and Decode base64 using delphi 2007

故事扮演 submitted on 2019-12-05 05:56:01
Question: I have to encode an array of bytes to a Base64 string (and decode that string) on an old Delphi 2007. How can I do this? Further information: I've tried Synapse (as suggested here: Binary to Base64 (Delphi)). Answer 1: Indy ships with Delphi, and has TIdEncoderMIME and TIdDecoderMIME classes for handling Base64. For example: uses ..., IdCoder, IdCoderMIME; var Bytes: TIdBytes; Base64String: String; begin //... Bytes := ...; // array of bytes //... Base64String := TIdEncoderMIME.EncodeBytes(Bytes); //

How to decode nvarchar to text (SQL Server 2008 R2)?

…衆ロ難τιáo~ submitted on 2019-12-05 05:50:15
I have a SQL Server 2008 R2 table with an nvarchar(4000) field. The data stored in this table looks like '696D616765206D61726B65643A5472' or '303131' ("011"). I see that each character is encoded as hex. How can I read this data from the table? I don't want to write a decoding function; I assume a simpler way exists. P.S. Sorry for my English. SQL Server 2008 actually has a built-in hex-encoding and decoding feature! Sample (note the third parameter with value "1" when converting your string to VarBinary): DECLARE @ProblemString VarChar(4000) = '54657374' SELECT Convert(VarChar, Convert(VarBinary, '0x' +
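To see what those stored strings actually contain, the same decoding can be done outside SQL: parse each hex pair into a byte and interpret the bytes as text. A Java sketch (it assumes one byte per character, which matches the samples shown; real nvarchar data could also be hex-encoded UTF-16):

```java
import java.nio.charset.StandardCharsets;

public class HexDecode {
    // Turn a hex string like "54657374" into the text it encodes.
    public static String hexToString(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return new String(out, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        System.out.println(hexToString("54657374")); // Test
        System.out.println(hexToString("303131"));   // 011
    }
}
```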

UnicodeDecodeError: 'ascii' codec can't decode

独自空忆成欢 submitted on 2019-12-05 05:20:20
I'm reading a file that contains Romanian words in Python with file.readline(). I have a problem with many characters because of encoding. Example: >>> a = "aberație" #type 'str' >>> a -> 'abera\xc8\x9bie' >>> print sys.stdin.encoding UTF-8 I've tried encode() with utf-8, cp500, etc., but it doesn't work. I can't find the right character encoding to use. Thanks in advance. Edit: The aim is to store the word from the file in a dictionary and, when printing it, to obtain aberație and not 'abera\xc8\x9bie'. Claudiu: What are you trying to do? This is a set of bytes: BYTES = 'abera\xc8
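The bytes are not corrupted: \xc8\x9b is exactly the UTF-8 encoding of ț (U+021B), so the string only needs to be decoded as UTF-8 rather than ASCII. To keep the examples in this digest in one language, here is a Java sketch verifying that the byte sequence shown decodes cleanly (in Python 2 the equivalent is s.decode('utf-8') or codecs.open with encoding='utf-8'):

```java
import java.nio.charset.StandardCharsets;

public class RomanianBytes {
    public static void main(String[] args) {
        // The bytes Python displays as 'abera\xc8\x9bie': C8 9B is UTF-8 for U+021B (ț).
        byte[] raw = {'a', 'b', 'e', 'r', 'a', (byte) 0xC8, (byte) 0x9B, 'i', 'e'};
        String word = new String(raw, StandardCharsets.UTF_8);
        System.out.println(word);          // aberație
        System.out.println(word.length()); // 8 characters from 9 bytes
    }
}
```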