bit-manipulation

What is 0xFF and why is it shifted 24 times?

冷眼眸甩不掉的悲伤 submitted on 2020-07-04 07:21:21

Question:

    #define SwapByte4(ldata) \
        (((ldata & 0x000000FF) << 24) | \
         ((ldata & 0x0000FF00) << 8)  | \
         ((ldata & 0x00FF0000) >> 8)  | \
         ((ldata & 0xFF000000) >> 24))

What does that 0x000000FF represent? I know that decimal 15 is represented in hex as F, but why is it << 24?

Answer 1: Here is the hex value 0x12345678, written as binary and annotated with some bit positions:

    |31           24|23           16|15            8|7        bit 0|
    +---------------+---------------+---------------+---------------+
    |0 0 0 1 0 0 1 0|0 0 1 1 0 1 0 0|0 1 0 1 0 1

How to set the first three bytes of an integer in C++

别来无恙 submitted on 2020-06-27 12:46:27

Question: I want to set the first three bytes of an integer to 0 in C++. I tried this code, but my integer variable a is not changing; the output is always -63. What am I doing wrong?

    #include <iostream>
    #include <string>

    int main() {
        int a = 4294967233;
        std::cout << a << std::endl;
        for (int i = 0; i < 24; i++) {
            a |= (0 << (i + 8));
            std::cout << a << std::endl;
        }
    }

Answer 1: Just use bitwise AND (&) with a mask; there is no reason for a loop. ORing with 0 never changes a bit, which is why the loop in the question has no effect.

    a &= 0xFF000000; // keeps only the highest byte, clearing the three lowest
    a &= 0x000000FF; // keeps only the lowest byte, clearing the three highest

Check value of least significant bit (LSB) and most significant bit (MSB) in C/C++

≡放荡痞女 submitted on 2020-06-24 03:13:08

Question: I need to check the value of the least significant bit (LSB) and most significant bit (MSB) of an integer in C/C++. How would I do this?

Answer 1:

    // int value;
    int LSB = value & 1;

Alternatively (which is not theoretically portable, but practically it is - see Steve's comment):

    // int value;
    int LSB = value % 2;

Details: The second formula is simpler. The % operator is the remainder operator. A number's LSB is 1 iff it is an odd number and 0 otherwise, so we check the remainder of dividing by 2.

How can I tell if a number is a multiple of four using only the logic operator AND?

梦想的初衷 submitted on 2020-06-22 08:08:05

Question: I'm messing with assembly language programming and I'm curious how I could tell if a number is a multiple of 4 using the logical operator AND. I know how to do it using "div" or "remainder" instructions, but I'm trying to do this with bit manipulation of the number/word. Can anyone point me in the right direction? I'm using MIPS, but a language-agnostic answer is fine.

Answer 1: Well, to detect if a number is a multiple of another, you simply need to do x MOD y. If the result is 0, then it is an even

What's the most efficient way of getting the position of the least significant bit of a number in JavaScript?

岁酱吖の submitted on 2020-06-17 04:56:31

Question: I have some numbers and I need to know how much each should be shifted for its lowest set bit to end up at position 0. For example:

    0x40000000 => 30, because 0x40000000 >> 30 = 1
    768 = 512 + 256 => 8

This works:

    if (Math.log2(x) == 31) return 31;
    if (Math.log2(x) > 31) x = x & 0x7FFFFFFF;
    return Math.log2(x & -x);

Is there any more efficient or elegant way (a builtin?) to do this in JavaScript?

Answer 1: You cannot get that result immediately with a builtin function, but you can avoid using Math.log2. There is a little

Count 1-bits in an integer as fast as GCC's __builtin_popcount(int)

独自空忆成欢 submitted on 2020-06-14 07:38:24

Question: I wrote an algorithm (taken from "The C Programming Language") that counts the number of 1-bits very fast:

    int countBit1Fast(int n)
    {
        int c = 0;
        for (; n; ++c)
            n &= n - 1;
        return c;
    }

But a friend told me that __builtin_popcount(int) is a lot faster, though less portable. I gave it a try and it was MANY times faster! Why is it so fast? I want to count bits as fast as possible, but without sticking to a particular compiler.

EDIT: I may use it on PIC micro-controllers and maybe on non-Intel processors,

Why does JDK use shifting instead of multiply/divide?

為{幸葍}努か submitted on 2020-06-11 20:57:49

Question: I have the following question: if asked whether to use a shift versus a multiply or divide, the usual answer would be "let the JVM optimize". Example here: is-shifting-bits-faster-than-multiplying. Now I was looking at the JDK source, for example PriorityQueue, and the code uses only shifting for both multiplication and division (signed and unsigned). Taking for granted that the SO post is the valid answer, I was wondering why the JDK prefers to do it by shifting. Is it some subtle detail

C# Enum.HasFlag vs. Bitwise AND Operator Check

不想你离开。 submitted on 2020-05-27 15:39:32

Question: If you have an enum that is used for bit flags, i.e.,

    [Flags]
    internal enum _flagsEnum : byte
    {
        None    = 0,             //00000000
        Option1 = 1,             //00000001
        Option2 = 1 << 1,        //00000010
        Option3 = 1 << 2,        //00000100
        Option4 = 1 << 3,        //00001000
        Option5 = 1 << 4,        //00010000
        Option6 = 1 << 5,        //00100000
        Option7 = 1 << 6,        //01000000
        Option8 = 1 << 7,        //10000000
        All     = Byte.MaxValue, //11111111
    }

    _flagsEnum myFlagsEnum = _flagsEnum.None;

Is it faster to do..

    bool hasFlag = myFlagsEnum.HasFlag(_flagsEnum.Option1);

How to perform DB bitwise queries in Django?

微笑、不失礼 submitted on 2020-05-25 04:47:26

Question: How can I perform bitwise queries on the DB with Django? I haven't found anything about it in the docs. Should I retrieve a queryset and then filter programmatically? If you're interested: I use bitwise ops as an alternative to IN() statements in very large and complex queries, in order to improve performance. I have a DB containing millions of items (records). Some fields use a binary representation of an item property. For example, the Color field can have multiple values, so it is structured