c11

Which processor can be used to test C++11/C11 acquire/release semantics?

五迷三道 submitted on 2019-12-07 02:09:25
I am looking for a processor that performs read-acquire/store-release with the same semantics as specified in the C11/C++11 standards. x86 synchronization is much too strong, so it is impossible to test a lock-free algorithm that relies on acquire/release semantics there. The same seems to apply to ARM processors, because that architecture offers either stronger or weaker read/store synchronization. ARMv8.3 may offer the right semantics, but I believe there are no ARMv8.3 processors on the market. On which processor or architecture should I test a lock-free algorithm using acquire/release semantics?
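
As a point of reference for what acquire/release is supposed to allow, here is a minimal C11 sketch of the publish/consume pattern such a test would exercise. The variable names are illustrative, and it assumes an implementation that provides C11 <threads.h>; pthreads would work just as well:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    static int payload;            /* plain data published via the flag */
    static atomic_int ready;       /* stays 0 until the payload is visible */

    static int producer(void *arg)
    {
        (void)arg;
        payload = 42;                                              /* ordinary store */
        atomic_store_explicit(&ready, 1, memory_order_release);    /* publish */
        return 0;
    }

    static int consumer(void *arg)
    {
        (void)arg;
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;                                                      /* spin until published */
        printf("payload = %d\n", payload);                         /* must print 42 */
        return 0;
    }

    int main(void)
    {
        thrd_t p, c;
        thrd_create(&p, producer, NULL);
        thrd_create(&c, consumer, NULL);
        thrd_join(p, NULL);
        thrd_join(c, NULL);
        return 0;
    }

On a strongly ordered target such as x86 this test passes even if the orderings are wrong, which is the asker's complaint; the question is which hardware would expose a missing acquire or release.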

Why is there no ASCII or UTF-8 character literal in C11 or C++11?

放肆的年华 submitted on 2019-12-06 17:10:49
Question: Why is there no UTF-8 character literal in C11 or C++11, even though there are UTF-8 string literals? I understand that, generally speaking, a character literal represents a single ASCII character, which is identical to a single-octet UTF-8 code point, but neither C nor C++ says the encoding has to be ASCII. Basically, if I read the standard right, there is no guarantee that '0' will represent the integer 0x30, yet u8"0" must represent the char sequence 0x30 0x00. EDIT: I'm aware not every UTF-8 …
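
To make the distinction concrete, here is a small sketch. On an ASCII-based implementation both lines print 0x30; the point of the question is that only the second value is guaranteed by the standard (an EBCDIC implementation, for instance, encodes '0' as 0xF0):

    #include <stdio.h>

    int main(void)
    {
        /* '0' uses the execution character set, which C11 does not require
           to be ASCII. */
        printf("'0'      = 0x%02X\n", (unsigned)'0');

        /* u8"0" is required to be UTF-8, so its first byte is always 0x30. */
        printf("u8\"0\"[0] = 0x%02X\n", (unsigned)(unsigned char)u8"0"[0]);
        return 0;
    }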

Possible to use C11 fences to reason about writes from other threads?

天涯浪子 submitted on 2019-12-06 14:22:58
Adve and Gharachorloo's report, in Figure 4b, provides an example of a program that exhibits unexpected behavior in the absence of sequential consistency. My question is whether it is possible, using only C11 fences and memory_order_relaxed loads and stores, to ensure that register1, if written, will be written with the value 1. The reason this might be hard to guarantee in the abstract is that P1, P2, and P3 could be at different points in a pathological NUMA network with the property that P2 sees P1's write before P3 does, yet somehow P3 sees P2's write very quickly. The reason …
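
For concreteness, one way to spell the Figure 4b pattern with relaxed atomics and fences is sketched below. This only restates the shape of the question; whether these fences are enough to force register1 to be 1 is exactly what is being asked:

    #include <stdatomic.h>

    static atomic_int A, B;      /* both start at 0 */
    static int register1;

    void p1(void) {
        atomic_store_explicit(&A, 1, memory_order_relaxed);
    }

    void p2(void) {
        if (atomic_load_explicit(&A, memory_order_relaxed) == 1) {
            atomic_thread_fence(memory_order_release);      /* fence before publishing B */
            atomic_store_explicit(&B, 1, memory_order_relaxed);
        }
    }

    void p3(void) {
        if (atomic_load_explicit(&B, memory_order_relaxed) == 1) {
            atomic_thread_fence(memory_order_acquire);      /* fence after observing B */
            register1 = atomic_load_explicit(&A, memory_order_relaxed);
        }
    }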

What is the correct definition of size_t? [duplicate]

冷暖自知 submitted on 2019-12-06 03:15:41
Question: This question already has answers here: What is size_t in C? (12 answers). Closed 4 years ago. First of all, what do I mean by 'correct definition'? For example, K&R, in "C Programming Language", 2nd ed., section 2.2 Data Types and Sizes, make very clear statements about integers: there are short, int and long integer types, needed to represent values of different ranges; int is a "naturally" sized number for a specific piece of hardware, so probably also the fastest. Sizes …
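
For reference, C11 defines size_t in <stddef.h> as the unsigned integer type of the result of the sizeof operator, which a minimal example can illustrate:

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        int a[10];
        size_t n = sizeof a / sizeof a[0];   /* sizeof yields a size_t */
        printf("count = %zu, sizeof(size_t) = %zu\n", n, sizeof(size_t));
        return 0;
    }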

Why are structs not allowed in equality expressions in C? [duplicate]

筅森魡賤 submitted on 2019-12-06 01:18:10
Question: This question already has answers here: Why doesn't C provide struct comparison? (5 answers). Closed 2 years ago. The unavailability of structs as comparison operands is one of the more obvious things in C that don't make much sense (to me). structs can be passed by value and copied via assignment, but == is not specified for them. Below are the relevant parts of the C11 standard (draft) that define the constraints of the equality operators (== and !=) and the simple assignment …
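
As background, the usual workaround is member-wise comparison, since == on struct operands violates the constraints of 6.5.9 and memcmp is risky because padding bytes take unspecified values. The struct below is hypothetical, just to show the shape:

    #include <stdbool.h>

    struct point { int x; int y; };

    static bool point_equal(struct point a, struct point b)
    {
        /* member-wise comparison; writing a == b on the structs themselves
           does not compile in C11 */
        return a.x == b.x && a.y == b.y;
    }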

Can you cast a “pointer to a function pointer” to void*?

主宰稳场 submitted on 2019-12-05 21:53:33
Inspired by comments to my answer here. Is this sequence of steps legal under the C standard (C11)?

1. Make an array of function pointers.
2. Take a pointer to the first entry and cast that pointer-to-function-pointer to void*.
3. Perform pointer arithmetic on that void*.
4. Cast it back to a pointer to a function pointer and dereference it.

Or, equivalently, as code:

    void foo(void) { ... }
    void bar(void) { ... }

    typedef void (*voidfunc)(void);
    voidfunc array[] = {foo, bar};                     // Step 1
    void *ptr1 = array;                                // Step 2
    void *ptr2 = (char*)ptr1 + sizeof(voidfunc);       // Step 3
    voidfunc bar_ptr = *(voidfunc*)ptr2;               // Step 4

I …
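
For contrast, a version that stays within what C11 clearly guarantees does the arithmetic on a voidfunc* (an object pointer that points at a function-pointer object) instead of on void*. This is only a sketch of the alternative, not a verdict on the steps above:

    typedef void (*voidfunc)(void);

    static void foo(void) { }
    static void bar(void) { }

    static voidfunc array[] = {foo, bar};

    void demo(void)
    {
        voidfunc *p = &array[0];      /* pointer to the first function-pointer object */
        voidfunc bar_ptr = *(p + 1);  /* ordinary object-pointer arithmetic within the array */
        bar_ptr();                    /* calls bar */
    }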

A bug in GCC's implementation of bit-fields

柔情痞子 submitted on 2019-12-05 20:25:52
Question: Working in C11, the following struct:

    struct S {
        unsigned a : 4;
        _Bool    b : 1;
    };

gets laid out by GCC as an unsigned (4 bytes) of which 4 bits are used, followed by a _Bool (4 bytes) of which 1 bit is used, for a total size of 8 bytes. Note that C99 and C11 specifically permit _Bool as a bit-field member. The C11 standard (and probably C99 too) also states under §6.7.2.1 'Structure and union specifiers' ¶11 that: An implementation may allocate any addressable storage unit large enough to …
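
A quick way to observe the layout being questioned is to print the sizes; the 8-byte result for struct S is what the asker reports for GCC, and other compilers (or other GCC targets) may legitimately produce different sizes, since allocation of bit-field storage units is implementation-defined:

    #include <stdio.h>

    struct S  { unsigned a : 4; _Bool    b : 1; };   /* 8 bytes under the GCC behavior described above */
    struct S2 { unsigned a : 4; unsigned b : 1; };   /* typically packed into a single 4-byte unit */

    int main(void)
    {
        printf("sizeof(struct S)  = %zu\n", sizeof(struct S));
        printf("sizeof(struct S2) = %zu\n", sizeof(struct S2));
        return 0;
    }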

Undefined reference to memcpy_s

六月ゝ 毕业季﹏ submitted on 2019-12-05 17:59:21
I'm trying to fix an "undefined reference to memcpy_s()" error. I've included string.h in my file, and the memcpy() function works okay; I've also tried including memory.h. I'm on x64 Windows 7 and using gcc 4.8.1 to compile.

    #include <stdlib.h>
    #include <stdio.h>
    #include <string.h>

    void doMemCopy(char* buf, size_t buf_size, char* in, int chr) {
        memcpy_s(buf, buf_size, in, chr);
    }

Memory for buf has been allocated in the main function, which calls doMemCopy(buf, 64, in, bytes). in is a string read from standard input. Exact error from the cmd terminal: undefined reference to "memcpy_s" collect2 …
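
For context, memcpy_s comes from C11 Annex K ("bounds-checking interfaces"), which is optional: an implementation that provides it defines __STDC_LIB_EXT1__, and the caller must define __STDC_WANT_LIB_EXT1__ before including <string.h>. GCC/MinGW toolchains generally do not ship Annex K, which would explain the link error. A hedged sketch with a plain-memcpy fallback:

    #define __STDC_WANT_LIB_EXT1__ 1   /* request the Annex K interfaces, if present */
    #include <string.h>
    #include <stddef.h>

    void doMemCopy(char *buf, size_t buf_size, const char *in, size_t n)
    {
    #ifdef __STDC_LIB_EXT1__
        memcpy_s(buf, buf_size, in, n);   /* only available when Annex K is implemented */
    #else
        if (n <= buf_size)                /* hand-rolled bounds check as a fallback */
            memcpy(buf, in, n);
    #endif
    }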

Is (uint64_t)-1 guaranteed to yield 0xffffffffffffffff?

浪子不回头ぞ submitted on 2019-12-05 12:13:20
Question: I know that it is well defined by the C standard that (unsigned)-1 must yield 2^n − 1, i.e. an unsigned integer with all its bits set. The same goes for (uint64_t)-1ll. However, I cannot find anything in the C11 standard that specifies how (uint64_t)-1 is interpreted. So the question is: is there any guarantee in the C standard as to which of the following holds true?

    (uint64_t)-1 == (uint64_t)(unsigned)-1   // 0x00000000ffffffff
    (uint64_t)-1 == (uint64_t)(int64_t)-1    // 0xffffffffffffffff

Answer 1: Yes.
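
A small check that mirrors the two candidate interpretations (comparing against UINT64_MAX is the portable way to say "all bits set"):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t a = (uint64_t)-1;            /* conversion is modulo 2^64 -> UINT64_MAX */
        uint64_t b = (uint64_t)(unsigned)-1;  /* with 32-bit unsigned int: 0x00000000ffffffff */

        printf("a = 0x%016" PRIx64 " (a == UINT64_MAX: %d)\n", a, a == UINT64_MAX);
        printf("b = 0x%016" PRIx64 "\n", b);
        return 0;
    }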

Standard way in C11 and C++11 to convert UTF-8?

▼魔方 西西 submitted on 2019-12-05 10:21:38
Question: C11 and C++11 both introduce the uchar.h / cuchar header, defining char16_t and char32_t as explicitly 16- and 32-bit-wide characters, add the literal syntax u"" and U"" for writing strings with these character types, and provide the macros __STDC_UTF_16__ and __STDC_UTF_32__ that tell you whether or not they correspond to UTF-16 and UTF-32 code units. This helps remove the ambiguity about wchar_t, which on some platforms was 16 bits and generally used to hold UTF-16 code units, and on some platforms …
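
As background, what C11's <uchar.h> actually declares are the conversion functions mbrtoc16 / c16rtomb and mbrtoc32 / c32rtomb, which convert between the locale's multibyte encoding and char16_t / char32_t, so they only involve UTF-8 when the current locale's multibyte encoding is UTF-8. A minimal sketch of the 32-bit variant (the "C" locale suffices for the ASCII input used here):

    #include <uchar.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *mb  = "h";     /* one multibyte character (ASCII in this sketch) */
        char32_t    c32 = 0;
        mbstate_t   st;
        memset(&st, 0, sizeof st);

        /* Converts from the locale's multibyte encoding to char32_t; the result
           is a UTF-32 code point only when __STDC_UTF_32__ is defined. */
        size_t r = mbrtoc32(&c32, mb, strlen(mb), &st);
        if (r != (size_t)-1 && r != (size_t)-2)
            printf("decoded U+%04lX from %zu byte(s)\n", (unsigned long)c32, r);
        return 0;
    }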