size_t

What is the limit on the malloc parameter of type size_t in C? The docs say it has an upper limit of UINT_MAX, but I can't go beyond INT_MAX

本小妞迷上赌 submitted on 2019-12-09 19:00:33
Question: I want to allocate a 2.9 GB char array with database = (char*) malloc((2900 * 1000000 * sizeof(char))); This gives an integer overflow warning and the malloc returns NULL. The malloc parameter is of type size_t, which according to the documentation is an unsigned int. So the maximum should be UINT_MAX, which is at least 2.9 GB. However, if I try to allocate more than INT_MAX, the malloc fails. Does this mean size_t on my system is of type int? How do I check this? I looked through /usr/include…
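A minimal sketch of the usual fix, assuming the root cause is the one the warning points at: 2900 * 1000000 is evaluated in int arithmetic and overflows before malloc ever sees the value. Casting one operand to size_t makes the multiplication happen in the wider unsigned type, and printing sizeof(size_t) answers the "how do I check this" part:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Force the multiplication into size_t; a plain 2900 * 1000000 is
           computed as int and overflows where int is 32 bits. */
        size_t bytes = (size_t)2900 * 1000000;

        printf("sizeof(size_t) = %zu, requesting %zu bytes\n",
               sizeof(size_t), bytes);

        char *database = malloc(bytes);
        if (database == NULL) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        free(database);
        return 0;
    }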

Objective-C Runtime: What to put for size & alignment for class_addIvar?

久未见 submitted on 2019-12-09 13:20:47
Question: The Objective-C Runtime provides the class_addIvar C function: BOOL class_addIvar(Class cls, const char *name, size_t size, uint8_t alignment, const char *types) What do I put for size and alignment? I'm adding an instance variable of type UITextPosition *, but no UITextPosition object is in scope. For size, can I just do sizeof(self), where self is a subclass of UITextField? I.e., can I assume that a UITextPosition object is the same size as a UITextField object? How do I get alignment…
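A hedged sketch of the usual approach, assuming the ivar stores an object pointer: the size and alignment that class_addIvar wants are those of the pointer itself (sizeof(id)), not of the pointed-to UITextPosition, and the alignment argument is the log2 of the byte alignment. The class name "MyTextField" and ivar name "markedPosition" below are made up for illustration:

    #include <objc/runtime.h>
    #include <math.h>
    #include <stdint.h>

    void add_text_position_ivar(void) {
        /* Hypothetical subclass built at runtime; class_addIvar must be
           called between objc_allocateClassPair and objc_registerClassPair. */
        Class cls = objc_allocateClassPair(objc_getClass("UITextField"),
                                           "MyTextField", 0);
        if (cls == NULL)
            return;

        /* The ivar holds a pointer, so its size/alignment come from the
           pointer type, not from UITextPosition itself; "@" is the type
           encoding for a generic object pointer. */
        class_addIvar(cls, "markedPosition",
                      sizeof(id), (uint8_t)log2(sizeof(id)), "@");

        objc_registerClassPair(cls);
    }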

Adding or assigning an integer literal to a size_t

落花浮王杯 submitted on 2019-12-09 02:56:29
Question: In C I see a lot of code that adds or assigns an integer literal to a size_t variable. size_t foo = 1; foo += 1; What conversion takes place here, and can it ever happen that a size_t is "upgraded" to an int and then converted back to a size_t? Would that still wrap around if I was at the max? size_t foo = SIZE_MAX; foo += 1; Is that defined behavior? It's an unsigned type, size_t, which is having a signed int added to it (that may be a larger type?) and is then converted back to a size_t. Is…
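A small sketch of the wraparound case, under the common assumption that size_t is at least as wide as int (true on mainstream platforms): the int literal 1 is converted to size_t by the usual arithmetic conversions, and unsigned arithmetic is defined to wrap modulo SIZE_MAX + 1:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        size_t foo = SIZE_MAX;

        /* The literal 1 (an int) is converted to size_t, and the unsigned
           addition wraps around: no undefined behavior is involved. */
        foo += 1;

        printf("%zu\n", foo);   /* prints 0 */
        return 0;
    }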

size_t cannot be found by g++-4.1 or others on Ubuntu 8.1

风流意气都作罢 submitted on 2019-12-08 17:03:49
Question: This has happened to me before, but I can't remember how I fixed it. I can't compile some programs here on a new Ubuntu install... Something is awry with my headers. I have tried g++-4.1 and 4.3 to no avail. g++ -g -frepo -DIZ_LINUX -I/usr/include/linux -I/usr/include -I/include -c qlisttest.cpp /usr/include/libio.h:332: error: ‘size_t’ does not name a type /usr/include/libio.h:336: error: ‘size_t’ was not declared in this scope /usr/include/libio.h:364: error: ‘size_t’ has not been declared…
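Two hedged observations, neither confirmed by the truncated excerpt: size_t itself is declared in <stddef.h> (or <cstddef> for C++ code), so a translation unit can pull it in explicitly; and explicitly passing -I/usr/include and -I/usr/include/linux is a known way to disturb g++'s built-in header search order, which can produce exactly this kind of "size_t does not name a type" cascade. A minimal check that the toolchain itself is sane:

    #include <stddef.h>   /* declares size_t in C; C++ code would use <cstddef> */
    #include <stdio.h>

    int main(void) {
        size_t n = sizeof(int);
        printf("%zu\n", n);   /* if this compiles and runs, size_t is reachable */
        return 0;
    }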

std::size_t or std::vector<Foo>::size_type?

て烟熏妆下的殇ゞ submitted on 2019-12-08 03:27:55
Question: When I loop over a std::vector<Foo> (or any container with random-access iterators) I use an unsigned integer variable i. If I want to respect the standard, should I use std::size_t or the type provided by the container itself, std::vector<Foo>::size_type? If I choose std::size_t (for readability reasons), can I be sure that every implementation of every container in the std namespace uses std::size_t as size_type? Note: I use C++98 only (for compatibility reasons). Answer 1: It is not necessarily true…

Why are size_t and unsigned int slower than int?

偶尔善良 submitted on 2019-12-07 04:31:20
Question: I was experimenting with different integer types in a Visual Studio project on Windows, using the simple exchange sort algorithm below. The processor is Intel. The code was compiled in Release x64. The optimization setting is "Maximize Speed (/O2)". The command line corresponding to the compilation settings is /permissive- /GS /GL /W3 /Gy /Zc:wchar_t /Zi /Gm- /O2 /sdl /Fd"x64\Release\vc141.pdb" /Zc:inline /fp:precise /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc…

C size_t and ssize_t negative value

北慕城南 submitted on 2019-12-06 19:25:41
Question: size_t is declared as an unsigned int, so it can't represent negative values. So there is ssize_t, which is the signed counterpart of size_t, right? Here's my problem: #include <stdio.h> #include <sys/types.h> int main(){ size_t a = -25; ssize_t b = -30; printf("%zu\n%zu\n", a, b); return 0; } Why did I get: 18446744073709551591 18446744073709551586 as the result? I know that with size_t this could be possible because it is an unsigned type, but why did I get a wrong result with ssize_t too? Answer 1: In the first case…
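A minimal sketch of the usual fix, assuming a POSIX system where ssize_t is available: %zu tells printf to read its argument as an unsigned size_t, so a negative ssize_t gets reinterpreted as a huge unsigned value; the signed conversion %zd is what prints an ssize_t as expected in practice:

    #include <stdio.h>
    #include <sys/types.h>   /* ssize_t is a POSIX type, not standard C */

    int main(void) {
        size_t a = -25;   /* wraps to SIZE_MAX - 24: well-defined for unsigned */
        ssize_t b = -30;  /* genuinely negative */

        /* %zu reinterprets the bits as unsigned; %zd keeps the sign. */
        printf("%zu\n%zd\n", a, b);
        return 0;
    }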

What is the correct definition of size_t? [duplicate]

冷暖自知 submitted on 2019-12-06 03:15:41
Question: This question already has answers here: What is size_t in C? (12 answers) Closed 4 years ago. First of all, what do I mean by "correct definition"? For example, K&R in "The C Programming Language", 2nd ed., section 2.2 Data Types and Sizes, make very clear statements about integers: there are short, int and long integer types. They are needed to represent values of different ranges. int is a "naturally" sized number for specific hardware, so it is probably also the fastest. Sizes…
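For contrast with K&R's fixed menu of integer types, a small sketch of how size_t is actually pinned down: it is simply the unsigned type of a sizeof expression, declared in <stddef.h>, and its width (and SIZE_MAX) are implementation-defined rather than spelled out:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* size_t is whatever unsigned type this implementation gives the
           result of sizeof; the standard fixes only its minimum range. */
        size_t n = sizeof(long double);

        printf("sizeof(size_t) = %zu bytes, SIZE_MAX = %zu\n",
               sizeof(size_t), (size_t)SIZE_MAX);
        printf("sizeof(long double) = %zu\n", n);
        return 0;
    }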

Why does fgets accept an int instead of a size_t?

戏子无情 submitted on 2019-12-05 16:23:46
Question: Functions such as strcpy(), malloc(), strlen() and various others accept their arguments or return values as a size_t instead of an int or an unsigned int, for obvious reasons. Some file functions such as fread() and fwrite() use size_t as well. By extension, one would expect that char *fgets(char *str, int num, FILE *stream) should use a size_t and not an int as the argument for its buffer size. However, fgets() uses an int. Is there any objective explanation why? Answer 1: The original K&R…
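Not the thread's answer (which is truncated above), just a small illustration of how the int parameter meets size_t in everyday use: sizeof yields a size_t, which is implicitly converted to int at the call, which is harmless for any realistically sized buffer:

    #include <stdio.h>

    int main(void) {
        char buf[256];

        /* sizeof buf has type size_t and is implicitly converted to int here;
           for ordinary stack buffers the value fits in an int with room to spare. */
        while (fgets(buf, sizeof buf, stdin) != NULL) {
            fputs(buf, stdout);
        }
        return 0;
    }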

Is there an equivalent to size_t in LLVM?

依然范特西╮ submitted on 2019-12-05 04:17:41
Question: Some system library functions like malloc and strlen take or return size_t. What is the right choice in LLVM IR for interacting with these functions? Is the selection the task of the compiler? Does LLVM IR have a size_t type? Answer 1: At the LLVM level, size_t doesn't exist. It is a construct for the benefit of the developer that is typedef'd to a native type. The native types have a fixed size for the target architecture, and that is how the compiler represents them in LLVM bitcode. So on x86, size_t might be viewed by the front end as unsigned long, which it then writes to LLVM as i32 (since LLVM…