This is what I found during my learning period:
#include <cstring>
#include <iostream>
using namespace std;

int dis(char a[1])           // the [1] is ignored: a is really char *
{
    int length = strlen(a);  // works because a has decayed to a pointer
    char c = a[0];           // indexing beyond [1] is not prevented either
    return length;
}
The length of the first dimension is ignored, but the length of additional dimensions is necessary to allow the compiler to compute offsets correctly. In the following example, the foo function is passed a pointer to a two-dimensional array.
#include <stdio.h>

void foo(int args[10][20])
{
    printf("%zd\n", sizeof(args[0]));
}

int main(int argc, char **argv)
{
    int a[2][20];
    foo(a);
    return 0;
}
The size of the first dimension [10] is ignored; the compiler will not prevent you from indexing off the end (notice that the formal parameter wants 10 elements, but the actual argument provides only 2). However, the size of the second dimension [20] is used to determine the stride of each row, and here the formal must match the actual. Again, the compiler will not prevent you from indexing off the end of the second dimension either.
The byte offset from the base of the array to an element args[row][col] is determined by:

sizeof(int) * (col + 20*row)

Note that if col >= 20, then you will actually index into a subsequent row (or off the end of the entire array).
sizeof(args[0]) returns 80 on my machine, where sizeof(int) == 4. However, if I attempt to take sizeof(args), I get the following compiler warning:
foo.c:5:27: warning: sizeof on array function parameter will return size of 'int (*)[20]' instead of 'int [10][20]' [-Wsizeof-array-argument]
printf("%zd\n", sizeof(args));
^
foo.c:3:14: note: declared here
void foo(int args[10][20])
^
1 warning generated.
Here, the compiler is warning that it is only going to give the size of the pointer into which the array has decayed instead of the size of the array itself.
It's a fun feature of C that allows you to effectively shoot yourself in the foot if you're so inclined.
I think the reason is that C is just a step above assembly language. Size checking and similar safety features have been removed to allow for peak performance, which isn't a bad thing if the programmer is being very diligent.
Also, assigning a size to the function argument has the advantage that when the function is used by another programmer, there's a chance they'll notice a size restriction. Just using a pointer doesn't convey that information to the next programmer.
To tell the compiler that myArray points to an array of at least 10 ints:
void bar(int myArray[static 10])
A good compiler should give you a warning if you access myArray[10]. Without the static keyword, the 10 would mean nothing at all.
C will not only transform a parameter of type int[5] into int *; given the declaration typedef int intArray5[5];, it will transform a parameter of type intArray5 into int * as well. There are some situations where this behavior, although odd, is useful (especially with things like the va_list defined in stdarg.h, which some implementations define as an array). It would be illogical to allow as a parameter a type defined as int[5] (with the dimension ignored) but not allow int[5] to be specified directly.
I find C's handling of parameters of array type to be absurd, but it's a consequence of efforts to take an ad-hoc language, large parts of which weren't particularly well-defined or thought out, and come up with behavioral specifications consistent with what existing implementations did for existing programs. Many of the quirks of C make sense when viewed in that light, particularly if one considers that when many of them were invented, large parts of the language we know today didn't exist yet.

From what I understand, in B, the predecessor to C (itself descended from BCPL), compilers didn't really keep track of variable types. A declaration int arr[5]; was equivalent to int anonymousAllocation[5], *arr = anonymousAllocation;. Once the allocation was set aside, the compiler neither knew nor cared whether arr was a pointer or an array. When accessed as either arr[x] or *arr, it would be regarded as a pointer regardless of how it was declared.