Currently I want to compare the speed of Python and C when they're used to do string stuff. I think C should give better performance than Python; however, I got a completely contrary result.
Accumulated comments (mainly from me) converted into an answer:
What happens if you use your knowledge of the lengths of the strings and use memmove() or memcpy() instead of strcpy() and strcat()? (I note that the strcat() could be replaced with strcpy() with no difference in the result; it might be interesting to check the timing.) Also, you didn't include <string.h> (or <stdio.h>), so you're missing any optimizations that <string.h> might provide!

Marcus: Yes, memmove() is faster than strcpy() and faster than Python, but why? Does memmove() do a word-wide copy at a time?

Yes; on a 64-bit machine with nicely aligned data, it can move 64 bits at a time instead of 8 bits at a time, and it has a simpler test to make on each iteration (a count, rather than 'is this a null byte').

Marcus: But memmove() is still working well even after I make L=L-13, and sizeof(s) gives out L+1024-13. My machine has sizeof(int)==4.

The code for memmove() is highly optimized assembler, possibly inlined (no function-call overhead, though for 100 KiB of data the function-call overhead is minimal anyway). The benefits come from the bigger moves and the simpler loop condition.

Marcus: So does Python use memmove() as well, or something magic?

Python keeps track of the length of its strings (they are counted, not null-terminated), so its copy code can use memmove() or memcpy(), the difference being that memmove() works correctly even if the source and destination overlap, while memcpy() is not obliged to work correctly if they overlap. It is relatively unlikely that they've got anything faster than memmove/memcpy available.
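To make the 'bigger moves, simpler loop condition' point concrete, here is a rough sketch (mine, purely for illustration; real libc implementations are far more careful about alignment, overlap, and tail bytes) of a byte-at-a-time terminated copy versus a length-based word-at-a-time copy:

#include <stddef.h>
#include <stdint.h>

/* strcpy-style: must load, store, and test every single byte,
   because the only way to find the end is to meet the '\0'. */
static void copy_bytewise(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;
}

/* memcpy-style: the length is known up front, so the loop can move a
   whole word per iteration and only counts down; there is no per-byte
   test for '\0'. (Illustrative only: assumes n is a multiple of
   sizeof(uintptr_t) and that both pointers are suitably aligned.) */
static void copy_wordwise(void *dst, const void *src, size_t n)
{
    uintptr_t *d = dst;
    const uintptr_t *s = src;
    for (size_t i = 0; i < n / sizeof(uintptr_t); i++)
        d[i] = s[i];
}

The byte-wise loop does a load, a store, and a comparison per byte; on a 64-bit machine the word-wise loop runs an eighth as many iterations with no comparison against the data at all, which is broadly where the difference comes from. Counted strings, as in Python, always know the length, so they can always take the second form.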
I modified the C code to produce more stable timings for me on my machine (Mac OS X 10.7.4, 8 GiB 1333 MHz RAM, 2.3 GHz Intel Core i7, GCC 4.7.1) and to compare strcpy() and strcat() vs memcpy() vs memmove(). Note that I increased the loop count from 1000 to 10000 to improve the stability of the timings, and I repeat the whole test (of all three mechanisms) 10 times. Arguably, the timing loop count should be increased by another factor of 5-10 so that the timings are over a second.
#include <stdio.h>      /* printf() */
#include <string.h>     /* strcpy(), strcat(), memcpy(), memmove(), memset() */
#include <sys/time.h>   /* gettimeofday(), struct timeval */
#define L (100*1024)    /* 100 KiB per copy */

char s[L+1024];         /* source buffer */
char c[2*L+1024];       /* destination: receives two copies of s */
static double time_diff(struct timeval et, struct timeval st)
{
    return 1e-6*((et.tv_sec - st.tv_sec)*1000000 + (et.tv_usec - st.tv_usec));
}
/* String functions: strcpy() must scan for '\0' byte by byte. */
static int foo(void)
{
    strcpy(c, s);
    strcat(c + L, s);   /* c[L] is already '\0', so this appends at c+L */
    return 0;
}

/* memcpy(): length known up front; free to copy many bytes at a time. */
static int bar(void)
{
    memcpy(c + 0, s, L);
    memcpy(c + L, s, L);
    return 0;
}

/* memmove(): like memcpy(), but also defined for overlapping buffers. */
static int baz(void)
{
    memmove(c + 0, s, L);
    memmove(c + L, s, L);
    return 0;
}
static void timer(void)
{
    struct timeval st;
    struct timeval et;
    int i;

    memset(s, '1', L);  /* s is global, so s[L] is already '\0' */
    foo();              /* warm-up run */

    gettimeofday(&st, NULL);
    for (i = 0; i < 10000; i++)
        foo();
    gettimeofday(&et, NULL);
    printf("foo: %f\n", time_diff(et, st));

    gettimeofday(&st, NULL);
    for (i = 0; i < 10000; i++)
        bar();
    gettimeofday(&et, NULL);
    printf("bar: %f\n", time_diff(et, st));

    gettimeofday(&st, NULL);
    for (i = 0; i < 10000; i++)
        baz();
    gettimeofday(&et, NULL);
    printf("baz: %f\n", time_diff(et, st));
}
int main(void)
{
for (int i = 0; i < 10; i++)
timer();
return 0;
}
That gives no warnings when compiled with:
gcc -O3 -g -std=c99 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes \
-Wold-style-definition cp100k.c -o cp100k
The timings I got were:
foo: 1.781506
bar: 0.155201
baz: 0.144501
foo: 1.276882
bar: 0.187883
baz: 0.191538
foo: 1.090962
bar: 0.179188
baz: 0.183671
foo: 1.898331
bar: 0.142374
baz: 0.140329
foo: 1.516326
bar: 0.146018
baz: 0.144458
foo: 1.245074
bar: 0.180004
baz: 0.181697
foo: 1.635782
bar: 0.136308
baz: 0.139375
foo: 1.542530
bar: 0.138344
baz: 0.136546
foo: 1.646373
bar: 0.185739
baz: 0.194672
foo: 1.284208
bar: 0.145161
baz: 0.205196
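As an aside, the first comment suggested timing strcpy() in place of strcat(). A hypothetical variant (not one of the timed runs above) would be:

/* Hypothetical: replaces the strcat() in foo() with a second strcpy().
   Since strcpy(c, s) leaves c[L] == '\0', strcat(c + L, s) finds the
   terminator immediately and writes at c + L anyway, so the result is
   byte-for-byte identical; only strcat()'s initial scan for the
   terminator (trivial here) is saved. */
static int qux(void)
{
    strcpy(c, s);
    strcpy(c + L, s);
    return 0;
}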
What is weird is that if I forgo 'no warnings' and omit the <string.h> and <stdio.h> headers, as in the original posted code, the timings I got are:
foo: 1.432378
bar: 0.123245
baz: 0.120716
foo: 1.149614
bar: 0.186661
baz: 0.204024
foo: 1.529690
bar: 0.104873
baz: 0.105964
foo: 1.356727
bar: 0.150993
baz: 0.135393
foo: 0.945457
bar: 0.173606
baz: 0.170719
foo: 1.768005
bar: 0.136830
baz: 0.124262
foo: 1.457069
bar: 0.130019
baz: 0.126566
foo: 1.084092
bar: 0.173160
baz: 0.189040
foo: 1.742892
bar: 0.120824
baz: 0.124772
foo: 1.465636
bar: 0.136625
baz: 0.139923
Eyeballing those results, the header-less code seems to be faster than the 'cleaner' code, though I've not run a Student's t-test on the two sets of data, and the timings have very substantial variability (I do have things like Boinc running 8 processes in the background). The effect seemed to be more pronounced in the early versions of the code, when it was just strcpy() and strcat() that were tested. I have no explanation for that, if it is a real effect!
Follow-up by mvds

Since the question was closed, I cannot answer properly. On a Mac doing virtually nothing, I get these timings:
(with headers)
foo: 1.694667 bar: 0.300041 baz: 0.301693
foo: 1.696361 bar: 0.305267 baz: 0.298918
foo: 1.708898 bar: 0.299006 baz: 0.299327
foo: 1.696909 bar: 0.299919 baz: 0.300499
foo: 1.696582 bar: 0.300021 baz: 0.299775
(without headers, ignoring warnings)
foo: 1.185880 bar: 0.300287 baz: 0.300483
foo: 1.120522 bar: 0.299585 baz: 0.301144
foo: 1.122017 bar: 0.299476 baz: 0.299724
foo: 1.124904 bar: 0.301635 baz: 0.300230
foo: 1.120719 bar: 0.300118 baz: 0.299673
Preprocessor output (the -E flag) shows that including the headers translates strcpy into builtin calls like:
((__builtin_object_size (c, 0) != (size_t) -1) ? __builtin___strcpy_chk (c, s, __builtin_object_size (c, 2 > 1)) : __inline_strcpy_chk (c, s));
((__builtin_object_size (c+(100*1024), 0) != (size_t) -1) ? __builtin___strcat_chk (c+(100*1024), s, __builtin_object_size (c+(100*1024), 2 > 1)) : __inline_strcat_chk (c+(100*1024), s));
So the libc version of strcpy outperforms the gcc builtin. (Using gdb, it is easily verified that a breakpoint on strcpy does not actually break at the strcpy() call when the headers are included.)
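If the checked builtins are indeed what slows the header-included build down (my assumption; I have not verified that this is the whole story), one way to test it is to keep the headers but switch the fortify machinery off at compile time and retime:

gcc -O3 -g -std=c99 -D_FORTIFY_SOURCE=0 -Wall -Wextra -Wmissing-prototypes \
    -Wstrict-prototypes -Wold-style-definition cp100k.c -o cp100k

With _FORTIFY_SOURCE defined to 0, the prototypes are still seen, but strcpy() should resolve to the plain libc routine rather than __builtin___strcpy_chk().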
On Linux (Debian 5.0.9, amd64), the differences seem to be negligible. The generated assembly (the -S flag) only differs in the debugging information carried along by the includes.