I cannot find any info on agner.org on the latency or throughput of the RDRAND instruction. However, processors with this instruction exist, so the information must be out there.
Here are some performance figures I get with rdrand: http://smackerelofopinion.blogspot.co.uk/2012/10/intel-rdrand-instruction-revisited.html
On an i5-3210M (2.5 GHz) Ivy Bridge (2 cores, 4 threads) I get a peak of ~99.6 million 64-bit rdrands per second with 4 threads, which equates to ~6.374 billion bits per second.
On an i7-3770 (3.4 GHz) Ivy Bridge (4 cores, 8 threads) I hit a peak throughput of ~99.6 million 64-bit rdrands per second with 3 threads.
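For reference, a single-threaded measurement of this kind might look like the sketch below. This is my own illustration, not the code from the linked post; it fills a buffer via the _rdrand64_step intrinsic and reports the rate. Build with gcc -O2 -mrdrnd.

```c
/* Minimal sketch: time how fast RDRAND can fill a buffer with 64-bit
 * values. Illustrative only; not the benchmark from the linked post. */
#include <immintrin.h>
#include <stdio.h>
#include <time.h>

#define N (1u << 20) /* 64-bit values per run */

static unsigned long long buf[N];

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned i = 0; i < N; i++)
        while (!_rdrand64_step(&buf[i]))
            ; /* RDRAND sets CF=0 on (rare) underflow: retry */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    /* Print one value so the compiler cannot discard the stores. */
    printf("%.1f M rdrands/s (sample %llx)\n", N / s / 1e6, buf[N - 1]);
    return 0;
}
```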
I wrote librdrand. It's a very basic set of routines to use the RdRand instruction to fill buffers with random numbers.
The performance data we showed at IDF came from test software I wrote that spawns a number of threads using pthreads on Linux. Each thread fills a memory buffer with random numbers using RdRand. The program measures the average speed and can iterate while varying the number of threads.
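A rough sketch of that kind of harness follows. This is my reconstruction, not Intel's actual test code; the thread count is taken from argv so you can sweep it. Build with gcc -O2 -mrdrnd -pthread.

```c
/* Sketch of a threaded RDRAND throughput test (a reconstruction, not
 * Intel's code): each thread fills its own buffer; the main thread
 * reports aggregate throughput across all threads. */
#include <immintrin.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PER_THREAD (1u << 22) /* 64-bit values per thread */
#define MAX_THREADS 64

static void *fill(void *arg)
{
    unsigned long long *buf = arg;
    for (unsigned i = 0; i < PER_THREAD; i++)
        while (!_rdrand64_step(&buf[i]))
            ; /* retry on underflow */
    return NULL;
}

int main(int argc, char **argv)
{
    int n = argc > 1 ? atoi(argv[1]) : 4;
    if (n < 1 || n > MAX_THREADS) n = 4;

    pthread_t tid[MAX_THREADS];
    unsigned long long *bufs[MAX_THREADS];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int t = 0; t < n; t++) {
        bufs[t] = malloc(PER_THREAD * sizeof **bufs);
        pthread_create(&tid[t], NULL, fill, bufs[t]);
    }
    for (int t = 0; t < n; t++)
        pthread_join(tid[t], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double total = (double)n * PER_THREAD;
    printf("%d threads: %.1f M rdrands/s, %.0f MB/s\n",
           n, total / s / 1e6, total * 8 / s / 1e6);
    return 0;
}
```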
Since there is a round-trip communication latency from each core to the shared DRNG and back that is longer than the time needed to generate a random number at the DRNG, the average performance obviously increases as you add threads, up until the maximum throughput is reached. The physical maximum throughput of the DRNG on IVB is 800 MBytes/s. A 4-core IVB with 8 threads manages something on the order of 780 MBytes/s. With fewer threads and cores, lower numbers are achieved. The 500 MB/s number is somewhat conservative, but when you're trying to make honest performance claims, you have to be.
Since the DRNG runs at a fixed frequency (800 MHz) while the core frequencies may vary, the number of core clock cycles per RdRand varies, depending on the core frequency and the number of other cores simultaneously accessing the DRNG. The curves given in the IDF presentation are a realistic representation of what to expect. The total performance is affected a little by core clock frequency, but not much. The number of threads is what dominates.
One should be careful when measuring RdRand performance to actually 'use' the RdRand result. If you don't, i.e. you execute RdRand R6, RdRand R6, ..., RdRand R6 repeated many times, the performance reads as artificially high: since the data isn't used before it is overwritten, the CPU pipeline doesn't wait for the data to come back from the DRNG before it issues the next instruction. The tests we wrote write the resulting data to memory that will be in on-chip cache, so the pipeline stalls waiting for the data. That is also why hyperthreading is so much more effective with RdRand than with other sorts of code.
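To illustrate the pitfall, here is my own example in x86-64 GCC/Clang inline asm (not code from the IDF material): the first loop never consumes the result, so back-to-back RdRands can issue without waiting for the DRNG reply; the second stores each value, creating the dependency described above.

```c
/* Illustration of the measurement pitfall (my example, x86-64 GCC/Clang
 * inline asm; not code from the IDF material). */
#include <stdint.h>

void discard_loop(unsigned n)
{
    uint64_t r;
    for (unsigned i = 0; i < n; i++)
        /* Result never consumed: the pipeline need not wait for the
         * DRNG reply, so this measures artificially high. */
        __asm__ volatile("1: rdrand %0; jnc 1b" : "=r"(r) : : "cc");
}

void store_loop(uint64_t *buf, unsigned n)
{
    for (unsigned i = 0; i < n; i++) {
        uint64_t r;
        __asm__ volatile("1: rdrand %0; jnc 1b" : "=r"(r) : : "cc");
        buf[i] = r; /* the store consumes the value: a realistic measurement */
    }
}
```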
The details of the specific platform, clock speed, Linux version, and GCC version were given in the IDF slides; I don't remember the numbers off the top of my head. There are chips available that are slower and chips available that are faster. The <200 cycles per instruction figure we gave is based on measurements of about 150 core cycles per instruction.
The chips are available now, so anyone well versed in the use of rdtsc can do the same sort of test.
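That sort of test might look like the following sketch (again my example, not Intel's). One caveat: on modern parts RDTSC counts fixed reference cycles rather than core cycles, so pin the core frequency if you want cycle-accurate numbers.

```c
/* Sketch of an rdtsc-based cycles-per-RDRAND measurement (my example).
 * RDTSC counts reference cycles on modern CPUs, not core cycles. */
#include <immintrin.h>
#include <x86intrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1 << 16 };
    unsigned long long v, sink = 0;

    uint64_t start = __rdtsc();
    for (int i = 0; i < N; i++) {
        while (!_rdrand64_step(&v))
            ; /* retry on underflow */
        sink ^= v; /* consume the value so each RDRAND must complete */
    }
    uint64_t cycles = __rdtsc() - start;

    printf("~%.0f reference cycles per rdrand (checksum %llx)\n",
           (double)cycles / N, sink);
    return 0;
}
```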
I have done some preliminary throughput tests on an actual Ivy Bridge i7-3770 using Intel's "librdrand" wrapper, and it generates 33-35 million 32-bit numbers per second on a single core.
The 70M number from Intel is for about 8 cores; for one core they report only about 10M, so my test is over 3x better :-/
You'll find some relevant information in the Intel Digital Random Number Generator (DRNG) Software Implementation Guide.
A verbatim quote follows:
Measured Throughput:
- Up to 70 million RDRAND invocations per second
- 500+ million bytes of random data per second
- Throughput ceiling is insensitive to the number of contending parallel threads