The RDTSC instruction on x86 CPUs is supposed to read a counter that is incremented (by one) every clock cycle.

However, it doesn't actually do this. Instead, the counter increases at a fixed frequency, independent of the actual clock speed of your CPU, which varies based on load, temperature, etc.
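
Here's a minimal sketch of what using the counter looks like, assuming GCC or Clang on x86-64 (the __rdtsc() intrinsic from <x86intrin.h> compiles down to the RDTSC instruction; the loop is just arbitrary work to time):

```c
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    uint64_t start = __rdtsc();

    /* some arbitrary work to put ticks between the two reads */
    volatile uint64_t sink = 0;
    for (int i = 0; i < 1000000; i++)
        sink += i;

    uint64_t end = __rdtsc();

    /* On modern CPUs this delta counts ticks of a fixed reference
       frequency, not actual core clock cycles. */
    printf("TSC ticks elapsed: %llu\n", (unsigned long long)(end - start));
    return 0;
}
```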

This does not seem to be documented anywhere. I found this out the hard way, after measuring the frequency of RDTSC myself and noticing that it differs from the clock speed reported by CPU-z. The CPU-z value seems far more likely to be correct, since it correlates with the CPU's temperature (reported by SpeedFan) and with the actual amount of work the CPU gets done (the speed of execution of a program).
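
For what it's worth, here's roughly how to do the measurement yourself: read the TSC, wait a known amount of wall-clock time, read it again, and divide. A sketch assuming Linux/POSIX with GCC or Clang (clock_gettime and nanosleep are POSIX; __rdtsc is the compiler intrinsic):

```c
#define _POSIX_C_SOURCE 200112L

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <x86intrin.h>

/* current time in seconds from the monotonic wall clock */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    double   t0   = now_seconds();
    uint64_t tsc0 = __rdtsc();

    /* wait roughly one second of wall-clock time */
    struct timespec delay = { 1, 0 };
    nanosleep(&delay, NULL);

    uint64_t tsc1 = __rdtsc();
    double   t1   = now_seconds();

    /* TSC ticks per second of real time */
    printf("TSC frequency: ~%.3f GHz\n",
           (double)(tsc1 - tsc0) / (t1 - t0) / 1e9);
    return 0;
}
```

Run it at idle and again with all cores loaded: the number it prints barely moves, while CPU-z shows the core clock swinging around.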

On my i7 920, which has an 'on the box' clock speed of 2.67 GHz, RDTSC increments at ~2.664 GHz. The actual clock speed varies from 1.5 GHz at idle to ~2.8 GHz under load (measured with CPU-z).
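
If you want to check whether a given CPU's TSC runs at a fixed rate like this, CPUID extended leaf 0x80000007 reports an 'invariant TSC' flag in EDX bit 8 on both Intel and AMD. A sketch using the GCC/Clang cpuid.h helper:

```c
#include <stdio.h>
#include <cpuid.h>   /* __get_cpuid(), GCC/Clang only */

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    /* Extended leaf 0x80000007: EDX bit 8 is the invariant-TSC flag
       on both Intel and AMD. */
    if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
        printf("Invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
    else
        printf("CPUID leaf 0x80000007 not supported on this CPU\n");

    return 0;
}
```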

There's a great summary of why RDTSC does this in this Stack Overflow answer by 'Brendan': http://stackoverflow.com/a/19942784

In summary: some idiot at some point got confused by changing clock speeds and complained because it broke their dodgy program, so AMD and Intel changed the semantics of RDTSC so that it counts at a fixed frequency.

Meanwhile, I only found this out completely by accident, after several years of using RDTSC.

To me this is a good example of the terrible state modern computer and software 'engineering' is in. Semantic changes like this are not documented properly, or even at all; you only find out years later, by accident, and can only confirm it and find an explanation in some random Stack Overflow post.