There are various measures.
MIPS - Millions of Instructions Per Second. Sometimes derided as 'Meaningless Indication of Processor Speed', because different instructions take different numbers of cycles, so the figure depends heavily on the instruction mix.
(mega, giga, tera) FLOPS - FLoating-point Operations Per Second. More specific than MIPS, but only measures one type of operation. Popular for supercomputers, since floating-point arithmetic makes up most of what they do.
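The FLOPS definition above is just operations divided by time, which can be sketched directly. This is a hypothetical illustration, not a real benchmark: a pure-Python loop is dominated by interpreter overhead, so the number it reports wildly understates what the hardware can actually do.

```python
import time

def estimate_flops(n=1_000_000):
    # Time n iterations of a multiply-add, then apply
    # FLOPS = floating-point operations / elapsed seconds.
    # Interpreter overhead makes this a lower bound only.
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc + x * x   # 2 floating-point operations per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

print(f"~{estimate_flops() / 1e6:.1f} MFLOPS (interpreter-limited)")
```

Real FLOPS benchmarks such as LINPACK run tuned native code on large matrix problems for exactly this reason.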
There has been a steady increase over time in instructions executed per clock cycle, which has become more significant now that clock speeds have levelled off at around 3 - 3.5 Gigahertz.
For a mass-market desktop processor, some top-end chips reach 100 - 150 thousand MIPS. Graphics cards can reach the teraflops range on certain specialised calculations, and supercomputers are well into the petaflops.
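The desktop figure above follows from multiplying clock speed, instructions per clock, and core count. A back-of-envelope sketch, where the core count and IPC values are illustrative assumptions rather than measurements of any particular chip:

```python
# Rough sustained throughput: MIPS = cores * clock (Hz) * IPC / 1e6.
# All figures below are assumed for illustration only.
cores = 8                 # assumed core count for a desktop chip
clock_hz = 3.5e9          # ~3.5 GHz, where clock speeds have levelled off
ipc = 4                   # assumed instructions retired per cycle, per core

mips = cores * clock_hz * ipc / 1e6
print(f"{mips:,.0f} MIPS")   # 112,000 MIPS - within the quoted range
```

This also shows why per-clock throughput matters once clock speed stops rising: with the frequency fixed, the only ways to grow the headline figure are more instructions per cycle or more cores.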