MEASURING CPU SPEED

The main measurement quoted by manufacturers as a supposed indication of processing speed is the clock speed of the chip, measured in hertz. The higher the number of megahertz or gigahertz, the faster the processor. However, comparing raw oscillator speeds is rarely a fair comparison between chips, especially those produced by different manufacturers. Counting how many instructions are processed per second is a better measurement. Most chips today can process more than one instruction per cycle of the oscillator clock. By multiplying the number of instructions processed each cycle by the number of cycles per second, one can determine roughly how many instructions are processed each second (an exact figure requires specific knowledge of the chip's architecture).
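As a rough sketch of that arithmetic, the short C program below multiplies a clock rate by an assumed average instructions-per-cycle (IPC) figure. Both numbers are hypothetical stand-ins for values you would look up in a real chip's documentation, not measurements of any actual processor.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical figures for illustration only. */
        double clock_hz = 3.2e9;              /* a 3.2 GHz oscillator clock */
        double instructions_per_cycle = 4.0;  /* assumed average IPC */

        /* instructions per second = IPC x cycles per second */
        double ips = clock_hz * instructions_per_cycle;
        printf("Roughly %.1f billion instructions per second\n", ips / 1e9);
        return 0;
    }

Real chips do not sustain a fixed IPC; the average depends on the instruction mix, cache behavior and pipeline stalls, which is why the estimate above is only a ballpark figure.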

There are two schools of thought on how to measure the speed of a computer chip: one school deems floating-point math operations the best determinant, the other raw instructions. The debate revolves around whether the chip in question has special math-related hardware. When a chip is rated by the number of floating-point operations it performs per second, the rating is given in megaflops, gigaflops or teraflops (so far). When the speed is measured using raw instructions, it is listed in millions of instructions per second, or MIPS.
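The sketch below illustrates how a floating-point rating is derived in practice: it times a fixed number of multiplies and divides the count by the elapsed time to get an approximate MFLOPS figure. This is a toy measurement under stated assumptions, not a rigorous benchmark; loop overhead, compiler settings and the single operation type all skew the result.

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* Crude MFLOPS estimate: time n floating-point multiplies.
         * The 'volatile' qualifier keeps the compiler from optimizing
         * the loop away entirely. */
        const long n = 100000000L;      /* 100 million multiplies */
        volatile double x = 1.0000001;

        clock_t start = clock();
        for (long i = 0; i < n; i++)
            x = x * 1.0000001;          /* one floating-point operation */
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

        /* operations per second, scaled to millions (MFLOPS) */
        printf("~%.1f MFLOPS (one multiply per iteration)\n",
               n / elapsed / 1e6);
        return 0;
    }

Published FLOPS ratings come from standardized benchmark suites rather than a single-operation loop like this, so treat the printed number as an illustration of the unit, not a specification of the hardware.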
