Computer CPUs (central processing units) have discrete components:
- Core (alt. V-Core)
- Arithmetic Logic Unit (ALU)
- External Bus
- Registers
- Flags
- Cache
- The central core of the processor is designed for general-purpose functions and typically runs faster than the rest of the chip, where cache memory and other chip functions are located. A core consists of a series of logic gates that produce specific outputs for a given set of inputs.
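The gate behavior described above can be sketched in a few lines of Python (an illustrative model, not how hardware is actually specified): a half adder built from an XOR gate and an AND gate yields one fixed output for every combination of inputs.

```python
# Illustrative model of logic gates: each gate maps a set of inputs
# to a specific output.

def xor_gate(a: int, b: int) -> int:
    return a ^ b

def and_gate(a: int, b: int) -> int:
    return a & b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two 1-bit inputs."""
    return xor_gate(a, b), and_gate(a, b)

# Every input combination maps to exactly one output:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Real cores chain millions of such gates; the principle of deterministic outputs for given inputs is the same.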
- Arithmetic Logic Unit (Math Co-Processor)
- The main processor is designed for general-purpose instruction processing. The math co-processor, or ALU, is a specialized co-processor that takes over the workload of intensive floating-point calculations, freeing the main processor to perform other tasks. It is specially designed to process floating-point math faster than the main processor could.
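The division of labor described above can be sketched as a toy dispatcher (function names here are hypothetical; real hardware routes instructions in silicon, not software):

```python
# Toy sketch: general-purpose instructions go to the main processor,
# floating-point instructions are offloaded to the co-processor.

def main_processor(op: str, a, b):
    """Handles general-purpose integer instructions."""
    if op == "add":
        return a + b
    if op == "sub":
        return a - b
    raise ValueError(f"unknown op {op!r}")

def math_coprocessor(op: str, a, b):
    """Specialized for floating-point work."""
    if op == "fadd":
        return float(a) + float(b)
    if op == "fmul":
        return float(a) * float(b)
    raise ValueError(f"unknown op {op!r}")

def dispatch(op: str, a, b):
    # Offloading float work frees the main processor for other tasks.
    unit = math_coprocessor if op.startswith("f") else main_processor
    return unit(op, a, b)
```

For example, `dispatch("add", 2, 3)` runs on the "main processor" path, while `dispatch("fmul", 1.5, 2)` is routed to the "co-processor".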
- External Bus
- This is the interface to the main data bus on the system board.
- Registers
- This is the storage space for instructions and temporary computational data produced by executed instructions. Instructions are written to and read from these registers, as are pointers to the locations in memory where the next batch of instructions is located. The size of a register is measured in bits, in multiples of 8, and determines the size of the instructions that can be processed.
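A small sketch of how fixed-width registers bound what they can hold (register names and the 8-bit width here are illustrative, not a real chip's layout):

```python
# Sketch of a register file: values are masked to the register width,
# mirroring how register size limits what can be stored.

class RegisterFile:
    def __init__(self, width_bits: int = 32):
        self.mask = (1 << width_bits) - 1
        # "IP" stands in for a pointer to the next instruction in memory.
        self.regs = {"AX": 0, "BX": 0, "IP": 0}

    def write(self, name: str, value: int) -> None:
        self.regs[name] = value & self.mask  # truncate to register width

    def read(self, name: str) -> int:
        return self.regs[name]

rf = RegisterFile(width_bits=8)
rf.write("AX", 0x1FF)   # a 9-bit value is truncated to 8 bits
print(rf.read("AX"))    # 0xFF == 255
```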
- Flags
- Flags are located on the chip, as are the registers, and indicate the current state of various functions and operations. Setting or clearing a flag indicates a state change or signals an event.
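Setting and clearing flags is just manipulating individual bits in a status word. A minimal sketch (the flag names and bit positions are illustrative, not a real CPU's layout):

```python
# Flags as single bits in a status word.
ZERO_FLAG  = 1 << 0  # result of the last operation was zero
CARRY_FLAG = 1 << 1  # last operation carried or borrowed

def set_flag(status: int, flag: int) -> int:
    return status | flag

def clear_flag(status: int, flag: int) -> int:
    return status & ~flag

def flag_is_set(status: int, flag: int) -> bool:
    return bool(status & flag)

status = 0
status = set_flag(status, ZERO_FLAG)    # signal an event
print(flag_is_set(status, ZERO_FLAG))   # True
status = clear_flag(status, ZERO_FLAG)  # state change back
print(flag_is_set(status, ZERO_FLAG))   # False
```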
- Cache
- This is super-fast memory that has been integrated into the processor to increase performance. As of 2003, processors have up to three levels of cache: level 1 is the fastest and closest to the core, level 2 is slightly slower and farther away, and level 3 is slower than levels 1 and 2 and farthest from the core. Cache memory is faster than RAM and is placed much closer to the processing core, reducing the time required to fetch the next instruction. By placing an amount of super-fast memory optimized for the chip as close as possible to the core, the system runs faster because it does not have to wait as long while fetching instructions from memory. The cache fetches instructions from RAM in blocks and passes them to the processor. Both AMD and Intel chips now include predictive pre-fetch algorithms designed to predict which blocks of instructions will be used next.
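The block-fetching behavior described above can be sketched with a toy single-level, direct-mapped cache (the sizes and replacement policy are illustrative simplifications of real multi-level caches):

```python
# Toy direct-mapped cache: instructions are fetched from "RAM" in
# blocks, so sequential accesses mostly hit the fast cache.

BLOCK_SIZE = 4          # instructions fetched from RAM per block
NUM_LINES = 8           # cache lines

ram = list(range(256))  # stand-in for instructions in main memory
cache = {}              # line index -> (tag, block)
hits = misses = 0

def fetch(address: int) -> int:
    global hits, misses
    block_number = address // BLOCK_SIZE
    line = block_number % NUM_LINES
    tag = block_number // NUM_LINES
    entry = cache.get(line)
    if entry and entry[0] == tag:
        hits += 1       # already in the fast cache
        block = entry[1]
    else:
        misses += 1     # fetch the whole block from slow RAM
        block = ram[block_number * BLOCK_SIZE:(block_number + 1) * BLOCK_SIZE]
        cache[line] = (tag, block)
    return block[address % BLOCK_SIZE]

for addr in range(8):   # sequential fetches benefit from block loading
    fetch(addr)
print(hits, misses)     # 6 hits, 2 misses: addresses 0-7 span two blocks
```

Only two slow RAM accesses serve eight instruction fetches, which is why instruction streams with good locality run so much faster from cache.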