Thursday, 31 March 2016

Types of Network

Types of Network:

Peer-to-peer:
  • All connected computers have equal status - there is no centralised management
  • All peers can share files and peripherals
  • Each peer computer may be accessed by any other peer
  • Prone to data collisions
  • Well suited to sharing files and working together on projects
  • Network speed is reduced
  • Often implemented over the internet
  • Used to facilitate file-sharing (both legal and illegal) and Bitcoin payments.
It is very hard to block this type of network as there is no central management or server.

Client server:
  • Most common architecture
  • Dedicated, high-spec machine is the Server:
    • centralised storage of data
    • processing of shared files
    • printing
    • internet access etc
    • Security
  • Clients request services from the Servers.
  • File server, web server, print server etc
  • Data Centres = multiple servers stacked together
  • Virtualised servers:
    • increased efficiency
    • reduced energy consumption
  • The types of servers:
    • Print
    • Email
    • File
    • Database 
    • Website
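To make the client-server idea concrete, here is a minimal sketch in Python using the standard socket module. The host name, port number and messages are invented for illustration; a real file, print or web server would speak a full protocol (e.g. HTTP or FTP) on top of connections like these.

import socket

# Minimal server: listens on a port and answers each client request
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 5000))        # hypothetical address and port
server.listen()
conn, addr = server.accept()            # wait for a client to connect
request = conn.recv(1024)               # read the client's request
conn.sendall(b"response to: " + request)
conn.close()
server.close()

Run in a second process, a matching client simply requests a service and reads the reply:

import socket

# Minimal client: requests a service from the server above
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("localhost", 5000))
client.sendall(b"GET file.txt")         # made-up request
print(client.recv(1024))
client.close()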

The 7 layer OSI Model:

  • OSI (Open Systems Interconnection)
  • Lots of different networks, operating systems & protocols
  • No standard means of communication / sharing data between different types of device
  • The OSI model evolved to create a standard
  • Describes a set of protocols that allow computers with different architectures to be linked together so that they can communicate & share data
The 7 stages:

Seven Layers of OSI Model:
•    Application Layer - Network-aware applications. It is the software that the user interacts with. Examples of this are email, web browsers, print servers, network drives and social media.

•    Presentation Layer - Converts data between the formats required for applications & for transmission over the network. It is responsible for encryption, compression and translation where necessary.

•    Session Layer - Establishes, manages and ends communication sessions between applications. Examples of this are managing connection sessions, login rights, file/folder permissions and access rights.

•    Transport Layer -
Sends the data from the source address to the destination address and guarantees end-to-end delivery of data. Handles error checking and the correct transmission of sent / received data:
- Checks that the destination address exists
- Makes sure the data is in the correct order
- Makes sure it is all there


A data packet broken down into sections:
Sequence number = ensures the data can be reassembled in the correct order
Flags = control bits (e.g. SYN, ACK, FIN) that manage the connection
Source port = identifies the service / application the data is coming from
Destination port = identifies the service / application the data is going to
Checksum = used to check the data has arrived without errors
DATA = the actual data itself
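As a rough illustration of how these header fields sit in front of the payload, the sketch below packs a simplified header with Python's struct module. The field sizes, order and values here are invented for illustration and are not the real TCP header layout.

import struct

# Made-up header layout: source port, destination port, sequence number,
# flags, checksum (a real TCP header is larger and laid out differently)
source_port = 50432
dest_port = 80
sequence_number = 1
flags = 0b000010                 # illustrative flag bits
checksum = 0
payload = b"hello"

header = struct.pack("!HHIBH", source_port, dest_port,
                     sequence_number, flags, checksum)
packet = header + payload        # the header travels in front of the data
print(len(header), "header bytes +", len(payload), "data bytes")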

A data packet's journey can be viewed using traceroute

•    Network Layer- Responsible for transmitting and routing data packets via the shortest possible path across the network to their destination (shortest time not distance!).

•    Data Link Layer -
Decides whose turn it is to send / receive data ("bus arbitration"). Locates the physical device on the network, generally via a switch, using its MAC address.

•    Physical Layer -
Defines the physical properties of the network, for example:
- Cables
- Voltages & frequencies
- Bit encoding
- Transfer rates

TCP/IP:

  • Transmission Control Protocol / Internet Protocol
  • A suite of protocols
  • Describes how data is communicated over a network.
There are 4 layers:
- Application
- Transport
- Internet
- Link (Network Access / Network Interface)

Overview of TCP/IP

PDU:

  • Protocol Data unit
  • Term used to describe the information on any given layer of the TCP/IP stack
  • Each layer has an associated PDU:
    • Application layer: DATA
    • Transport layer: SEGMENT (TCP) / DATAGRAM (UDP)
    • Internet layer: DATAGRAM
    • Network Access layer: FRAME (data link layer) / BITS (physical layer)
  • Terminology is not strictly followed… often the term ‘packet’ is used at any layer!

Layer addressing:
Each layer has its own method of identifying the source address & destination address of data.
At the application layer there is just data, no addressing.
At the transport layer data is broken into segments that use port numbers to identify services.
At the Internet layer each network device has a unique IP address. IP datagrams use IP addresses to reach the correct destination. An IPv4 address, e.g. 192.168.0.1, is made up of four 8-bit values; IPv6 is being implemented to give a far wider range of IP addresses.
At the link layer Ethernet splits data packets into frames. Frames use the physical address of the device, the MAC address, which is made up of 6 two-digit hex values, e.g. 88-CB-87-E4-17-4F.
A MAC address is unique to a device and is fixed into the hardware.
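As an aside, most languages can read the local machine's MAC address; the sketch below uses Python's uuid.getnode(), which returns the 48-bit hardware address as an integer (it may fall back to a random value if no hardware address can be found).

import uuid

# uuid.getnode() returns the MAC address as a 48-bit integer
mac_int = uuid.getnode()

# Format it as six two-digit hex values, e.g. 88-CB-87-E4-17-4F
mac_hex = "-".join(f"{(mac_int >> shift) & 0xFF:02X}"
                   for shift in range(40, -8, -8))
print(mac_hex)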

MAC (Medium Access Control) Address 

Physical Layer
  • Devices that extend the physical network
  • Help relay bits from A to B
Internet / Network Layer
  • Devices examine data packets and make decisions based on IP address of sender / recipient
Transport Layer
  • Devices examine segments and make decisions based on Port number

Datagram:

  • Self-contained / self-sufficient unit of data
  • Contains source & destination addresses in the header
  • Primarily used in wireless communications
  • Data sent to destination without any pre-defined route
  • No guarantee of delivery
  • No confirmation upon successful delivery
  • Order of sending / receiving datagrams is not considered
  • Supports a maximum of 65,535 bytes at a time
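A minimal sketch of sending a datagram over UDP in Python is shown below; the destination address, port and message are invented for illustration. Note that nothing in the code can confirm whether the datagram actually arrived.

import socket

# UDP socket: connectionless, no guarantee of delivery or ordering
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Each datagram carries its own destination address; no route is set up first
sock.sendto(b"hello, datagram", ("192.0.2.1", 9999))   # example address / port
sock.close()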





Thursday, 24 March 2016

Networks

What is a network?
A network is a linked set of computer systems which may be capable of sharing computer power and resources such as printers and databases.

Network topology:
A topology is the theoretical arrangement of components of a network.
Actual arrangements are determined by physical factors
Topologies will affect:
- Cost
- Performance
- Ease of installation

The types of topology:

Star:
  • Shared link to server(s)
  • Central node is the Hub
  • Few data collisions
  • Fast, robust and cost effective
  • Can set up independent segments
  • The hub can be another node or a switch
  • The hub has a separate connection to each node



Bus :
  • This uses one common linking cable (bus)
  • Cheapest network design
  • Network will slow down when there is heavy traffic
  • Network is prone to lots of data collisions
  • Breakage to the bus will affect the whole network.
  • Only a limited distance can be covered.
  • Terminators are required at each end of the bus; they absorb the signal so that data does not reflect back along the cable.



Ring:
  • One direction traffic
  • fast performance
  • Every node is required to have a network interface controller (NIC), which allows it to communicate across the network using a series of protocols.
  • Data will pass through the NIC of each node



Mesh:
  • Most common type of network
  • Using decentralised design
  • can be wired or wireless
  • no single point of failure
  • each node connects to 2+ other nodes
  • Nodes communicate directly with one another without needing an internet connection.





All Types of network topology:

LAN:

  • Local Area Network
  • Networked computers are located fairly near to one another geographically
  • An example is all the computers in a school or office.
  • Each device is called a node.
  • The entire infrastructure is owned by the organisation that owns the LAN.
  • All infrastructure is the responsibility of, and is maintained by, the organisation or individual.
  • Some equipment can be leased from external companies - in this case the leasing company is responsible for, and in control of, repairing it. This usually applies to routers or wireless access points.
Advantages:
  • allows communication between workers
  • allows data/files/information to be shared
  • peripherals can be shared e.g. printers
  • computers (software) can be updated/upgraded more easily (also virus scans)
  • Log on from any connected machine
  • Distributed processing, where a program can be run simultaneously on many nodes
LAN Hardware:
  • NIC - needed to connect to a LAN; allows a computer to communicate over a network by providing physical access to the network and a unique address for each node, the MAC (Media Access Control) address.
  • Router:
    • forwards data packets across many networks, so is different to a switch. Routers receive packets, read the address information and use a routing table to forward the packet on to the next network.
  • Switch:
    • Allows network segments to be created, which reduces data collisions. A switch is hardware within a network used for internal communication, whereas a router forwards data packets across many networks.
Wireless access point (WAP):
  • Allows wireless devices to connect to a wired network
  • uses WI-FI, Bluetooth or related standards
  • Usually connects to a router via a wired network.
  • can relay data between the wireless devices and the wired network

WAN:
  • Wide area Network
  • Computers are located in various distant locations geographically 
  • WAN is the result of joining two or more geographically separate LANs via satellite, fibre-optic cables, telephone lines or a combination of these.
  • The infrastructure may be provided by telecoms companies.
  • The largest WAN in existence is the Internet.

SAN:
  • Storage Area Network
  • Dedicated network used for large scale storage of data in data centres.
  • Common uses of a SAN include email servers, databases, and high usage file servers
PAN:
  • Personal Area Network
  • Used for data transmission among devices such as computers, smartphones & tablets. Can be used for communication between personal devices or to connect to a higher level network and / or Internet.

The Cloud:
  • Data storage and services moved off site
  • 3rd party manages maintenance, security, backups etc.
Advantages:
– No in-house maintenance
– Cheaper (less staff)

Disadvantages:
–loss of control / Security
–Relies on an internet connection

Networking

Networking:
- A stand-alone computer is one that is separate from and unconnected to others
- A network is a set of connected computers that share data or resources
- The Internet is a network

Pros: 
  • sharing of resources and peripherals,
  • share data and work together, 
  • easy backups
Cons: 
  • Users can become dependent on the network
  • Downtime - a company could lose money while the network is down
  • Lower levels of security than a stand-alone machine
  • There are higher rates of traffic, which reduces performance
- Protocols are needed so that computers with different hardware can all communicate
- A protocol is a set of rules relating to the communication between devices
- TCP/IP (Transmission Control Protocol / Internet Protocol) is used to transfer all packets across the Internet and provides error-free packet switching
- FTP (File Transfer Protocol), or HTTP, or SMTP


TCP/IP, DNS, Layering:
- Protocols have layers which each have a different job
- The OSI model has 7 layers describing how to structure a protocol: physical, data link, network, transport, session, presentation, application
- Data will pass through all layers of a protocol
- TCP/IP does not have a session or presentation layer
- DNS allows resources to be named on a network and is built into TCP/IP
- DNS replaces IP addresses with a user-friendly name, e.g. google.co.uk; when a resource is requested, DNS resolves the domain name in the URL to the IP address
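A quick sketch of this resolution step in Python (socket.gethostbyname asks the operating system's DNS resolver; the domain is just an example):

import socket

# Ask DNS (via the OS resolver) for the IP address behind a domain name
ip_address = socket.gethostbyname("google.co.uk")
print(ip_address)        # prints the resolved IPv4 address (value will vary)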


LAN and WAN:
- A local area network covers a small geographical area and is owned and maintained by a single organisation
- Wide area network is over a large geographical area
- You can connect to a network over wireless or wired connections
- A NIC is needed for a computer to connect to and communicate with a network; each NIC has a unique MAC address, which is fixed
- An IP address can change so is a logical address

Client and Peer to Peer:
- In a client server model, the clients request services from the server, used in school
- In a peer to peer model all computers have equal status and communicate with each other, very cheap to implement
- Peer to Peer allows easy file sharing eg bit torrent

Packet Switching:
- Method of transmitting data over a network
- In circuit switching a fixed route is established between two nodes before data transmission begins; all data is transmitted along this route before the connection is released. This is an older way of exchanging data and is less secure
- In packet switching the data is broken up into packets, which are sent along individual routes to the destination and may arrive out of sequence, so they need to be reassembled at the destination. This is more efficient and more secure

Tuesday, 22 March 2016

Translators

Translators:
A translator is any program that takes source code and turns it into machine code.
There are three different types of translators:

Compilers:
Both compilers and interpreters translate source code into machine code. They both have a one-to-many relationship: a single line of source code may become many lines of binary equivalent machine code.
Compilers take all of the source code and convert it all into machine code in one go; this can take a long time.

Advantages:
Disadvantages:
  • All errors need to be fixed before the program can run
  • Compilation can be very time consuming


Interpreters:
An interpreter takes one line of high-level source code at a time, converts it directly into machine code and runs it (see the toy sketch below).

Advantages:
  • Starts running the program immediately
Disadvantages:
  • The source code is required every time the program runs, so there are security issues (the source is exposed)
  • Interpreted programs run more slowly than compiled programs
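As an illustration of the line-by-line idea, here is a toy interpreter sketch in Python for a made-up mini-language with just two commands (ADD and PRINT); it only shows the read-translate-execute loop in miniature, not how a real interpreter works.

# Toy interpreter: reads one line at a time, decodes it and executes it
program = [
    "ADD 2 3",           # made-up instructions for illustration
    "PRINT hello",
    "ADD 10 32",
]

for line in program:
    parts = line.split()
    if parts[0] == "ADD":
        print("ADD result:", int(parts[1]) + int(parts[2]))
    elif parts[0] == "PRINT":
        print(parts[1])
    else:
        print("Unknown instruction:", parts[0])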


Assemblers:
Assemblers will convert low level assembly code into machine code, usually in a 1 to 1 conversion, one line corresponds to a machine code instruction.

Advantages:
Disadvantages:

Summary of Translators:

Compilers Vs Interpreters:

Stacks and Queues Algorithms


Stacks:
  • Last in first out
  • The most recent piece of data added to the list is the first piece of data to be removed.
  • These are good for linear data
  • These operate on a LIFO data structure (Last in, First Out)
  • Add data from the top and remove from the top (Pancake stack on a plate, the last on the plate: Last cooked, is the first eaten!)
  • Adding data to the stack is called Pushing
  • Removing data from the stack is called Popping
    • These terms are used with the Little Man computer
  • Stacks are implemented using an array
  • A stack within the computer's memory system is implemented using a pointer.
  • The stack pointer is used to locate the top of the stack
  • Stack pointers use zero-based indexing.
  • Pushing or popping always takes place at the top of the stack
  • If you have a pop instruction the stack pointer moves by -1
  • If you have a push instruction the stack pointer moves by +1
  • Errors can occur at the bottom or the top of the stack (see the sketch below):
    • if the stack is full and you try to push another piece of data there will be a stack overflow error
    • if the stack is empty and you try to pop a piece of data there will be a stack underflow error
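A minimal array-based stack sketch in Python, with a pointer to the top and the overflow / underflow checks described above (the capacity of 4 is arbitrary):

# Fixed-size stack: a list acts as the array, plus a top-of-stack pointer
capacity = 4
stack = [None] * capacity
pointer = -1              # -1 means the stack is empty (zero-based indexing)

def push(item):
    global pointer
    if pointer == capacity - 1:
        print("Stack overflow error")
        return
    pointer += 1          # push: pointer moves by +1
    stack[pointer] = item

def pop():
    global pointer
    if pointer == -1:
        print("Stack underflow error")
        return None
    item = stack[pointer]
    pointer -= 1          # pop: pointer moves by -1
    return item

push("a"); push("b")
print(pop())              # prints "b" - last in, first out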
 
Queues:
  • First in first out
  • The most recent piece of data added to the queue is the last piece of data to be taken out
  • There are two pointers:
    • One to monitor the back of the queue where data is added
    • One for the front of the queue where data will be removed from
    • When a queue is empty the two pointers have the same value.
  • The data within a queue does not physically move forward in the queue (this would be inefficient); instead the two pointers are used to denote the front (data removed) and back (data added) of the queue, and it is these pointers that move.
  • The pointers are given a designated range of locations so they know where to wrap around.
  • Queues are a circular data structure - unlike stacks (see the sketch below).
  • The back pointer can be 'before' the front pointer; because the queue is circular, reading wraps around from the end of the designated range back to its beginning.
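A minimal circular-queue sketch in Python using a fixed-size list and front / back pointers (the capacity of 5 is arbitrary):

# Circular queue with a fixed capacity and two pointers that wrap around
capacity = 5
queue = [None] * capacity
front = 0                 # where data is removed
back = 0                  # where data is added
count = 0                 # number of items currently stored

def enqueue(item):
    global back, count
    if count == capacity:
        print("Queue is full")
        return
    queue[back] = item
    back = (back + 1) % capacity    # wrap around the end of the array
    count += 1

def dequeue():
    global front, count
    if count == 0:
        print("Queue is empty")
        return None
    item = queue[front]
    front = (front + 1) % capacity
    count -= 1
    return item

enqueue("a"); enqueue("b")
print(dequeue())          # prints "a" - first in, first out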


Thursday, 17 March 2016

Other Architectures and factors affecting performance.

Types of Architectures:

Von Neumann-
  • the computer consisted of a CPU, memory and I/O devices. The program is stored in the memory. The CPU fetches an instruction from the memory at a time and executes it.
  • Instructions are executed sequentially - this makes the process very slow
  • Instruction execution is controlled by a program counter
  • To increase speed, parallel processing has been developed, in which several CPUs are connected in parallel to solve a problem
  • uses a processing unit and a single separate storage structure to hold both instructions and data.
  • Instructions and data share the same memory space.
  • Instructions and data are stored in the same format
  • A single control unit follows a linear fetch decode execute cycle.
  • One instruction at a time.
  • Registers are used as fast access to instructions and data

Harvard-
  •  Splits memory into two parts - one for data and the other for instructions.
  • The problem with the Harvard architecture is complexity and cost.
  • Instead of one data bus there are now two
  • Having two buses  means more pins on the CPU, a more complex motherboard and doubling up on RAM chips as well as more complex cache design. This is why it is rarely used outside the CPU.
  • sometimes used within the CPU to handle its caches.
  • Instructions and data are stored in separate memory units.
  • each instruction and piece of data has its own bus.
  • Reading and writing data can be done at the same time as fetching an instruction
  • Used by RISC processors


CISC Vs RISC:

RISC- Reduced instruction set computers
CISC - Complex instruction set computers

There are two types of fundamental CPU architecture: CISC and RISC

CISC:
CISC is the most prevalent and established microprocessor architecture
  • A large number of complicated instructions
  • Can take many clock cycles for a single instruction
  • Tend to be slower than RISC
  • Wider variety of instructions and addressing modes
  •  Less reliant on registers
  • CISC cannot support pipelining

RISC:
RISC is a relatively new architecture
  • RISC architectures only support a small number of instructions
  • All instructions are simple
  • When executed these instructions can be completed in 1 clock cycle
  • More instructions are needed to complete a given task but each instruction is executed extremely quickly
  • More efficient at processing as there are no unused instructions
  • RISC needs a greater number of registers to provide faster access to data when programs are being executed - quick access to data as the instructions are processed.
  • RISC can support pipelining
CISC:
  • Large instruction set
  • Complex, powerful instructions
  • Instruction sub-commands micro-coded in on-board ROM
  • Compact and versatile register set
  • Numerous memory addressing options for operands

RISC:
  • Compact instruction set
  • Simple, hard-wired machine code and control unit
  • Pipelining of instructions
  • Numerous registers
  • Compiler and IC developed simultaneously


COMPARISON:

  • CISC has more complex hardware; RISC has simpler hardware
  • CISC gives more compact software code; RISC gives more complicated (longer) software code
  • CISC takes more cycles per instruction; RISC takes one cycle per instruction
  • CISC can use more RAM to store intermediate results; RISC needs fewer RAM movements as there are more registers available
  • With CISC the compiler can use a single machine instruction; with RISC the compiler has to use a number of instructions
  • CISC uses less RAM to store a program; RISC has to use more RAM as it uses more instructions
  • CISC legacy - there is about 40 years' worth of code that still needs to run on a CPU; for RISC the historic need to run previously coded software is much less


CPU Time (Execution Span) = (number of instructions)*(average clocks per instruction)*(seconds per cycle)

RISC attempts to reduce execution time by reducing average clock cycles per instruction.
CISC attempts to reduce execution time by reducing the number of instructions per program.
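A quick worked example of the formula, using made-up figures purely for illustration:

# CPU time = instructions x average clocks per instruction x seconds per cycle
instructions = 1_000_000                 # instructions in the program
clocks_per_instruction = 2               # average CPI
seconds_per_cycle = 1 / 2_000_000_000    # a 2 GHz clock

cpu_time = instructions * clocks_per_instruction * seconds_per_cycle
print(cpu_time)                          # 0.001 seconds for these figures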

RISC:
  • Simple instructions
  • Fewer instructions
  • Fewer addressing modes
  • Only LOAD and STORE instructions can access memory

CISC:
  • Complex instructions (often made up of many simpler instructions)
  • A large range of instructions
  • Many addressing modes
  • Many instructions can access memory


Multicore Systems:
  • The classic von Neumann architecture uses only a single processor to execute instructions. In order to improve the computing power of processors, it was necessary to increase the physical complexity of the CPU. Traditionally this was done by finding new, ingenious ways of fitting more and more transistors onto the same size chip.
  • Moore’s Law, predicted that the number of transistors that could be placed on a chip would double every two years.
  • However, as computer scientists reached the physical limit of the number of transistors that could be placed on a silicon chip, it became necessary to find other means of increasing the processing power of computers. One of the most effective means of doing this came about through the creation of multicore systems (computers with multiple processors)
  • Multicore processors are now very common and popular; the processor has several cores, allowing multiple programs or threads to be run at once.
Advantages and Disadvantages of Multicore-processors:
Advantages:
  • More jobs can be done in a shorter time because they are executed simultaneously.
  • Tasks can be shared to reduce the load on individual processors and avoid bottlenecks.

Disadvantages:
  • It is difficult to write programs for multicore systems, ensuring that each task has the correct input data to work on.
  • Results from different processors need to be combined at the end of processing, which can be complex and adds to the time taken to execute a program.
  • Not all tasks can be split across multiple processors.

Parallel systems:
  • One of the most common types of multicore system
  • In parallel processing, two or more processors work together to perform a single task.
  • They tend to be referred to as dual-core (two processors) or quad-core (four processors) computers.
  • The task is split into smaller sub-tasks (threads).
  • These tasks are executed simultaneously by all available processors (any task can be processed by any of the processors).
  • This hugely decreases the time taken to execute a program, but software has to be specially written to take advantage of these multicore systems.

Parallel computing systems are generally placed in one of three categories:
  • Multiple instruction, single data (MISD) systems have multiple processors, with each processor using a different set of instructions on the same set of data.

  • Single instruction, multiple data (SIMD) computers have multiple processors that follow the same set of instructions, with each processor taking a different set of data. Essentially SIMD computers process lots of different data concurrently, using the same algorithm.

  • Multiple instruction, multiple data (MIMD) computers have multiple processors, each of which is able to process instructions independently of the others. This means that a MIMD computer can truly process a number of different instructions simultaneously.

MIMD is probably the most common parallel computing architecture.
  • All the processors in a parallel processing system act in the same way as standard single-core (von Neumann) CPUs, loading instructions and data from memory and acting accordingly.
  • However, the different processors in a multicore system need to communicate continuously with each other in order to ensure that if one processor changes a key piece of data (for example, the players’ scores in a game), the other processors are aware of the change and can incorporate it into their calculations.
  • There is a huge amount of additional complexity involved in implementing parallel processing, because when each separate core (processor) has completed its own task, the results from all the cores need to be combined to form the complete solution to the original problem.
  • Complexity may mean that processing is slowed down: the additional time taken to co-ordinate communication between processors and combine their results into a single solution can be greater than the time saved by sharing the workload.
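A minimal sketch of sharing a task across cores with Python's multiprocessing module; the work function and data are invented for illustration, and real parallel code also has to handle the coordination and combining overhead described above.

from multiprocessing import Pool

def square(n):
    # Stand-in for a sub-task (thread) handed to one of the cores
    return n * n

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6, 7, 8]
    with Pool(processes=4) as pool:       # e.g. a quad-core split
        results = pool.map(square, data)  # sub-tasks run simultaneously
    print(results)                        # results combined into one list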




Buses

Communication between components in  the CPU and input and output devices is via buses.
 A bus is a path down which information can pass.
The system bus, also known as the front side bus, contains 3 different types of bus:
(Within the CPU they behave as follows)
- Control - carries control signals to help manage the whole process; in constant contact with the control unit of the CPU
- Data - in constant contact with the memory data register
- Address - in constant contact with the memory address register; contains the location of where the data will be going.

The data and address bus carry data in tandem, as the data goes along the data bus the address bus contains the address of where the data will be going.

A bus is a path down which information can pass. The system bus, or front side bus, is made up of 3 other buses: the control, data and address buses. Buses are used throughout the whole system, including register to register. The data bus (which communicates with the MDR) and the address bus (which communicates with the MAR) run in tandem: one carries the data, the other carries the address of where it is going. The control bus (which communicates with the control unit) sends commands to the different components of the system, carrying control signals from the control unit.

Monday, 14 March 2016

Sorting Algorithms (Bubble and Shuttle)

Sorting algorithms are needed in order to produce a list in numerical order; this improves the efficiency of searching the list.

Google sorts webpages by page links (this is how many references a page gets on websites and social media). This process allows webpages to be stored by relevancy.

Bubble Sort
 
Summary
 It works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The algorithm gets its name from the way smaller elements “bubble” to the top of the list. This is the least useful sorting algorithm
One final sweep is needed of the whole list so that the list can be checked to see if it is in the right order.
- Has quadratic order - a time complexity of O(n^2)

Advantage – simple to implement.

Disadvantage – inefficient for large lists. The big O time complexity for bubble sort is n^2, this means it is extremely slow.
 

Description of how a bubble sort works

·         Start from the first element to the last

·         Compare each pair of elements and swap their positions if necessary

·         This process is repeated as many times as necessary, until the array is sorted (We know when the array is sorted because there will have been no swaps in the last comparison of all pairs of elements.)

·         After the first pass the last element of the list is sorted so no need to compare again.

·         After the second pass - last two elements sorted, third pass – last three etc…
 
The largest values always end up at the end of the list; they are moved there one at a time, one per pass.
 

Algorithm for Bubble sort:
 
def bubble_sort(aList):
    exchanges = True
    passnum = len(aList) - 1
    # Keep making passes until a full pass makes no swaps
    while passnum > 0 and exchanges:
        exchanges = False
        for index in range(passnum):
            if aList[index] > aList[index + 1]:
                exchanges = True
                # swap the adjacent pair
                temp = aList[index]
                aList[index] = aList[index + 1]
                aList[index + 1] = temp
        passnum = passnum - 1
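For example, with some made-up data:

numbers = [5, 1, 4, 2, 8]
bubble_sort(numbers)
print(numbers)    # [1, 2, 4, 5, 8]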
 
Insertion Sort
 
Summary
We can divide a list into a sorted part and an unsorted part. Initially the first element is the only element in the sorted part; then, as you work through sorting the second element to the last, the sorted part grows. An insertion sort works by taking elements from the unsorted part and inserting them in their correct position in the sorted part. It is similar to the bubble sort, however we do not move step by step: you take one element and then move it to its exact place, swapping if need be.

Advantages – Insertion sort is a simple sorting algorithm and relatively efficient for small lists and mostly-sorted lists. It is faster than the bubble sort.  It’s relatively fast for adding a small number of new items periodically. Large saving compared to the bubble sort.
Reasonably simple code. Good for smaller lists. Works very well when the lists is nearly sorted
Very memory efficient - it only needs one extra storage location to make room for the moving items.

Disadvantage – inefficient for large lists.

The Big O time complexity of insertion sort is O(n^2)
 
Description of how an insertion sort works:

It works the way you might sort a hand of playing cards:

·         We start with an empty left hand [sorted list] and the cards face down on the table [unsorted list].

·         Then remove one card [element] at a time from the table [unsorted list], and insert it into the correct position in the left hand [sorted list].

·         To find the correct position for the card, we compare it with each of the cards already in the hand, from right to left.

Note that at all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.
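A sketch of the algorithm in the same style as the bubble sort code above:

def insertion_sort(aList):
    # Elements before 'index' form the sorted part of the list
    for index in range(1, len(aList)):
        current = aList[index]           # next "card" taken from the unsorted part
        position = index
        # Shift larger sorted items right until the correct slot is found
        while position > 0 and aList[position - 1] > current:
            aList[position] = aList[position - 1]
            position = position - 1
        aList[position] = current        # insert into its correct position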
 
Quicksort 
 
Summary
 A quicksort is a recursive sorting algorithm. The idea behind quicksort is to divide and conquer. First split the list into two parts, one part containing values less than a pivot value, the other containing values greater than this pivot value. In the algorithm below the first item in the unordered list is used as the pivot value. The same process is now applied to each part list, repeatedly, until the parts contain just one value. Joining the part lists back together produces the ordered list. This is a divide and conquer algorithm!

It works by repeatedly partitioning the list: the list is chopped up around a pivot (an element / point in the list).
The best pivot would be the median value, but finding it takes time, so in practice the pivot is usually just picked arbitrarily (for example the first or a random element).
Once the list is partitioned around the pivot, all the elements on the left are less than the pivot and all the elements on the right are greater than (or equal to) the pivot.
Work in from each end of the list to put the numbers on the correct side of the pivot.

Advantages – can be much faster than bubble sort and insertion sort.
Can work well on sorting through large lists.

Disadvantage – This process is inefficient in terms of memory for very large lists due to recursion (the stack can grow large with all the return addresses, variables etc. that have to be stored for each recursive call). Sometimes it can grow too large causing a stack overflow (out of stack memory) error.
 
Time complexity of Big O is O(nlogn)
 
Description of how a quicksort works:

·         If the first pointer is less than the last pointer (if there are elements in the list) then:

·         Pick an item within the list to act as a ‘pivot’. The left-most item is a popular choice.

·         Split the list into two parts – one list with items equal to or larger than the pivot item and the other list contains items smaller than the pivot.

·         Repeat this process of splitting each list into two parts until all the parts contain just one value.


This is where recursion is used: each part is quicksorted again until there is only one element left in it.

Generally the recursion stops when a sub-list contains 0 or 1 values.
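A recursive sketch using the left-most item as the pivot, as described above (for simplicity this version builds new lists rather than partitioning in place):

def quicksort(aList):
    # Base case: a list of 0 or 1 values is already sorted
    if len(aList) <= 1:
        return aList
    pivot = aList[0]                     # left-most item as the pivot
    smaller = [x for x in aList[1:] if x < pivot]
    larger = [x for x in aList[1:] if x >= pivot]
    # Recursively sort each part, then join the parts back together
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 1, 4, 2, 8]))        # [1, 2, 4, 5, 8]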

Thursday, 10 March 2016

CPU Architecture

Components of a CPU:
Based upon the von Neumann stored-program approach, where both the data and the programs are stored in main memory.



The Fetch- Decode- Execute cycle:

Fetch - The next instruction is fetched from main memory
Decode- The instruction gets decoded and signals produced to control other components such as the ALU
Execute - The instruction gets executed (carried out)

Registers:
A register is a small block of memory, usually around 8 bytes, which is used as temporary storage for instructions and data as they are being processed; registers run at the same speed as the processor. Machine code instructions can only work if they are loaded into registers.

General purpose registers are used as programs run to enable calculations to be made, or can be used for any purpose the programmer requires. Special purpose registers are crucial to how the processor works. The number of these registers in a CPU varies depending on the architecture.
The most important of these are:
  • Program counter - This holds the address of the next instruction to be fetched, decoded and executed. Its value is automatically incremented by 1 as the current instruction is being decoded. If a branch instruction is executed, the program counter is instead loaded with the address being branched to - this is how loops are created.
  • Memory address register - This holds the address of the instruction that is to be fetched. It points to the relevant location in memory where the required instruction is. This value is simply copied from the program counter.
  • Memory data register - This can hold both instructions and data. Once an instruction has been fetched it is held here, having been copied from the memory location pointed to by the memory address register, e.g. LDA 103.
  • Current instruction register (CIR) -
    This is used to store the current instruction that is to be decoded and executed, and is copied from the MDR. While this instruction is being executed, the next instruction is being fetched into the MDR. As soon as the instruction starts to be decoded, the program counter is incremented. The example instruction LDA 103
    tells the processor to load the value in memory address 103 into the accumulator.
  • Accumulator- is used by instructions that require a calculation and may update or use the value in the accumulator. For example adding two values will make use of the accumulator. Results of calculations in the accumulator may be used as part of the next calculation.
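To tie the registers together, here is a toy fetch-decode-execute loop in Python for a made-up two-instruction machine (LDA loads a value into the accumulator, ADD adds to it, HLT stops); it is only a sketch of the cycle, not a real instruction set.

# Toy machine: memory holds (opcode, operand) pairs for instructions and plain data values
memory = {0: ("LDA", 103), 1: ("ADD", 104), 2: ("HLT", 0),
          103: 40, 104: 2}

pc = 0              # program counter
accumulator = 0

while True:
    mar = pc                    # fetch: address copied from the PC into the MAR
    mdr = memory[mar]           # instruction copied from memory into the MDR
    cir = mdr                   # instruction moved into the CIR
    pc += 1                     # PC incremented while the instruction is decoded

    opcode, address = cir       # decode
    if opcode == "LDA":         # execute
        accumulator = memory[address]
    elif opcode == "ADD":
        accumulator += memory[address]
    elif opcode == "HLT":
        break

print(accumulator)              # 42 for this made-up program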
The Control Unit:
The control unit is in control.
It coordinates all of the fetch decode execute activities. At each clock pulse, it will control the movement of data and instructions between main memory and the CPU etc. Some instructions may take less time than a single clock cycle but the next instruction will only start when the processor executes the next clock cycle.

The Status Register (SR):
Stores a combination of bits to indicate the result of an instruction - for example, a bit may be set to flag an overflow error or a negative result. It also indicates whether an interrupt has been received.

The Arithmetic Logic Unit:
The Arithmetic Logic Unit carries out the comparisons and mathematical operations required by an instruction that is executed. Calculations include floating point multiplication and integer division, while logic operations include comparison tests such as greater than or less than.