CSC303: Computer Organization and Architecture 6 credits (35-10-15)

Objectives

This course introduces the design and operation of digital computers.

Contents

Review of computer structure and organisation. Input/output and communication: interfaces, I/O bus and interface units; serial and parallel communication.

Assembly-Level Machine Organization: Review of the von Neumann machine instruction cycle. Instruction sets and types (data manipulation, control, I/O); RISC vs. CISC, with example instruction sets. Assembly/machine-language programming: instruction formats; addressing modes; subroutine call and return mechanisms; I/O and interrupts; heap vs. static vs. stack vs. code segments. Shared-memory multiprocessor/multicore organization. Introduction to SIMD vs. MIMD and Flynn's taxonomy.

Memory System Organization and Architecture: Storage systems and their technology. Characteristics of memory (e.g. static/dynamic, destructive read, random access, capacity). Memory hierarchy: importance of temporal and spatial locality. Main memory organization and operations. Latency, cycle time, bandwidth, and interleaving. Cache memories (address mapping, block size, replacement and store policy). Multiprocessor cache consistency; using the memory system for inter-core synchronization; atomic memory operations. Virtual memory (page table, TLB). Fault handling and reliability. Error coding, data compression, and data integrity.
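The cache address-mapping topic above can be illustrated with a short sketch. The cache and block sizes below are made-up parameters for a hypothetical direct-mapped cache; the point is how a byte address decomposes into tag, index, and offset fields:

```python
# Illustrative sketch: splitting a byte address into tag, index, and offset
# fields for a direct-mapped cache (hypothetical parameters).
CACHE_SIZE = 4096                        # total cache capacity in bytes
BLOCK_SIZE = 64                          # bytes per cache block
NUM_BLOCKS = CACHE_SIZE // BLOCK_SIZE    # 64 blocks

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # 6 bits select a byte in a block
INDEX_BITS = NUM_BLOCKS.bit_length() - 1    # 6 bits select a cache block

def split_address(addr: int) -> tuple[int, int, int]:
    """Return (tag, index, offset) for a byte address."""
    offset = addr & (BLOCK_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset
```

A cache hit occurs when the block at `index` is valid and its stored tag equals `tag`; set-associative mapping differs only in using the index to select a set of blocks rather than a single one.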

Functional Organization: Implementation of simple datapaths, including instruction pipelining, hazard detection and resolution. Control unit: hardwired vs. microprogrammed realization. Introduction to instruction-level parallelism (ILP).

State, State Transitions, and State Machines: Digital vs. analog and discrete vs. continuous systems. Simple logic gates, logical expressions, and Boolean logic simplification, including the use of Karnaugh maps in building and analysing combinational circuits. Clocks, state, and sequencing. Sequential logic, registers, memories, and counters.
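The Boolean simplification topic can be shown in a few lines. The expression A'B + AB = B below is a hypothetical example of the kind of grouping a Karnaugh map performs, verified here by exhaustive truth-table comparison:

```python
# Sketch: checking a Boolean simplification of the kind a Karnaugh map
# yields, by comparing truth tables over all input combinations.
from itertools import product

def original(a: int, b: int) -> int:
    # Sum of products: A'B + AB (two adjacent minterms)
    return ((1 - a) & b) | (a & b)

def simplified(a: int, b: int) -> int:
    # A K-map groups the two minterms into the single literal B
    return b

# The two circuits agree on every input row of the truth table.
assert all(original(a, b) == simplified(a, b)
           for a, b in product((0, 1), repeat=2))
```

The simplified form needs no gates at all, versus two ANDs, a NOT, and an OR for the original: the same reduction a two-variable Karnaugh map reveals by grouping adjacent 1-cells.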

Circuit technologies: PLA, CMOS, etc. Arithmetic circuits: adders (serial, parallel, etc.).

Evaluation: Technology trends. The CPI equation (execution time = instruction count × cycles per instruction × time per cycle) as a tool for understanding trade-offs in the design of instruction sets, processor pipelines, and memory system organizations. Amdahl's Law: the part of the computation that cannot be sped up limits the effect of the parts that can.

Prerequisite:

CSC205