[[Category:Winter 2020 SPO600]]
 
This is the schedule and main index page for the [[SPO600]] ''Software Portability and Optimization'' course for Winter 2020.
 
<!-- {{Admon/important|It's Alive!|This [[SPO600]] weekly schedule will be updated as the course proceeds - dates and content are subject to change. The cells in the summary table will be linked to relevant resources and labs as the course progresses.}} -->

<!-- {{Admon/important|Content being Updated|This page is in the process of being updated from a previous semester's content. It is not yet updated for the current semester. Do not rely on the accuracy of this information until this warning is removed.}} -->

{{Admon/obsolete|[[Current SPO600 Weekly Schedule]]}}
  
 
== Schedule Summary Table ==

{| class="wikitable"
! Week !! Date !! Class I !! Class II !! Deliverables
|-
|1||Jan 06||[[#Week 1 - Class I|Introduction to the Course / Introduction to the Problem / How is code accepted into an open source project? (Homework: Lab 1)]]||[[#Week 1 - Class II|Computer architecture basics / Binary Representation of Data / Introduction to Assembly Language]]||[[#Week 1 Deliverables|Set up for the course]]
|-
|2||Jan 13||[[#Week 2 - Class I|6502 Assembly Basics Lab (Lab 2)]]||[[#Week 2 - Class II|Math, Assembly language conventions, and Examples]]||[[#Week 2 Deliverables|Lab 1 and 2]]
|-
|3||Jan 20||[[#Week 3 - Class I|6502 Math Lab (Lab 3)]]||[[#Week 3 - Class II|Addressing Modes]]||[[#Week 3 Deliverables|Lab 3]]
|-
|4||Jan 27||[[#Week 4 - Class I|Continue with Lab 3]]||[[#Week 4 - Class II|System routines / Building Code]]||[[#Week 4 Deliverables|Lab 3]]
|-
|5||Feb 03||[[#Week 5 - Class I|6502 String Lab (Lab 4)]]||[[#Week 5 - Class II|Introduction to x86_64 and AArch64 architectures]]||[[#Week 5 Deliverables|Lab 4]]
|-
|6||Feb 10||[[#Week 6 - Class I|6502 String Lab (Lab 4) Continued]]||[[#Week 6 - Class II|x86_64 and AArch64 Assembly]]||[[#Week 6 Deliverables|Lab 4]]
|-
|7||Feb 17||style="background:#f0f0ff"|Family Day Holiday||[[#Week 7 - Class II|64-bit Assembly Language Lab (Lab 5)]]||[[#Week 7 Deliverables|Lab 5]]
|-
|Reading||Feb 24||style="background: #f0f0ff" colspan="3" align="center"|Reading Week
|-
|8||Mar 02||[[#Week 8 - Class I|Lab 5 Continued]]||[[#Week 8 - Class II|Projects / Changing an Algorithm]]||[[#Week 8 Deliverables|Lab 5, Project Blogs]]
|-
|9||Mar 09||[[#Week 9 - Class I|Algorithm Selection Lab (Lab 6)]]||[[#Week 9 - Class II|Compiler Optimizations / SIMD and Vectorization]]||[[#Week 9 Deliverables|Lab 6]]
|-
|Switchover||Mar 16||style="background: #f0f0ff" colspan="3" align="center"|Online Switchover Week
|-
|10||Mar 23||[[#Week 10 - Class I|Online Startup / Project Stage 1]]||[[#Week 10 - Class II|Review for Stage 1]]||[[#Week 10 Deliverables|Project Blogging]]
|-
|11||Mar 30||[[#Week 11 - Class I|<span style="background: #ffff00;">Quiz</span> / Profiling]]||[[#Week 11 - Class II|SIMD Part 1 - Autovectorization]]||[[#Week 11 Deliverables|Project Stage 1 due April 1, 11:59 pm / Blog about your project as you start Stage 2]]
|-
|12||Apr 06||[[#Week 12 - Class I|SIMD Part 2 - Intrinsics and Inline Assembler]]||style="background:#f0f0ff"|Good Friday Holiday||[[#Week 12 Deliverables|Project Stage 2 due]]
|-
|13||Apr 13||[[#Week 13 - Class I|<span style="background: #ffff00;">Quiz</span> / Project Discussion]]||[[#Week 13 - Class II|Wrap-up Discussion]]||[[#Week 13 Deliverables|Project Stage 3 due Monday, April 20, 11:59 pm (Firm!)]]
|-
|}
==== Course and Setup: Accounts, agreements, servers, and more ====

* [[SPO600 Communication Tools]]
* [[Winter 2020 SPO600 Participants]] page
* [[SPO600_Servers#Preparatory_Steps|Key generation]] for [[SSH]] to the [[SPO600 Servers]].
  
=== Week 1 - Class II ===

==== Binary Representation of Data ====
* Integers
** Integers are the basic building block of binary numbers (see the short C sketch after this list).
** In an unsigned integer, the bits are numbered from right to left starting at 0, and the value of each bit is <code>2<sup>bit</sup></code>. The value represented is the sum of each bit multiplied by its corresponding bit value. The range of an unsigned integer is <code>0:2<sup>bits</sup>-1</code>, where ''bits'' is the number of bits in the unsigned integer.
** Signed integers are generally stored in twos-complement format, where the highest bit is used as a sign bit. If that bit is set, the value represented is <code>-(!value)-1</code>, where ! is the NOT operation (each bit gets flipped from 0&rarr;1 and 1&rarr;0).
* Fixed-point
** A fixed-point value is encoded the same as an integer, except that some of the bits are fractional -- they're considered to be to the right of the "binary point" (the binary version of the "decimal point", or more generically the ''radix point''). For example, binary 000001.00 is decimal 1.0, and 000001.11 is decimal 1.75.
** An alternative to fixed-point values is integer values in a smaller unit of measurement. For example, some accounting software may use integer values representing cents; for input and display purposes, dollar-and-cent amounts are converted to/from those internal cent values.
* Floating-point
** Floating point numbers have three parts: a ''sign bit'' (0 for positive, 1 for negative), a ''mantissa'' or ''significand'', and an ''exponent''. The value is interpreted as <code>''sign'' mantissa * 2<sup>exponent</sup></code>.
** The most commonly-used floating point formats are defined in the [[IEEE 754]] standard.
* Sound
** Sound waves are air pressure vibrations.
** Digital sound is most often represented in raw form as a series of time-based measurements of air pressure, called Pulse Coded Modulation (PCM).
** PCM takes a lot of storage, so sound is often compressed in either a lossless (perfectly recoverable) format or a lossy format (higher compression, but the decompressed data doesn't perfectly match the original data). To permit high compression ratios with minimal impact on quality, psychoacoustic compression is used - sound variations that most people can't perceive are removed.
* Graphics
** The human eye perceives luminance (brightness) as well as hue (colour). Our hue receptors are generally sensitive to three wavelengths: red, green, and blue (RGB). We can stimulate the eye to perceive most colours by presenting a combination of light at these three wavelengths.
** Digital displays emit RGB colours, which are mixed together and perceived by the viewer. For printing, cyan, magenta, and yellow inks are used, plus black to reduce the amount of colour ink required to represent dark tones; this is known as CMYK colour.
** Images are broken into picture elements (''pixels''), and each pixel is usually represented by a group of values for RGB or CMYK channels, where each channel is represented by an integer or floating-point value. For example, using an 8-bit-per-channel integer scheme (also known as 24-bit colour), the brightest blue could be represented as R=0,G=0,B=255; the brightest yellow would be R=255,G=255,B=0; black would be R=0,G=0,B=0; and white would be R=255,G=255,B=255. With this scheme, the number of unique colours available is 256^3 ~= 16 million.
** As with sound, the raw storage of sampled data requires a lot of space, so various lossy and lossless compression schemes are used. The highest compression is achieved with psychovisual compression (e.g., JPEG).
** Moving pictures (video, animations) are stored as sequential images, often compressed by encoding only the differences between frames to save storage space.
* Compression techniques
** Huffman encoding / Adaptive arithmetic encoding
*** Instead of fixed-length numbers, variable-length numbers are used, with the most common values encoded in the smallest number of bits. This is an effective strategy if the distribution of values in the data set is uneven.
** Repeated sequence encoding (1D, 2D, 3D)
*** Run length encoding is an encoding scheme that records the number of repeated values. For example, fax messages are encoded as a series of numbers representing the number of white pixels, then the number of black pixels, then white pixels, then black pixels, alternating to the end of each line. These numbers are then represented with adaptive arithmetic encoding.
*** Text data can be compressed by building a dictionary of common sequences, which may represent words or complete phrases, where each entry in the dictionary is numbered. The compressed data contains the dictionary plus a sequence of numbers which represent the occurrence of the sequences in the original text. On standard text, this typically enables about 10:1 compression.
** Decomposition
*** Compound audio waveforms can be decomposed into individual signals, which can then be modelled as repeated sequences. For example, a waveform consisting of two notes being played at different frequencies can be decomposed into those separate notes; since each note consists of a number of repetitions of a particular wave pattern, each can be represented in a more compact format by describing the frequency, waveform shape, and amplitude characteristics.
** Palettization
*** Images often contain repeated colours, and rarely use all of the available colours in the original encoding scheme. For example, a 1920x1080 image contains about 2 million pixels, so even if every pixel were a different colour, there would be a maximum of about 2 million colours. It is likely that many of the pixels in the image are the same colour, so there might only be (perhaps) 4000 distinct colours in the image. If each pixel is encoded as a 24-bit value, there are potentially 16 million colours available, and there is no possibility that they are all used. Instead, a palette can be provided which specifies each of the 4000 colours used in the picture, and then each pixel can be encoded as a 12-bit number which selects one of the colours from the palette. The total storage requirement for the original 24-bit scheme is 1920*1080*3 bytes per pixel = 5.9 MB. Using a 12-bit palette, the storage requirement is 3 * 4096 bytes for the palette plus 1920*1080*1.5 bytes for the image, for a total of about 3 MB -- a reduction of almost 50%.
** Psychoacoustic and psychovisual compression
*** Much of the data in sound and images cannot be perceived by humans. Psychoacoustic and psychovisual compression remove artifacts which are least likely to be perceived. As a simple example, if two pixels on opposite sides of a large image are almost but not exactly the same colour, most people won't be able to tell the difference, so these can be encoded as the same colour if that saves space (for example, by reducing the size of the colour palette).
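The integer and fixed-point encodings above are easy to poke at from C. The following is a minimal sketch (not part of any lab) which assumes an ordinary two's-complement machine - which covers all of the architectures used in this course - and prints the bit pattern and interpreted value of one example byte:

<pre>
#include <stdio.h>
#include <stdint.h>

/* Print the bits of an 8-bit value, most significant bit first. */
static void print_bits(uint8_t v) {
    for (int bit = 7; bit >= 0; bit--)
        putchar((v >> bit) & 1 ? '1' : '0');
}

int main(void) {
    uint8_t u = 0xC1;          /* unsigned: 128 + 64 + 1 = 193                 */
    int8_t  s = (int8_t)0xC1;  /* two's complement: -(~0xC1 & 0xFF) - 1 = -63  */
    uint8_t f = 0x07;          /* fixed-point 000001.11 with 2 fraction bits   */

    print_bits(u);          printf("  unsigned      = %u\n", u);
    print_bits((uint8_t)s); printf("  two's compl.  = %d\n", s);
    print_bits(f);          printf("  fixed-point   = %g  (value / 2^2)\n", f / 4.0);
    return 0;
}
</pre>

The same byte is read three different ways purely by convention: 0xC1 is 193 as an unsigned integer, -63 as a two's-complement signed integer, and 0x07 with two fraction bits is the 1.75 used as the fixed-point example above.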

==== Computer Architecture Overview ====
* [[Computer Architecture]]

==== Introduction to Assembly Language on the 6502 Processor ====

To understand basic assembly/machine language concepts, we're going to start with a very simple processor: the [[6502]].

* Resources
** [[6502|6502]] Basics
** [https://skilldrick.github.io/easy6502/ Easy 6502]
** [http://www.6502.org/tutorials/6502opcodes.html 6502 Opcodes with Register Definitions]
** [https://www.masswerk.at/6502/6502_instruction_set.html 6502 Opcodes with Detailed Operation Information]

* [[6502 Emulator]] for this course
  
 
=== Week 1 Deliverables ===
# Course setup:
## Set up your [[SPO600 Communication Tools]] - in particular, set up a blog.
## Add yourself to the [[Current SPO600 Participants]] page (leave the projects columns blank).
## Generate a [[SPO600_Servers#Preparatory_Steps|pair of keys]] for [[SSH]] and email the public key to your professor, so that he can set up your access to the [[SPO600 Servers|class servers]].
## Optional (strongly recommended): [[SPO600 Host Setup|Set up a personal Linux system]].
## Optional: Purchase an AArch64 development board (such as a [http://96boards.org 96Boards] HiKey or Raspberry Pi 3 or 4). If you use a Pi, install a 64-bit Linux operating system on it, not a 32-bit version.
# Start work on [[SPO600 Code Review Lab|Lab 1]].
 
 
  
 
== Week 2 ==

=== Week 2 - Class I ===

* [[6502 Assembly Language Lab]] (Lab 2)
 
  
 
=== Week 2 - Class II ===

* 6502 Assembly Language Continued
** [[6502 Math]]
** Assembly conventions and examples
*** Directives
**** define
**** DCB

=== Week 2 Deliverables ===

* Blog your results to [[SPO600 Code Review Lab|Lab 1]] and [[6502 Assembly Language Lab|Lab 2]].
 
  
  
== Week 3 ==

=== Week 3 - Class I ===

* Finish [[6502 Assembly Language Lab|Lab 2]]
* [[6502 Assembly Language Math Lab]] (Lab 3)

=== Week 3 - Class II ===

* [[6502 Addressing Modes]]
* Project Selection - what to look for

=== Week 3 Deliverables ===

* Blog your Lab 3 results (or interim results).
  
 
== Week 4 ==

=== Week 4 - Class I ===

* Complete [[6502 Assembly Language Math Lab|Lab 3]]
=== Week 4 - Class II ===

==== Strings and System Routines ====

* The [[6502 Emulator|6502 emulator]] has an 80x25 character display mapped starting at location '''$f000'''. Writing a byte to screen memory will cause that character to be displayed at the corresponding location on the screen, if the character is printable. If the high bit is set, the character will be displayed in <span style="background:black;color:white;">&nbsp;reverse video </span>. For example, storing the ASCII code for "A" (which is 65 or $41) into memory location $f000 will display the letter "A" as the first character on the screen; ORing the value with 128 ($80) yields a value of 193 or $c1, and storing that value into $f000 will display <span style="background:black;color:white;">A</span> as the first character on the screen.
* A "ROM chip" with screen control routines is mapped into the emulator at the end of the memory space (at the time of writing, the current version of the ROM exists in pages $fe and $ff). Details of the available ROM routines can be viewed using the "Notes" button in the emulator or on the [[6502_Emulator#ROM_Routines|emulator page]] on this wiki.
* Strings in assembler are stored as sequences of bytes. As is usually the case in assembler, memory management is left to the programmer. You can terminate strings with null bytes (C-style), which are easy to detect on some CPUs (e.g., <code>lda</code> followed by <code>bne / beq</code> on a 6502), or you can use character counts to track string lengths (see the C sketch after this list for a comparison of the two layouts).
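The same trade-off shows up in higher-level languages. A minimal C sketch contrasting the two layouts described above (the array contents are purely illustrative and not part of any lab):

<pre>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Null-terminated ("C-style"): the end is marked by a 0 byte,
       so finding the length means scanning for that terminator.     */
    char cstyle[] = { 'H', 'i', '!', 0 };

    /* Counted ("length-prefixed"): the first byte holds the length,
       so no scan is needed, at the cost of storing the count.        */
    unsigned char counted[] = { 3, 'H', 'i', '!' };

    printf("C-style length: %zu\n", strlen(cstyle));  /* scans until 0  */
    printf("Counted length: %d\n",  counted[0]);      /* reads one byte */
    return 0;
}
</pre>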
 
 
 
 
==== Building Code ====
* C code is built with the C compiler, typically called <code>cc</code> (which is usually an alias for a specific C compiler, such as <code>gcc</code>, <code>clang</code>, or <code>bcc</code>).
* The C compiler runs through five steps, often by calling separate executables (see the sketch after this list):
*# Preprocessing - performed by the C Preprocessor (<code>cpp</code>), this step handles directives such as <code>#include</code>, <code>#define</code>, and <code>#ifdef</code> to produce a single source code text file, with cross-references to the original input files so that error messages can be displayed correctly (e.g., an error in an included file can be correctly reported by filename and line number).
*# Compilation - the C source code is converted to assembler, going through one or more intermediate representations (IR) such as [https://gcc.gnu.org/onlinedocs/gccint/GENERIC.html GENERIC], [https://gcc.gnu.org/onlinedocs/gccint/GIMPLE.html GIMPLE], or [https://llvm.org/docs/LangRef.html LLVM IR]. The program used for this step is often called <code>cc1</code>.
*# Optimization - various optimization passes are performed at different stages of processing, centered on the IR produced during compilation. Sometimes, the work of a previous pass is undone by a later pass: for example, a complex loop may be converted into a series of simpler loops by an early pass, in the hope that optimizations can be applied to one or more of the simpler loops; the loops may later be recombined into a single loop if no optimizations are found that are applicable to the simplified loops.
*# Assembly - converts the assembly language code emitted by the compilation stage into binary object code.
*# Linking - connects code to functions (aka methods or procedures) which were compiled in other ''compilation units'' (they may be pre-compiled libraries available on the system, or they may be other pieces of the same code base which are compiled in separate steps). Linking may be static, where libraries are imported into the binary executable file of the output program, or dynamic, where additional information is added to the binary executable file so that a run-time linker can load and connect libraries at runtime.
* Other languages which are compiled to binary form, such as C++, OCaml, Haskell, Fortran, and COBOL, go through similar processing. Languages which do not compile to binary form are either compiled to a ''bytecode'' format (binary code that doesn't correspond to actual hardware) or left in original source format, and an interpreter reads and executes the bytecode or source code at runtime. Java and Python use bytecode; Bash and JavaScript interpret source code. Some interpreters build and cache blocks of machine code on-the-fly; this is called Just-in-Time (JIT) compilation.
* Compiler feature flags control the operation of the compiler on the source code, including optimization passes. When using gcc, these "feature flags" take the form <code>-f[no-]'''featureName'''</code> -- for example:
** <code>-fbuiltin</code> -- enables the "builtin" feature
** <code>-fno-builtin</code> -- disables the "builtin" feature
* Feature flags can be selected in groups using the optimization (<code>-O</code>) level:
** <code>-O0</code> -- disables most (but not all) optimizations
** <code>-O1</code> -- enables basic optimizations that can be performed quickly
** <code>-O2</code> -- enables almost all safe optimizations
** <code>-O3</code> -- enables aggressive optimization, including optimizations which may not always be safe for all code (e.g., assuming +0 == -0)
** <code>-Os</code> -- optimizes for small binary and runtime size, possibly at the expense of speed
** <code>-Ofast</code> -- optimizes for fast execution, possibly at the expense of size
** <code>-Og</code> -- optimizes for debugging: applies optimizations that can be performed quickly, and avoids optimizations which convolute the code
* To see the optimizations which are applied at each level in gcc, use <code>gcc -Q --help=optimizers -O'''level'''</code> -- it's interesting to compare the output of different levels, such as <code>-O0</code> and <code>-O3</code>.
* Different CPUs in one family can have different capabilities and performance characteristics. The compiler option <code>-march</code> sets the overall architecture family and CPU features to be targeted, and the <code>-mtune</code> option sets the specific target for tuning. Thus, you can produce an executable that will work across a range of CPUs but is specifically tuned to perform best on a certain model. For example, <code>-march=ivybridge -mtune=knl</code> will cause the compiler to emit code that uses features present on all Intel Ivy Bridge (and later) processors, but is tuned for optimal performance on Knight's Landing processors. Similarly, <code>-march=armv8-a -mtune=cortex-a72</code> will cause the compiler to emit code which will safely run on any ARMv8-A processor, but be tuned specifically for the Cortex-A72 core.
* When building code on different platforms, there are a lot of variables which may need to be fed into the preprocessor, compiler, and linker. These can be manually specified, or they can be automatically determined by a tool such as GNU Autotools (typically visible as the <code>configure</code> script in the source code archive).
* The source code for large projects is divided into many source files for manageability. The dependencies between these files can become complex. When developing or debugging the software, it is often necessary to make changes in one or a small number of files, and it may not be necessary to rebuild the entire project from scratch. The [[Make_and_Makefiles|<code>make</code>]] utility is used to script a build and to enable rapid partial rebuilds after a change to source code files (see [[Make and Makefiles]]).
* Many open source projects distribute code as a source archive ("tarball") which usually decompresses to a subdirectory '''packageName-version''' (e.g. foolib-1.5.29). This will typically contain a script which configures the Makefile (<code>configure</code> if using GNU Autotools). After running this script, a Makefile will be available, which can be used to build the software. However, some projects use an alternative configuration tool instead of GNU Autotools, and some may use an alternate build system instead of make.
* To eliminate this variation, most Linux distributions use a '''package''' system, which standardizes the build process and produces installable package files that can be used to reliably install software into standard locations with automatic dependency resolution, package tracking via a database, and simple updating capability. For example, Fedora, Red Hat Enterprise Linux, CentOS, SuSE, and OpenSuSE all use the RPM package system, in which source code is bundled with a build recipe in a "Source RPM" (SRPM), which can be built with a single command into a binary package (RPM). The RPMs can then be downloaded, have dependencies and conflicts resolved, and installed with a single command such as <code>dnf</code>. The fact that the SRPM can be built into an installable RPM through an automated process enables and simplifies automated build systems, mass rebuilds, and architecture-specific builds.
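One way to see the compilation stages and optimization flags in action is to run the compiler by hand on a trivial program. This is only a sketch - the file name <code>hello.c</code> and the output names are examples, and the exact output depends on your compiler version and architecture:

<pre>
/* hello.c - a minimal program for poking at the compiler stages and
 * optimization levels described above. Suggested commands (gcc assumed):
 *
 *   gcc -E hello.c -o hello.i      # stop after preprocessing
 *   gcc -S hello.c -o hello.s      # stop after compilation (assembly output)
 *   gcc -c hello.c -o hello.o      # stop after assembling (object file)
 *   gcc hello.c -o hello           # full build, including linking
 *
 *   gcc -O0 -S hello.c -o hello-O0.s
 *   gcc -O3 -S hello.c -o hello-O3.s
 *   diff hello-O0.s hello-O3.s     # compare the generated code
 */
#include <stdio.h>

int main(void) {
    printf("Hello, SPO600!\n");
    return 0;
}
</pre>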
 
=== Week 4 Deliverables ===

* Blog your [[6502 Assembly Language Math Lab|Lab 3]] results.
* Blogs are due at the end of the month (Feb 2 - 11:59 pm), so proofread your posts, ensure that you have at least 1-2 per week, and make sure the link from the [[Winter 2020 SPO600 Participants|Participant's Page]] is accurate. Feel free to write multiple posts about one topic or lab, if appropriate.
 
 
  
 
== Week 5 ==

=== Week 5 - Class I ===

* [[6502 Assembly Language String Lab]] (Lab 4)
 
  
 
=== Week 5 - Class II ===

* Introduction to the x86_64 and ARMv8-A/AArch64 [[Computer Architecture|Architectures]]
** [[ARMv8]] Architecture
*** [[AArch64 Register and Instruction Quick Start]]
** x86_64 Architecture
*** [[x86_64 Register and Instruction Quick Start]]
* Working with [[ELF|ELF Files]]
* Compiler Options: a [[SPO600 Compiled C Lab|demo]] (see the sketch after this list)
** -static
** -g
** -fno-builtin
** -O0 vs -O3
* Building an Open Source Project's Code
** [[Make and Makefiles]] revisited
** A brief introduction to GNU Autotools
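A small test file makes the compiler-options demo easy to reproduce on one of the class servers. This is only a sketch under stated assumptions - the file and output names are examples, gcc is assumed, and the exact assembly you see will vary by compiler version and architecture:

<pre>
/* opts.c - a tiny file for comparing the effect of the compiler options
 * listed above. Suggested experiment:
 *
 *   gcc -g -O0 -fno-builtin -o opts-O0 opts.c
 *   gcc -g -O3              -o opts-O3 opts.c
 *   gcc -static             -o opts-static opts.c
 *   objdump -d opts-O0 | less      # find main and compare the two builds
 *   objdump -d opts-O3 | less
 *   ls -l opts-O0 opts-static      # compare binary sizes
 *
 * With -fno-builtin, the strlen() call below remains a real function call;
 * with builtins enabled and optimization on, the compiler can fold it to
 * the constant 13 at compile time. -static and -g mainly change the size
 * and debug content of the binary rather than the code for main().
 */
#include <stdio.h>
#include <string.h>

int main(void) {
    printf("length: %zu\n", strlen("Hello, world!"));
    return 0;
}
</pre>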
  
 
=== Week 5 Deliverables ===

* [[6502 Assembly Language String Lab|Lab 4 Results]]
 
 
 
  
 
== Week 6 ==

=== Week 6 - Class I ===

* [[6502 Assembly Language String Lab]] (Lab 4) - Continued
 
  
 
=== Week 6 - Class II ===

* x86_64 and AArch64 Assembler
** See [[Assembler Basics]]

=== Week 6 Deliverables ===

* Blog your [[6502 Assembly Language String Lab|Lab 4]] results.
 
  
 
== Week 7 ==

=== Week 7 - Class I ===

* No class - Family Day Holiday

=== Week 7 - Class II ===

* [[SPO600 64-bit Assembler Lab]] (Lab 5)

=== Week 7 Deliverables ===

* Blog your [[SPO600 64-bit Assembler Lab|Lab 5]] results.
  
 
== Week 8 ==

=== Week 8 - Class I ===

* [[SPO600 64-bit Assembler Lab|Lab 5]] Continued

=== Week 8 - Class II ===

* [[Winter 2020 SPO600 Project|Course Projects]]
** Building software
** Benchmarking
* Changing an Algorithm to Improve Performance
** Audio volume scaling problem (a C sketch of approaches 1 and 3 appears after this list)
*** PCM audio is represented as 16-bit signed integer samples.
*** To reduce the volume of the audio, it can be scaled by a factor from 0.000 (silence) to 1.000 (original volume).
*** This is a common operation on mobile and multimedia devices.
*** What is the best way to do this?
** Approach 1: Naive Implementation - Multiply each sample by the scaling factor (this involves multiplying each integer sample by a floating-point number, then converting the result back to an integer)
** Approach 2: Lookup Table - Pre-calculate all possible values multiplied by the scaling factor, then look up the new value for each original sample value
** Approach 3: Fixed-point math - Use fixed-point math rather than floating-point math
** Approach 4: Vector fixed-point math - Use SIMD instructions to do multiple fixed-point operations in parallel
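A rough C sketch of approaches 1 and 3, to make the comparison concrete. The function names and the Q1.15-style scaling factor are illustrative only - the course project and labs define their own harness and requirements:

<pre>
#include <stdint.h>
#include <stdlib.h>

/* Approach 1: naive - multiply each sample by a float scaling factor. */
void scale_naive(int16_t *out, const int16_t *in, size_t n, float factor) {
    for (size_t i = 0; i < n; i++)
        out[i] = (int16_t)(in[i] * factor);
}

/* Approach 3: fixed-point - convert the factor to an integer once,
 * then use an integer multiply and a shift for every sample.        */
void scale_fixed(int16_t *out, const int16_t *in, size_t n, float factor) {
    int32_t f = (int32_t)(factor * 32767.0f);   /* 0.000..1.000 -> 0..32767   */
    for (size_t i = 0; i < n; i++)
        out[i] = (int16_t)((in[i] * f) >> 15);  /* 32-bit product, then shift */
}
</pre>

Approach 2 would replace the per-sample multiply with an indexed read from a precomputed table of all 65,536 possible sample values, and approach 4 applies the same fixed-point multiply to several samples at once using SIMD instructions.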
  
=== Week 8 Deliverables ===

* Blog your [[SPO600 64-bit Assembler Lab|Lab 5]] results.
* '''Reminder:''' Blogs are due for February this Sunday (March 8, 11:59 pm).
 
  
 
== Week 9 ==

=== Week 9 - Class I ===

* [[SPO600 Algorithm Selection Lab]] (Lab 6)

=== Week 9 - Class II ===

* [[Winter 2020 SPO600 Project|Project]]
* [[Compiler Optimizations]]
* SIMD and Vectorization (see the sketch after this list)
** [[SPO600 Vectorization Lab|Optional vectorization lab]]
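A minimal sketch of the kind of loop GCC's auto-vectorizer handles well, useful as a warm-up for the optional lab. The file name and commands are suggestions; <code>-fopt-info-vec-missed</code> reports the loops that were ''not'' vectorized and why:

<pre>
/* vadd.c - a simple loop that the auto-vectorizer can turn into SIMD code.
 *
 *   gcc -O3 -fopt-info-vec-missed -c vadd.c   # report loops NOT vectorized
 *   gcc -O3 -S vadd.c && less vadd.s          # look for vector instructions
 */
#include <stddef.h>

void vadd(int *restrict c, const int *restrict a,
          const int *restrict b, size_t n) {
    /* Independent iterations with a known trip count and no aliasing
       (thanks to restrict) are the easiest case for the vectorizer.  */
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
</pre>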
  
==== Memory Barriers ====
+
=== Week 9 - Deliverables ===
'''Memory Barriers''' ensure that memory accesses are sequenced so that multiple threads, processes, cores, or IO devices see a predictable view of memory.
+
* Blog about [[SPO600 Algorithm Selection Lab|Lab 6]] and your Project
* Leif Lindholm provides an excellent explanation of memory barriers.
 
** Blog series - I recommend this series, especially the introduction, as a very clear explanation of memory barrier issues.
 
*** Part 1 - [http://community.arm.com/groups/processors/blog/2011/03/22/memory-access-ordering--an-introduction Memory Access Ordering - An Introduction]
 
*** Part 2 - [http://community.arm.com/groups/processors/blog/2011/04/11/memory-access-ordering-part-2--barriers-and-the-linux-kernel Memory Access Ordering Part 2 - Barriers and the Linux Kernel]
 
*** Part 3 - [http://community.arm.com/groups/processors/blog/2011/10/19/memory-access-ordering-part-3--memory-access-ordering-in-the-arm-architecture Memory Access Ordering Part 3 - Memory Access Ordering in the ARM Architecture]
 
** Presentation at Embedded Linux Conference 2010 (Note: Acquire/Release in C++11 and ARMv8 aarch64 appeared after this presentation):
 
*** [http://elinux.org/images/f/fa/Software_implications_memory_systems.pdf Slides]
 
*** [http://free-electrons.com/pub/video/2010/elce/elce2010-lindholm-memory-450p.webm Video]
 
* [http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.07.23a.pdf Memory Barriers - A Hardware View for Software Hackers] - This is a highly-rated paper that explains memory barrier issues - as the title suggests, it is designed to describe the hardware origin of the problem to software developers. Despite the fact that it is an introduction to the topic, it is still very technical.
 
* [http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka14041.html ARM Technical Support Knowledge Article - In what situations might I need to insert memory barrier instructions?] - Note that there are some additional mechanisms present in ARMv8 aarch64, including Acquire/Release.
 
* [https://www.kernel.org/doc/Documentation/memory-barriers.txt Kernel Documentation on Memory Barriers] - discusses the memory barrier issue generally, and the solutions used within the Linux kernel. This is part of the kernel documentation.
 
* Acquire-Release mechanisms
 
** [http://blogs.msdn.com/b/oldnewthing/archive/2008/10/03/8969397.aspx MSDN Blog Post] with a very clear explanation of Acquire-Release.
 
** [http://preshing.com/20130922/acquire-and-release-fences/ Preshing on Programming post] with a good explanation.
 
** [http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.genc010197a/index.html ARMv8 Instruction Set Architecture Manual] (ARM InfoCentre registration required) - See the section on Acquire/Release and Load/Store, especially Load/Store Exclusive (e.g., LDREX)
 
  
==== The Future of Memory ====
+
== Week 10 ==
* NUMA (on steroids!)
 
* Non-volatile, byte-addressed main memory
 
* Non-local memory / Memory-area networks
 
* Memory encryption
 
 
 
==== Building Software ====
 
* Configuration Systems
 
** make-based systems
 
*** [https://www.gnu.org/software/automake/manual/html_node/index.html#Top The GNU Build System: autotools, autoconf, automake]
 
*** Configuration name ("triplet") -- ''cpu-manufacturer-operatingSystem'' or ''cpu-manufacturer-kernel-operatingSystem''
 
**** config.guess and config.sub
 
*** CMake
 
*** qmake
 
*** Meson
 
*** iMake and Others
 
** Non-make-based systems
 
*** Apache Ant
 
*** Apache Maven
 
*** Qt Build System
 
* Building in the Source Tree vs. Building in a Parallel Tree
 
** Pros and Cons
 
** [https://www.gnu.org/software/automake/manual/html_node/VPATH-Builds.html#VPATH-Builds GNU automake ''vpath'' builds]
 
* Installing and Testing in non-system directories
 
** Configuring installation to a non-standard directory
 
*** Running <code>configure</code> with <code>--prefix</code>
 
*** Running <code>make install</code> as a non-root user
 
*** DESTDIR variable for <code>make install</code>
 
** Runtime environment variables:
 
*** PATH
 
*** LD_LIBRARY_PATH and LD_PRELOAD (see the [http://man7.org/linux/man-pages/man8/ld.so.8.html ld.so manpage])
 
** Security when running software
 
*** Device access
 
**** Opening a TCP/IP or UDP/IP port below 1024
 
**** Accessing a <code>/dev</code> device entry
 
***** Root permission
 
***** Group permission
 
*** SELinux Type Enforcement
 
**** Enforcement mode
 
***** View enforcement mode: <code>getenforce</code>
 
***** Set enforcement mode: <code>setenforce</code>
 
**** Changing policy
 
***** [https://fedoraproject.org/wiki/SELinux/audit2why audit2why]
 
***** [https://fedoraproject.org/wiki/SELinux/audit2why audit2allow]
 
 
 
=== Week 9: Class II ===
 
* Portability Issues
 
  
=== Week 9 Deliverables ===
+
=== Week 10 - Class I ===
* Blog about your project
+
* [https://youtu.be/DCp8oghdTfU Video - March 23]
 +
* Focus this week: Complete Stage 1 of your [[Winter 2020 SPO600 Project|Course Projects]]
  
== Week 10 ==
+
=== Drop-in Online Discussion Sessions ===
 +
* Tuesday to Friday (March 24-27) from 9-10 AM
 +
* Online at https://whereby.com/ctyler
 +
** There is a maximum of 12 people in the room at a time. I recommend dropping by once or twice a week with your questions.
 +
** If 9-10 am cannot work for you, email me to discuss this.
  
=== Week 10: Class I ===
+
=== Week 10 - Class II ===
* Project hacking and discussion
+
* [https://youtu.be/M-5IizEwkfY Video - March 27: Review of material for Stage 1]
 +
* Stage 1 due date '''extended''' to Wednesday, April 1, 11:59 pm
  
=== Week 10 Deliverables ===
+
=== Week 10 - Deliverables ===
* Blog about your project.
+
* Blog about your [[Winter 2020 SPO600 Project|project]]. Project Stage 1 is due next Wednesday.
* Note: March blogs are due Monday, April 2. Remember that the target is 1-2 posts/week, which is 4-8 posts/month.
 
  
 
== Week 11 ==
 
== Week 11 ==
  
 
=== Week 11 - Class I ===
 
=== Week 11 - Class I ===
* Project hacking and discussion
+
* Quiz #4 - Online in Blackboard
 +
* '''Optional video:''' [https://youtu.be/CdyERanIxmI Building Software] - This video provides a review of building an open-source software package from either a source archive (zip or tarball) or from a code repository (such as a <code>git</code> repository).
 +
* [https://youtu.be/Hip1KtYZKE0 Video - March 30: Profiling Software]
 +
** Profiling with <code>gprof</code> and <code>perf</code>
  
 
=== Week 11 - Class II ===
 
=== Week 11 - Class II ===
* [[Compiler Intrinsics]]
+
* [https://youtu.be/EIPbufXhiQs Video - April 3: SIMD and Auto-vectorization]
* Project discussion
+
* SIMD-Autovectorization Resources
 +
** [https://gcc.gnu.org/projects/tree-ssa/vectorization.html Auto-Vectorization in GCC] - Main project page for the GCC auto-vectorizer.
 +
** [http://locklessinc.com/articles/vectorize/ Auto-vectorization with gcc 4.7] - An excellent discussion of the capabilities and limitations of the GCC auto-vectorizer, intrinsics for providing hints to GCC, and other code pattern changes that can improve results. Note that there has been some improvement in the auto-vectorizer since this article was written. '''This article is strongly recommended.'''
 +
** [https://software.intel.com/sites/default/files/8c/a9/CompilerAutovectorizationGuide.pdf Intel (Auto)Vectorization Tutorial] - this deals with the Intel compiler (ICC) but the general technical discussion is valid for other compilers such as gcc and llvm
 +
 
 +
=== Week 11 Deliverables ===
 +
* [[Winter 2020 SPO600 Project|Project Stage 1]] due Wednesday, April 1 (yes, really) at 11:59 pm
 +
* Blog about your project as you continue into Stage 2
 +
** March posts are due on Monday, April 6 at 11:59 pm.
  
 
== Week 12 ==
 
== Week 12 ==
  
 
=== Week 12 - Class I ===
 
=== Week 12 - Class I ===
* Class cancelled
+
* [https://youtu.be/76rtxPozPJI Video - April 6: SIMD, Inline Assembler, and Compiler Intrinsics]
 +
** [[Inline Assembly Language]]
 +
** [[Compiler Intrinsics]]
 +
 
 +
* Retired SPO600 Labs - These labs are not being used this semester but may be useful for reference. The software in these labs was used in the video for this week.
 +
** [[SPO600 SIMD Lab]]
 +
** [[SPO600 Inline Assembler Lab]]
  
 
=== Week 12 - Class II ===
 
=== Week 12 - Class II ===
* Project hacking and discussion
+
* No class - Good Friday
 
 
-->
 
<!--
 
###################################################################################
 
###################################################################################
 
###################################################################################
 
###################################################################################
 
###################################################################################
 
 
 
== Week 6 ==
 
 
 
=== Week 6 - Class II ===
 
 
 
* [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab]] (Lab 6)
 
 
 
== Week 7 ==
 
 
 
=== Week 7 - Class I ===
 
 
 
Project discussion
 
 
 
=== Week 7 - Class II ===
 
 
 
Profiling
 
 
 
=== Week 7 Deliverables ===
 
 
 
Blog about your project.
 
 
 
=== Week 6 Deliverables ===
 
 
 
* Blog your results for the [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab]] (Lab 6) -- be sure to include links to your code, detailed results, and your reflection on the lab.
 
 
 
== Week x8 ==
 
 
 
=== Week x8 - Class I ===
 
 
 
* Review
 
* Plans for Remainder of Term
 
 
 
=== Week x8 - Class II ===
 
  
 +
=== Resources ===
 +
==== Auto-vectorization ====
 +
* [https://gcc.gnu.org/projects/tree-ssa/vectorization.html Auto-Vectorization in GCC] - Main project page for the GCC auto-vectorizer.
 +
* [http://locklessinc.com/articles/vectorize/ Auto-vectorization with gcc 4.7] - An excellent discussion of the capabilities and limitations of the GCC auto-vectorizer, intrinsics for providing hints to GCC, and other code pattern changes that can improve results. Note that there has been some improvement in the auto-vectorizer since this article was written. '''This article is strongly recommended.'''
 +
* [https://software.intel.com/sites/default/files/8c/a9/CompilerAutovectorizationGuide.pdf Intel (Auto)Vectorization Tutorial] - this deals with the Intel compiler (ICC) but the general technical discussion is valid for other compilers such as gcc and llvm
 +
==== Inline Assembly Language ====
 
* [[Inline Assembly Language]]
 
* [[Inline Assembly Language]]
* [[SPO600 Inline Assembler Lab|Inline Assembler Lab]] (Lab 7)
+
* [http://developer.arm.com ARM Developer Information Centre]
 
+
** [https://developer.arm.com/products/architecture/a-profile/docs/den0024/a ARM Cortex-A Series Programmer’s Guide for ARMv8-A]
=== Week x8 Deliverables ===
+
* The ''short'' guide to the ARMv8 instruction set: [https://www.element14.com/community/servlet/JiveServlet/previewBody/41836-102-1-229511/ARM.Reference_Manual.pdf ARMv8 Instruction Set Overview] ("ARM ISA Overview")
 
+
* The ''long'' guide to the ARMv8 instruction set: [https://developer.arm.com/docs/ddi0487/latest/arm-architecture-reference-manual-armv8-for-armv8-a-architecture-profile ARM Architecture Reference Manual ARMv8, for ARMv8-A architecture profile] ("ARM ARM")
* Blog about your Lab 7 results
+
==== C Intrinsics - AArch64 SIMD ====
 
+
* [https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics ARM NEON Intrinsics Reference]
 
+
* [https://gcc.gnu.org/onlinedocs/gcc/ARM-C-Language-Extensions-_0028ACLE_0029.html GCC ARM C Language Extensions]
== Week x9 ==
 
 
 
=== Week x9 - Class I ===
 
 
 
* Benchmarking and Profiling
 
** Notes to follow
 
 
 
=== Week x9 - Class II ===
 
 
 
* [[Fall 2017 SPO600 Project]]
 
 
 
=== Week x9 Deliverables ===
 
 
 
* Start blogging about your project!
 
 
 
 
 
 
 
###################################################################################
 
 
 
=== Week 2 - Class II ===
 
 
 
* [[SPO600 Assembler Lab|Assembly language lab]] (lab 3)
 
 
 
=== Week 2 Deliverables ===
 
 
 
* Blog your conclusion to the [[SPO600 Code Review Lab|Code Review Lab]] (Lab 1)
 
 
 
== Week 3 ==
 
 
 
=== Week 3 - Class I ===
 
 
 
* Continue group work on [[SPO600 Assembler Lab|Lab 3]].
 
 
 
=== Week 3 - Class II ===
 
 
 
* [[SPO600 Compiled C Lab]] (Lab 4)
 
 
 
=== Week 3 Deliverables ===
 
 
 
* Blog your conclusion to:
 
** [[SPO600 Assembler Lab|Lab 3]]
 
** [[SPO600 Compiled C Lab|Lab 4]]
 
 
 
== Week 4 ==
 
 
 
=== Week 4 - Class I ===
 
 
 
Software Optimization
 
* [[Compiler Optimizations]]
 
* [[Profile Guided Optimization]]
 
* Algorithm Selection
 
 
 
=== Week 4 - Class II ===
 
 
 
* [[SPO600 Algorithm Selection Lab]] (Lab 5)
 
 
 
=== Week 4 Deliverables ===
 
 
 
* Blog about your Lab 5 results.
 
 
 
== Week 5 ==
 
 
 
=== Week 5 - Class I ===
 
 
 
* Finish the [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab]]
 
 
 
=== Week 5 - Class II ===
 
 
 
* Introduction to Vector Processing/SIMD
 
* [[SPO600 Vectorization Lab|Vectorization Lab]] (Lab 6)
 
 
 
=== Week 5 Deliverables ===
 
 
 
* Blog your results for the [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab]] (Lab 5)
 
* Blog your results for the [[SPO600 Vectorization Lab|Vectorization Lab]] (Lab 6)
 
* For each of the above, be sure to include links to your code, detailed results, and your reflection on the lab.
 
 
 
== Week 6 ==
 
 
 
=== Week 6 - Class I ===
 
* [[Inline Assembly Language]] -- often used for:
 
*# Implementing a memory barrier
 
*# Performing an [[Atomic Operation]]
 
*#* '''Atomics''' are operations which must be completed in a single step (or appear to be completed in a single step) without potential interruption.
 
*#* Wikipedia has a good basic overview of the need for atomicity in the article on [http://en.wikipedia.org/wiki/Linearizability Linearizability]
 
*# Gaining performance (by accessing processor features not exposed by the high-level language being used (C, C++, ...))
 
* [[SPO600 Inline Assembler Lab|Inline Assembler Lab]] (Lab 7)
 
 
 
=== Week 6 - Class II ===
 
* [[SPO600 Inline Assembler Lab|Inline Assembler Lab]] (Lab 7) continued...
 
 
 
=== Week 6 Deliverables ===
 
* Blog your Lab 7 results.
 
 
 
== Week 7 ==
 
 
 
=== Week 7 - Class I ===
 
 
 
==== Overview/Review of Processor Operation ====
 
 
 
* Fetch-decode-dispatch-execute cycle
 
* Pipelining
 
* Branch Prediction
 
* In-order vs. Out-of-order execution
 
** Micro-ops
 
 
 
==== Memory Basics ====
 
 
 
* Organization of Memory
 
** System organization
 
** Process organization
 
*** Text, data
 
*** Stack
 
*** Heap
 
* Memory Speeds
 
* Cache
 
** Cache lookup
 
** Cache synchronization and invalidation
 
** Cache line size
 
* Prefetch
 
** Prefetch hinting
 
 
 
==== Memory Architecture ====
 
 
 
* Virtual Memory and Memory Management Units (MMUs)
 
** General principles of VM and operation of MMUs
 
** Memory protection
 
*** Unmapped Regions
 
*** Write Protection
 
*** Execute Protection
 
*** Privilege Levels
 
** Swapping
 
** Text sharing
 
** Data sharing
 
** Shared memory for Inter-Process Communication
 
** Copy-on-Write (CoW)
 
** Demand Loading
 
** Memory mapped files
 
 
 
==== Memory Barriers ====
 
'''Memory Barriers''' ensure that memory accesses are sequenced so that multiple threads, processes, cores, or IO devices see a predictable view of memory.
 
* Leif Lindholm provides an excellent explanation of memory barriers.
 
** Blog series - I recommend this series, especially the introduction, as a very clear explanation of memory barrier issues.
 
*** Part 1 - [http://community.arm.com/groups/processors/blog/2011/03/22/memory-access-ordering--an-introduction Memory Access Ordering - An Introduction]
 
*** Part 2 - [http://community.arm.com/groups/processors/blog/2011/04/11/memory-access-ordering-part-2--barriers-and-the-linux-kernel Memory Access Ordering Part 2 - Barriers and the Linux Kernel]
 
*** Part 3 - [http://community.arm.com/groups/processors/blog/2011/10/19/memory-access-ordering-part-3--memory-access-ordering-in-the-arm-architecture Memory Access Ordering Part 3 - Memory Access Ordering in the ARM Architecture]
 
** Presentation at Embedded Linux Conference 2010 (Note: Acquire/Release in C++11 and ARMv8 aarch64 appeared after this presentation):
 
*** [http://elinux.org/images/f/fa/Software_implications_memory_systems.pdf Slides]
 
*** [http://free-electrons.com/pub/video/2010/elce/elce2010-lindholm-memory-450p.webm Video]
 
* [http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.07.23a.pdf Memory Barriers - A Hardware View for Software Hackers] - This is a highly-rated paper that explains memory barrier issues - as the title suggests, it is designed to describe the hardware origin of the problem to software developers. Despite the fact that it is an introduction to the topic, it is still very technical.
 
* [http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka14041.html ARM Technical Support Knowledge Article - In what situations might I need to insert memory barrier instructions?] - Note that there are some additional mechanisms present in ARMv8 aarch64, including Acquire/Release.
 
* [https://www.kernel.org/doc/Documentation/memory-barriers.txt Kernel Documentation on Memory Barriers] - discusses the memory barrier issue generally, and the solutions used within the Linux kernel. This is part of the kernel documentation.
 
* Acquire-Release mechanisms
 
** [http://blogs.msdn.com/b/oldnewthing/archive/2008/10/03/8969397.aspx MSDN Blog Post] with a very clear explanation of Acquire-Release.
 
** [http://preshing.com/20130922/acquire-and-release-fences/ Preshing on Programming post] with a good explanation.
 
** [http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.genc010197a/index.html ARMv8 Instruction Set Architecture Manual] (ARM InfoCentre registration required) - See the section on Acquire/Release and Load/Store, especially Load/Store Exclusive (e.g., LDREX)
 
 
 
==== The Future of Memory ====
 
 
 
* NUMA (on steroids!)
 
* Non-volatile, byte-addressed main memory
 
* Non-local memory
 
* Memory encryption
 
 
 
=== Week 7 - Class II ===
 
 
 
* [[Winter 2017 SPO600 Project|Course Project]]
 
 
 
=== Week 7 Deliverables ===
 
 
 
* Blog your Lab 7 results, including the second part
 
* (To be announced: Project Deliverables)
 
 
 
== Week 8 ==
 
 
 
=== Week 8 - Class I ===
 
 
 
* Project Discussions
 
 
 
=== Week 8 - Class II ===
 
 
 
* Project Presentation #0
 
** Selected glibc function(s)
 
** Plan of Action
 
 
 
=== Week 8 Deliverables ===
 
 
 
* Blog about your selected function(s) and project plan
 
** Remember: You should be posting 1-2 times per week
 
 
 
#################################################################################
 
#################################################################################
 
#################################################################################
 
 
 
== Week 3 ==
 
 
 
=== Tuesday (Jan 26) ===
 
 
 
* Continue work on the [[SPO600 Assembler Lab|Assembly language lab]] (lab 3)
 
 
 
=== Friday (Jan 29) ===
 
 
 
* [[SPO600 Compiled C Lab|Compiled C lab]] (lab 4)
 
 
 
=== Week 3 Deliverables ===
 
 
 
* Blog about your [[SPO600 Assembler Lab|Assembly language lab]] (lab 3).
 
* Blog about your [[SPO600 Compiled C Lab|Compiled C lab]] (lab 4) experience and results. Consider the optimizations and transformations that the compiler performed.
 
* Remember that these posts (as all of your blog posts) will be marked both for communication (clarity, quality of writing (including grammar and spelling), formatting, use of links, completeness) and for content (lab completion and results). Your posts should contain both factual results as well as your reflections on the meaning of those results, the experience of performing the lab, and what you have learned.
 
 
 
 
 
'''Reminder:''' Blogs will be marked as they stand at the end of the month (Sunday).
 
 
 
== Week 4 ==
 
 
 
=== Tuesday (Feb 2) ===
 
 
 
Software Optimization
 
* [[Compiler Optimizations]]
 
* Algorithm Selection
 
 
 
=== Friday (Feb 5) ===
 
 
 
* [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab]] (Lab 6)
 
 
 
=== Week 4 Deliverables ===
 
 
 
* Blog about your Lab 5 results.
 
 
 
== Week 5 ==
 
 
 
=== Tuesday (Feb 9) ===
 
 
 
* Finish the [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab]]
 
* Discussion of Benchmarking Challenges
 
* Introduction to Vector Processing/SIMD
 
 
 
=== Friday (Feb 12) ===
 
 
 
* [[SPO600 Vectorization Lab|Vectorization Lab]] (Lab 6)
 
 
 
=== Week 5 Deliverables ===
 
 
 
* Blog your results for the [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab]] (Lab 5)
 
* Blog your results for the [[SPO600 Vectorization Lab|Vectorization Lab]] (Lab 6)
 
* For each of the above, be sure to include links to your code, detailed results, and your reflection on the lab.
 
 
 
== Week 6 ==
 
 
 
=== Tuesday (Feb 16) ===
 
* Discussion of Memory Architecture
 
 
 
=== Friday (Feb 19) ===
 
* [[Inline Assembly Language]] -- often used for:
 
*# Implementing a memory barrier
 
*# Performing an [[Atomic Operation]]
 
*# Gaining performance (by accessing processor features not exposed by the high-level language being used (C, C++, ...))
 
* [[SPO600 Inline Assembler Lab|Inline Assembler Lab]] (Lab 7)
 
 
 
=== Week 6 Deliverables ===
 
* Blog your Lab 7 results.
 
 
 
== Week 7 ==
 
 
 
=== Tuesday (Feb 23) ===
 
* Discussion of [[Winter 2016 SPO600 Compiler Options Presentation|Course Presentation]] assignment
 
 
 
=== Friday (Feb 26) ===
 
* Discussion of the [[Winter 2016 SPO600 Project|Course Project]]
 
 
 
=== Week 7 Deliverables ===
 
* Blog about your selected Presentation and Project topics.
 
 
 
== Week 8 ==
 
 
 
[http://connect.linaro.org/bkk16/ Linaro Connect] - No classes.
 
 
 
=== Week 8 Deliverables ===
 
 
 
* Prepare for your Presentation
 
* Work on your Project
 
* Blog about what you're doing!
 
 
 
== Week 9 ==
 
 
 
=== Tuesday (Mar 14) ===
 
 
 
* [[Winter 2016 SPO600 Compiler Options Presentation|Presentations]]
 
 
 
=== Friday (Mar 18) ===
 
 
 
* [[Winter 2016 SPO600 Compiler Options Presentation|Presentations]]
 
 
 
=== Week 9 Deliverables ===
 
 
 
* Blog about your Presentation, incorporating any discussion or feedback during the presentation.
 
 
 
== Week 10 ==
 
 
 
=== Tuesday (Mar 22) ===
 
 
 
* [[Winter 2016 SPO600 Project|Course Project]] - Stage I Updates
 
 
 
=== Week 10 Deliverables ===
 
 
 
* Blog your Stage I Updates. '''Important!''' - this will be used to assign your Stage I project mark! Include:
 
** Which software package you are working on
 
** Your experience building the software "out of the box" on x86_64 and AArch64 platforms
 
** Baseline results (performance)
 
** Which area of the software you will be working on and which approach you are going to take to optimizing the software...
 
**# Improving the Build Instructions (e.g., compiler options), OR
 
**# Changing the Software (substituting a different algorithm, or refactoring for better compiler optimization e.g., auto-vectorization), OR
 
**# Adding Platform-Specific code for AArch64
 
 
 
== Week 11 ==
 
 
 
=== Tuesday (Mar 29) ===
 
 
 
* Discussion & Hack Session
 
 
 
=== Thursday (Mar 31) ===
 
 
 
Reminder: '''Special Event:''' [https://www.eventbrite.ca/e/leadership-lunch-with-mike-shaver-engineer-director-for-facebook-tickets-23046621064 Leadership Lunch with Mike Shaver]
 
 
 
=== Friday (Apr 1) ===
 
 
 
* Discussion & Hack Session
 
 
 
=== Week 11 Deliverables ===
 
 
 
* Blog about your project work.
 
 
 
 
 
== Week 12 ==
 
 
 
=== Tuesday (Apr 5) ===
 
 
 
* Discussion & Hack Session
 
 
 
=== Friday (Apr 8) ===
 
 
 
* Project Stage II Updates
 
 
 
=== Week 12 Deliverables ===
 
 
 
* Blog your Stage II Project Updates by '''Midnight, Sunday, Apr 10.''' Note that this will be used for your Stage II project mark (20%).
 
  
 
== Week 13 ==
 
== Week 13 ==
  
=== Tuesday (Apr 12) ===
+
=== Week 13 - Class I ===
 
+
* [https://youtu.be/GLzAVWW8dEo Video - April 16: Project Stage 3]
* Wrap-Up Discussion
 
 
 
=== Friday (Apr 15) ===
 
 
 
* Stage III Project Updates
 
 
 
=== Week 13 Deliverables ===
 
 
 
* Blog your Stage III Project Updates by Midnight on Thursday, April 21.
 
 
 
* Complete ALL your blogging for this course by Midnight on Thursday, April 21. Make sure that you have included all of the labs, your presentation, and your project work. Remember that there should be at least 1-2 posts per week. Your blogging from April 1-April 21 will be used for your April communication mark.
 
 
 
== Week 2 ==
 
 
 
=== Tuesday (Sep 15) ===
 
 
 
{{Admon/tip|Bring Your Laptop|Classes are held in a [[Active Learning Classroom]]. If you have a laptop or other device with a VGA or HDMI output (such as a smartphone!) please bring it. You'll need either a local linux environment or an [[SSH]] client -- which is built-in to Linux, Mac, and Chromebook systems, and readily available for Windows, Android, and iOS devices.}}
 
 
 
* [[SPO600 Compiled C Lab|Compiled C Lab (Lab 2)]]
 
* Sheets from Last Week
 
** Open Source Student Agreement
 
 
 
=== Friday (Sep 18) ===
 
 
 
* Introductions
 
* [[Compiler Optimizations]]
 
* Introduction to the [[Fall 2015 SPO600 Compiler Options Presentation|Compiler Options Presentation]]
 
 
 
=== Week 2 Deliverables ===
 
 
 
* Blog about your [[SPO600 Code Review Lab|Code Review Lab (Lab 1)]] and [[SPO600 Compiled C Lab|Lab 2]] experience and results. For lab 2, consider the optimizations and transformations that the compiler performed. Remember that these posts (as all of your blog posts) will be marked both for communication (clarity, quality of writing (including grammar and spelling), formatting, use of links, completeness) and for content (lab completion and results). Your posts should contain both factual results as well as your reflections on the meaning of those results, the experience of performing the lab, and what you have learned.
 
 
 
== Week 3 ==
 
 
 
This week [[User:Chris Tyler|your professor]] is at [http://connect.linaro.org/sfo15/ Linaro Connect], an engineering conference run by [http://www.linaro.org Linaro] - a distributed not-for-profit collaborative technology company focused on Linux on ARM.
 
 
 
* [[Fall 2015 SPO600 Compiler Options Presentation|Select and prepare to teach the class about two compiler options]].
 
 
 
=== Week 3 Deliverables ===
 
* Be prepared to give your [[Fall 2015 SPO600 Compiler Options Presentation|presentation]] on Tuesday of next week (September 29).
 
 
 
== Week 4 ==
 
 
 
=== Tuesday (Sep 29) ===
 
* Presentations
 
 
 
=== Friday (Oct 2) ===
 
* Presentations
 
* Introduction to ARM64 hardware
 
* [[SPO600 Algorithm Selection Lab|Algorithm Selection Lab (Lab 3)]]
 
 
 
=== Week 4 Deliverables ===
 
* Blog your [[Fall 2015 SPO600 Compiler Options Presentation|presentation]], incorporating any feedback and Q&A input that was given during/after the presentation in class.
 
 
 
== Week 5 ==
 
 
 
=== Tuesday (Oct 6) ===
 
* Class discussion/hacking on [[SPO600 Algorithm Selection Lab|Lab 3]].
 
 
 
=== Friday (Oct 9) ===
 
* More on Lab 3
 
* Discussion of Benchmarking
 
 
 
=== Week 5 Deliverables ===
 
* Blog your [[SPO600 Algorithm Selection Lab|Lab 3]] results.
 
 
 
== Week 6 ==
 
 
 
=== Tuesday (Oct 13) ===
 
* Discussion of benchmarking
 
** Control of variables
 
*** Competition for system resources
 
*** Repeatability
 
* Planning for a Compiler Options Test Framework
 
 
 
=== Friday (Oct 16) ===
 
* Compiler Options Framework
 
** Divide up tasks
 
** Start development
 
 
 
=== Week 6 Deliverables ===
 
* Blog your recommendations for the test framework design.
 
 
 
 
 
== Week 7 ==
 
 
 
=== Tuesday (Oct 20) ===
 
* Build the [[SPO600 Framework Project|Compiler Options Test Framework]]
 
 
 
=== Friday (Oct 23) ===
 
* Project selection
 
** Your task over reading week: Become an expert in building your selected software, and then make it work with the [[SPO600 Framework Project|Compiler Options Test Framework]]
 
 
 
=== Week 7 Deliverables ===
 
* Blog about the compiler options framework, and your work on that project.
 
* Blog about your selected project.
 
 
 
== Week 8 ==
 
 
 
=== Tuesday (Nov 3) ===
 
* No class scheduled - your [[User:Chris Tyler|prof]] is in Whitehorse, YT at an NSERC workshop.
 
* Please work on your [[Fall 2015 SPO600 Course Project|project]], and be ready to present on Friday.
 
 
 
=== Friday (Nov 6) ===
 
* Present your Stage I results for your [[Fall 2015 SPO600 Course Project|project]].
 
 
 
=== Week 8 Deliverables ===
 
 
 
* Blog about your [[Fall 2015 SPO600 Course Project|stage I project results]]. This will be used to assign the first marks for your project.
 
 
 
== Week 9 ==
 
 
 
=== Tuesday (Nov 10) ===
 
 
 
* [[Computer Architecture]] overview (see also the [[:Category:Computer Architecture|Computer Architecture Category]])
 
 
 
=== Friday (Nov 13) ===
 
 
 
* [[SPO600 Assembler Lab|Assembly language lab]] (lab 4)
 
 
 
=== Week 9 Deliverables ===
 
 
 
* Blog about your project progress (2+ posts per week).
 
* Blog the [[SPO600 Assembler Lab|Assembly language lab]] -- include your results, a link to your source code, and your reflections on the experience.
 
 
 
 
 
== Week 10 ==
 
 
 
=== Tuesday (Nov 17) ===
 
* Discussion & Hack Session
 
** [[SPO600 Assembler Lab|Assembly language lab (Lab 4) results]]
 
** Testing Framework
 
 
 
=== Friday (Nov 20) ===
 
* Hack session on the Testing Framework
 
 
 
=== Week 10 Deliverables ===
 
* Blog about your project work
 
* Blog about your Lab 5 results
 
 
 
== Week 11 ==
 
 
 
=== Tuesday (Nov 22) ===
 
* SIMD and Vectorization
 
* [[SPO600 Vectorization Lab|Vectorization Lab (Lab 6)]]
 
 
 
=== Friday (Nov 25) ===
 
* Discussion of the State of the Framework
 
* Hack Session
 
 
 
=== Week 11 Deliverables ===
 
* Blog your [[SPO600 Vectorization Lab|Lab 6]] results.
 
 
 
== Week 12 ==
 
 
 
=== Tuesday (Dec 1) ===
 
* Stage II Results - Brief Presentations
 
 
 
=== Friday (Dec 4) ===
 
* '''No Class''' - Early start to Exam Week
 
 
 
=== Week 12 Deliverables ===
 
* Blog about your Project Status - Stage II Results
 
** Provide results for the various flag combinations you tested
 
** Discuss the results, highlighting any anomalies
 
  
== Final Deliverables ==
+
=== Week 13 - Class II ===
* Blog about your Project Status - Stage III Results
+
* Wrap-up Session
** Important: Incorporate any feedback on your Stage II results
 
** Outline what you learned from your investigation into various combination of GCC flags
 
** Discuss what the upstream projects should do based on these results
 
** Communicate the results to the upstream project, if appropriate
 
** Outline further investigation that should be undertaken
 
* Blog a reflective blog post on the course
 
** What you have learned
 
** What you already knew
 
** What was good or bad about the way the course proceeded
 
** How you might use this knowledge in the future
 
* This is the last chance to submit any lab postings, etc.
 
'''All blog postings must be in by Friday, December 18, at 11:59 pm to be included in the final grade.'''
 
  
-->
 
  
 
<BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/>
 
<BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/><BR/>

Latest revision as of 13:30, 31 August 2021

This is the schedule and main index page for the SPO600 Software Portability and Optimization course for Winter 2020.

This page may be obsolete.
It contains historical information. For current information, please see Current SPO600 Weekly Schedule.


Schedule Summary Table

This is a summary/index table. Please follow the links in each cell for additional detail which will be added below as the course proceeds -- especially for the Deliverables column.

Week | Week of... | Class I (Tuesday 1:30-3:15, Room B1024) | Class II (Friday 11:40-1:25, Room K1263) | Deliverables (Summary - click for details)
1 | Jan 06 | Introduction to the Course / Introduction to the Problem / How is code accepted into an open source project? (Homework: Lab 1) | Computer architecture basics / Binary Representation of Data / Introduction to Assembly Language | Set up for the course
2 | Jan 13 | 6502 Assembly Basics Lab (Lab 2) | Math, Assembly language conventions, and Examples | Lab 1 and 2
3 | Jan 20 | 6502 Math Lab (Lab 3) | Addressing Modes | Lab 3
4 | Jan 27 | Continue with Lab 3 | System routines / Building Code | Lab 3
5 | Feb 03 | 6502 String Lab (Lab 4) | Introduction to x86_64 and AArch64 architectures | Lab 4
6 | Feb 10 | 6502 String Lab (Lab 4) Continued | x86_64 and AArch64 Assembly | Lab 4
7 | Feb 17 | Family Day Holiday | 64-bit Assembly Language Lab (Lab 5) | Lab 5
Reading | Feb 24 | Reading Week | |
8 | Mar 02 | Lab 5 Continued | Projects / Changing an Algorithm | Lab 5, Project Blogs
9 | Mar 09 | Algorithm Selection Lab (Lab 6) | Compiler Optimizations / SIMD and Vectorization | Lab 6
Switchover | Mar 16 | Online Switchover Week | |
10 | Mar 23 | Online Startup / Project Stage 1 | Review for Stage 1 | Project Blogging
11 | Mar 30 | Quiz / Profiling | SIMD Part 1 - Autovectorization | Project Stage 1 due April 1, 11:59 pm / Blog about your project as you start Stage 2
12 | Apr 06 | SIMD Part 2 - Intrinsics and Inline Assembler | Good Friday Holiday | Project Stage 2 due
13 | Apr 13 | Quiz / Project Discussion | Wrap-up Discussion | Project Stage 3 due Monday, April 20, 11:59 pm (Firm!)

Week 1

Week 1 - Class I

Introduction to the Problems

Porting and Portability
  • Most software is written in a high-level language which can be compiled into machine code for a specific computer architecture. In many cases, this code can be compiled for multiple architectures. However, there is a lot of existing code that contains some architecture-specific code fragments written in architecture-specific high-level code or in Assembly Language.
  • Reasons that code is architecture-specific:
    • System assumptions that don't hold true on other platforms
    • Code that takes advantage of platform-specific features
  • Reasons for writing code in Assembly Language include:
    • Performance
    • Atomic Operations
    • Direct access to hardware features, e.g., CPUID registers
  • Most of the historical reasons for including assembler are no longer valid. Modern compilers can out-perform most hand-optimized assembly code, atomic operations can be handled by libraries or compiler intrinsics, and most hardware access should be performed through the operating system or appropriate libraries.
  • A new architecture has appeared: AArch64, which is part of ARMv8. This is the first new computer architecture to appear in several years (at least, the first mainstream computer architecture).
  • At this point, most key open source software (the software typically present in a Linux distribution such as Ubuntu or Fedora, for example) now runs on AArch64. However, it may not run as well as on older architectures (such as x86_64).
Benchmarking and Profiling

Benchmarking involves testing software performance under controlled conditions so that the performance can be compared to other software, the same software operating on other types of computers, or so that the impact of a change to the software can be gauged.

Profiling is the process of analyzing software performance on a finer scale, determining resource usage per program part (typically per function/method). This can identify software bottlenecks and potential targets for optimization.

Optimization

Optimization is the process of evaluating different ways that software can be written or built and selecting the option that has the best performance tradeoffs.

Optimization may involve substituting software algorithms, altering the sequence of operations, using architecture-specific code, or altering the build process. It is important to ensure that the optimized software produces correct results and does not cause an unacceptable performance regression for other use-cases, system configurations, operating systems, or architectures.

The definition of "performance" varies according to the target system and the operating goals. For example, in some contexts, low memory or storage usage is important; in other cases, fast operation; and in other cases, low CPU utilization or long battery life may be the most important factor. It is often possible to trade off performance in one area for another; using a lookup table, for example, can reduce CPU utilization and improve battery life in some algorithms, in return for increased memory consumption.

Most advanced compilers perform some level of optimization, and the options selected for compilation can have a significant effect on the trade-offs made by the compiler, affecting memory usage, execution speed, executable size, power consumption, and debuggability.

Build Process

Building software is a complex task that many developers gloss over. The simple act of compiling a program invokes a process with five or more stages, including pre-processing, compiling, optimizing, assembling, and linking. However, a complex software system will have hundreds or even thousands of source files, as well as dozens or hundreds of build configuration options, auto configuration scripts (cmake, autotools), build scripts (such as Makefiles) to coordinate the process, test suites, and more.

The build process varies significantly between software packages. Most software distribution projects (including Linux distributions such as Ubuntu and Fedora) use a packaging system that further wraps the build process in a standardized script format, so that different software packages can be built using a consistent process.

In order to get consistent and comparable benchmark results, you need to ensure that the software is being built in a consistent way. Altering the build process is one way of optimizing software.

Note that the build time for a complex package can range up to hours or even days!

General Course Information

  • Course resources are linked from the CDOT wiki, starting at https://wiki.cdot.senecacollege.ca/wiki/SPO600 (Quick find: This page will usually be Google's top result for a search on "SPO600").
  • Coursework is submitted by blogging.
  • Quizzes will be short (1 page) and will be held without announcement at any time, generally at the start of class. There is no opportunity to re-take a missed quiz, but your lowest three quiz scores will not be counted, so do not worry if you miss one or two.
    • Students with test accommodations: an alternate monthly quiz is available in the Test Centre. See the professor for details.
  • Course marks (see Weekly Schedule for dates):
    • 60% - Project Deliverables
    • 20% - Communication (Blog and Wiki writing)
    • 20% - Labs and Quizzes (10% labs - completed/not completed; 10% for quizzes - lowest 3 scores not counted)
  • Classes will be held in an Active Learning Classroom -- you are encouraged to bring your own laptop to class. If you do not have a laptop, consider signing one out of the Learning Commons for class, or using a smartphone with an HDMI adapter.
  • For more course information, refer to the SPO600 Weekly Schedule (this page), the Course Outline, and SPO600 Course Policies.

Course and Setup: Accounts, agreements, servers, and more

How open source communities work

Week 1 - Class II

Binary Representation of Data

  • Integers
    • Integers are the basic building block of binary numbers.
    • In an unsigned integer, the bits are numbered from right to left starting at 0, and the value of each bit is 2^bit. The value represented is the sum of each bit multiplied by its corresponding bit value. The range of an unsigned integer is 0 to 2^bits - 1, where bits is the number of bits in the unsigned integer.
    • Signed integers are generally stored in twos-complement format, where the highest bit is used as a sign bit. If that bit is set, the value represented is -(!value)-1, where ! is the NOT operation (each bit gets flipped: 0→1 and 1→0). (A minimal C sketch of these representations appears after this list.)
  • Fixed-point
    • A fixed-point value is encoded the same as an integer, except that some of the bits are fractional -- they're considered to be to the right of the "binary point" (binary version of "decimal point" - or more generically, the radix point). For example, binary 000001.00 is decimal 1.0, and 000001.11 is decimal 1.75.
    • An alternative to fixed-point values is integer values in a smaller unit of measurement. For example, some accounting software may use integer values representing cents. For input and display purposes, dollar and cent values are converted to/from cent values.
  • Floating-point
    • Floating point numbers have three parts: a sign bit (0 for positive, 1 for negative), a mantissa or significand, and an exponent. The value is interpreted as sign × mantissa × 2^exponent.
    • The most commonly-used floating point formats are defined in the IEEE 754 standard.
  • Sound
    • Sound waves are air pressure vibrations
    • Digital sound is most often represented in raw form as a series of time-based measurements of air pressure, called Pulse Coded Modulation (PCM)
    • PCM takes a lot of storage, so sound is often compressed in either a lossless (perfectly recoverable) or lossy format (higher compression, but the decompressed data doesn't perfectly match the original data). To permit high compression ratios with minimal impact on quality, psychoacoustic compression is used - sound variations that most people can't perceive are removed.
  • Graphics
    • The human eye perceives luminance (brightness) as well as hue (colour). Our hue receptors are generally sensitive to three wavelengths: red, green, and blue (RGB). We can stimulate the eye to perceive most colours by presenting a combination of light at these three wavelengths.
    • Digital displays emit RGB colours, which are mixed together and perceived by the viewer. For printing, cyan/yellow/magenta inks are used, plus black to reduce the amount of colour ink required to represent dark tones; this is known as CMYK colour.
    • Images are broken into picture elements (pixels) and each pixel is usually represented by a group of values for RGB or CMYK channels, where each channel is represented by an integer or floating-point value. For example, using an 8-bit-per-channel integer scheme (also known as 24-bit colour), the brightest blue could be represented as R=0,G=0,B=255; the brightest yellow would be R=255,G=255,B=0; black would be R=0,G=0,B=0; and white would be R=255,G=255,B=255. With this scheme, the number of unique colours available is 256^3 ~= 16 million.
    • As with sound, the raw storage of sampled data requires a lot of storage space, so various lossy and lossless compression schemes are used. Highest compression is achieved with psychovisual compression (e.g., JPEG).
    • Moving pictures (video, animations) are stored as sequential images, often compressed by encoding only the differences between frames to save storage space.
  • Compression techniques
    • Huffman encoding / Adaptive arithmetic encoding
      • Instead of fixed-length numbers, variable-length numbers are used, with the most common values encoded in the smallest number of bits. This is an effective strategy if the distribution of values in the data set is uneven.
    • Repeated sequence encoding (1D, 2D, 3D)
      • Run length encoding is an encoding scheme that records the number of repeated values. For example, fax messages are encoded as a series of numbers representing alternating runs of white and black pixels on each line. These numbers are then represented with adaptive arithmetic encoding. (A minimal run-length encoding sketch appears after this list.)
      • Text data can be compressed by building a dictionary of common sequences, which may represent words or complete phrases, where each entry in the dictionary is numbered. The compressed data contains the dictionary plus a sequence of numbers which represent the occurrence of the sequences in the original text. On standard text, this typically enables 10:1 compression.
    • Decomposition
      • Compound audio waveforms can be decomposed into individual signals, which can then be modelled as repeated sequences. For example, a waveform consisting of two notes being played at different frequencies can be decomposed into those separate notes; since each note consists of a number of repetitions of a particular wave pattern, they can individually be represented in a more compact format by describing the frequency, waveform shape, and amplitude characteristics.
    • Palettization
      • Images often contain repeated colours, and rarely use all of the available colours in the original encoding scheme. For example, a 1920x1080 image contains about 2 million pixels, so if every pixel was a different colour, there would be a maximum of 2 million colours. But it's likely that many of the pixels in the image are the same colour, so there might only be (perhaps) 4000 colours in the image. If each pixel is encoded as a 24-bit value, there are potentially 16 million colours available, and there is no possibility that they are all used. Instead, a palette can be provided which specifies each of the 4000 colours used in the picture, and then each pixel can be encoded as a 12-bit number which selects one of the colours from the palette. The total storage requirement for the original 24-bit scheme is 1920*1080 pixels * 3 bytes per pixel = 5.9 MB. Using a 12-bit palette, the storage requirement is 3 * 4096 bytes for the palette plus 1920*1080*1.5 bytes for the image, for a total of about 3 MB -- a reduction of almost 50%.
    • Psychoacoustic and psychovisual compression
      • Much of the data in sound and images cannot be perceived by humans. Psychoacoustic and psychovisual compression remove artifacts which are least likely to be perceived. As a simple example, if two pixels on opposite sides of a large image are almost but not exactly the same, most people won't be able to tell the difference, so these can be encoded as the same colour if that saves space (for example, by reducing the size of the colour palette).
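The following is a minimal C sketch (not part of the course materials) of the unsigned, two's-complement, and fixed-point representations described above; the value 0xB5 and the 2-fractional-bit format are arbitrary choices for illustration.

/*
 * binrep.c -- illustrative only. Build with, e.g.: gcc -Wall binrep.c -o binrep
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Unsigned: bit n contributes 2^n, so 8 bits cover 0..255. */
    uint8_t u = 0xB5;                 /* 1011 0101 = 128+32+16+4+1 = 181 */
    printf("unsigned 0xB5 = %u\n", u);

    /* Two's complement: if the sign bit is set, value = -(NOT bits) - 1. */
    int8_t s = (int8_t)0xB5;          /* same bits reinterpreted as signed;
                                         -75 on the usual two's-complement systems */
    printf("signed   0xB5 = %d (check: -(~0xB5 & 0xFF) - 1 = %d)\n",
           s, -(~0xB5 & 0xFF) - 1);

    /* Fixed-point with 2 fractional bits: 000001.11 = 1.75 */
    uint8_t fixed = 0x07;             /* raw bits 0000 0111 */
    printf("fixed 0x07 with 2 fraction bits = %.2f\n", fixed / 4.0);

    return 0;
}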

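Below is a minimal, illustrative run-length encoder in C for the fax-style 1D scheme described above; the (count, value) pair layout and the sample "scanline" are assumptions made for the example, not a real fax encoding.

/* rle.c -- illustrative run-length encoding sketch. */
#include <stdio.h>
#include <stddef.h>

/* Encode src[0..len) as (count, value) byte pairs; returns pairs written. */
static size_t rle_encode(const unsigned char *src, size_t len,
                         unsigned char *out /* at most 2 * len bytes */)
{
    size_t pairs = 0;
    for (size_t i = 0; i < len; ) {
        size_t run = 1;
        while (i + run < len && src[i + run] == src[i] && run < 255)
            run++;                                /* count the repeated value */
        out[2 * pairs]     = (unsigned char)run;  /* count */
        out[2 * pairs + 1] = src[i];              /* value */
        pairs++;
        i += run;
    }
    return pairs;
}

int main(void)
{
    /* A "scanline": 6 white (0xFF) pixels, 3 black (0x00), 2 white. */
    unsigned char line[] = { 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF, 0,0,0, 0xFF,0xFF };
    unsigned char enc[2 * sizeof line];
    size_t pairs = rle_encode(line, sizeof line, enc);

    for (size_t p = 0; p < pairs; p++)
        printf("%zu x 0x%02X\n", (size_t)enc[2 * p], enc[2 * p + 1]);
    return 0;
}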
Computer Architecture Overview

Introduction to Assembly Language on the 6502 Processor

To understand basic assembly/machine language concepts, we're going to start with a very simple processor: the 6502.

Week 1 Deliverables

  1. Course setup:
    1. Set up your SPO600 Communication Tools - in particular, set up a blog.
    2. Add yourself to the Current SPO600 Participants page (leave the projects columns blank).
    3. Generate a pair of keys for SSH and email the public key to your professor, so that he can set up your access to the class servers.
    4. Optional (strongly recommended): Set up a personal Linux system.
    5. Optional: Purchase an AArch64 development board (such as a 96Boards HiKey or Raspberry Pi 3 or 4). If you use a Pi, install a 64-bit Linux operating system on it, not a 32-bit version.
  2. Start work on Lab 1.


Week 2

Week 2 - Class I

Week 2 - Class II

  • 6502 Assembly Language Continued
    • 6502 Math
    • Assembly conventions and examples
      • Directives
        • define
        • DCB

Week 2 Deliverables


Week 3

Week 3 - Class I

Week 3 - Class II

Week 3 Deliverables

  • Blog your Lab 3 results (or interim results).

Week 4

Week 4 - Class I

Week 4 - Class II

Strings and System Routines

  • The 6502 emulator has an 80x25 character display mapped starting at location $f000. Writing a byte to screen memory will cause that character to be displayed at the corresponding location on the screen, if the character is printable. If the high bit is set, the character will be displayed in reverse video. For example, storing the ASCII code for "A" (which is 65 or $41) into memory location $f000 will display the letter "A" as the first character on the screen; ORing the value with 128 ($80) yields a value of 193 or $c1, and storing that value into $f000 will display a reverse-video "A" as the first character on the screen.
  • A "ROM chip" with screen control routines is mapped into the emulator at the end of the memory space (at the time of writing, the current version of the ROM exists in pages $fe and $ff). Details of the available ROM routines can be viewed using the "Notes" button in the emulator or on the emulator page on this wiki.
  • Strings in assembler are stored as sequences of bytes. As is usually the case in assembler, memory management is left to the programmer. You can terminate strings with null bytes (C-style), which are easy to detect on some CPUs (e.g., lda followed by bne / beq on a 6502), or you can use character counts to track string lengths. (A minimal C sketch of the two conventions appears after this list.)
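The sketch below illustrates, in C rather than 6502 assembly, the two string-storage conventions mentioned above (null termination versus an explicit character count); the struct layout and names are invented for illustration.

/* strings.c -- illustrative only. */
#include <stdio.h>

struct counted_string {
    unsigned char len;      /* explicit character count */
    char data[255];
};

/* Scan for the terminating zero byte -- roughly what an lda / beq loop
 * does with a null-terminated string on the 6502. */
static unsigned my_strlen(const char *s)
{
    unsigned n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

int main(void)
{
    const char *c_style = "HELLO";                  /* 5 characters + '\0' */
    struct counted_string counted = { 5, "WORLD" }; /* length stored up front */

    printf("null-terminated: %u characters\n", my_strlen(c_style));
    printf("counted:         %u characters\n", (unsigned)counted.len);
    return 0;
}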

Building Code

  • C code is built with the C compiler, typically called cc (which is usually an alias for a specific C compiler, such as gcc, clang, or bcc).
  • The C compiler runs through five steps, often by calling separate executables:
    1. Preprocessing - performed by the C Preprocessor (cpp), this step handles directives such as #include, #define, and #ifdef to produce a single source code text file, with cross-references to the original input files so that error messages can be displayed correctly (e.g., an error in an included file can be correctly reported by filename and line number).
    2. Compilation - the C source code is converted to assembler, going through one or more intermediate representations (IR) such as GENERIC or GIMPLE, or LLVM IR. The program used for this step is often called cc1. (A minimal sketch of driving these stages with gcc appears after this list.)
    3. Optimization - various optimization passes are performed at different stages of processing through multiple passes, but centered on IR at the compilation step. Sometimes, the work of a previous pass is undone by a later pass: for example, a complex loop may be converted into a series of simpler loops by an early pass, in the hope that optimizations can be applied to one or more of the simpler loops; the loops may later be recombined to single loop if no optimizations are found that are applicable to the simplified loops.
    4. Assembly - converts the assembly language code emitted by the compilation stage into binary object code.
    5. Linking - connects code to functions (aka methods or procedures) which were compiled in other compilation units (they may be pre-compiled libraries available on the system, or they may be other pieces of the same code base which are compiled in separate steps). Linking may be static, where libraries are imported into the binary executable file of the output program, or linking may be dynamic, where additional information is added to the binary executable file so that a run-time linker can load and connect libraries at runtime.
  • Other languages which are compiled to binary form, such as C++, OCaml, Haskell, Fortran, and COBOL, go through similar processing. Languages which do not compile to binary form are either compiled to a bytecode format (binary code that doesn't correspond to actual hardware), or left in original source format, and an interpreter reads and executes the bytecode or source code at runtime. Java and Python use bytecode; Bash and JavaScript interpret source code. Some interpreters build and cache blocks of machine code on-the-fly; this is called Just-in-Time (JIT) compilation.
  • Compiler feature flags control the operation of the compiler on the source code, including optimization passes. When using gcc, these "feature flags" take the form -f[no-]featureName -- for example:
    • -fbuiltin -- enables the "builtin" feature
    • -fno-builtin -- disables the "builtin" feature
  • Feature flags can be selected in groups using the optimization (-O) level:
    • -O0 -- disables most (but not all) optimizations
    • -O1 -- enables basic optimizations that can be performed quickly
    • -O2 -- enables almost all safe optimizations
    • -O3 -- enables aggressive optimization, including optimizations which may not always be safe for all code (e.g., assuming +0 == -0)
    • -Os -- optimizes for small binary and runtime size, possibly at the expense of speed
    • -Ofast -- optimizes for fast execution, possibly at the expense of size
    • -Og -- optimizes for debugging: applies optimizations that can be performed quickly, and avoids optimizations which convolute the code
  • To see the optimizations which are applied at each level in gcc, use: gcc -Q --help=optimizers -Olevel -- it's interesting to compare the output of different levels, such as -O0 and -O3
  • Different CPUs in one family can have different capabilities and performance characteristics. The compiler option -march sets the overall architecture family and CPU features to be targeted, and the -mtune option sets the specific target for tuning. Thus, you can produce an executable that will work across a range of CPUs, but is specifically tuned to perform best on a certain model. For example, -march=ivybridge -mtune=knl will cause the compiler to emit code which uses features present on all Intel Ivy Bridge (and later) processors, but is tuned for optimal performance on Knight's Landing processors. Similarly, -march=armv8-a -mtune=cortex-a72 will cause the compiler to emit code which will safely run on any ARMv8-a processor, but be tuned specifically for the Cortex-A72 core.
  • When building code on different platforms, there are a lot of variables which may need to be fed into the preprocessor, compiler, and linker. These can be manually specified, or they can be automatically determined by a tool such as GNU Autotools (typically visible as the configure script in the source code archive).
  • The source code for large projects is divided into many source files for manageability. The dependencies between these files can become complex. When developing or debugging the software, it is often necessary to make changes in one or a small number of files, and it may not be necessary to rebuild the entire project from scratch. The make utility is used to script a build and to enable rapid partial rebuilds after a change to source code files (see Make and Makefiles).
  • Many open source projects distribute code as a source archive ("tarball") which usually decompresses to a subdirectory packageName-version (e.g. foolib-1.5.29). This will typically contain a script which configures the Makefile (configure if using GNU Autotools). After running this script, a Makefile will be available, which can be used to build the software. However, some projects use an alternative configuration tool instead of GNU Autotools, and some may use an alternate build system instead of make.
  • To eliminate this variation, most Linux distributions use a package system, which standardizes the build process and which produces installable package files which can be used to reliably install software into standard locations with automatic dependency resolution, package tracking via a database, and simple updating capability. For example, Fedora, Red Hat Enterprise Linux, CentOS, SuSE, and OpenSuSE all use the RPM package system, in which source code is bundled with a build recipe in a "Source RPM" (SRPM), which can be built with a single command into a binary package (RPM). The RPMs can then be downloaded, have dependencies and conflicts resolved, and installed with a single command such as dnf. The fact that the SRPM can be built into an installable RPM through an automated process enables and simplifies automated build systems, mass rebuilds, and architecture-specific builds.
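The following sketch shows one way to explore the build stages and compiler flags described above; the file name, program, and exact commands are illustrative assumptions, not required course steps.

/*
 * stages.c -- a minimal sketch for exploring the build steps and flags:
 *
 *   gcc -E stages.c -o stages.i      # preprocess only
 *   gcc -S -O2 stages.c              # compile to assembly (stages.s)
 *   gcc -c -O2 stages.c              # assemble to an object file (stages.o)
 *   gcc -O2 stages.o -o stages       # link into an executable
 *
 *   gcc -Q --help=optimizers -O0 > o0.txt
 *   gcc -Q --help=optimizers -O3 > o3.txt
 *   diff o0.txt o3.txt               # compare which optimizations each level enables
 */
#include <stdio.h>
#include <string.h>

#define GREETING "Hello, "            /* expanded by the preprocessor */

int main(void)
{
    char buf[64];
    strcpy(buf, GREETING);            /* with -fbuiltin, gcc may expand these
                                         string calls inline; -fno-builtin forces
                                         ordinary library calls */
    strcat(buf, "SPO600");
    printf("%s (%zu chars)\n", buf, strlen(buf));
    return 0;
}

Comparing the generated stages.s at -O0 and -O3 (or with -fbuiltin vs -fno-builtin) is a quick way to see the trade-offs these options make.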

Week 4 Deliverables

  • Blog your Lab 3 results.
  • Blogs are due at the end of the month (Feb 2 - 11:59 pm), so proofread your posts, ensure that you have at least 1-2 per week, and make sure the link from the Participant's Page is accurate. Feel free to write multiple posts about one topic or lab, if appropriate.

Week 5

Week 5 - Class I

Week 5 - Class II

Week 5 Deliverables

Week 6

Week 6 - Class I

Week 6 - Class II

Week 6 Deliverables

Week 7

Week 7 - Class I

  • No class - Family Day Holiday

Week 7 - Class II

Week 7 Deliverables

Week 8

Week 8 - Class I

Week 8 - Class II

  • Changing an Algorithm to Improve Performance
    • Audio volume scaling problem
      • PCM Audio is represented as 16-bit signed integer samples
      • To reduce the volume of the audio, it can be scaled by a factor from 0.000 (silence) to 1.000 (original volume).
      • This is a common operation on mobile and multimedia devices.
      • What is the best way to do this? (A minimal comparison of these approaches appears after this list.)
    • Approach 1: Naive Implementation - Multiply each sample by the scaling factor (this involves multiplying each integer sample by a floating-point number, then converting the result back to an integer)
    • Approach 2: Lookup Table - Pre-calculate all possible values multiplied by the scaling factor, then look up the new value for each original sample value
    • Approach 3: Fixed-point math - Use fixed-point math rather than floating-point math
    • Approach 4: Vector fixed-point math - Use SIMD instructions to do multiple fixed-point operations in parallel
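Here is a minimal C comparison of Approach 1 and Approach 3 (the lab's actual code and data are not reproduced); the Q15 scaling-factor format and the sample values are assumptions chosen for illustration. Approach 2 would replace the per-sample multiply with a 65536-entry lookup table, and Approach 4 would apply the fixed-point multiply to several samples at once with SIMD instructions.

/* scale.c -- illustrative volume-scaling sketch for 16-bit signed PCM. */
#include <stdint.h>
#include <stdio.h>

/* Approach 1: multiply by a float factor, convert back to int16. */
static int16_t scale_naive(int16_t sample, float factor)
{
    return (int16_t)(sample * factor);
}

/* Approach 3: represent the factor as a Q15 fixed-point value
 * (factor * 32768), multiply in integer math, shift back down. */
static int16_t scale_fixed(int16_t sample, int32_t factor_q15)
{
    return (int16_t)(((int32_t)sample * factor_q15) >> 15);
}

int main(void)
{
    float factor = 0.750f;
    int32_t factor_q15 = (int32_t)(factor * 32768.0f);   /* 24576 */
    int16_t samples[] = { 1000, -2000, 32000, -32768 };

    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("%6d -> naive %6d, fixed %6d\n",
               samples[i],
               scale_naive(samples[i], factor),
               scale_fixed(samples[i], factor_q15));
    return 0;
}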

Week 8 Deliverables

  • Blog your Lab 5 results.
  • Reminder: Blogs are due for February this Sunday (March 8, 11:59 pm).

Week 9

Week 9 - Class I

  • SPO600 Algorithm Selection Lab (Lab 6)

Week 9 - Class II

  • Winter 2020 SPO600 Project
  • Compiler Optimizations
  • SIMD and Vectorization
    • Optional vectorization lab (SPO600 Vectorization Lab)

Week 9 - Deliverables

  • Blog about Lab 6 and your Project

Week 10

Week 10 - Class I

Drop-in Online Discussion Sessions

  • Tuesday to Friday (March 24-27) from 9-10 AM
  • Online at https://whereby.com/ctyler
    • There is a maximum of 12 people in the room at a time. I recommend dropping by once or twice a week with your questions.
    • If 9-10 am cannot work for you, email me to discuss this.

Week 10 - Class II

Week 10 - Deliverables

  • Blog about your project. Project Stage 1 is due next Wednesday.

Week 11

Week 11 - Class I

  • Quiz #4 - Online in Blackboard
  • Optional video: Building Software - This video provides a review of building an open-source software package from either a source archive (zip or tarball) or from a code repository (such as a git repository).
  • Video - March 30: Profiling Software
    • Profiling with gprof and perf (a minimal profiling sketch appears after this list)
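A minimal sketch for experimenting with gprof and perf is shown below; the program and file name are invented for illustration, and the exact report output will vary by system.

/*
 * profdemo.c -- illustrative profiling target:
 *
 *   gcc -O2 -pg profdemo.c -o profdemo   # -pg adds gprof instrumentation
 *   ./profdemo                           # writes gmon.out in the current directory
 *   gprof ./profdemo gmon.out            # per-function time report
 *
 *   perf record ./profdemo               # sample execution with perf
 *   perf report                          # per-symbol report of where time went
 */
#include <stdio.h>

static double burn(long n)
{
    double sum = 0.0;
    for (long i = 1; i <= n; i++)
        sum += 1.0 / (double)i;     /* deliberately CPU-heavy loop */
    return sum;
}

int main(void)
{
    printf("harmonic(100000000) ~= %f\n", burn(100000000L));
    return 0;
}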

Week 11 - Class II

  • Video - April 3: SIMD and Auto-vectorization
  • SIMD-Autovectorization Resources (a minimal auto-vectorizable loop appears after this list)
    • Auto-Vectorization in GCC - Main project page for the GCC auto-vectorizer.
    • Auto-vectorization with gcc 4.7 - An excellent discussion of the capabilities and limitations of the GCC auto-vectorizer, intrinsics for providing hints to GCC, and other code pattern changes that can improve results. Note that there has been some improvement in the auto-vectorizer since this article was written. This article is strongly recommended.
    • Intel (Auto)Vectorization Tutorial - this deals with the Intel compiler (ICC) but the general technical discussion is valid for other compilers such as gcc and llvm
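For reference, a minimal loop of the kind the GCC auto-vectorizer typically handles is sketched below; the file name, array size, and flags shown are illustrative assumptions rather than lab requirements.

/*
 * vec.c -- illustrative auto-vectorization candidate:
 *
 *   gcc -O3 -fopt-info-vec vec.c -o vec
 *
 * -O3 enables -ftree-vectorize; -fopt-info-vec reports which loops were
 * vectorized.
 */
#include <stdio.h>

#define N 100000

int a[N], b[N], c[N];

int main(void)
{
    for (int i = 0; i < N; i++) {   /* simple, countable loops with no     */
        a[i] = i;                   /* aliasing are good SIMD candidates   */
        b[i] = 2 * i;
    }

    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];         /* element-wise add -> vector add */

    printf("c[%d] = %d\n", N - 1, c[N - 1]);
    return 0;
}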

Week 11 Deliverables

  • Project Stage 1 due Wednesday, April 1 (yes, really) at 11:59 pm
  • Blog about your project as you continue into Stage 2
    • March posts are due on Monday, April 6 at 11:59 pm.

Week 12

Week 12 - Class I

Week 12 - Class II

  • No class - Good Friday

Resources

Auto-vectorization

  • Auto-Vectorization in GCC - Main project page for the GCC auto-vectorizer.
  • Auto-vectorization with gcc 4.7 - An excellent discussion of the capabilities and limitations of the GCC auto-vectorizer, intrinsics for providing hints to GCC, and other code pattern changes that can improve results. Note that there has been some improvement in the auto-vectorizer since this article was written. This article is strongly recommended.
  • Intel (Auto)Vectorization Tutorial - this deals with the Intel compiler (ICC) but the general technical discussion is valid for other compilers such as gcc and llvm

Inline Assembly Language

C Intrinsics - AArch64 SIMD

Week 13

Week 13 - Class I

Week 13 - Class II

  • Wrap-up Session