Fall 2022 SPO600 Weekly Schedule
This is the schedule and main index page for the SPO600 Software Portability and Optimization course for Fall 2022.
Schedule Summary Table
Please follow the links in each cell for additional detail which will be added below as the course proceeds -- especially for the Deliverables column.
Week 1
Week 1 - Class I
Video
General Course Information
- Course resources are linked from the CDOT wiki, starting at https://wiki.cdot.senecacollege.ca/wiki/SPO600 (Quick find: This page will usually be Google's top result for a search on "SPO600"), arranged by week and class. There will be lots of hyperlinks -- be sure to follow these links.
- Coursework is submitted by blogging. The only exception to this is quizzes.
- Quizzes will be short (~1 page) and will be held without announcement at the start of any synchronous class. There is no opportunity to re-take a missed quiz, but your lowest three quiz scores will not be counted, so do not worry if you miss one or two.
- Students with test accommodations: an alternate monthly quiz can be made available via the Test Centre. Communicate with your professor for details.
- Course marks (see Weekly Schedule for dates):
- 60% - Project Deliverables in three phases (15%, 20%, 25%)
- 20% - Communication (Blog writing, in four phases roughly a month long each, 5% each)
- 20% - Labs and Quizzes (10% labs; 10% for quizzes - lowest 3 quiz scores not counted)
About SPO600 Classes
- Wednesday: synchronous (live) classes at 11:40 am - login to learn.senecacollege.ca ("Blackboard"), go to SPO600, and select the "Wednesday Classes" option on the left-hand menu.
- Friday: these classes will usually be asynchronous (pre-recorded) - see this page for details each week.
- There may be occasional exceptions to this pattern.
Introduction to the Problems
Porting and Portability
- Most software is written in a high-level language which can be compiled into machine code for a specific computer architecture. In many cases, this code can be compiled or interpreted for execution on multiple computer architectures - this is called 'portable' code. However, a lot of existing code contains architecture-specific fragments -- high-level or Assembly Language code that embeds assumptions about the underlying architecture.
- Reasons that code is architecture-specific:
- System assumptions that don't hold true on other platforms
- Variable or word size
- Endianness
- Code that takes advantage of platform-specific features
- Reasons for writing code in Assembly Language include:
- Performance
- Atomic Operations
- Direct access to hardware features, e.g., CPUID registers
- Most of the historical reasons for using assembler are no longer valid. Modern compilers can out-perform most hand-optimized assembly code, atomic operations can be handled by libraries or compiler intrinsics, and most hardware access should be performed through the operating system or appropriate libraries.
- A new architecture has appeared: AArch64, which is part of ARMv8. This is the first new computer architecture to appear in several years (at least, the first mainstream computer architecture).
- At this point, most key open source software (the software typically present in a Linux distribution such as Ubuntu or Fedora, for example) now runs on AArch64. However, it may not yet be as extensively optimized as on older architectures (such as x86_64).
Benchmarking and Profiling
Benchmarking involves testing software performance under controlled conditions, so that the performance can be compared with other software, with the same software running on other types of computers, or with previous versions of the same software to gauge the impact of a change.
Profiling is the process of analyzing software performance on a finer scale, determining resource usage per program part (typically per function or method). This can identify software bottlenecks and potential targets for optimization. The resources studied may include memory, CPU cycles/time, or power.
Optimization
Optimization is the process of evaluating different ways that software can be written or built and selecting the option that has the best performance tradeoffs.
Optimization may involve substituting software algorithms, altering the sequence of operations, using architecture-specific code, or altering the build process. It is important to ensure that the optimized software produces correct results and does not cause an unacceptable performance regression for other use-cases, system configurations, operating systems, or architectures.
The definition of "performance" varies according to the target system and the operating goals. For example, in some contexts, low memory or storage usage is important; in other cases, fast operation; and in other cases, low CPU utilization or long battery life may be the most important factor. It is often possible to trade off performance in one area for another; using a lookup table, for example, can reduce CPU utilization and improve battery life in some algorithms, in return for increased memory consumption.
Most advanced compilers perform some level of optimization, and the options selected for compilation can have a significant effect on the trade-offs made by the compiler, affecting memory usage, execution speed, executable size, power consumption, and debuggability.
Build Process
Building software is a complex task that many developers gloss over. The simple act of compiling a program invokes a process with five or more stages, including pre-processing, compiling, optimizing, assembling, and linking. However, a complex software system will have hundreds or even thousands of source files, as well as dozens or hundreds of build configuration options, auto configuration scripts (cmake, autotools), build scripts (such as Makefiles) to coordinate the process, test suites, and more.
The build process varies significantly between software packages. Most software distribution projects (including Linux distributions such as Ubuntu and Fedora) use a packaging system that further wraps the build process in a standardized script format, so that different software packages can be built using a consistent process.
In order to get consistent and comparable benchmark results, you need to ensure that the software is being built in a consistent way. Altering the build process is one way of optimizing software.
Note that the build time for a complex package can range up to hours or even days!
Course Setup
Follow the instructions on the SPO600 Communication Tools page to set up a blog, create SSH keys, and send your blog URLs and public key to me.
I will use this information to:
- Update the Current SPO600 Participants page with your information, and
- Create an account for you on the SPO600 Servers.
This updating is done in batches once or twice a week -- allow some time!
How open source communities work
- Do the Code Review Lab (Lab 1) as homework.
Week 1 - Class II
Video
Binary Representation of Data
- Binary
- Binary is a system which uses "bits" (binary digits) to represent values.
- Each bit has one of two values, signified by the symbols 0 and 1. These correspond to:
- Electrically: typically off/on, or low/high voltage, or low/high current. Many other electrical representations are possible.
- Logically: false or true.
- Binary numbers are resistant to errors, especially when compared to other systems such as analog voltages.
- To represent the numbers 0-10 as an analog electrical value, we could use a voltage from 0 - 10 volts. However, if we use a long cable, there will be signal loss and the voltage will drop: we could apply 10 volts on one end of the cable, but only observe (say) 9.1 volts on the other end of the cable. Alternately, electromagnetic interference from nearby devices could slightly increase the signal.
- If we instead use the same voltages and cable length to carry a binary signal, where 0 volts == off == "0" and 10 volts == on == "1", a signal that had degraded from 10 volts to 9.1 volts would still be counted as a "1" and a 0 volt signal with some stray electromagnetic interference presenting as (say) 0.4 volts would still be counted as "0". However, we will need to use multiple bits to carry larger numbers -- either in parallel (multiple wires side-by-side), or sequentially (multiple bits presented over the same wire in sequence).
- Integers
- Integers are the basic building block of binary numbering schemes.
- In an unsigned integer, the bits are numbered from right to left starting at 0, and the value of each bit is 2^bit. The value represented is the sum of each bit multiplied by its corresponding bit value. The range of an unsigned integer is 0 to 2^bits - 1, where bits is the number of bits in the unsigned integer -- for example, an 8-bit unsigned integer has a range of 0 through 2^8 - 1 = 255.
- Signed integers are generally stored in two's-complement format, where the highest bit is used as a sign bit. If that bit is set, the value represented is -(!value) - 1, where ! is the NOT operation (each bit gets flipped from 0→1 and 1→0).
- Fixed-point
- A fixed-point value is encoded the same as an integer, except that some of the bits are fractional -- they're considered to be to the right of the "binary point" (binary version of "decimal point" - or more generically, the radix point). For example, binary 000001.00 is decimal 1.0, and 000001.11 is decimal 1.75.
- An alternative to fixed-point fractional values is integer values in a smaller unit of measurement. For example, some accounting software may use integer values representing cents. For input and display purposes, dollar-and-cent values are converted to/from cent values. Similarly, a program that stores measurements could use millimetres instead of fractional metres.
- Floating-point
- The most commonly-used floating point formats are defined in the IEEE 754 standard.
- IEEE 754 floating point numbers have three parts: a sign bit (0 for positive, 1 for negative), a mantissa or significand, and an exponent. The significand has an implied 1 and radix point preceding the stored value. The exponent is stored as an unsigned integer to which a bias value has been added; the bias value is 2^(number of exponent bits - 1) - 1. In normal cases, the floating point value is interpreted as (-1)^sign * 1.significand * 2^(exponent - bias). Exponent values which are all-zeros or all-ones encode four categories of special cases: zero, infinity, Not a Number (NaN), and subnormal numbers (numbers which are close to zero, where the significand does not have an implied 1 to the left of the radix point); in these special cases, the sign bit and significand values may have special meanings.
- Some new floating-point formats are appearing, such as Brain Float 16 (bfloat16), a 16-bit format with the same dynamic range as 32-bit IEEE 754 floating point but with less accuracy, intended for use in machine learning applications.
- Characters
- Characters are encoded as integers, where each integer corresponds to one code point in a character table (e.g., code 65 in ASCII corresponds to the character "A").
- Historically, many different coding schemes have been used, but the two most common ones were the American Standard Code for Information Interchange (ASCII), and Extended Binary Coded Decimal Interchange Code (EBCDIC - primarily used on IBM midrange and mainframe systems).
- ASCII characters occupy seven bits (code points 0-127) and include only characters used in North American English. ASCII characters are usually stored in 8-bit bytes, so many vendors of ASCII-based systems used the remaining codes 128-255 for special characters such as graphics, line symbols (horizontal, vertical, connector, and corner line symbols for drawing tables), and accented characters; these were called "extended ASCII".
- Several ISO standards exist in an attempt to standardize the "extended ascii" characters, such as ISO8859, which was intended to enable the encoding of European languages by adding currency symbols and accented characters. However, the original version of ISO8859 (called ISO8859-1) does not include all accented characters and was created before the Euro symbol was standardized, so there are multiple versions of ISO8859, ranging from ISO8859-1 through ISO8859-15.
- The Unicode and ISO10646 initiatives were launched to create a single character code set that would encode all symbols used in human writing, both for current and obsolete languages. These initiatives were merged, and the Unicode and ISO10646 standards define a common character set with 2^32 potential code points. However, Unicode also describes transformation formats for data interchange, rendering and composition/decomposition recommendations, and font symbol recommendations.
- The first 128 code points in Unicode correspond to ASCII code points, and the first 256 code points correspond to ISO8859-1 code points. The first 65536 code points form the Basic Multilingual Plane (BMP), which contains most of the characters required to write in all contemporary languages. Therefore, for many applications, it is inefficient to store Unicode as full 32-bit values. To solve this issue, several Unicode Transformation Formats (also known -- technically incorrectly -- as Unicode Transfer Formats) have been defined, including UTF-8, UTF-16, and UTF-32. UTF-8 represents ASCII and some ISO8859 characters as a single byte, the remainder of the BMP as 2-3 bytes per character, and the remaining characters using 4 bytes per character. UTF-16 is similar, encoding much of the BMP in a single 16-bit value, and most other characters as two 16-bit values.
- Sound
- Sound waves are air pressure vibrations.
- Sound is most often represented in raw digital form as a series of time-based measurements of air pressure, called Pulse Code Modulation (PCM).
- PCM takes a lot of storage, so sound is often compressed in either a lossless (perfectly recoverable) or lossy format (higher compression, but the decompressed data doesn't perfectly match the original data). To permit high compression ratios with minimal impact on quality, psychoacoustic compression is used - sound variations that most people can't perceive are removed.
- Graphics
- The human eye perceives luminance (brightness) as well as hue (colour). Our main hue receptors ("cones") are generally sensitive to three wavelengths: red, green, and blue (RGB). Through an effect called metamerism, we can stimulate the eye to perceive most colours by presenting a combination of light at just these three wavelengths.
- Digital displays emit RGB colours, which are mixed together and perceived by the viewer. This is called additive colour.
- For printing, cyan (C), magenta (M), and yellow (Y) pigmented inks are used, plus black (K) to reduce the amount of colour ink required to represent dark tones; this is known as CMYK colour. These pigments absorb light at specific frequencies, subtracting energy from white or near-white sunlight or artificial light. This is called subtractive colour.
- Images are broken into picture elements (pixels) and each pixel is usually represented by a group of values for RGB or CMYK channels, where each channel is represented by an integer or floating-point value. For example, using an 8-bit-per-channel integer scheme (also known as 24-bit colour), the brightest blue could be represented as R=0,G=0,B=255; the brightest yellow would be R=255,G=255,B=0; black would be R=0,G=0,B=0; and white would be R=255,G=255,B=255. With this 8-bit-per-channel (24 bit total) scheme, the number of unique colours available is 256^3 ~= 16 million.
- As with sound, the raw storage of sampled data requires a lot of storage space, so various lossy and lossless compression schemes are used. Highest compression is achieved with psychovisual compression (e.g., JPEG).
- Moving pictures (video, animations) are stored as sequential images, often compressed by encoding only the differences between frames to save storage space. Motion compensation can further compress the data stream by describing how portions of the previous frame should be moved and positioned in the current frame.
- Compression techniques
- Huffman encoding / Adaptive arithmetic encoding
- Instead of fixed-length numbers, variable-length numbers are used, with the most common values encoded in the smallest number of bits. This is an effective strategy if the distribution of values in the data set is uneven.
- Repeated sequence encoding (1D, 2D, 3D)
- Run length encoding is an encoding scheme that records the number of repeated values. For example, fax messages are encoded as a series of numbers representing the number of white pixels, then the number of black pixels, then white pixels, then black pixels, alternating to the end of each line. These numbers are then represented with adaptive arithmetic encoding.
- Text data can be compressed by building a dictionary of common sequences, which may represent words or complete phrases, where each entry in the dictionary is numbered. The compressed data contains the dictionary plus a sequence of numbers which represent the occurrence of the sequences in the original text. On standard text, this typically enables 10:1 compression.
- Decomposition
- Compound audio waveforms can be decomposed into individual signals, which can then be modelled as repeated sequences. For example, a waveform consisting of two notes being played at different frequencies can be decomposed into those separate notes; since each note consists of a number of repetitions of a particular wave pattern, they can individually be represented in a more compact format by describing the frequency, waveform shape, and amplitude characteristics.
- Palettization
- Images often contain repeated colours, and rarely use all of the colours available in the original encoding scheme. For example, a 1920x1080 "full HD" image contains about 2 million pixels, so even if every pixel was a different colour, the image could contain at most about 2 million colours. It is likely that many of the pixels in the image are the same colour, so there might only be (perhaps) 4000 distinct colours in the image. If each pixel is encoded as a 24-bit value, there are potentially 16 million colours available, most of which go unused. Instead, a palette can be provided which specifies each of the 4000 colours used in the picture, and then each pixel can be encoded as a 12-bit number which selects one of the colours from the palette. The total storage requirement for the original 24-bit scheme is 1920*1080*3 bytes per pixel = 5.9 MB. Using a 12-bit palette, the storage requirement is 3 * 4096 bytes for the palette plus 1920*1080*1.5 bytes for the image, for a total of 3 MB -- a reduction of almost 50%.
- Psychoacoustic and psychovisual compression
- Much of the data in sound and images cannot be perceived by humans. Psychoacoustic and psychovisual compression remove artifacts which are least likely to be perceived. As a simple example, if two pixels on opposite sides of a large image are almost but not exactly the same, most people won't be able to tell the difference, so these can be encoded as the same colour if that saves space (for example, by reducing the size of the colour palette).
Week 1 Deliverables
- Follow the SPO600 Communication Tools set-up instructions.
- Optional (strongly recommended): Set up a personal Linux system.
- Optional: If you have an AArch64 development board (such as a Raspberry Pi 4, Raspberry Pi 400, or 96Boards device), consider installing a 64-bit Linux operating system such as Fedora on it.
- Start work on Lab 1. Blog your work.
Week 2
Week 2 - Class I
Video
- Summary video recording from class
- Calculating 6502 Program Execution Time
- Reminder: The Wednesday classes are live. An edited recording is provided for reference only - it is no substitute for attending class (via Zoom), taking notes, and asking questions!
Machine Language, Assembly Language
- Although we program computers in a variety of languages, they can really only execute one language: Machine Language, which is encoded in an architecture-specific binary code, sometimes called object code.
- Machine language is not easy to read. Assembly Language corresponds very closely to machine language, but is (sort of!) human-readable.
- Assembly language is converted into machine code by a particular type of compiler called an Assembler (sometimes the language itself is also referred to as "Assembler").
6502
Modern processors are complex - the reference manual for 64-bit ARM processors is over 11000 pages long! - so we're going to look at assembly language on a much simpler processor to get started. This processor is the 6502, a processor used in many early home and personal computers as well as video game systems, including the Commodore PET, VIC-20, C64; the Apple II; the Atari 400 and 800 computers and 2600 video game systems; and many others.
- Introduction to the 6502 (note the Resources links on that page)
- Introduction to the 6502 Instructions
- Introduction to the 6502 Addressing Modes
- Information about the 6502 Emulator which we will use in this course, and some example code
- Link to the actual 6502 emulator
Lab 2
- 6502 Assembly Language Lab - Lab 2
Week 2 - Class II
Videos
Reading
- 6502 Jumps, Branches, and Procedures
- 6502 Math (including Bitwise Operations)
Week 2 Deliverables
- If not already completed last week:
- Set up your SPO600 Communication Tools
- Complete Lab 1 and blog your work.
- Study the 6502 Instructions and 6502 Addressing Modes and make sure you understand what each one does.
- Complete Lab 2 and blog your results.
Week 3
Week 3 - Class I
Video
Lab
- 6502 Math and Strings Lab (Lab 3)
Week 3 - Class II
Video
- 6502 Assembly Language
- 6502 - Additional Resources
- An old video on the basics of using the 6502 Emulator
- 6502 Assembler Directives - using "define" and "dcb"
- Building code: make
Resources
- Make and Makefiles
- 6502 Example Code
- 6502 Emulator Example Code page on this wiki
- Chris Tyler's 6502js-code repository on GitHub (includes Wordle-like example)
- 6502asm.com - a site with an early version of the 6502 Emulator - see the "Examples" pull-down menu (these examples will run in the emulator)
Week 3 Deliverables
- Lab 3
- Note that September blog posts are due at the end of next week, so don't get behind in your blogging
Week 4
Week 4 - Class I
Video
Reading Resources
- Compiler Optimizations
- Connecting to course servers
- SPO600 Servers
- SSH
- Screen utility - allows disconnection/reconnection to remote host
Week 4 - Class II
Video
Resources
Week 4 Deliverables
- September blogs are due this weekend (Sunday, October 2 at 11:59 pm)
Week 5
Week 5 - Class I
Video
Resources
- Assembly Language
- ELF file format
- X86_64 Register and Instruction Quick Start
- Aarch64 Register and Instruction Quick Start
- ARM 64-bit CPU Instruction Set and Software Developer Manuals
- ARM Aarch64 documentation
- ARM Developer Information Centre
- ARM Cortex-A Series Programmer’s Guide for ARMv8-A
- The short guide to the ARMv8 instruction set: ARMv8 Instruction Set Overview ("ARM ISA Overview")
- The long guide to the ARMv8 instruction set: ARM Architecture Reference Manual ARMv8, for ARMv8-A architecture profile ("ARM ARM")
- Procedure Call Standard for the ARM 64-bit Architecture (AArch64)
- ARM Developer Information Centre
- x86_64 Documentation
- AMD Developer Guide and Manuals(see the AMD64 Architecture section, particularly the AMD64 Architecture Programmer’s Manual Volume 3: General Purpose and System Instructions)
- Intel Software Developers Manuals
- GAS Manual - Using as, The GNU Assembler: https://sourceware.org/binutils/docs/as/
Week 5 - Class II
Video
Lab 4
Week 5 Deliverables
Week 6
Week 6 - Class I
We used this class for introductions, a discussion of how things are going, and feedback on the course.
Week 6 - Class II
Video
- Inline Assembly Language - Inserting assembly language code into programs written in other languages (in this case, C)
- Single Instruction, Multiple Data (SIMD)
- Algorithm Selection and Benchmarking
Lab 5
- Algorithm Selection Lab (Lab 5)
Week 6 Deliverables
Week 7
Week 7 - Class I
Video
- Video summary will be posted after editing
Week 7 - Class II
Please catch up on course material to this point. If you are fully caught up, you can start to take a look at SVE2:
Reading
SVE2 Demonstration
- Code available here: https://github.com/ctyler/sve2-test
- This is an implementation of a very simple program which takes an image file, adjusts the red/green/blue channels of that file, and then writes an output file. Each channel is adjusted by a factor in the range 0.0 to 2.0 (with saturation).
- The image adjustment is performed in the function adjust_channels() in the file adjust_channels.c. There are four implementations:
  - A basic (naive) implementation in C. Although this is a very basic implementation, it is potentially subject to autovectorization.
  - An implementation using inline assembler for SVE2 with structure loads.
  - An implementation using inline assembler for SVE2 with an interleaved factor table.
  - An implementation using ACLE compiler intrinsics.
- The implementation built depends on the value of the ADJUST_CHANNEL_IMPLEMENTATION macro.
- The provided Makefile will build four versions of the binary -- one using each of the four implementations -- and it will run through 3 tests with each binary. The tests use the input image file tests/input/bree.jpg (a picture of a cat) and place the output in the files tests/output/bree[1234][abc].jpg. The output files are processed with adjustment factors of 0.5/0.5/0.5, 1.0/1.0/1.0, and 2.0/2.0/2.0.
- Please examine, build, and test the code, compare the implementations, and note how it works - there are extensive comments in the code, especially for implementation 2.
- Your observations about the code might make a good blog post!
Week 7 Deliverables
Week 8
Week 8 - Class I
Video
Week 8 - Class II
Video
Reading
SVE2 Demonstration
- Code available here: https://github.com/ctyler/sve2-test
- This is an implementation of a very simple program which takes an image file, adjusts the red/green/blue channels of that file, and then writes an output file. Each channel is adjusted by a factor in the range 0.0 to 2.0 (with saturation).
- The image adjustment is performed in the function adjust_channels() in the file adjust_channels.c. There are four implementations:
  - A basic (naive) implementation in C. Although this is a very basic implementation, it is potentially subject to autovectorization.
  - An implementation using inline assembler for SVE2 with structure loads.
  - An implementation using inline assembler for SVE2 with an interleaved factor table.
  - An implementation using ACLE compiler intrinsics.
- The implementation built depends on the value of the ADJUST_CHANNEL_IMPLEMENTATION macro.
- The provided Makefile will build four versions of the binary -- one using each of the four implementations -- and it will run through 3 tests with each binary. The tests use the input image file tests/input/bree.jpg (a picture of a cat) and place the output in the files tests/output/bree[1234][abc].jpg. The output files are processed with adjustment factors of 0.5/0.5/0.5, 1.0/1.0/1.0, and 2.0/2.0/2.0.
- Please examine, build, and test the code, compare the implementations, and note how it works - there are extensive comments in the code, especially for implementation 2.
- Your observations about the code might make a good blog post!
Week 8 Deliverables
- Continue your blogging
- Include blogging on SVE/SVE2
- The second group of blog posts is due on or before this Sunday (November 6, 11:59 pm)
Week 9
Week 9 - Class I
Video
- Will be posted after editing
iFunc
GNU iFunc is a facility for handling indirect functions. The basic premise is that you prototype the function to be called, add the ifunc attribute to that prototype, and provide the name of a resolver function. The resolver function is called at program initialization and returns a pointer to the function to be executed when the function referenced in the prototype is called. The resolver typically picks one of several implementations based on the capabilities of the machine on which the code is running; for example, it could return a pointer to a non-SVE, SVE, or SVE2 implementation of a function based on CPU capabilities (on an AArch64 system), or it could return a pointer to an SSE, SSE2, AVX, or AVX512 implementation (on an x86_64 system).
There is a GitHub repository available with example iFunc code -- please clone this to israel.cdot.systems and build and test the code there. You should see different results if you run the output executable directly (./ifunc-test) and run it through the qemu-aarch64 tool, which will emulate SVE2 capabilities (qemu-aarch64 ./ifunc-test). Make sure you understand how the code works.
Reading/Resources
Week 9 Deliverables
- Investigate the iFunc example code
- Blog about your investigation