Winter 2017 SPO600 Weekly Schedule
Revision as of 23:47, 20 February 2017
This is the schedule and main index page for the SPO600 Software Portability and Optimization course for Winter 2017.
- Previous semester: Winter 2016 SPO600 Weekly Schedule.
Schedule Summary Table
This is a summary/index table. Please follow the links in each cell for additional detail which will be added below as the course proceeds -- especially for the Deliverables column.
Evaluation
| Category | Percentage | Evaluation Dates |
|---|---|---|
| Communication | 20% | 5% each: End of January, end of February, end of March, end of course. |
| Quizzes | 10% | May be held during any class, usually at the start of class. A minimum of 5 one-page quizzes will be given. No make-up/retake option is offered if you miss a quiz. Lowest 3 scores will not be counted. |
| Labs | 10% | See deliverables column above. All labs must be submitted by April 21. |
| Project work | 60% | 3 stages: 15% (TBA) / 20% (TBA) / 25% (TBA) |
Week 1
Week 1 - Class I
Introduction to the Problems
Porting and Portability
- Most software is written in a high-level language which can be compiled into machine code for a specific computer architecture. In many cases, this code can be compiled for multiple architectures. However, there is a lot of existing code that contains some architecture-specific code fragments written in Assembly Language (or, in some cases, machine-specific high-level code).
- Reasons for writing code in Assembly Language include:
- Performance
- Atomic Operations
- Direct access to hardware features, e.g., CPUID registers
- Most of the historical reasons for including assembler are no longer valid. Modern compilers can outperform most hand-optimized assembly code, atomic operations can be handled by libraries or compiler intrinsics (see the sketch after this list), and most hardware access should be performed through the operating system or appropriate libraries.
- A new architecture has appeared: AArch64, which is part of ARMv8. This is the first new computer architecture to appear in several years (at least, the first mainstream computer architecture).
- At this point, most key open source software (the software typically present in a Linux distribution such as Ubuntu or Fedora, for example) now runs on AArch64. However, it may not run as well as on older architectures (such as x86_64).
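As a small illustration of the point above about intrinsics replacing hand-written assembly, the sketch below uses C11 atomics to increment a shared counter safely; the compiler chooses suitable instructions for the target architecture. The thread count and iteration count are arbitrary values chosen for this example.

```c
/* Minimal sketch: an atomic counter using C11 <stdatomic.h> rather than
 * hand-written assembly. The compiler emits appropriate instructions for
 * the target (e.g., LDADD or LDXR/STXR on AArch64, LOCK XADD on x86_64). */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_long counter;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);   /* one indivisible read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", (long)counter);   /* always 400000 */
    return 0;
}
```

Build with gcc -pthread; without the atomic operation, the final total would depend on how the threads happened to interleave.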
Benchmarking and Profiling
Benchmarking involves testing software performance under controlled conditions so that the performance can be compared to other software, to the same software running on other types of computers, or to the same software before and after a change, so that the impact of that change can be gauged.
Profiling is the process of analyzing software performance on a finer scale, determining resource usage per program part (typically per function or method). This can identify software bottlenecks and potential targets for optimization.
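For example, a very small benchmark might simply time a repeatable workload. The sketch below is illustrative only -- the work() function and its loop count are placeholders, not part of any lab -- and measures elapsed wall-clock time with clock_gettime():

```c
/* Minimal benchmarking sketch: time a workload with clock_gettime().
 * work() and its iteration count stand in for a real workload. */
#include <stdio.h>
#include <time.h>

static volatile long sink;   /* keeps the compiler from optimizing work() away */

static void work(void)
{
    long total = 0;
    for (long i = 0; i < 10000000L; i++)
        total += i;
    sink = total;
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    work();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.6f s\n", elapsed);
    return 0;
}
```

For profiling, per-function resource usage can be gathered with tools such as gprof (build with -pg) or perf.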
Optimization
Optimization is the process of evaluating different ways that software can be written or built and selecting the option that has the best performance tradeoffs.
Optimization may involve substituting software algorithms, altering the sequence of operations, using architecture-specific code, or altering the build process. It is important to ensure that the optimized software produces correct results and does not cause an unacceptable performance regression for other use-cases, system configurations, operating systems, or architectures.
The definition of "performance" varies according to the target system and the operating goals. For example, in some contexts, low memory or storage usage is important; in other cases, fast operation; and in other cases, low CPU utilization or long battery life may be the most important factor. It is often possible to trade off performance in one area for another; using a lookup table, for example, can reduce CPU utilization and improve battery life in some algorithms, in return for increased memory consumption.
Most advanced compilers perform some level of optimization, and the options selected for compilation can have a significant effect on the trade-offs made by the compiler, affecting memory usage, execution speed, executable size, power consumption, and debuggability.
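As a concrete, hypothetical illustration of the lookup-table trade-off described above, the sketch below counts the set bits in a byte two ways: a loop that uses no extra memory, and a 256-byte table that answers with a single array access. (GCC also provides __builtin_popcount() for this particular task; the example only shows the memory-for-computation trade.)

```c
/* Illustrative optimization trade-off: count the set bits in a byte.
 * The lookup table costs 256 bytes of memory but replaces a per-call
 * loop with a single array access. */
#include <stdio.h>

/* Straightforward version: loop over the bits. */
static int popcount_loop(unsigned char b)
{
    int count = 0;
    while (b) {
        count += b & 1;
        b >>= 1;
    }
    return count;
}

/* Table version: precompute the answer for every possible byte value. */
static unsigned char table[256];

static void build_table(void)
{
    for (int i = 0; i < 256; i++)
        table[i] = (unsigned char)popcount_loop((unsigned char)i);
}

int main(void)
{
    build_table();
    printf("%d %d\n", popcount_loop(0xF3), table[0xF3]);   /* both print 6 */
    return 0;
}
```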
Build Process
Building software is a complex task that many developers gloss over. The simple act of compiling a program invokes a process with five or more stages, including preprocessing, compiling, optimizing, assembling, and linking. However, a complex software system will have hundreds or even thousands of source files, as well as dozens or hundreds of build configuration options, auto-configuration scripts (cmake, autotools), build scripts (such as Makefiles) to coordinate the process, test suites, and more.
The build process varies significantly between software packages. Most software distribution projects (including Linux distributions such as Ubuntu and Fedora) use a packaging system that further wraps the build process in a standardized script format, so that different software packages can be built using a consistent process.
In order to get consistent and comparable benchmark results, you need to ensure that the software is being built in a consistent way. Altering the build process is one way of optimizing software.
Note that the build time for a complex package can range up to hours or even days!
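The stages mentioned above can be observed on a single source file. The sketch below is a trivial C program whose comments list the GCC options that stop after each stage; it is a one-file illustration only, not a description of any particular package's build system.

```c
/* hello.c -- a one-file example for observing the stages of a build.
 *
 *   gcc -E hello.c -o hello.i    # preprocess only (expand #include and macros)
 *   gcc -S hello.i -o hello.s    # compile to assembly (optimization happens here)
 *   gcc -c hello.s -o hello.o    # assemble into an object file
 *   gcc hello.o -o hello         # link against the C library
 *
 * A plain "gcc hello.c -o hello" runs all of these stages in sequence.
 */
#include <stdio.h>

int main(void)
{
    printf("Hello, world\n");
    return 0;
}
```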
General Course Information
- Course resources are linked from the CDOT wiki, starting at http://zenit.senecac.on.ca/wiki/index.php/SPO600 (Quick find: This page will usually be Google's top result for a search on "SPO600").
- Coursework is submitted by blogging.
- Quizzes will be short (1 page) and will be held without announcement at any time. Your lowest three quiz scores will not be counted, so do not worry if you miss one or two.
- Course marks (see Weekly Schedule for dates):
- 60% - Project Deliverables
- 20% - Communication (Blog and Wiki writing)
- 20% - Labs and Quizzes (10% labs - completed/not completed; 10% for quizzes - lowest 3 scores not counted)
- All classes will be held in an Active Learning Classroom -- you are encouraged to bring your own laptop to class. If you do not have a laptop, consider signing one out of the Learning Commons for class, or using a smartphone with an HDMI adapter.
- For more course information, refer to the SPO600 Weekly Schedule (this page), the Course Outline, and SPO600 Course Policies.
- Optional: You can participate in the Linaro Code Porting/Optimization contest. For details, see the YouTube video of Jon "maddog" Hall and Steve Mcintyre at Linaro Connect USA 2013.
Discussion of how open source communities work
- Background for the Code Review Lab (Lab 1).
Week 1 - Class II
- Overview of the Build and Release Process
- Working with Code
- Getting Code
- In a tarball
- From git
- Git basics
- Working with other version control systems
- Getting and Installing Build Dependencies
- Required tools
- Required libraries, headers, and modules
- Building the Code
- Configuration tools (autotools, cmake)
- Make
- The compiler toolchain
- Preprocessor
- Compiler
- Assembler
- Linker
- Debug vs. Non-debug/Stripped binaries
- Installation Scripts
- Looking at How Distributions Package the Code
- Using fedpkg
- Code Building Lab (Lab 2) as homework
Week 1 Deliverables
- Course setup:
- Set up your SPO600 Communication Tools - in particular, set up a blog and add it to Planet CDOT (via the Planet CDOT Feed List).
- Add yourself to the Winter 2017 SPO600 Participants page (leave the projects columns blank).
- Generate a pair of keys for SSH and email the public key to your professor.
- Sign and return the Open Source Professional Option Student Agreement.
- Complete Labs
- Code Review Lab (Lab 1) (Due end of week 2)
- Code Building Lab (Lab 2) (Due end of week 2)
- Optional (recommended): Set up a personal Fedora system.
- Optional: Purchase an AArch64 development board (such as a 96Boards HiKey or Raspberry Pi 3).
Week 2
Week 2 - Class I
- Computer Architecture overview (see also the Computer Architecture Category)
Week 2 - Class II
- Assembly Language Lab (Lab 3)
Week 2 Deliverables
- Blog your conclusion to the Code Review Lab (Lab 1)
Week 3
Week 3 - Class I
- Continue group work on Lab 3.
Week 3 - Class II
- SPO600 Compiled C Lab (Lab 4)
Week 3 Deliverables
Week 4
Week 4 - Class I
Software Optimization
- Compiler Optimizations
- Profile Guided Optimization
- Algorithm Selection
Week 4 - Class II
- SPO600 Algorithm Selection Lab (Lab 5)
Week 4 Deliverables
- Blog about your Lab 5 results.
Week 5
Week 5 - Class I
- Finish the Algorithm Selection Lab
Week 5 - Class II
- Introduction to Vector Processing/SIMD
- Vectorization Lab (Lab 6)
Week 5 Deliverables
- Blog your results for the Algorithm Selection Lab (Lab 5)
- Blog your results for the Vectorization Lab (Lab 6)
- For each of the above, be sure to include links to your code, detailed results, and your reflection on the lab.
Week 6
Week 6 - Class I
- Inline Assembly Language -- often used for (see the sketch after this list):
- Implementing a memory barrier
- Performing an Atomic Operation
- Atomics are operations which must be completed in a single step (or appear to be completed in a single step) without potential interruption.
- Wikipedia has a good basic overview of the need for atomicity in the article on Linearizability (http://en.wikipedia.org/wiki/Linearizability)
- Gaining performance (by accessing processor features not exposed by the high-level language being used (C, C++, ...))
- Inline Assembler Lab (Lab 7)
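As a hedged illustration of the first two uses listed above (the lab covers the details), the sketch below shows GCC extended inline assembler for AArch64: a full memory barrier, and an atomic add built from a load-exclusive/store-exclusive retry loop. The function names are illustrative; in current code the same effects are normally obtained from compiler builtins such as __atomic_thread_fence() and __atomic_fetch_add().

```c
/* AArch64-only inline assembler sketches (GCC extended asm). */
#include <stdio.h>

/* Full memory barrier: "dmb ish" orders memory accesses across the inner
 * shareable domain; the "memory" clobber also stops the compiler from
 * reordering accesses around it. */
static inline void barrier(void)
{
    __asm__ __volatile__("dmb ish" ::: "memory");
}

/* Atomic add using a load-exclusive/store-exclusive retry loop. */
static inline void atomic_add(long *ptr, long value)
{
    long tmp;
    int  status;

    __asm__ __volatile__(
        "1: ldxr  %0, [%2]      \n"   /* load-exclusive the current value  */
        "   add   %0, %0, %3    \n"   /* add the increment                 */
        "   stxr  %w1, %0, [%2] \n"   /* try to store it back exclusively  */
        "   cbnz  %w1, 1b       \n"   /* retry if another core intervened  */
        : "=&r"(tmp), "=&r"(status)
        : "r"(ptr), "r"(value)
        : "memory");
}

int main(void)
{
    long counter = 0;
    atomic_add(&counter, 5);
    barrier();
    printf("counter = %ld\n", counter);   /* prints 5 */
    return 0;
}
```

This only assembles on an AArch64 target; on other architectures, the equivalent compiler builtins are the portable choice.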
Week 6 - Class II
- Inline Assembler Lab (Lab 7) continued...
Week 6 Deliverables
- Blog your Lab 7 results.
Week 7
Week 7 - Class I
Overview/Review of Processor Operation
- Fetch-decode-dispatch cycle
- Pipelining
- Branch Prediction
- In-order vs. Out-of-order execution
- Micro-ops
Memory Basics
- Organization of Memory
- Memory Speeds
- Cache
- Cache lookup
- Cache synchronization and invalidation
- Cache line size
- Prefetch
- Prefetch hinting (see the sketch after this list)
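One form of prefetch hinting available from C is GCC's __builtin_prefetch(), shown in the sketch below; the prefetch distance of 8 elements is an arbitrary illustration that would need tuning against the actual cache line size and workload.

```c
/* Prefetch-hinting sketch using GCC's __builtin_prefetch(). */
#include <stdio.h>
#include <stddef.h>

#define N 1000000

static long data[N];

static long sum_with_prefetch(const long *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 8 < n)
            __builtin_prefetch(&a[i + 8], 0, 1);   /* 0 = read, 1 = low temporal locality */
        total += a[i];
    }
    return total;
}

int main(void)
{
    for (size_t i = 0; i < N; i++)
        data[i] = (long)i;
    printf("sum = %ld\n", sum_with_prefetch(data, N));
    return 0;
}
```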
Memory Architecture
- Virtual Memory and Memory Management Units (MMUs)
- General principles of VM and operation of MMUs
- Memory protection (see the sketch after this list)
- Unmapped Regions
- Write Protection
- Execute Protection
- Privilege Levels
- Swapping
- Text sharing
- Data sharing
- Copy-on-Write (CoW)
- Demand Loading
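Several of these mechanisms can be observed from user space through the POSIX mmap()/mprotect() interface. The sketch below (a minimal example, assuming a Linux/POSIX system) maps an anonymous page read-write and then removes write permission, so that a later write would fault -- which is how the MMU's write protection appears to a program.

```c
/* Sketch of page-level memory protection using POSIX mmap()/mprotect(). */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);

    /* Map one anonymous page, readable and writable. */
    char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(page, "writable for now");

    /* Remove write permission: the page is now read-only. */
    if (mprotect(page, pagesize, PROT_READ) != 0) {
        perror("mprotect");
        return 1;
    }

    printf("%s\n", page);   /* reading is still allowed */
    /* page[0] = 'X';          a write here would now raise SIGSEGV */

    munmap(page, pagesize);
    return 0;
}
```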
Memory Barriers
Memory Barriers ensure that memory accesses are sequenced so that multiple threads, processes, cores, or I/O devices see a predictable view of memory. (A minimal acquire/release code sketch follows the reference list below.)
- Leif Lindholm provides an excellent explanation of memory barriers.
  - Blog series - I recommend this series, especially the introduction, as a very clear explanation of memory barrier issues.
    - Part 1 - Memory Access Ordering - An Introduction: http://community.arm.com/groups/processors/blog/2011/03/22/memory-access-ordering--an-introduction
    - Part 2 - Memory Access Ordering Part 2 - Barriers and the Linux Kernel: http://community.arm.com/groups/processors/blog/2011/04/11/memory-access-ordering-part-2--barriers-and-the-linux-kernel
    - Part 3 - Memory Access Ordering Part 3 - Memory Access Ordering in the ARM Architecture: http://community.arm.com/groups/processors/blog/2011/10/19/memory-access-ordering-part-3--memory-access-ordering-in-the-arm-architecture
  - Presentation at Embedded Linux Conference 2010 (note: Acquire/Release in C++11 and ARMv8 AArch64 appeared after this presentation):
    - Slides: http://elinux.org/images/f/fa/Software_implications_memory_systems.pdf
    - Video: http://free-electrons.com/pub/video/2010/elce/elce2010-lindholm-memory-450p.webm
- Memory Barriers - A Hardware View for Software Hackers (http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.07.23a.pdf) - a highly-rated paper that explains memory barrier issues; as the title suggests, it describes the hardware origin of the problem for software developers. Although it is an introduction to the topic, it is still very technical.
- ARM Technical Support Knowledge Article - "In what situations might I need to insert memory barrier instructions?" (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka14041.html) - note that ARMv8 AArch64 adds some additional mechanisms, including Acquire/Release.
- Kernel Documentation on Memory Barriers (https://www.kernel.org/doc/Documentation/memory-barriers.txt) - discusses the memory barrier issue generally and the solutions used within the Linux kernel; this is part of the kernel documentation.
- Acquire-Release mechanisms
  - MSDN Blog Post with a very clear explanation of Acquire-Release: http://blogs.msdn.com/b/oldnewthing/archive/2008/10/03/8969397.aspx
  - Preshing on Programming post with a good explanation: http://preshing.com/20130922/acquire-and-release-fences/
  - ARMv8 Instruction Set Architecture Manual (ARM InfoCentre registration required; see the section on Acquire/Release and Load/Store, especially Load/Store Exclusive, e.g., LDREX): http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.genc010197a/index.html
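To connect the Acquire/Release references above to code, here is a minimal C11 sketch of the pattern (illustrative only, not taken from any of the readings): the release store publishes the data written before it, and an acquire load that observes the flag is guaranteed to also see that data. On AArch64 these operations can map onto the LDAR/STLR instructions.

```c
/* Minimal acquire/release sketch using C11 atomics: the release store on
 * 'ready' publishes 'data'; an acquire load that sees ready == 1 is
 * guaranteed to also see data == 42. Build with gcc -pthread. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static int data;
static atomic_int ready;

static void *producer(void *arg)
{
    data = 42;                                               /* plain store */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* publish     */
    return NULL;
}

static void *consumer(void *arg)
{
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                                                    /* spin until published */
    printf("data = %d\n", data);                             /* always prints 42 */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```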
Week 7 - Class II
- Course Project
Week 7 Deliverables
- Blog your Lab 7 results, including the second part
- (To be announced: Project Deliverables)