SPO600 Algorithm Selection Lab

[[Category:SPO600 Labs]]{{Admon/lab|Purpose of this Lab|In this lab, you will investigate the impact of different algorithms which produce the same effect. You will test and select one of three algorithms for adjusting the volume of PCM audio samples based on benchmarking.}}
== Lab 6 ==
=== Background ===
* Digital sound is typically represented, uncompressed, as signed 16-bit integer signal samples. There are two streams of samples, one each for the left and right stereo channels, at typical sample rates of 44.1 or 48 thousand samples per second per channel, for a total of 88.2 or 96 thousand samples per second (kHz). Since there are 16 bits (2 bytes) per sample, the data rate is 88.2 * 1000 * 2 = 176,400 bytes/second (~172 KiB/sec) or 96 * 1000 * 2 = 192,000 bytes/second (~187.5 KiB/sec).
* To change the volume of sound, each sample can be scaled (multiplied) by a volume factor, in the range of 0.00 (silence) to 1.00 (full volume).
* On a mobile device, the amount of processing required to scale sound will affect battery life.
=== Three Approaches ===
Three approaches to this problem are provided:
# The basic or Naive algorithm (<code>vol1.c</code>). This approach multiplies each sound sample by 0.75, casting from signed 16-bit integer to floating point and back again. Casting between integer and floating point can be [[Expensive|expensive]] operations.
# A lookup-based algorithm (<code>vol2.c</code>). This approach uses a pre-calculated table of all 65536 possible results, and looks up each sample in that table instead of multiplying.
# A fixed-point algorithm (<code>vol3.c</code>). This approach uses fixed-point math and bit shifting to perform the multiplication without using floating-point math.
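The sketch below is ''not'' the provided <code>vol1.c</code>/<code>vol2.c</code>/<code>vol3.c</code> source -- the function names and the 256-based fixed-point scale are illustrative assumptions -- but it shows roughly what each of the three approaches looks like in C:

<pre>
#include <stdint.h>

#define VOLUME 0.75f   /* volume scaling factor used throughout this lab */

/* 1. Naive: cast each sample to floating point, multiply, cast back. */
static int16_t scale_naive(int16_t sample) {
    return (int16_t)(sample * VOLUME);
}

/* 2. Lookup: pre-calculate all 65536 possible results once, then
 *    scaling each sample is a single table lookup.                 */
static int16_t lookup[65536];

static void build_lookup(void) {
    for (int32_t s = -32768; s <= 32767; s++)
        lookup[(uint16_t)s] = (int16_t)(s * VOLUME);
}

static int16_t scale_lookup(int16_t sample) {
    return lookup[(uint16_t)sample];
}

/* 3. Fixed-point: represent 0.75 as the integer 192 over 256 (0b11000000),
 *    multiply, then shift right 8 bits to divide by 256.                   */
static int16_t scale_fixed(int16_t sample) {
    return (int16_t)((sample * 192) >> 8);
}
</pre>

Note that the lookup approach trades memory for computation: the table occupies 65536 × 2 bytes = 128 KiB. The right shift in the fixed-point version also rounds negative values differently than the float cast, which is one possible source of small differences between the algorithms' outputs.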
=== Don't Compare Across Machines ===
In this lab, ''do not'' compare the relative performance across different machines, because the systems provided have a wide range of processor implementations, from server-class to mobile-class. However, ''do'' compare the relative performance of the various algorithms on the ''same machine''.

=== Benchmarking ===
Get the files for this lab from one of the [[SPO600 Servers]] -- but you can perform the lab wherever you want (feel free to use your laptop or home system). Test on both an x86_64 and an AArch64 system.
Review the contents of this archive:
* <code>vol.h</code> controls the number of samples to be processed
* <code>vol1.c</code>, <code>vol2.c</code>, and <code>vol3.c</code> implement the various algorithms
* The <code>Makefile</code> can be used to build the programs
Perform these steps:
# Unpack the archive <code>/public/spo600-algorithm-selection-lab.tgz</code>
# Study each of the source code files and make sure that you understand what the code is doing.
# '''Make a prediction''' of the relative performance of each scaling algorithm.
# Build and test each of the programs.
#* Do all of the algorithms produce the same output?
#** How can you verify this?
#** If there is a difference, is it significant enough to matter?
#* Change the number of samples so that each program takes a reasonable amount of time to execute (suggested minimum 20 seconds, 1 minute or more is better).
# Test the performance of each program.
#* Find a way to measure performance ''without'' the time taken to perform the pre-processing (generating the samples) and post-processing (summing the results), so that you can measure ''only'' the time taken to scale the samples. '''This is the hard part!''' (See the timing sketch after this list.)
#* How much time is spent scaling the sound samples?
#* Do multiple runs take the same time? How much variation do you observe? What is the likely cause of this variation?
#* Is there any difference in the results produced by the various algorithms?
#* Does the difference between the algorithms vary depending on the architecture and implementation on which you test?
#* What is the relative memory usage of each program?
# Was your prediction accurate?
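As a rough illustration of the timing step (a sketch only, not the provided code -- the sample count, the <code>scale_samples()</code> stand-in, and the use of <code>clock_gettime()</code> are assumptions), the idea is to start the clock after the samples have been generated and stop it before the results are summed:

<pre>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SAMPLES 100000000L           /* illustrative count; the lab sets this in vol.h */

/* Stand-in for whichever scaling algorithm is under test. */
static void scale_samples(int16_t *out, const int16_t *in, long count) {
    for (long i = 0; i < count; i++)
        out[i] = (int16_t)(in[i] * 0.75f);
}

int main(void) {
    int16_t *in  = calloc(SAMPLES, sizeof(int16_t));
    int16_t *out = calloc(SAMPLES, sizeof(int16_t));
    if (in == NULL || out == NULL)
        return 1;

    /* Setup (not timed): generate pseudo-random "sound samples". */
    srand(1);
    for (long i = 0; i < SAMPLES; i++)
        in[i] = (int16_t)((rand() % 65536) - 32768);

    /* Timed region: only the scaling work. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    scale_samples(out, in, SAMPLES);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Teardown (not timed): use the output so the optimizer keeps the loop. */
    long long sum = 0;
    for (long i = 0; i < SAMPLES; i++)
        sum += out[i];

    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("sum = %lld, scaling time = %.6f s\n", sum, seconds);

    free(in);
    free(out);
    return 0;
}
</pre>

The <code>clock_gettime()</code> calls are only one option; you may instrument the provided programs in whatever way you prefer, as long as the setup and teardown work stays outside the measured region.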
=== Deliverables ===
Blog about your experiments with a detailed analysis of your results, including memory usage, time performance, accuracy, and trade-offs.
Make sure you convincingly prove your results to your reader! Also be sure to explain what you're doing so that a reader coming across your blog post understands the context (in other words, don't just jump into a discussion of optimization results -- give your post some context).
'''Optional - Recommended:''' Compare results across several '''implementations''' of AArch64 and x86_64 systems. Note that on different CPU implementations, the relative performance of different algorithms will vary; for example, table lookup may outperform other algorithms on a system with a fast memory system (cache), but not on a system with a slower memory system.
* For AArch64, you could compare the performance of AArchie against the various class servers, or between the class servers and a Raspberry Pi 3 (in 64-bit mode) or an ARM Chromebook.
* For x86_64, you could compare the performance of different processors, such as xerxes, your own laptop or desktop, and Seneca systems such as Matrix, Zenit, or lab desktops.
=== Things to consider ===
==== Design of Your Tests ====
* Most solutions for a problem of this type involve generating a large amount of data in an array, processing that array using the function being evaluated, and then storing that data back into an array. The test setup can take more time than the actual test! Make sure that you measure the time taken in the code under test only -- you need to be able to remove the rest of the processing time from your evaluation.
* You may need to run a very large amount of sample data through the function to be able to detect its performance. Feel free to edit the sample count in <code>vol.h</code> as necessary.
* If you do not use the output from your calculation (for example, by doing something with the output array), the compiler may recognize that the code has no effect and remove the code you're trying to test. Be sure to process the results in some way so that the optimizer preserves the code you want to test. It is a good idea to calculate some sort of verification value to ensure that all of the approaches generate the same results (see the sketch after this list).
* Be aware of what other tasks the system is handling during your test run, including software running on behalf of other users.
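For example, a simple reduction such as the hypothetical <code>checksum()</code> below both forces the compiler to keep the scaling loop and gives a quick (though not airtight) way to check that two algorithms produce the same output:

<pre>
#include <stdint.h>

/* Sum all scaled samples into one value. Printing or comparing this value
 * means the output array is actually used, so the optimizer cannot discard
 * the scaling code; matching sums from two algorithms are a quick sanity
 * check that they agree. */
long long checksum(const int16_t *samples, long count) {
    long long sum = 0;
    for (long i = 0; i < count; i++)
        sum += samples[i];
    return sum;
}
</pre>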
=== Tips ===
{{Admon/tip|Analysis|Do a thorough analysis of the results. Be certain (and prove!) that your performance measurement ''does not'' include the generation or summarization of the test data. Do multiple runs and discard the outliers. Decide whether to use mean, minimum, or maximum time values from the multiple runs, and explain why you made that decision. Control your variables well. Show relative performance as percentage change, e.g., "this approach was NN% faster than that approach".}}
{{Admon/tip|Non-Decimal Notation|In this lab, the number prefix 0x indicates a hexadecimal number, and 0b indicates a binary number, in harmony with the C language.}}
{{Admon/tip|Time and Memory Usage of a Program|You can get basic timing information for a program by running <code>time ''programName''</code> -- the output will show the total time taken (real), the amount of CPU time used to run the application (user), and the amount of CPU time used by the operating system on behalf of the application (system). The version of the <code>time</code> command located in <code>/bin/time</code> gives slightly different information than the version built in to bash -- including maximum resident memory usage: <code>/bin/time ''./programName''</code>}}
{{Admon/tip|SOX|If you want to try this with actual sound samples, you can convert a sound file of your choice to raw 16-bit signed integer PCM data using the [http://sox.sourceforge.net/ sox] utility present on most Linux systems and available for a wide range of platforms.}}
{{Admon/tip|stdint.h|The <code>stdint.h</code> header provides definitions for many specialized integer size types. Use <code>int16_t</code> for 16-bit signed integers.}}
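As a small, hypothetical usage example:

<pre>
#include <stdint.h>
#include <stdlib.h>

#define SAMPLES 500000   /* illustrative sample count */

int main(void) {
    /* int16_t is exactly 16 bits, matching the PCM sample format; allocating
     * with calloc() keeps a large buffer off the limited stack space.        */
    int16_t *samples = calloc(SAMPLES, sizeof(int16_t));
    if (samples == NULL)
        return 1;

    /* ... fill, scale, and sum the samples ... */

    free(samples);
    return 0;
}
</pre>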
{{Admon/tip|Scripting|Use bash scripting capabilities to reduce tedious manual steps!}}
