
Revision as of 20:25, 7 February 2013

Team Sonic


Members

  1. Prasanth Vaaheeswaran
  2. Daniel Lev
  3. Leo Turalba

About

Progress

Assignment 1

Daniel

Topic

I'm interested in profiling and parallelizing an algorithm that finds the average value of an array of integral values. Although it might not sound exciting to some, I think it's a great starting point.

Updates

Feb 6: I created a program that does what I specified. I profiled it, and the profile showed that my application spends most of its time in the init_and_avg() function. That function has a loop that can be parallelized because each iteration is independent.
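As a sketch of what such a routine might look like (the name init_and_avg comes from the profile above; the body below is my own guess, not Daniel's actual code), note how each iteration touches only its own element:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical reconstruction of the routine named in the profile:
// fill an array of integral values, then compute their average. Each loop
// iteration reads and writes only element i, so the iterations are
// independent and the loop can be parallelized (e.g., with OpenMP or CUDA).
double init_and_avg(std::size_t n)
{
    std::vector<int> data(n);
    long long sum = 0;
    for (std::size_t i = 0; i < n; ++i) {
        data[i] = static_cast<int>(i % 100); // sample initialization
        sum += data[i];
    }
    return static_cast<double>(sum) / static_cast<double>(n);
}
```

With n = 100 the values are 0..99, so the average is exactly 49.5.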

Prasanth

Topic

Find an open-source tomography algorithm, profile it, then attempt to convert parts of the source code to take advantage of the GPU using CUDA.

Update
Project

I have found an open-source project called RabbitCT, which aims to support the creation of efficient algorithms for processing voxel data. The ultimate goal is to improve the performance of 3-D cone-beam reconstruction, which is used extensively in the medical field for cone-beam computed tomography. "Cone beam computed tomography (CT) is an emerging imaging technology. It provides all projections needed for three-dimensional (3D) reconstruction in a single spin of the X-ray source-detector pair. This facilitates fast, low-dose data acquisition, which is required for the imaging of rapidly moving objects, such as the human heart, as well as for intra-operative CT applications." [Klaus Mueller]

Profiling

After many, many(!) issues, I was able to profile the reconstruction algorithm on my laptop.

RabbitCT provides three files:

  1. RabbitCTRunner - the application that accepts the data-set and algorithm module and reports some basic metrics of algorithm performance (e.g., elapsed time)
  2. RabbitCT Dataset V2 - a platform-independent data-set of an actual C-arm scan of a rabbit (~2.5 GB for the "small" version)
  3. LolaBunny - a basic, non-optimized reference algorithm module (a shared library)

Since the main processing is offloaded to a shared library (or DLL on Windows), I can't simply compile and run gprof on the main application (RabbitCTRunner). Instead, I had to set some special environment variables in the shell and use sprof, because gprof cannot profile shared libraries (I found this out the hard, long way). After many failed attempts, this method finally yielded the results I needed.

Steps to reproduce:

  1. Run cmake and build the RabbitCTRunner executable and the shared library LolaBunny
  2. $ LD_PROFILE=libLolaBunny.so ./RabbitCTRunner [LOCATION OF MODULE] [LOCATION OF DATA-SET] [REPORT FILE] [VOLUME SIZE]
    Example: LD_PROFILE=libLolaBunny.so ./RabbitCTRunner ../modules/LolaBunny/libLolaBunny.so ~/datasets/rabbitct_512-v2.rctd ./resultFile 128
  3. $ sprof -p [LOCATION OF MODULE] [LOCATION OF PROFILE FILE] > log
    Example: sprof -p ../modules/LolaBunny/libLolaBunny.so /var/tmp/libLolaBunny.so.profile > log
    Note: /var/tmp/ was my default location for profiles. See man ld.so for LD_PROFILE and LD_PROFILE_OUTPUT

You should now have a log file in your current directory, which contains a flat profile of the module:

[prasanth@localhost RabbitCTRunner]$ cat log
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  us/call  us/call  name
100.00    112.58   112.58        0     0.00           RCTAlgorithmBackprojection

This is the output from RabbitCTRunner:

[prasanth@localhost RabbitCTRunner]$ LD_PROFILE=libLolaBunny.so ./RabbitCTRunner ../modules/LolaBunny/libLolaBunny.so ~/datasets/rabbitct_512-v2.rctd ./resultFile 128
RabbitCT runner http://www.rabbitct.com/
Info: using 4 buffer subsets with 240 projections each.
Running ... this may take some time.
  (\_/)
 (='.'=)
 (")_(")

--------------------------------------------------------------
Quality of reconstructed volume:
Root Mean Squared Error: 38914.3 HU
Mean Squared Error: 1.51433e+09 HU^2
Max. Absolute Error: 65535 HU
PSNR: -19.5571 dB

--------------------------------------------------------------
Runtime statistics:
Total:   112.627 s
Average: 117.32 ms

Summary

With the flat-profile data, I can say with some certainty that 100% of the time is spent in the 'RCTAlgorithmBackprojection' function. Digging into the source code, this is the body of that function:

LolaBunny.cpp

FNCSIGN bool RCTAlgorithmBackprojection(RabbitCtGlobalData* r)
{
    unsigned int L   = r->L;    // volume edge length in voxels
    float        O_L = r->O_L;  // volume origin offset
    float        R_L = r->R_L;  // voxel spacing
    double*      A_n = r->A_n;  // 3x4 projection matrix for this projection
    float*       I_n = r->I_n;  // current projection image
    float*       f_L = r->f_L;  // reconstructed volume
 
    s_rcgd = r;
 
    // Walk every voxel (i, j, k) of the L x L x L volume.
    for (unsigned int k=0; k<L; k++)
    {
        double z = O_L + (double)k * R_L;
        for (unsigned int j=0; j<L; j++)
        {
            double y = O_L + (double)j * R_L;
            for (unsigned int i=0; i<L; i++)
            {
                double x = O_L + (double)i * R_L;
 
                // Project the voxel onto the detector: w_n is the homogeneous
                // coordinate, (u_n, v_n) the detector position.
                double w_n =  A_n[2] * x + A_n[5] * y + A_n[8] * z + A_n[11];
                double u_n = (A_n[0] * x + A_n[3] * y + A_n[6] * z + A_n[9] ) / w_n;
                double v_n = (A_n[1] * x + A_n[4] * y + A_n[7] * z + A_n[10]) / w_n;
 
                // Accumulate the distance-weighted, interpolated projection value.
                f_L[k * L * L + j * L + i] += (float)(1.0 / (w_n * w_n) * p_hat_n(u_n, v_n));
            }
        }
    }
 
    return true;
}

We can see a triply nested for loop. In Big-O notation, the order of growth of this function is O(N³) in the volume edge length. It also uses double-precision arithmetic and matrix-vector multiplications, so I think this code can be optimized using CUDA. These results were calculated on a:

Lenovo T400 laptop
Intel® Core™2 Duo CPU P8600 @ 2.40GHz × 2
4GB 1066 MHz Memory
Fedora Release 18 (Spherical Cow) 64-bit 
Kernel: 3.6.10-4.fc18.x86_64

I'm sure running this on our GTX 480 will yield better results (hopefully).
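Because each iteration of the triple loop writes a distinct f_L element, the loop maps cleanly onto a one-thread-per-voxel GPU decomposition. A minimal plain-C++ sketch of the index arithmetic such a CUDA kernel would use (the helper name and struct are my own, hypothetical, not RabbitCT code):

```cpp
// Hypothetical illustration: the volume is already addressed by a single flat
// index, f_L[k*L*L + j*L + i], so a 1-D grid of L*L*L threads can each recover
// its own (i, j, k) from a global thread index like this:
struct VoxelIdx { unsigned i, j, k; };

VoxelIdx unflatten(unsigned idx, unsigned L)
{
    VoxelIdx v;
    v.i = idx % L;        // fastest-varying axis, as in the inner loop
    v.j = (idx / L) % L;
    v.k = idx / (L * L);  // slowest-varying axis, as in the outer loop
    return v;
}
```

In an actual kernel, idx would come from blockIdx.x * blockDim.x + threadIdx.x, and each thread would perform one voxel's projection and accumulation.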

Leo

Topic

I'm looking at fractal maps right now: the Julia and Mandelbrot sets. I'm trying to figure out how to do the Julia set, because one of last semester's assignments was the Mandelbrot set.

Updates

Assignment 2

Assignment 3