Will it be GPU or CPU? That is the question.

Posted on: September 18, 2007

Baljinder and Sachin have come up with a brand-new, innovative idea; now it is up to testing and proofs of concept (PoCs) to decide its fate…

Long story short: two of our teammates have found a way to do complex mathematical computation in much less time and with far better efficiency than the traditional approach. They discovered a better computational core for massive floating-point matrix multiplication, namely a graphics card's core, which cuts computation time many-fold. Anyhow, it's me, a UI guy, briefing you on this complex topic; I think you will understand it better after reading Baljinder's mail.


We have very exciting results to share with you! Sachin and I have been doing some PoCs with CUDA on an NVIDIA GeForce 8600 GT for matrix multiplication, and we have got very impressive results.

The stage was set when Sachin ran a full profile and found the following stats:

  • The whole ADE test run took ~20 secs, of which around 16–17 secs were spent in matrix multiplication, mainly in the function ‘f_Proj()’, which took ~69% of this time. There we multiply two matrices of size 2048 × 400 and 400 × 400 for B/P/F, and two of size 2048 × 1600 and 1600 × 1600 for entropy.
  • Each entropy matrix multiplication took ~6 secs in ‘findProjections()’.

Having figured out that the pain points were actually the matrix multiplications, we started investigating better ways to multiply matrices. We found that the CUDA SDK provided by NVIDIA looked like a promising candidate, since it is highly optimized for floating-point operations. And guess what: the results we got from preliminary tests are extremely encouraging!

  • The same entropy multiplication that takes ~6 secs on the CPU took roughly 30 milliseconds on the GPU. An impressive gain of 6/0.03 = 200 times! And that too when we used a very naive, unoptimized way of multiplying matrices (obviously, open_cv uses a much better algorithm taken from BLAS) 🙂

It is worth noting that the GPU we used for these tests is quite mediocre by computing standards, and we expect the results to improve further once we use high-end cards like the NVIDIA GeForce 8800 Ultra.

We’re very excited to see such a gain 🙂

Baljinder & Sachin

It's as though we have found Kryptonite in the form of a graphics card. I had no idea graphics cards could be used for such things… all I ever used my graphics card for was playing games. We are not far from reaching the moon now 😉. Good going, Baljinder and Sachin; Guavus needs people who come up with such innovative ideas…

P.S. I hope you guys don’t remove the concept of the CPU altogether 😉


