Hello!


I'm not sure if this is strange, but I just assembled my new PC with three identical EVGA GTX 280 GPUs and decided to test how they perform at GPU computing in SLI mode. The results are:


Code:
 
root@bt:~# pyrit list_cores
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+

The following cores seem available...
#1:  'CUDA-Device #1 'GeForce GTX 280''
#2:  'CUDA-Device #2 'GeForce GTX 280''
#3:  'CUDA-Device #3 'GeForce GTX 280''
#4:  'CPU-Core (SSE2)'
Code:
 
root@bt:~# pyrit benchmark_long
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+

Running benchmark (28483.5 PMKs/s)... / 

Computed 28483.47 PMKs/s total.
#1: 'CUDA-Device #1 'GeForce GTX 280'': 12800.2 PMKs/s (RTT 3.0)
#2: 'CUDA-Device #2 'GeForce GTX 280'': 8116.3 PMKs/s (RTT 3.2)
#3: 'CUDA-Device #3 'GeForce GTX 280'': 7528.0 PMKs/s (RTT 3.2)
#4: 'CPU-Core (SSE2)': 793.2 PMKs/s (RTT 3.0)
My question is: is it normal that there is such a huge difference in performance between GPU 1 and GPUs 2 and 3? The difference is ~35-40%! Is that normal? All the hardware is brand new, and the GPUs only differ in temperature by about 3-4 degrees.
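
In case it's useful, here is a small diagnostic sketch (my own, not part of Pyrit) that I could run to compare what CUDA reports for each card's clock rate and multiprocessor count, to rule out one card running at different clocks. It assumes PyCUDA is installed on top of the CUDA toolkit:

Code:
 
#!/usr/bin/env python
# Hypothetical helper, not part of Pyrit: list per-device clock and SM count via PyCUDA.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    attrs = dev.get_attributes()
    clock_khz = attrs[cuda.device_attribute.CLOCK_RATE]           # core clock in kHz
    sm_count = attrs[cuda.device_attribute.MULTIPROCESSOR_COUNT]  # number of multiprocessors
    print("Device %d: %s, %d MHz, %d SMs" % (i, dev.name(), clock_khz // 1000, sm_count))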

I'm running a fully updated BackTrack 5 KDE x64 installed on my HDD. I used the tutorial on the wiki http://www.backtrack-linux.org/wiki/...A_On_BackTrack to install Pyrit, and I have the latest developer drivers and CUDA toolkit from nVidia.

All input is welcome! If you need any additional info, just let me know.