First, sorry for my English. I am currently testing BackTrack 5 R2 on my new laptop, which has an Nvidia GeForce 610M graphics card. Since I want to test GPU acceleration, I installed the Nvidia drivers (downloaded from the Nvidia website) and the CUDA toolkit, and then the bumblebee and bumblebee-nvidia packages. I think the installation was done correctly (for information, I leave xorg.conf empty when I run startx, because I get a "screen is not found" error when nvidia-xconfig writes it).
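For reference, the installation went roughly like this (from memory; the installer file names and repository setup below are only placeholders, not the exact ones I used):

# Nvidia driver from the .run installer downloaded on the Nvidia website
sh NVIDIA-Linux-x86_64-*.run
# CUDA toolkit .run installer
sh cudatoolkit_*.run
# bumblebee packages (after adding the bumblebee repository)
apt-get install bumblebee bumblebee-nvidia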
Here is some sample output:
root@self:~# cat /proc/acpi/bbswitch
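If it helps, I can also post the output of a quick module check, something like:

root@self:~# lsmod | grep -E 'nvidia|bbswitch'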
root@self:~# pyrit benchmark
Pyrit 0.4.1-dev (svn r308) (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
This code is distributed under the GNU General Public License v3+
Running benchmark (3920.8 PMKs/s)... |
Computed 3920.77 PMKs/s total.
#1: 'CUDA-Device #1 'GeForce 610M'': 3033.5 PMKs/s (RTT 2.8)
#2: 'CPU-Core (SSE2/AES)': 163.9 PMKs/s (RTT 3.0)
#3: 'CPU-Core (SSE2/AES)': 160.9 PMKs/s (RTT 3.0)
#4: 'CPU-Core (SSE2/AES)': 159.9 PMKs/s (RTT 3.1)
#5: 'CPU-Core (SSE2/AES)': 158.5 PMKs/s (RTT 3.0)
#6: 'CPU-Core (SSE2/AES)': 161.2 PMKs/s (RTT 3.3)
#7: 'CPU-Core (SSE2/AES)': 159.0 PMKs/s (RTT 3.0)
#8: 'CPU-Core (SSE2/AES)': 161.9 PMKs/s (RTT 3.0)
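I can also post the full list of cores that pyrit detects if that is useful:

root@self:~# pyrit list_cores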
At first sight, the Nvidia card seems to be correctly recognized and used by the system. However, something troubles me:
root@self:/pentest/passwords/oclhashcat+# ./cudaHashcat-plus64.bin ~/hash.txt ~/Desktop/HuegelCDC-clean.lst
cudaHashcat-plus v0.07 by atom starting...
Unique digests: 1
Bitmaps: 8 bits, 256 entries, 0x000000ff mask, 1024 bytes
Password lengths range: 1 - 15
Platform: NVidia compatible platform found
Watchdog: Temperature limit set to 90c
Device #1: GeForce 610M, 2047MB, 1700Mhz, 1MCU
Device #1: Allocating 1MB host-memory
Device #1: Kernel ./kernels/4318/m0000_a0.sm_21.64.cubin
Speed........: 2392.4k c/s Real, 2102.2k c/s GPU
Here again, the Nvidia card seems to be correctly recognized, but I think there is a problem, because glxspheres literally runs in slow motion (only about 17 frames/sec). When I launch "optirun glxspheres" with bumblebee under Debian Squeeze (on the same laptop), the animation is much smoother and faster. Here is the output, followed by a check I was thinking of running:
root@self:~# optirun glxspheres
Polygons in scene: 62464
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: GeForce 610M/PCIe/SSE2
16.708605 frames/sec - 14.316601 Mpixels/sec
16.602246 frames/sec - 14.225468 Mpixels/sec
17.418694 frames/sec - 14.925034 Mpixels/sec
17.051183 frames/sec - 14.610135 Mpixels/sec
16.977838 frames/sec - 14.547291 Mpixels/sec
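To check whether rendering really goes through the Nvidia card, I was thinking of comparing the OpenGL renderer string with and without optirun (just a sketch, I have not captured this output yet):

root@self:~# glxinfo | grep -i "opengl renderer"
root@self:~# optirun glxinfo | grep -i "opengl renderer"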
In addition, there is no performance difference between the plain John and the John "jumbo" benchmarks (john --test), so I think the GPU is not used by john jumbo.
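Is there a way to confirm that the jumbo build actually has GPU support compiled in? I was thinking of benchmarking a CUDA format directly from the john directory, something like the line below (wpapsk-cuda is only a guess, that format name may not exist in this build):

./john --test --format=wpapsk-cuda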
So here are my questions: is this a limitation of Optimus? What about John? And most importantly, what am I doing wrong?
Thanks for your help,
Have a nice day.