OK, I managed to get it working as well, albeit by a different route. This may be something those of you who are having problems want to try.
My original issue was that, following the original tutorial, I could never reconfigure xorg in a way that stopped it throwing fatal errors about the number of devices not matching the number of screens.
So I went to nvidia.com/object/cuda_get.html, grabbed the 32-bit version for Ubuntu 8.10, logged out of X, then ran
root@bt:/home# sh cudadriver_2.2_linux_32_185.18.08-beta.run
Followed the prompts in the Nvidia installer, restarted X, and was good to go.
python pyrit.py benchmark returns:
439.58 PMKs/s for CPU
1506.69 PMKs/s for Device GeForce GT 120M
So all seems well.
My system, if anyone cares, is an ASUS N81Vg: Intel Core 2 Duo P8600 (2.40 GHz), 4 GB memory, NVIDIA GeForce GT 120M.
If something about what I did is obviously wrong, please point it out so I don't trip others up, but the benchmark works, X starts, and Compiz still functions, so I haven't found anything wrong yet.
Please, could somebody figure out this issue? My system is a Gateway 7807 laptop with a 9800 GTS card. I installed the 185.18.08-bt5 driver and cpyrit-cuda, then tested how it works:
root@track-laptop:~# pyrit list_cores
The ESSID-blobspace seems to be empty; you should create an ESSID...
The following cores seem available...
#1: 'CPU-Core (x86)'
#2: 'CPU-Core (x86)'
So this driver doesn't work with this card?
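When pyrit lists only CPU cores like this, the CUDA side usually isn't active at all, i.e. the NVIDIA kernel driver never loaded. Two quick checks (these assume a standard NVIDIA driver install; the /proc file only exists while the NVIDIA kernel module is loaded):

```shell
# is the NVIDIA kernel module loaded?
lsmod | grep nvidia
# which driver version is actually in use?
cat /proc/driver/nvidia/version
```

If both come back empty, the driver install failed and pyrit can only fall back to CPU cores.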
last kernel 184.108.40.206 + apt-get install nvidia-drivers (190.18)
"pyrit list_cores" crashes X, and it is not possible to switch to an alternate console (Ctrl+Alt+n).
acer 5920g , 345abg , nvidia 8600m
bt5 kde 64bit + acpi + cuda 4.0 / nvidia 270.40 / pyrit
Assuming you have compiled the headers, replaced the symlink, and created Module.symvers:
The problem is that the nvidia-driver from the repo is not properly recognized after installation, causing pyrit to crash. You can check this in System > NVIDIA X Server Settings: "You do not appear to be using the NVIDIA X driver".
So, a workaround for this is to get the latest CUDA driver from the Nvidia site.
To complicate it just a bit more, nvidia-driver and cpyrit-cuda are mutually dependent (in the repo), so:
- If you remove nvidia-driver, then cpyrit-cuda is removed too.
- If you install cpyrit-cuda, then nvidia-driver is installed too.
But we only want cpyrit-cuda installed, not that tricky driver, so:
1. Download latest cuda driver from Nvidia site to /root
2. Close X session and stay at console
3. Install both pyrit and cpyrit-cuda (along with nvidia-driver) from repo
4. Overwrite nvidia-driver with latest downloaded beta cuda driver:
Say yes to everything; this is:
./cudadriver_2.3_linux_32_190.18.run
- Compile driver: Y
- Try to uninstall previous driver: Y
- Modify xorg.conf: Y
Then apt-get update and apt-get upgrade to fix broken packages, if any.
startx will now show the NVIDIA beta splash, and pyrit will hopefully work.
(At least it did for me)
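The four steps above, as a condensed console session (package and file names are the ones given in this post; the driver filename will differ for other releases, and this must be run from a text console with the X session closed):

```shell
# install pyrit and cpyrit-cuda from the repo
# (this pulls in the repo nvidia-driver too, which we'll overwrite next)
apt-get update
apt-get install pyrit cpyrit-cuda
# overwrite the repo driver with the downloaded beta CUDA driver,
# answering Yes to compiling, uninstalling the old driver and editing xorg.conf
cd /root
sh cudadriver_2.3_linux_32_190.18.run
# fix any broken packages afterwards
apt-get update && apt-get upgrade
```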
Edit 1. After reading the thread again, I noticed this is what monovitae already did.
Edit 2. This procedure is not supported by the RE team. It involves installing a beta driver, which may cause issues. So basically you're flying solo, and you most likely won't get support from the admins / dev team should something go wrong. The proper way is to ask the admins for a fix / howto, or just wait for them to update the repository.
If I wished to run 20 billion passwords against a .cap file, what would be the quickest way from start to finish?
Looking at Purehate's original post, it would seem the airolib db is created at 5k PMKs per second, and the cracking then runs at 186k per second. By my calculations that comes out at somewhere near 46 days from start to finish.
Is there new hardware or are there new techniques which improve on this? (Please don't quote super CIA computers; I am talking about reasonable hardware, let's say a PC for £1000 all in.)
Also, there's that scientist at NVIDIA who's predicting 20 GFlops per GPU by 2015! That would do your 20 billion passwords in about a day on a single GPU, and knowing NVIDIA, they'll probably stick more than one on a card.
Then there's the untapped potential of gaming consoles: PS3s have been used in supercomputer clusters for a while now, and some people are saying they could do about 2 TFlops each? Pyrit can use OpenCL, and aren't PS3s able to use OpenCL? Has anyone benched a PS3?
Edit: Apparently PS3s are only capable of about 160 GFlops, so there goes that idea.
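The 46-day figure above can be sanity-checked with simple shell arithmetic, using the two rates quoted from Purehate's post. The db build completely dominates; the actual cracking pass barely registers:

```shell
# rough timing estimate for 20 billion candidate passwords
keys=20000000000
db_rate=5000          # airolib db build rate, PMKs/s (from Purehate's post)
crack_rate=186000     # cracking rate against the finished db, keys/s
db_days=$(( keys / db_rate / 86400 ))
crack_days=$(( keys / crack_rate / 86400 ))
echo "db build:  ~${db_days} days"
echo "cracking:  ~${crack_days} day(s)"
echo "total:     ~$(( db_days + crack_days )) days"
```

That gives roughly 46 days for the db build plus about a day of cracking, matching the estimate in the question.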
Processed all workunits for ESSID 'myessid'; 8231 PMKs per second.
Computed 8231.39 PMKs/s total.
#1: 'CUDA-Device #1 'GeForce GTS 250'': 6375.7 PMKs/s (Occ. 99.4%; RTT 2.9)
#2: 'CPU-Core (SSE2)': 633.8 PMKs/s (Occ. 99.9%; RTT 3.0)
#3: 'CPU-Core (SSE2)': 638.1 PMKs/s (Occ. 99.5%; RTT 3.0)
#4: 'CPU-Core (SSE2)': 630.4 PMKs/s (Occ. 99.5%; RTT 2.9)
That's actually pretty nice:
2830415 passphrases tested in 17.00 seconds: 166450.03 passphrases/second