The Beast Slayer
#91
It would seem environmentally dubious to constantly run clean water down the drain.

I always found the challenge of running multiple GPUs under Linux wasn't the fan control or overclocking, but rather just getting them all working nicely in a single X session so you can control them at all!

You're probably already aware of Coolbits. Here are some example nvidia-settings commands:

Enable PowerMizer (Prefer Maximum Performance):
nvidia-settings -a '[gpu:0]/GPUPowerMizerMode=1'

Gain manual fan control:
nvidia-settings -a '[gpu:0]/GPUFanControlState=1'

Set GPU fan to 70%:
nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=70'

Set GPU clock offset +120 MHz:
nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=120'

Set GPU memory offset +100 MHz:
nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=100'
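On a multi-GPU rig it gets tedious typing those per card, so here's a small loop that does the same thing. A sketch only: it assumes the GPU and fan indices match up (0, 1, ...), and NUM_GPUS is a placeholder for your card count. DRY_RUN=echo makes it print the commands instead of running them; clear it to apply for real.

```shell
#!/bin/sh
# Sketch: apply the same settings to every card. Assumes GPU and fan
# indices run in lockstep (0, 1, ...). DRY_RUN=echo prints each command;
# set DRY_RUN= (empty) to actually execute them.
NUM_GPUS=2
DRY_RUN=echo

i=0
while [ "$i" -lt "$NUM_GPUS" ]; do
    $DRY_RUN nvidia-settings -a "[gpu:$i]/GPUPowerMizerMode=1"
    $DRY_RUN nvidia-settings -a "[gpu:$i]/GPUFanControlState=1"
    $DRY_RUN nvidia-settings -a "[fan:$i]/GPUTargetFanSpeed=70"
    i=$((i + 1))
done
```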


Here are the same settings in the correct format for your .nvidia-settings-rc on a multi-GPU system:

[gpu:0]/GPUPowerMizerMode=1
[gpu:1]/GPUPowerMizerMode=1
[gpu:2]...etc

[gpu:0]/GPUFanControlState=1
[gpu:1]/GPUFanControlState=1
[gpu:2]...etc

[fan:0]/GPUTargetFanSpeed=70
[fan:1]/GPUTargetFanSpeed=70
[fan:2]...etc

[gpu:0]/GPUGraphicsClockOffset[3]=120
[gpu:1]/GPUGraphicsClockOffset[3]=120
[gpu:2]...etc

[gpu:0]/GPUMemoryTransferRateOffset[3]=100
[gpu:1]/GPUMemoryTransferRateOffset[3]=100
[gpu:2]...etc


Hope this helps
#92
(2017-03-27, 11:13:52 PM)BestPony Wrote: It would seem environmentally dubious to constantly run clean water down the drain.

True. Mind you, it's desalinated water that a significant portion of the country refuses to drink (I'm still alive, so I'm pretty sure there's nothing wrong with it).

Thankfully I don't need to worry about using those commands. Can easily do them in the nVidia X Server GUI. I didn't really have any major issues getting the GPUs working together. F@H didn't automatically detect the GPUs, so I had to manually add the slots. It was sufficient to add them with the automatic GPU index. That is, set to -1. From there, I had to manually set the OpenCL and CUDA indexes by experimenting to see which numbers were the right ones. Took no more than two minutes per system.
#93
Times might have changed since I last messed with overclocking under Linux, but you might find that the nVidia X Server GUI doesn't retain the fan and OC settings after reboot. That's what the commands/config file were for.
#94
I believe you're right about that. I think I only need to set the State and Speed commands in that order for each GPU. At least, if I only want to control the fans. For overclocking, I'll just add the relevant lines whenever I'm ready.

What I'm not sure about is how to actually make the script and set it to start automatically each time I boot. I'm guessing I need the Linux equivalent of a .bat file to put those commands in first.
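For the record, the Linux equivalent of a .bat file is just an executable shell script. A minimal sketch, using the fan values from earlier in the thread; the filename set-fans.sh and the two-GPU count are made up:

```shell
#!/bin/sh
# Write out an executable script (the Linux analogue of a .bat file)
# holding the fan commands. Filename and GPU count are assumptions.
cat > "$HOME/set-fans.sh" <<'EOF'
#!/bin/sh
nvidia-settings -a '[gpu:0]/GPUFanControlState=1' -a '[fan:0]/GPUTargetFanSpeed=70'
nvidia-settings -a '[gpu:1]/GPUFanControlState=1' -a '[fan:1]/GPUTargetFanSpeed=70'
EOF
chmod +x "$HOME/set-fans.sh"
```

Running ~/set-fans.sh then applies the settings in one go; making it run automatically at boot is a separate step (cron, or a desktop autostart entry).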
#95
Hiigs, if you become a historical genius some day, will you remember me? ;D

Knew you wouldn't give up though! Nobody ever gives up when they have the passion to make great things.
#96
I've got some Linux-fu, but I'm sure there are people more knowledgeable than I who can do this a better way:

1. Edit your /home/<your username>/.nvidia-settings-rc file with the desired attributes I specified above

2. I found that after a reboot this file may get overwritten and you'll lose your changes. Test it, and if that happens you can make the file read-only (even to the root account) via
Code:
sudo chattr +i /home/<your username>/.nvidia-settings-rc


3. To make sure the file is applied after reboot, put the following in your crontab
Code:
@reboot sh -c '/usr/bin/nvidia-settings --load-config-only'
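Steps 1 and 3 can also be scripted together. This is just a sketch: the GPU count and fan speed are assumptions, step 2 (chattr) needs root so it's left as a comment, and the @reboot line is simply printed for pasting into `crontab -e`:

```shell
#!/bin/sh
# Sketch of the setup: generate ~/.nvidia-settings-rc for each GPU/fan
# index, then print the crontab line. GPU count and values are assumptions.
RC="$HOME/.nvidia-settings-rc"

for i in 0 1; do
    printf '[gpu:%s]/GPUPowerMizerMode=1\n'  "$i"
    printf '[gpu:%s]/GPUFanControlState=1\n' "$i"
    printf '[fan:%s]/GPUTargetFanSpeed=70\n' "$i"
done > "$RC"

# Step 2, run manually afterwards (needs root):
#   sudo chattr +i "$RC"
# Step 3, paste this line into `crontab -e`:
echo "@reboot sh -c '/usr/bin/nvidia-settings --load-config-only'"
```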


Let me know how you get on
#97
Small update:

[Image: LLez7Ev.jpg]

To help with fixing the cable jungle, I've cut out a length of cable trunking, then cut two sets of holes for the PCI-e cables to plug into the video cards, and another hole in the center where the cables come together. Additional trunking will be made to hide more cables, and guide them to where they need to go. When the chillbox casing is built, the trunking will be aligned properly and secured.

Speaking of the chillbox, as I would need a decent amount of insulation, I've doubled up on the standoffs, so there's a larger gap between the motherboards and the acrylic.

I'm also tempted to swap the hard drives out for some M.2 drives, just so I can get rid of some additional cables. A couple of 30 or 60 gig M.2 drives would probably cost about 60 to 80 dollars currently. I don't think it's worth it though. These systems don't access the hard drives that frequently, and they're not set to save their data at regular enough intervals to cause any major interruptions. Besides, that cash can go towards more cards!




SOON