Low-level API such as Vulkan?
#1
So I've been thinking... is it possible for F@H to run on something like Vulkan (preferably) or DX12? It seems like it could dramatically improve GPU performance. Just an open discussion thread; do you think it's feasible? What would the benefits be?
#2
I could be completely wrong here, but I'm pretty sure folding uses compute-specific APIs like CUDA and OpenCL, unlike games, which use graphics APIs like OpenGL, DirectX and Vulkan. There's really very little comparison, because they're made for separate tasks and purposes. I also thought that better APIs allow for better utilisation of resources, but if the GPU or CPU is already at 100% there can't be much more performance to squeeze out.
#3
(2017-04-21, 07:37:36 PM) Deltalizer Wrote: I could be completely wrong here, but I'm pretty sure folding uses compute-specific APIs like CUDA and OpenCL, unlike games, which use graphics APIs like OpenGL, DirectX and Vulkan. There's really very little comparison, because they're made for separate tasks and purposes. I also thought that better APIs allow for better utilisation of resources, but if the GPU or CPU is already at 100% there can't be much more performance to squeeze out.

Hmm, seems like you're right. OpenCL is a compute API, I just realized. I only thought of Vulkan because of that visualizer.

And oh, what a low-level API does is not exactly "squeeze out" performance. When done correctly, you can get better performance even at 100% load with regular APIs. It lowers overhead so software can take more control over the hardware, which lets the CPU issue instructions to the GPU faster and lets the GPU execute them more efficiently.
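To illustrate the overhead point, here's a toy Python sketch. The class names are invented purely for illustration (this is NOT a real OpenGL or Vulkan interface): a classic high-level API pays a validation cost on every call, while a Vulkan-style command buffer is validated once when recorded and then submitted cheaply.

```python
# Toy analogy for API call overhead. These classes are made up for
# illustration only -- they are NOT real OpenGL or Vulkan APIs.

class HighLevelAPI:
    """Driver re-validates state on every individual call."""
    def __init__(self):
        self.validations = 0

    def draw(self, cmd):
        self.validations += 1  # per-call cost paid by the CPU
        return cmd

class CommandBuffer:
    """Vulkan-style idea: record and validate once, submit many times."""
    def __init__(self, cmds):
        self.cmds = list(cmds)
        self.validations = 1  # cost paid once, at record time

    def submit(self):
        return self.cmds  # no per-call validation here

cmds = range(10_000)

hl = HighLevelAPI()
for c in cmds:
    hl.draw(c)

cb = CommandBuffer(cmds)
cb.submit()

print(hl.validations)  # 10000: one check per call
print(cb.validations)  # 1: validated up front
```

The CPU time freed up this way is what lets a low-overhead API keep the GPU fed with more work.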
#4
Vulkan is just an API with drivers behind it to allow the API to function (a runtime, headers for compiling, etc.). The same is true of OpenCL, CUDA, OpenGL, DirectX [..., 9, 10, 11, 12], and probably a few others, but those are the main ones.
#5
(2017-04-22, 09:47:11 AM) tiwake Wrote: Vulkan is just an API with drivers behind it to allow the API to function (a runtime, headers for compiling, etc.). The same is true of OpenCL, CUDA, OpenGL, DirectX [..., 9, 10, 11, 12], and probably a few others, but those are the main ones.

Yeah, I realized. But I hope something like a low-overhead OpenCL gets made, because that would be awesome.
#6
Back in the day when CUDA and OpenCL didn't exist, some people found ways to use graphics APIs to do general purpose tasks (I've seen shaders designed to compute physics and to do ray tracing). However, this involved putting your input data into "textures" and "triangle meshes", since that's the only thing that those APIs could load into GPU memory and provide to shader functions. Then those shader functions would write their results into "output textures", which could be transferred from GPU memory back to your main RAM.
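The texture trick described above can be sketched in a few lines. This is a pure-Python illustration of the data layout only (the helper names are made up); real code would then upload the texels to GPU memory through the graphics API.

```python
# Sketch of how pre-CUDA GPGPU packed data into "textures": each texel
# holds an RGBA quadruple, so a flat float array gets chopped into
# groups of 4 (padding the tail) before upload, and flattened back
# after the shader writes its results to the output texture.

def pack_to_rgba(data, fill=0.0):
    """Pad to a multiple of 4 and group into (R, G, B, A) texels."""
    padded = list(data) + [fill] * (-len(data) % 4)
    return [tuple(padded[i:i + 4]) for i in range(0, len(padded), 4)]

def unpack_from_rgba(texels, length):
    """Flatten texels back into the original float array."""
    flat = [channel for texel in texels for channel in texel]
    return flat[:length]

values = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
texture = pack_to_rgba(values)
print(texture)  # [(1.0, 2.0, 3.0, 4.0), (5.0, 6.0, 0.0, 0.0)]
print(unpack_from_rgba(texture, len(values)))  # original values back
```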

However, this approach had various limitations, such as the trouble one would have trying to do integer or double-precision computations, since textures supported only single-precision floats and RGB(A) values and so on...
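You can see the single-precision problem directly with Python's struct module: round-tripping a number through an IEEE-754 32-bit float shows where integer precision runs out.

```python
import struct

def to_float32(x):
    """Round-trip a number through an IEEE-754 single-precision float."""
    return struct.unpack('<f', struct.pack('<f', float(x)))[0]

# float32 has a 24-bit significand, so integers are only exact up to
# 2**24 -- beyond that, storing an integer in a texture channel silently
# rounds it, which is why integer GPGPU work was so troublesome.
print(to_float32(2**24))      # 16777216.0  (still exact)
print(to_float32(2**24 + 1))  # 16777216.0  (the +1 is lost to rounding)
```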

Also, since such "shaders" were doing things very different from what GPU manufacturers expected (such as hundreds of "random" texture reads and writes per shader execution, and many conditional statements), the hardware wasn't optimized with those tasks in mind, and they were quite inefficient compared to graphical shaders.

So eventually technologies such as CUDA and OpenCL were developed to address those issues, and hardware was redesigned to provide better support for general-purpose computations, although I think physics and ray tracing are still amongst the best-optimized non-standard GPU uses at both the software and hardware level. At least the existence of PhysX and OptiX would suggest that.

So, currently when it comes to GPGPU (general-purpose GPU) computations, you have two reasonable choices:
- CUDA: works only on Nvidia cards, but usually performs better than OpenCL on those cards. From what I've heard, this is mostly because of better driver optimizations. Also, many developers choose CUDA because it means official support from Nvidia.
- OpenCL: works on Nvidia, AMD and Intel GPUs, as well as CPUs and even distributed systems. Your best bet if you want to use all your resources, and not just a single GPU. However, support and drivers for Nvidia cards are lackluster from what I've heard.

It's important to remember that all the technologies we're talking about here (DirectX, OpenGL, Vulkan, CUDA and OpenCL) have one thing in common: they work on both sides. There's library code running on the CPU side, and compiled kernels or shaders executing on the GPU side. When it comes to overhead, both matter, although most differences between these technologies are on the CPU side, such as the CPU threads that OpenGL uses less efficiently than Vulkan. However, there's still reason to use OpenGL: it's generally a higher-level library, so for small projects it's much easier to write and maintain OpenGL code than Vulkan code.

Although that's kinda off topic. In the end, for folding, BOINC etc. you've got to use OpenCL if you want to support more devices, or CUDA if you want to use Nvidia cards to the best of their potential. CUDA and OpenCL are already low-overhead technologies, and I haven't heard of any replacement for OpenCL being in development. I think for now the best course is for OpenCL to be developed further, and hopefully Nvidia's OpenCL support will improve, which would likely make a big difference (however, Nvidia is not likely to put much effort into that, considering it would make CUDA obsolete).



