
· Registered · 2,306 Posts · Discussion Starter · #1
Here is a quick guide for ATI/AMD GPU users to get SETI crunching on your GPUs.
  1. Stop and close your BOINC client (make sure it is stopped and closed properly, not just minimised)
  2. Sign up to the SETI project HERE and set your SETI project preferences as follows:

    Code:
    Resource share = 100
    Use CPU = no
    Use NVIDIA GPU = yes
    Maximum CPU % for graphics = 5
    Run only the selected applications
    SETI@home Enhanced = yes
    Astropulse v5 = yes
    Astropulse v5.05 = yes
    Accept work from other applications = yes
  3. Sign in to BOINCstats and click the "host list" heading in the left menu
  4. Under "host name", click the host that you want to run the SETI project on
  5. Under that host name, put a check in ONLY the "attach" and "do not use no cpu" boxes and then click "set"
  6. Open your BOINC client and it will now download the initial SETI project files
  7. After the SETI project files have finished downloading, no SETI tasks should be downloaded and no workunits should start crunching. If tasks are downloading and crunching, you have not set the SETI project preferences properly and it is downloading CPU tasks - go back and set the SETI project preferences as per step 2.
  8. Stop and close your BOINC client (again, make sure it is stopped and closed properly, not just minimised)
  9. Now it is time to set up the GPU apps - go download the r177 multibeam GPU only app files from HERE and also download the r516 astropulse GPU only app files from HERE
  10. Once downloaded, extract both archives and copy the files into your SETI project folder:
    ie. C:\ProgramData\BOINC\projects\setiathome.berkeley.edu
  11. Now create a new text file and name it app_info.xml (it has to be a .xml file). Note: if you have "hide extensions" enabled in Windows, the file will end up as a .txt, so disable that option so you can see the full file name and make sure it ends with .xml or it won't work.
  12. In your app_info.xml file, add the following:

    Code:
    <app_info>
    
    <app>
    <name>setiathome_enhanced</name>
    </app>
    <file_info>
    <name>MB_6.10_win_SSE3_ATI_HD5_r177.exe</name>
    <executable/>
    </file_info>
    <file_info>
    <name>MultiBeam_Kernels.cl</name>
    <executable/>
    </file_info>
    <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>610</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>0.05</avg_ncpus>
    <max_ncpus>0.05</max_ncpus>
    <plan_class>ati13ati</plan_class>
    <cmdline>-period_iterations_num 1 -instances_per_device 1</cmdline>
    <flops>20987654321</flops>
    <file_ref>
    <file_name>MB_6.10_win_SSE3_ATI_HD5_r177.exe</file_name>
    <main_program/>
    </file_ref>
    <file_ref>
    <file_name>MultiBeam_Kernels.cl</file_name>
    <copy_file/>
    </file_ref>
    <coproc>
    <type>ATI</type>
    <count>1</count>
    </coproc>
    </app_version>
    
    <app>
    <name>astropulse_v505</name>
    </app>
    <file_info>
    <name>ap_5.06_win_x86_SSE2_OpenCL_ATI_r516.exe</name>
    <executable/>
    </file_info>
    <file_info>
    <name>AstroPulse_Kernels.cl</name>
    <executable/>
    </file_info>
    <app_version>
    <app_name>astropulse_v505</app_name>
    <version_num>506</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>0.05</avg_ncpus>
    <max_ncpus>0.05</max_ncpus>
    <plan_class>ati13ati</plan_class>
    <cmdline>-instances_per_device 1 -hp -unroll 10 -ffa_block 4096 -ffa_block_fetch 2048</cmdline>
    <flops>30987654321</flops>
    <file_ref>
    <file_name>ap_5.06_win_x86_SSE2_OpenCL_ATI_r516.exe</file_name>
    <main_program/>                           
    </file_ref>
    <file_ref>
    <file_name>AstroPulse_Kernels.cl</file_name>
    <copy_file/>
    </file_ref>
    <coproc>
    <type>ATI</type>
    <count>1</count>
    </coproc>
    </app_version>
    
    </app_info>
  13. Now save and close the app_info.xml file, making sure it is saved to the SETI project folder as well:
    ie. C:\ProgramData\BOINC\projects\setiathome.berkeley.edu
  14. Note 1: if you ARE NOT using HD58xx or HD69xx series GPUs, you need to edit the app_info.xml file and CHANGE the 2x instances of "MB_6.10_win_SSE3_ATI_HD5_r177.exe" to "MB_6.10_win_SSE3_ATI_r177.exe" (see the first snippet after this list), otherwise your CPU usage will be high and it may cause your GPU driver to continually restart, the BOINC client to crash, or the workunits to give errors when validating.
  15. Note 2: I have set "count" to 1 so it will crunch 1 workunit per GPU, and for me that seems to offer the best estimated speed per instance and the lowest CPU usage - YMMV. If you want it to crunch 2 workunits per GPU, set "count" to 0.5 and set "instances_per_device" to 2 (see the second snippet after this list). If you want 4 workunits per GPU, set "count" to 0.25 and "instances_per_device" to 4. Keep in mind that the more workunits per GPU, the higher the CPU usage, and depending on how powerful your CPU is, too many workunits per GPU could bottleneck your GPUs and hurt GPU usage. Also, it was mentioned in the SETI forums that crunching more than 1 workunit per GPU can cause workunit computation and/or validation errors.
  16. Note 3: I initially had an issue with 100% CPU usage and only ~70% GPU usage, and changing "instances_per_device" to 1 dropped CPU usage down to ~55% and increased GPU usage to ~94% - it seems that each workunit also uses approx 25% of the CPU (I have 2 GPUs, so 25% per workunit x2 is ~50% CPU usage total) - again, YMMV. You can try to reduce CPU usage further by increasing the "period_iterations_num" value, however it didn't do anything for me.
  17. Note 4: The Astropulse tasks are rare, so don't worry if it doesn't download any for days at a time. The multibeam tasks are quite abundant though, so you should start to download them within the first hour. If you don't have any downloaded within an hour, check your configuration and make sure your SETI project preferences are set up as per step 2.
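Re Note 1 - if you are not on a HD58xx/HD69xx GPU, the two elements you change in the multibeam section should end up looking something like this (a sketch only - double-check the exact file name against what you actually extracted from the r177 archive):

Code:
<file_info>
<name>MB_6.10_win_SSE3_ATI_r177.exe</name>
<executable/>
</file_info>

<file_ref>
<file_name>MB_6.10_win_SSE3_ATI_r177.exe</file_name>
<main_program/>
</file_ref>

The file names are the only parts that change - leave everything else in the multibeam <app> and <app_version> sections exactly as posted above.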
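Re Note 2 - for 2 workunits per GPU, this is a sketch of the cmdline and the coproc block in the multibeam <app_version> section with the values from Note 2 swapped in (YMMV on whether your CPU can keep up):

Code:
<cmdline>-period_iterations_num 1 -instances_per_device 2</cmdline>
<coproc>
<type>ATI</type>
<count>0.5</count>
</coproc>

For 4 workunits per GPU it would presumably be -instances_per_device 4 and <count>0.25</count>, and if you want astropulse to behave the same way, the matching lines in its <app_version> block would need the same treatment.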
That should get your ATI/AMD GPUs crunching SETI. If you have any problems, please post them here.


Also, after your first few GPU workunits have been validated, don't forget to update the GPU Credits Database HERE so others can get an idea of time per workunit and estimated PPD.


For anyone else who wanted to crunch GPU only and never quite got it working, the app_info.xml file in step 12 above is there for reference.
 

· Team Red Lobbyist · 1,661 Posts
Pinning this to read later.
 

· Registered · 2,306 Posts · Discussion Starter · #6
Quote:
Originally Posted by cechk01;13072397
I don't see download links on any of the optimized apps' webpages
I've updated the links now - sorry about that.
 

· Registered · 2,306 Posts · Discussion Starter · #8
np DR.

For anyone wondering, this is not a high PPD project - so if you're only interested in high PPD, stick to [email protected], dnetc (if it ever comes back up), primegrid (very good for nvidia gpus) or collatz.

Basically, on a stock clocked 5850 (725/1050), each workunit takes around 35-45 minutes and gives approx 145 credits - for an estimated 5,300 PPD.
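Rough maths, for anyone checking that estimate (using the 40 minute mid-point): 1440 min/day ÷ 40 min/WU = 36 WU/day, and 36 WU × 145 credits ≈ 5,220 credits/day per 5850 - so the ~5,300 PPD figure is in the right ballpark.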

That said, I've always wanted to run this project and see if there really is any other intelligent life out there.
 

· Premium Member · 5,184 Posts
Quote:
Originally Posted by un-nefer;13081027

Basically, on a stock clocked 5850 (725/1050), each workunit takes around 35-45 minutes and gives approx 145 credits - for an estimated 5,300 PPD.
That is anything but generous. On a stock clocked 5770 (850/1200), a Collatz WU takes roughly 40 minutes and usually gives me high 2,000s to low 3,000s.
 

· Registered · 2,306 Posts · Discussion Starter · #11
Yeah. No idea how they work out PPD, but if they bumped it to match Collatz or PG then at least it would be more reasonable for most people.
 

· Premium Member · 6,497 Posts
I remember using these in the past. They were like 90% CPU and 10% GPU usage. It really didn't help at all getting them done - a waste of time to me. The GPU could be running something else while the CPU does all the work, unless it's an NVIDIA capable card.
 

· Premium Member · 18,744 Posts
Might test these when I get my new PSU in tomorrow, but with the low points it doesn't seem worth running long term.
 

· Registered · 2,306 Posts · Discussion Starter · #18
If you limit them to 1 WU per GPU then it should use <30% CPU per WU per GPU. And if you have enough CPU available, GPU usage should stay between 90% and 95%.

When I did a test with SETI on the weekend, I had 1x WU on each GPU, CPU usage was ~55% and GPU usage was ~94% on both GPUs.
 

· Premium Member · 6,497 Posts
Running this on my work 5550 to see how it does. So far so good. It doesn't seem to process multibeams any faster than the 5000+ AMD CPU that's already in this PC. Maybe the higher end GPUs get a better gain.
 

· Registered · 2,306 Posts · Discussion Starter · #20
OK, my earlier guesstimate on CPU usage was incorrect.

If you try to crunch the "HD5" multibeam application on anything but "HD58XX" or "HD69XX" series GPUs, then you will have high CPU usage - as I have mentioned already, about ~27% CPU usage per WU per GPU.

However, if you crunch the "HD5" multibeam application only on "HD58XX" or "HD69XX" series GPUs, CPU usage is all but gone. I'm lucky to see 5% CPU usage, even with 2x 5850s running a WU each.

Basically, for me, the higher CPU usage was due to my 4870 trying to run the "HD5" multibeam application as per my app_info.xml file. So if you DO NOT have a "HD58XX" or "HD69XX" series GPU, you should use the standard "MB_6.10_win_SSE3_ATI_r177.exe" multibeam application.
 