
[Guide] How to run BOINC on HP Cloud Services Free Beta

post #1 of 31
Thread Starter 
UPDATE: The free beta has ended. It was a fun ride for a bit.

For those that did not already know, HP are in the beta test phase of their cloud services. You can currently apply for a free beta test of the service, on which you can run many things including BOINC. Access is limited and there is no telling how long this will be free for, so get on it now.

I am writing this guide to help people with no knowledge or experience of Linux / Unix set up BOINC on their cloud service. I followed a couple of guides on the EVGA forums (listed at the bottom); however, I found that they missed a few steps which are not immediately apparent to Linux / Unix n00bs like myself.

First off, some quick points:

What do I need?
  • A computer with admin rights running Windows 7 (note, I don't know if this will work for other OSes)
  • A credit card (will not be charged)
  • An e-mail address
  • 20-30 minutes to set things up after you get the invite

What will I get?
  • Access to up to 20 CPU cores to play with
  • About 50k PPD (if running Collatz, YMMV)
  • A lot of e-peen


Have a read here for details of HP's Cloud Services (HPCS)

The process

1. Apply for beta access code
Firstly you need to apply for an account. Sign up here for your free beta access. It is worth filling in the "How will you use HPCS" box properly, as they seem to judge access based on this. As an example, this is what I wrote:
Quote:
I will initially be running a distributed computing project to assess performance and reliability, with a view to creating a business case for outsourcing my department's CFD / FEA computation time

Fill in the form and fire it off, then wait for them to get back to you. Please note that it can take them a couple of days to respond; I got my key within 48 hours.



2. Set up some servers
At some point you will get an e-mail with your access code and a link. Copy the code, click on the link, paste the code in the box and away you go!

You should be greeted with a screen like this:
[screenshot: Welcome Screen]


Click on the "Activate now" button for one of the computing instances. At some point you will be prompted to enter your billing information. You need to do this before you continue. Don't worry, you will not be charged without plenty of warning.

The next thing to do is to create a key pair. You will need one key pair for each of the two server locations. Click on the "Key Pair" text and create one key. I called mine "Server A".
[screenshots: the Key Pair link and the Create Key Pair dialog]
Note: The key pair shown has been deleted. I'm not that stupid.


Copy the text to a notepad file and save it. Note: you must keep this safe; as the text says, there is no way to retrieve it later.


Now go back to your "manage servers" page and start creating servers. I created two 4-CPU / 4GB servers and one 2-CPU / 2GB server. Note that you are limited to 20GB of RAM per location, so that makes 40GB total over the two locations.

I used Ubuntu Maverick 10.10 as the OS (the third line, in case your text box doesn't go that wide).
[screenshot: Create Servers]


Next you need to attach a public IP address to each server so that you can actually connect to it. Click on the server instance:
[screenshot: Server Instance]

Then click on "Attach Public IP"
[screenshot: Attach Public IP]
Note that your screen might vary; I am doing this after the fact with a little image editing thrown in.

Attach all your new servers to public IPs and make a note of them.


Repeat this entire stage for the other location too. You will need to create and save another key pair.



3. Set up PuTTY to access your new servers
Create a folder called PuTTY on your computer.

Download PuTTY and PuTTYgen and put them in your PuTTY folder.

Create a notepad file in your PuTTY folder, paste one of your key pairs into it, and save it as a .pem file, in my case ServerA.pem.
[screenshot: Create PEM file]

Open PuTTYgen and select Conversions > Import key. Select the key pair (.pem file) you created and then select Save private key. Ignore the warning.
I saved the files as ServerA.ppk and ServerB.ppk, because only one key is needed for each compute cluster location.

Close PuTTYgen.

Open PuTTY

Copy your server's Public IP Address into the Host name box.
Expand SSH under Connection and select Auth.
Select Browse under Private key field for authentication and select your private key ServerA.ppk or ServerB.ppk
[screenshot: PuTTY configuration]

Select Session, enter the server's Public IP address under Saved Sessions and select Save so you don't have to go through this again.
Select Open, then click Yes on the Security Alert.

Do this for all of your servers (I had 6 in total).
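As an aside, if you ever want to script these connections instead of clicking through the PuTTY GUI, PuTTY's companion command-line tool plink can make the same connection in one step. A minimal sketch, assuming plink.exe sits in your PuTTY folder (the address below is a placeholder for one of your public IPs):
Code:
plink -i ServerA.ppk root@YOUR.PUBLIC.IP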



4. Configure and update your new servers and install the BOINC client
Note: to do this you can either type the commands directly into the terminal, or select the text from this post, copy it to the clipboard (Ctrl + C) and then right-click in the Linux terminal window to paste. All commands should be entered without the quotes.

Log in as root by typing "root".
Enter the following commands in order and select "y" if prompted
Code:
apt-get update
apt-get upgrade -y
apt-get install ia32-libs libstdc++6 freeglut3
apt-get install boinc-client
/etc/init.d/boinc-client restart

The commands refresh the package lists and upgrade the installed packages (1 & 2), install the needed libraries (3), install the BOINC client (4) and restart the BOINC client (5). Note that these might take some time; I ran all my servers at the same time to make the whole process quicker.
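Before moving on it is worth confirming that the client actually came back up. A quick sanity check (assuming your init script supports the status action; --get_state is covered in section 6):
Code:
/etc/init.d/boinc-client status
boinccmd --get_state | more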



5. Attach to projects
So BOINC should now be installed and ready to go; the last thing to do is to attach to some projects. I will be using Collatz for this example, but go ahead and use whatever you want.

Firstly, if you do not have an account for the project you want to attach to, go to the site and set one up. Make sure to set up your computing preferences in your account; it is a lot easier than doing it through the Linux terminal.

Now go to your account and copy your full key to the clipboard:
[screenshot: Full Key]
Note: This is not really my account key, I made it up.

Now, in the terminal window, type the following command, then right-click to paste your account key onto the end of the same line:
Code:
boinccmd --project_attach http://boinc.thesonntags.com/ 
*now right click to paste your account key*
(Change the URL based on the project of your choice)

Hit enter and it should attach to the project.
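For reference, the finished line should look something like this before you hit enter (the key below is a made-up placeholder, not a real account key):
Code:
boinccmd --project_attach http://boinc.thesonntags.com/ 1234567890abcdef1234567890abcdef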

You can check if it is running by typing "top" and pressing enter.
[screenshot: top output]

"Top" is like task manager for windows, it shows what is running and how much resources it is using. Use "q" to quit Top and go back to the prompt, < and > change the sort columns. Use "h" for more commands.

That's it! You should now be BOINCing on the cloud; set up your other servers, then sit back and let the credits roll in.

To close the terminal window, first quit top if you are in it, then hold down Ctrl and press A, then D.



6. Controlling how BOINC is running
You can set a lot of computing preferences on your project account page, but some things, like attaching and detaching projects, need to be done through the terminal window. I have summarized a few basic commands below. Check out this page for a full list of BOINC commands (boinccmd).

Useful commands are:

Code:
boinccmd --set_run_mode {always | auto | never} [ duration ]
Set run mode. You pretty much want this to be "always"; if BOINC isn't running a few minutes after attaching to a project, it is worth checking this.
always: do CPU work always
auto: do work only when allowed by preferences
never: don't do work
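For example, to make sure a server crunches whenever it is up:
Code:
boinccmd --set_run_mode always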

Code:
boinccmd --get_state | more
Get overall status, space bar to page through


Code:
boinccmd --get_results | more
Show tasks, space bar to page through


Code:
boinccmd --project {URL} {operation}
Do operation on a project, identified by its URL.
Operations:
reset: delete current work and get more;
detach: delete current work and don't get more;
update: contact scheduling server;
suspend: stop work for project;
resume: resume work for project;
nomorework: finish current work but don't get more;
allowmorework: undo nomorework
detach_when_done: detach project
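For example, to make a server report to the Collatz scheduler straight away (same URL as in step 5):
Code:
boinccmd --project http://boinc.thesonntags.com/ update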

Code:
| more
Note: piping a command's output through "more" (adding "| more" to the end of the command) allows you to page through the output using the space bar. You can add it to any command that produces a lot of text.

The "|" key is between enter and backspace on US keyboards, to the right of left shift on UK keyboards.

Quote:
Originally Posted by jetpak12 View Post

Tip: One thing that we folders have found with HPCS is that, due to the virtual nature of the compute instances, you are not always given complete access to the cores you are assigned. This results in drastic differences in performance between instances. For example, one of my 4-core instances is running at 11,000 ppd, but another is running at 1,000 ppd. If you find one of your instances is not performing up to your standards, simply terminate it and make it again. For some, it may not be worth the hassle, but it's the only way to improve under-performing instances.



7. How to end your account
Go to the HPCS website and log into your account

Go to "Account" near the top of the page and then the "Miscellaneous" tab.

Click on the big red button:
[screenshot: the big red button]

You will need to call a 1-888 number with your account details to cancel the account.

Do this before May 10th to avoid being charged.



Credits

I'm not all that clever; credit for most of this goes to these two guides on the EVGA forums:
How to set up crunching in the HP Cloud
HP Cloud Services SMP client setup

Something missing?

I tried to be thorough, but if you think I have missed a step or some vital information, please post in this thread and I will update the OP.


Questions?

Please note, as I said, I am no expert. In fact I am a n00b, but who better to write a n00b guide?

As such if you have any questions please post them in this thread rather than PMing me, as that will allow much more knowledgeable people to answer too.
Edited by GingerJohn - 5/10/12 at 9:26am
post #2 of 31
Send a copy of this post to MegaUpload and maybe they can use it too...

post #3 of 31
I'm someone who's been using the HPCS for F@H, and am looking to get started in the OCN BOINC team as well.

Thanks to this guide, I'll be shutting down about half of my HPCS servers that are Folding and putting them towards BOINC.

Anyways, I haven't tried it yet, but I have a couple of questions and a tip, based on my experience running F@H on HPCS:

Question 1: I noticed that you recommend selecting Ubuntu 10.10, but then state that you should run the "apt-get upgrade" command. It's my understanding that this line upgrades you to the latest version of Ubuntu, so why not start with version 11.10 in the first place? For folders, Ubuntu 10.10 is recommended over the newer versions of Ubuntu because it performs better. If this is the case with BOINC as well, then you might want to leave that step out. 10.10 is still supported by Canonical, so it will still receive all the security updates that the latest version gets.

Question 2: Does BOINC generally require a lot of RAM?

Tip: One thing that we folders have found with HPCS is that, due to the virtual nature of the compute instances, you are not always given complete access to the cores you are assigned. This results in drastic differences in performance between instances. For example, one of my 4-core instances is running at 11,000 ppd, but another is running at 1,000 ppd. If you find one of your instances is not performing up to your standards, simply terminate it and make it again. For some, it may not be worth the hassle, but it's the only way to improve under-performing instances.

I'll be following this guide as soon as a few of my servers finish up their work units, and I'll let everyone know how it goes.
post #4 of 31
Thread Starter 
Quote:
Originally Posted by jetpak12 View Post

Question 1: I noticed that you recommend selecting Ubuntu 10.10, but then state that you should run the "apt-get upgrade" command. It's my understanding that this line upgrades you to the latest version of Ubuntu, so why not start with version 11.10 in the first place? For folders, Ubuntu 10.10 is recommended over the newer versions of Ubuntu because it performs better. If this is the case with BOINC as well, then you might want to leave that step out. 10.10 is still supported by Canonical, so it will still receive all the security updates that the latest version gets.

I'm not sure what the command updates, but it looks like I am still running 10.10:
[screenshot: 10.10]

You can try leaving it out and see what happens if you want. It is possible that the later versions were unavailable for creating servers when the guide I was following was written.

Edit:

Check out this article; it seems that the apt-get upgrade command does not update your distro, it "is used only to install all of the newest versions of the packages already installed on your machine".
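In other words (a quick illustration; do-release-upgrade is the separate Ubuntu tool that actually moves you to a new release, and is not used anywhere in this guide):
Code:
apt-get update        # refresh the lists of available packages
apt-get upgrade       # install newer versions of packages already installed
do-release-upgrade    # move to a new Ubuntu release (NOT part of this guide)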

/Edit
Quote:
Originally Posted by jetpak12 View Post

Question 2: Does BOINC generally require a lot of RAM?

Depends on what project you are running. Collatz requires very little (look at my "top" screenshot), but other projects can require significantly more. Rosetta when running on my rig takes up ~0.5GB per instance.

I suppose, if the RAM usage wasn't an issue, it would be possible to create 20 single-core servers per location, giving 40 CPUs in total to play with, but that would require a lot more work to set up.

Quote:
Originally Posted by jetpak12 View Post

Tip: One thing that we folders have found with HPCS, is that due to the virtual nature of the compute instances, you are not always given complete access over the cores you are assigned. This results in drastic differences in performance between two instances. For example, one of my 4-core instances is running at 11,000 ppd, but another is running at 1,000 ppd. If you find one of your instances is not performing up to your standards, simple terminate it and make it again. For some, it may not be worth the hassle, but its the only way to improve under-performing instances.

Good to know. I have not had a problem with that; my instances are running evenly, with the dual-core ones producing half the PPD of the quad-cores. I'll add it to the OP though.
Edited by GingerJohn - 2/28/12 at 11:16pm
post #5 of 31
Quote:
Originally Posted by GingerJohn View Post

I'm not sure what the command updates, but it looks like I am still running 10.10:
[screenshot: 10.10]

You can try leaving it out and see what happens if you want. It is possible that the later versions were unavailable for creating servers when the guide I was following was written.

Edit:

Check out this article; it seems that the apt-get upgrade command does not update your distro, it "is used only to install all of the newest versions of the packages already installed on your machine".

/Edit

Thanks, you're right. I found this page from Ubuntu that explains it too. apt-get update only updates the lists of available packages, while apt-get upgrade actually does the upgrading. And it's a completely different command to upgrade the OS, like you mention. I'm no Linux master; sorry for the confusion.
Quote:
Depends on what project you are running. Collatz requires very little (look at my "top" screenshot), but other projects can require significantly more. Rosetta when running on my rig takes up ~0.5GB per instance.

I suppose, if the RAM usage wasn't an issue, it would be possible to create 20 single-core servers per location, giving 40 CPUs in total to play with, but that would require a lot more work to set up.

Thanks for the info; I should have guessed that different projects would have different requirements.


I've now followed the guide and have set up my first official BOINC project. It's all very good, but I ran into confusion at a couple of points:

When I attached a new project, I got an error saying I needed to add my username and password to the command line:
Code:
root@server-61579:~# boinccmd --project_attach http://sudoku.nctu.edu.tw/
Missing command-line argument

usage: boinccmd [--host hostname] [--passwd passwd] command

I suppose certain projects require that you put in the password and username right away? And I take it "hostname" is your username, and "passwd" is the full account key you mentioned? That's what I used and it seemed to work, so I'd just like to know that I did it right.

EDIT: I missed your line about right-clicking at the end of that line to add your key. I've got it working correctly now.

And my other question: I take it BOINC always knows how many cores to assign? BOINC seemed to know without being told that there were two cores available on my instance.
Edited by jetpak12 - 2/29/12 at 12:37am
post #6 of 31
tl;dr: signed up. If I get invited I'll set it up and read this more carefully. But this is awesome lol
Edited by lagittaja - 2/29/12 at 4:46am
post #7 of 31
I actually started setting everything up 2 days ago. I wanted to test it and make sure everything works as described before posting. Great guide, by the way; it would have come in handy a few days ago.

After 24 hours of crunching, I have 20 cores completing GO Fight Against Malaria tasks. The cores are pretty fast too: anywhere from 3 to 7 hours per task.

I just hope I can get a solid month of crunching in before they make me pay up. I figure I'm helping HP donate to humanitarian causes.
post #8 of 31
After reading this great guide, I have decided to see if they will let me in on this.
post #9 of 31
Thread Starter 
Quote:
Originally Posted by jetpak12 View Post

EDIT: I missed your line about right-clicking at the end of that line to add your key. I got it working now correctly.

Yes, that probably wasn't as clear as I could have made it. I have edited the OP to try to make that more obvious.
Quote:
Originally Posted by jetpak12 View Post

And my other question: I take it BOINC always knows how many cores to assign? BOINC seemed to know without being told that there were two cores available on my instance.

You can set your "computing preferences" on your project account page (through your browser) to use a certain percentage of the available cores; otherwise BOINC will use all the cores it can find. This will affect all your projects.

If you are running multiple projects you can determine which projects get preference through the individual project preferences, again in your account page. By setting the "resource share" you can change what portion of CPU time each project gets. It allocates resources based on the amount of "resource share" each project has divided by the total resource share. For example:
Code:
Collatz: 100
Rosetta: 100
Primegrid: 100

Each project gets 1/3 of the available resources
Code:
Collatz: 200
Rosetta: 50
Primegrid: 50

Collatz gets 200/300 = 2/3 of the resources
Rosetta gets 50/300 = 1/6 of the resources
Primegrid gets 50/300 = 1/6 of the resources

Personally, I think the best way of allocating certain resources to certain projects on HPCS is to allocate by server. If I wanted to split my resources evenly between Rosetta and Collatz, I would have 3 servers (10 cores) running Collatz and 3 servers (10 cores) running Rosetta. This avoids any inequality in scheduling and so on.
post #10 of 31
Ok, I got one server running one project, and I've now tried to set up a second project. Where can I find the full account key for the second project (Milkyway@home)? It didn't give me one. Do I use the old one, or is it somewhere on the BAM Boincstats (which I also signed up for)?

Nothing I tried seems to work.