Name: campusrocks.cse.ucsc.edu
Address: 128.114.48.114
Machine | RAM (GB) | CPUs | Cores | MHz | Model |
---|---|---|---|---|---|
campusrocks.local (head node?) | 30 | 4 | 1 | 2600 | Dual-Core AMD Opteron™ Processor 2218 HE |
campusrocks-0-(2 through 30).local | 15 | 4 | 2 | 2664 | Intel(R) Xeon(R) CPU 5150 |
campusrocks-0-(31,32).local | 15 | 8 | 2 | 3192 | Intel(R) Xeon(TM) CPU |
campusrocks-1-(0 through 31).local | 15 | 2 | 1 | 2592 | Dual-Core AMD Opteron™ Processor 2218 HE |
campusrocks-2-(0,1).local | 192 | 48 | | | |
Exception: campusrocks-0-6.local has 30 GB RAM.
/campusdata is the shared storage area; the class directory /campusdata/BME235 (with the bin, lib, and lib64 subdirectories used below) lives there.
If you would like to change your shell to bash, make a file called .profile (this is where you will put all of your environment settings) and add this to the beginning:
if [ "$SHELL" = "/bin/sh" ] then SHELL=/bin/bash export SHELL fi
If you would like to add our class's bin directory (or any other directory, for that matter) to your PATH, then add this to your .profile:
PATH=/campusdata/BME235/bin:$PATH
export PATH
To use the gcc-4.5 family of compilers stored in our bin directory, set the LD_LIBRARY_PATH environment variable:
LD_LIBRARY_PATH=/campusdata/BME235/lib:/campusdata/BME235/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
To make the files and directories you create group readable and writable by default (mode 660 for files, 770 for directories), add this line to your .profile:
umask 007
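Putting the pieces above together, a complete .profile could look like this (purely a consolidation of the settings already described):

```
# ~/.profile -- read at login

# record bash as the preferred shell if the login shell is plain sh
if [ "$SHELL" = "/bin/sh" ]
then
    SHELL=/bin/bash
    export SHELL
fi

# put the class bin directory first on the search path
PATH=/campusdata/BME235/bin:$PATH
export PATH

# libraries needed by the gcc-4.5 compilers in the class bin directory
LD_LIBRARY_PATH=/campusdata/BME235/lib:/campusdata/BME235/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH

# new files are group readable/writable (660), new directories 770
umask 007
```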
Campusrocks uses Sun Grid Engine (SGE) for scheduling jobs. The techstaff request that jobs be run only on the compute nodes, not the head node, since jobs run there slow down the entire cluster. They threaten to kill jobs found running on the head node.
There is online documentation of how to use SGE to submit jobs on the cluster; the main commands (qsub, qstat, qhost, qacct) are described below. The documentation web site is locked so that it can only be accessed on campus.
Useful submit-script settings for SGE:
#$ -N some_name_here
Apparently, you must have a single word (no spaces) after -N. This gives your job a nicer name when you run qstat.
#$ -V
Your job inherits your current environment variables.
#$ -l mem_free=15g
Your job is scheduled on a node with at least 15 GB of memory currently free. There is probably something like '-l mem_avail' that works if you would rather schedule your job on a node with a certain amount of total memory, even if some of it is currently in use.
#$ -pe mpi [number of desired cores to use]
Your job goes to a set of nodes with a combined total of at least the requested number of cores available.
#$ -cwd
Your job is launched in the directory where the qsub command was given (instead of your home directory), so relative paths in the script behave predictably. This is not ideal for portability, though, since other users of the script would also need to know where to run qsub from.
#$ -q [Name of specific server queue]
Specifies which queue (set of servers) you would like the SGE scheduler to run your job in. Queue names on campusrocks include 'all.q' and 'small.q'. Default: 'all.q'.
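To see the full list of queues defined on the cluster, the standard SGE command qconf -sql prints the queue names.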
#$ -M ADolichophallus@ucsc.edu
#$ -m e
Used together to have campusrocks email the given address when your job ends.
#$ -j y
Used to combine stderr and stdout into a single output file.
Example qsub commands:
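A sketch of a submit script that combines the settings above (the job name, file names, and the final command are hypothetical placeholders):

```
#!/bin/bash
#$ -N myassembly               # single-word job name shown by qstat
#$ -V                          # job inherits your environment variables
#$ -cwd                        # run in the directory where qsub was given
#$ -l mem_free=15g             # only use a node with at least 15 GB free
#$ -pe mpi 8                   # request 8 cores total
#$ -j y                        # merge stderr into stdout
#$ -M ADolichophallus@ucsc.edu # where to send mail
#$ -m e                        # mail when the job ends

# hypothetical command; replace with the program you actually want to run
./run_assembly.sh reads.fastq
```

Submit it with "qsub myassembly.sh", or override settings on the command line, e.g. "qsub -q small.q myassembly.sh" to send the job to the small.q queue.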
Useful commands for viewing jobs and cluster status:
qhost
Displays the compute nodes available on the cluster including hardware statistics.
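For example, qhost -F mem_free (a standard qhost option) shows how much free memory each node currently reports, which is handy when choosing a value for '-l mem_free'.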
qstat
Displays the status of the jobs submitted with qsub.
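To narrow the listing, qstat also accepts a user filter, e.g. qstat -u your_username (a standard SGE option; the username here is a placeholder).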
Primary job status codes:
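The usual Grid Engine state codes (shown in the 'state' column of qstat) are: qw = queued and waiting, r = running, t = transferring to a node, hqw = held, Eqw = stuck with an error, and dr = being deleted.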
qacct -j [job id]
Displays resource usage and runtime statistics for a job that has run to completion.
Very useful for determining how much memory and run time to request for similar jobs in the future.
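For example (the job id here is a placeholder), qacct -j 12345 reports fields such as ru_wallclock, cpu, maxvmem, and exit_status, which you can compare against what you asked for with '-l mem_free' and '-pe'.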