Intensive IO should be done locally, in the /scratch/$username directory on the worker node. Big result files should be written to our dCache storage element (SE). Your user area on the element is located here: srm://:8443/srm/managerv2?SFN=/pnfs/psi.ch/cms/trivcat/store/user/$your-CMS-hn-name

You need a valid Grid proxy certificate for interacting with the SE. So, if you plan to run a job which lasts for 30 hours, you need a Grid proxy valid for at least this time: voms-proxy-init -voms cms -valid 32:00. (Your proxy gets saved to your shared home directory at $HOME/.x509up_u$, and so it is seen from all the worker nodes. Use the -valid flag instead of the older -hours flag, since this gives an extended lifetime for both your proxy and the VOMS extensions given by CMS.)

The next sections will provide you with example scripts that observe all of the points mentioned above. You can get extensive information through the man pages on the UI, or by referring to the documentation on the SGE home site.

Note: various options can be passed to the qsub command, either on the command line (e.g. -q short.q for submitting to the short queue) or from within the job script, using a line beginning with #$ (e.g. #$ -q short.q). The latter is the preferred way, and it is used in the templates below.

The job's stdout/stderr will be copied to the directory from which you submitted, if you used the -cwd flag to the qsub command or if you placed #$ -cwd into your job file. It is also possible to define the paths for these files explicitly using the -o and -e flags. If no option at all is given, the files will be copied to the user's home directory. The templates below annotate these options with comments such as:

# Job name (defines the name seen in monitoring by qstat)
# Change to the current working directory from which the job got submitted.
# This will also result in the job report stdout/stderr being
# written to this directory, if you do not override it (below).
# Here you could change the location of the job report stdout/stderr files,
# if you did not want them in the submission directory.

We offer the following batch system queues:

all.q: Supports jobs with up to 10h running time.
long.q: Supports jobs with up to 96h running time.
short.q: Supports testing or small jobs.
debug.q: Supports the debugging of a live or past job on a specific server t3wnXX.
bigmem.q: A queue with two slots and a maximum virtual memory limit of 20G.

You can choose the target queue using the -q option to the qsub command or from within the job script (q.v. the note above). For queue limits and policy, please have a look here.
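To make the embedded-options mechanism concrete, here is a minimal sketch of a job file. The job name, queue choice, and commented-out log paths are example assumptions, not site defaults; adapt them to your own job.

```shell
#!/bin/bash
#$ -N example_job        # Job name (seen in monitoring by qstat); name is an example
#$ -q short.q            # Target queue, chosen from the queue list above
#$ -cwd                  # Run in, and report stdout/stderr to, the submission directory
##$ -o /path/to/logs/    # Uncomment to redirect the job report stdout elsewhere
##$ -e /path/to/logs/    # Uncomment to redirect the job report stderr elsewhere

# The #$ lines above are qsub options embedded in the script; the shell
# treats them as ordinary comments, so the script also runs standalone.
MSG="Job running on $(hostname)"
echo "$MSG"
```

Submit it with qsub job.sh; options given on the qsub command line (e.g. qsub -q long.q job.sh) take precedence over the embedded #$ lines.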
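Since the proxy must outlive the job, a small pre-submission check can save a failed 30-hour run. This is a hedged sketch, not a site-provided tool: it assumes the voms-proxy-info client is available on the UI (its -timeleft option prints the remaining proxy lifetime in seconds), and the two-hour margin simply mirrors the 30h-job / 32h-proxy example above.

```shell
#!/bin/bash
# Check that the current Grid proxy will outlive a planned job.
JOB_HOURS=30       # planned job length (example value)
MARGIN_HOURS=2     # safety margin, as in the 30h -> 32h example above
NEEDED_SECONDS=$(( (JOB_HOURS + MARGIN_HOURS) * 3600 ))

# voms-proxy-info -timeleft prints the remaining proxy lifetime in seconds;
# fall back to 0 if no proxy (or no client) is present.
LEFT=$(voms-proxy-info -timeleft 2>/dev/null || echo 0)

if [ "$LEFT" -lt "$NEEDED_SECONDS" ]; then
    echo "Proxy too short: renew with 'voms-proxy-init -voms cms -valid $((JOB_HOURS + MARGIN_HOURS)):00'"
fi
```

Running such a check at the top of a submission wrapper turns a silent mid-job authentication failure into an immediate, actionable message.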