From SGE to SLURM Conversion

Commands in SGE and equivalents in SLURM

||'''User Command''' ||'''SGE''' ||'''SLURM''' ||
||Interactive login ||qlogin ||srun -p lipq -q inter --pty bash -i ||
||Job submission ||qsub [script_file] ||sbatch [script_file] ||
||Job status ||qstat ||squeue ||
||Job status by job id ||qstat -j [job_id] ||squeue -j [job_id] ||
||Job status by user ||qstat -u [username] ||squeue -u [username] ||
||Job deletion ||qdel [job_id] ||scancel [job_id] ||
||Job hold ||qhold [job_id] ||scontrol hold [job_id] ||
||Job release ||qrls [job_id] ||scontrol release [job_id] ||
||Queue list ||qconf -sql ||sinfo ||
||Cluster status ||qhost -q ||sinfo ||
||Nodes list ||qhost ||sinfo -Nl or scontrol show nodes ||
||GUI ||qmon ||sview ||
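
To illustrate the command mapping, the snippet below walks through a typical submit/monitor/cancel cycle on the SLURM side; the script name myjob.sh and the job id 123456 are placeholders, not real examples from the farm.

{{{
# Submit a batch script (SGE: qsub myjob.sh)
sbatch myjob.sh
# -> Submitted batch job 123456

# List my jobs (SGE: qstat -u $USER)
squeue -u $USER

# Inspect one job (SGE: qstat -j 123456)
squeue -j 123456

# Hold, release and finally cancel it (SGE: qhold / qrls / qdel)
scontrol hold 123456
scontrol release 123456
scancel 123456
}}}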

Common environment

||'''Environment''' ||'''SGE''' ||'''SLURM''' ||
||Job ID ||$JOB_ID ||$SLURM_JOBID ||
||Submit directory ||$SGE_O_WORKDIR ||$SLURM_SUBMIT_DIR ||
||Submit host ||$SGE_O_HOST ||$SLURM_SUBMIT_HOST ||
||Node list ||$PE_HOSTFILE ||$SLURM_JOB_NODELIST ||
||Job array index ||$SGE_TASK_ID ||$SLURM_ARRAY_TASK_ID ||
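
A minimal batch-script sketch showing how the SLURM variables above replace their SGE counterparts; the job name env-demo is arbitrary and the echo lines are purely illustrative.

{{{
#!/bin/bash
#SBATCH --job-name=env-demo

# Information an SGE script would take from $JOB_ID, $SGE_O_WORKDIR,
# $SGE_O_HOST, $PE_HOSTFILE and $SGE_TASK_ID.
echo "Job id:          $SLURM_JOBID"
echo "Submit dir:      $SLURM_SUBMIT_DIR"
echo "Submit host:     $SLURM_SUBMIT_HOST"
echo "Allocated nodes: $SLURM_JOB_NODELIST"
echo "Array index:     ${SLURM_ARRAY_TASK_ID:-not an array job}"
}}}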

Job directives

||'''Job specification''' ||'''SGE''' ||'''SLURM''' ||
||Queue/partition ||#$ -q [queue] ||#SBATCH -p [partition] ||
||Count of nodes ||N/A ||#SBATCH -N [min[-max]] ||
||CPU count ||#$ -pe [PE] [count] ||#SBATCH -n [count] ||
||Wall clock limit ||#$ -l h_rt=[seconds] ||#SBATCH -t [min] or #SBATCH -t [days-hh:mm:ss] ||
||Standard output file ||#$ -o [file_name] ||#SBATCH -o [file_name] ||
||Standard error file ||#$ -e [file_name] ||#SBATCH -e [file_name]* ||
||Combine stdout and stderr ||#$ -j yes ||(use -o without -e) ||
||Copy environment ||#$ -V ||#SBATCH --export=[ALL/NONE/varnames] ||
||Job stage in transfers ||#$ -v SGEIN=name[:name] ||#IN=name[:name] or # IN name[:name] ||
||Job stage out transfers ||#$ -v SGEOUT=name[:name] ||#OUT=name[:name] or # OUT name[:name] ||
||Job name ||#$ -N [name] ||#SBATCH --job-name=[name] ||
||Restart job ||#$ -r [yes/no] ||#SBATCH --requeue or #SBATCH --no-requeue (default) ||
||Set working directory ||#$ -wd [dir_name] ||#SBATCH --workdir=[dir_name]** ||
||Resource sharing ||#$ -l exclusive ||#SBATCH --exclusive or #SBATCH --shared ||
||Memory size ||#$ -l mem_free=[mem(KMG)] ||#SBATCH --mem=[mem(KMG)] or #SBATCH --mem-per-cpu=[mem(KMG)] ||
||Tasks per node ||(fixed in PE) ||#SBATCH --tasks-per-node=[count] or #SBATCH --cpus-per-task=[count] ||

* On the lipq partition, for the time being, standard error is always merged with standard output.

** On the lipq partition, the initial working directory on the worker nodes is always a volatile, unique directory in the local home.
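
Putting the directive table together, here is a small SGE script and one possible SLURM translation; the partition lipq comes from the interactive-login example above, while the job name, parallel environment, runtime, memory and core count are arbitrary illustrative values.

{{{
# --- original SGE script ---
#!/bin/bash
#$ -N myjob
#$ -q myqueue.q
#$ -pe mpi 8
#$ -l h_rt=3600
#$ -l mem_free=2G
#$ -o myjob.out
#$ -j yes
#$ -V
./my_program

# --- converted SLURM script ---
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH -p lipq
#SBATCH -n 8
#SBATCH -t 01:00:00
#SBATCH --mem=2G
# stderr is merged with stdout on lipq (see note * above), so -e is omitted
#SBATCH -o myjob.out
#SBATCH --export=ALL
./my_program
}}}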