Directories and filesystems

  • The ui6 machines provide a wide range of filesystems through which users can access their data and applications.
  • The filesystems available on the LIP-Lisbon login machines are:

# df -h
Filesystem              Size  Used Avail Use% Mounted on
mdt02@tcp:/atlas         12T  8.4T  2.7T  76% /lustre/atlas   ---> Lustre FS for ATLAS Tier-2 and Tier-3 grid activities
mdt02@tcp:/auger         28T   24T  2.9T  90% /lustre/auger   ---> Lustre FS for AUGER local users
mdt02@tcp:/calor         72T   37T   32T  54% /lustre/calo    ---> Lustre FS for CALO local users
mdt02@tcp:/pem          186G  118G   60G  67% /lustre/pet     ---> Lustre FS for PET local users
mdt02@tcp:/sno           10T  1.1T  8.5T  12% /lustre/sno     ---> Lustre FS for SNO local users
st011:/exports/soft      20G   12G  7.3G  61% /soft           ---> NFS fs for local software
st011:/exports/lip-tmp  2.0T  126G  1.9T   7% /hometmp        ---> NFS fs for temporary / scratch storage
st011:/exports/home      20G   12G  7.3G  61% /v/home         ---> NFS fs for LIP homes and data
st002:/exports/home      22G  6.2G   14G  31% /u/home         ---> NFS fs for LIP homes and data
st002:/exports/data      22G  6.2G   14G  31% /u/data         ---> NFS fs for LIP homes and data
st002:/exports/x         12T  9.7T  2.2T  82% /x              ---> NFS fs for LIP homes and data
st012:/exports           20G  3.8G   15G  20% /z              ---> NFS fs for LIP homes and data
  • The filesystems available on the LIP-Coimbra login machines are:

$ df -h
Filesystem                        Size  Used Avail Use% Mounted on
lustre@tcp:/gstore                101T   71T   25T  75% /gstore                     ---> Lustre fs dedicated to ATLAS Tier-2 grid activities
llustre@tcp:/lstore-1              34T  9.4T   23T  30% /lstore/atlaslocalgroupdisk ---> Lustre fs dedicated to ATLAS Tier-3 grid users
llustre@tcp:/lstore-2              43T   38T  2.9T  93% /lstore/atlas               ---> Lustre fs dedicated to ATLAS local users
llustre@tcp:/lstore-3              18T   12T  5.5T  68% /lstore/lip                 ---> Lustre fs dedicated to LIP local users
192.168.2.44:/software            3.9T  882G  3.0T  23% /software                   ---> NFS fs for local software
192.168.2.30:/exports/home-atlas   13T   12T  1.5T  89% /home/local/atlas           ---> NFS fs for ATLAS local homes
192.168.2.30:/exports/home-lip     13T   12T  1.4T  90% /home/local/lip             ---> NFS fs for LIP local homes

Data Management

LIP-Lisbon use cases

  • At LIP-Lisbon, the home filesystem is not shared between the submission hosts and the execution hosts. As a result, it is the user's responsibility to transfer data and applications to/from the execution machines.
  • There are several ways to manage data on the LIP-Lisbon farm:
    1. Automatic transfers via scp
    2. Data access via /hometmp (NFS)
    3. Data access via /lustre

Automatic transfers via scp

  • SCOPE: This is the most appropriate method to transfer a small number of small files.

  • The automatic transfer of data and applications via scp is triggered by declaring the files (or directories) to transfer in dedicated environment variables defined in the submission script.
    • SGEIN{1...N}: Define one variable for each file or directory to transfer from the submission machine to the execution machine

    • SGEOUT{1...N}: Define one variable for each file or directory to transfer from the execution machine to the submission machine

# Transfer input file (MyMacro.c) to the execution machine
#$ -v SGEIN1=MyMacro.c

# Transfer output file (graph_with_law.pdf) from the execution machine
#$ -v SGEOUT1=graph_with_law.pdf
  • The full syntax for scp automatic transfers is described below. Keep in mind that all paths must be relative to the current working directory (the directory from which you submit the job):

# My input file is called input_file1.txt and it will keep the same name on the execution host
#$ -v SGEIN1=input_file1.txt

# My input file is called input_file2.txt but it will be called inputfile2.txt on the execution host
#$ -v SGEIN2=input_file2.txt:inputfile2.txt

# My input is a full directory (the directory INPUT3 must exist on the submission host)
#$ -v SGEIN3=INPUT3

# My input is the file INPUT4/input_file4.txt, and it will exist on the execution host as INPUT4/inputfile4.txt
#$ -v SGEIN4=INPUT4/input_file4.txt:INPUT4/inputfile4.txt

# My input is the directory INPUT5 and it will be called INPUT_AT_WORKERNODE1 on the execution host
#$ -v SGEIN5=INPUT5:INPUT_AT_WORKERNODE1

# My input is the file INPUT6/input_file6.txt, and it will exist on the execution host as INPUT_AT_WORKERNODE2/inputfile6.txt
#$ -v SGEIN6=INPUT6/input_file6.txt:INPUT_AT_WORKERNODE2/inputfile6.txt

# My input is the directory INPUT7, which will be transferred to the execution host as the directory tree
#    INPUT_AT_WORKERNODE3/INPUT_AT_WORKERNODE4
#$ -v SGEIN7=INPUT7:INPUT_AT_WORKERNODE3/INPUT_AT_WORKERNODE4

# My input is the file INPUT8/input_file8.txt, which will be transferred to the execution host
#    as INPUT_AT_WORKERNODE5/INPUT_AT_WORKERNODE6/inputfile8.txt
#$ -v SGEIN8=INPUT8/input_file8.txt:INPUT_AT_WORKERNODE5/INPUT_AT_WORKERNODE6/inputfile8.txt
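  • Putting it all together, a minimal complete submission script could look like the sketch below. The names run_macro.sh, MyMacro.c and graph_with_law.pdf are only placeholders taken from the earlier example, and the last line assumes MyMacro.c is a ROOT macro that writes graph_with_law.pdf in the job working directory:

#!/bin/bash
# run_macro.sh - hypothetical submission script combining input and output transfers

# Copy MyMacro.c from the submission directory to the execution host
#$ -v SGEIN1=MyMacro.c

# Copy graph_with_law.pdf back to the submission directory when the job ends
#$ -v SGEOUT1=graph_with_law.pdf

# Run the macro on the execution host (it is expected to produce graph_with_law.pdf)
root -l -b -q MyMacro.c
  • The script would be submitted from the directory that contains MyMacro.c, for example with qsub run_macro.sh, and the declared output file appears in that same directory once the job finishes.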

Data access via /hometmp (NFS)

  • SCOPE: The same input files and applications are used by multiple jobs.

  • If the same input files serve multiple jobs, users should store them under the /hometmp directory, which is shared between the submission hosts and the execution hosts. This is more efficient than copying the same files over and over again.

  • At the same time, users can use /hometmp to monitor running jobs through, for example, dedicated log files. Check the following example:

#!/bin/bash

# Shared area under /hometmp (example path for user goncalo)
MY_HOMETMP=/hometmp/csys/goncalo

INPUT_FILE=input_file1.txt
OUTPUT_FILE=output_file1.txt
MyLOG=mylog.txt

# Write a progress log to /hometmp so it can be followed from the submission host
echo "Starting second test on `date`" > $MY_HOMETMP/$MyLOG

# Read the input from /hometmp, write the output locally, then move it to /hometmp
tr -s 'a-z' 'A-Z' < $MY_HOMETMP/$INPUT_FILE >> $OUTPUT_FILE
mv -f $OUTPUT_FILE $MY_HOMETMP/$OUTPUT_FILE

echo "Finishing second test on `date`" >> $MY_HOMETMP/$MyLOG
  • While the job is running, the user can follow its status by consulting the mylog.txt log in /hometmp.
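  • For example, assuming the script above was submitted and writes its log to /hometmp/csys/goncalo/mylog.txt (the example path used above), the progress can be followed from any submission host:

# Follow the job log as it is written to the shared /hometmp area
tail -f /hometmp/csys/goncalo/mylog.txt

# Check the batch system status of your jobs at the same time
qstat -u $USER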

Important Disclaimer
  • Users should be aware of the following issues:
    1. Be careful that files are not overwritten when writing to /hometmp, especially when submitting arrays of jobs.
    2. It is preferable that users do not write OUTPUT results directly to /hometmp (the lock management mechanisms cause performance degradation). It is better to write OUTPUT results to the local disk of the execution host (where the job is running) and copy them to /hometmp at the end of the job, as sketched after this list.
    3. Data in /hometmp will be deleted after 30 days.
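  • A minimal sketch of the pattern recommended in point 2 is shown below. It assumes that the batch system provides a per-job scratch directory on the execution host through $TMPDIR, that /hometmp/csys/goncalo is the user's own area, and it reuses the input_file1.txt example from above:

#!/bin/bash

MY_HOMETMP=/hometmp/csys/goncalo

# Do all the work on the local disk of the execution host
# ($TMPDIR is the per-job scratch directory created by the batch system)
cd $TMPDIR
tr -s 'a-z' 'A-Z' < $MY_HOMETMP/input_file1.txt > output_file1.txt

# Copy the result to /hometmp only once, at the end of the job
cp -f output_file1.txt $MY_HOMETMP/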

Data access via /lustre

  • SCOPE: Store and access large data files.

  • /lustre is a shared filesystem (mounted on both the execution hosts and the submission hosts) dedicated to the storage of large files; a usage sketch is given after the lists below. The following directories are accessible to the local LIP groups:

    1. /lustre/lip.pt/data/calo
    2. /lustre/lip.pt/data/cosmo
    3. /lustre/lip.pt/data/pet
    4. /lustre/lip.pt/data/sno
  • Groups involved in WLCG transfer data using grid technologies to the following locations:
    1. ATLAS: /lustre/lip.pt/data/atlas/atlaslocalgroupdisk (calo group has read access to this filesystem)
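  • Since /lustre is mounted on both the submission hosts and the execution hosts, jobs can read and write large files there directly, with no SGEIN/SGEOUT declarations. A minimal sketch, using hypothetical file names under the SNO group area:

#!/bin/bash

# Hypothetical input and output locations under the group area on /lustre
INPUT=/lustre/lip.pt/data/sno/samples/run001.dat
OUTPUT_DIR=/lustre/lip.pt/data/sno/results

# Large files are read and written directly on the shared Lustre filesystem
gzip -c $INPUT > $OUTPUT_DIR/run001.dat.gz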

Important Disclaimer
  • Manipulating large sets of small files generates performance degradation in /lustre due to lock management overhead. Therefore, you should not:
    • Compile anything under /lustre

    • Store and access databases under /lustre