M3C Configuration

All nf-core pipelines have been successfully configured for use on the M3 cluster at the M3 Research Center.

To use it, run the pipeline with `-profile m3c`. This will download and apply `m3c.config`, which has been pre-configured with a setup suitable for the M3 cluster. With this profile, DSL1 pipelines download a single Docker image containing all of the required software and convert it to a Singularity image before execution; DSL2 pipelines download the individual Singularity images for each process.
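For example, a launch command might look like the following (the pipeline name, `--input` path, and `--outdir` are illustrative placeholders, not taken from this document):

```shell
# Run an nf-core pipeline with the M3C institutional profile.
# nf-core/rnaseq and the parameter values are placeholders.
nextflow run nf-core/rnaseq -profile m3c --input samplesheet.csv --outdir results
```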

Before running the pipeline you will need to install Nextflow on the M3 cluster. You can do this by following the official Nextflow installation instructions.
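One common route is Nextflow's official self-installing script (this assumes a suitable Java runtime is already available on the node):

```shell
# Download the Nextflow launcher into the current directory,
# move it onto your PATH, and verify the installation.
curl -s https://get.nextflow.io | bash
mkdir -p ~/bin && mv nextflow ~/bin/
nextflow -version
```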

A local copy of the AWS-iGenomes resource has been made available on M3C, so you should be able to run the pipeline against any reference available in the `igenomes.config` specific to the nf-core pipeline. You can do this by simply using the `--genome <GENOME_ID>` parameter.
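For instance, to run against a human reference from the local iGenomes copy (pipeline name and genome ID are illustrative placeholders):

```shell
# --genome selects a reference defined in the pipeline's igenomes.config;
# the profile's igenomes_base points it at the local M3C copy.
nextflow run nf-core/rnaseq -profile m3c --genome GRCh38 --input samplesheet.csv --outdir results
```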

> [!NOTE]
> You will need an account to use the M3 HPC cluster in order to run the pipeline. If in doubt contact IT.

> [!NOTE]
> Nextflow will need to submit the jobs via the job scheduler to the HPC cluster, and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT.

> [!NOTE]
> Each group needs to configure their Singularity cache directory.
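One way to set a group-wide Singularity cache is Nextflow's `NXF_SINGULARITY_CACHEDIR` environment variable (the path below is a placeholder for a group-writable location):

```shell
# Point Nextflow's Singularity image cache at a shared,
# group-writable directory; the path is an illustrative placeholder.
export NXF_SINGULARITY_CACHEDIR=/path/to/group/singularity_cache
```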

Config file

See config file on GitHub

m3c.config

```groovy
//Profile config names for nf-core/configs
params {
    config_profile_description = 'The M3 Research Center HPC cluster profile provided by nf-core/configs'
    config_profile_contact     = 'Sabrina Krakau (@skrakau)'
    config_profile_url         = 'https://www.medizin.uni-tuebingen.de/de/das-klinikum/einrichtungen/zentren/m3'
}

singularity {
    enabled = true
}

process {
    resourceLimits = [
        memory: 1843.GB,
        cpus: 128,
        time: 14.d
    ]
    executor         = 'slurm'
    queue            = { task.time > 23.h ? 'cpu3-long' : (task.memory > 460.GB || task.cpus > 64 ? 'cpu2-hm' : 'cpu1') }
    scratch          = 'true'
    containerOptions = '--bind $TMPDIR'
}

params {
    igenomes_base = '/mnt/lustre/datasets/igenomes'
    max_memory    = 1843.GB
    max_cpus      = 128
    max_time      = 14.d
}
```
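The `queue` closure routes each task to a SLURM partition based on its resource request: long-running jobs first, then high-memory or high-CPU jobs, then the default partition. A minimal sketch of that routing logic, as a hypothetical Python helper with the thresholds taken from the config (hours and GB units assumed):

```python
def select_queue(time_h: float, memory_gb: float, cpus: int) -> str:
    """Mirror the m3c.config queue selection: jobs over 23 h go to
    'cpu3-long'; otherwise jobs over 460 GB memory or 64 CPUs go to
    'cpu2-hm'; everything else goes to the default 'cpu1' partition."""
    if time_h > 23:
        return "cpu3-long"
    if memory_gb > 460 or cpus > 64:
        return "cpu2-hm"
    return "cpu1"

print(select_queue(2, 16, 8))    # short, small job -> cpu1
print(select_queue(48, 16, 8))   # exceeds 23 h -> cpu3-long
print(select_queue(12, 512, 8))  # exceeds 460 GB -> cpu2-hm
```

Note that the time check comes first, so a long high-memory job still lands on `cpu3-long`.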