nf-core/configs: vsc_kul_uhasselt
Profile for use on the genius and wice clusters of the VSC HPC: the KU Leuven/UHasselt Tier-2 High Performance Computing Infrastructure (VSC).
NB: You will need an account to use the HPC cluster to run the pipeline.
- Install Nextflow on the cluster. A Nextflow module is available and can be loaded with `module load Nextflow`, but it does not support plugins, so it is not recommended; install Nextflow yourself instead, as in the sketch below.
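If you install Nextflow yourself, the standard installer from the Nextflow documentation works from a login node; the `~/bin` target directory below is only an example:

```bash
# Download the Nextflow launcher (official installer)
curl -s https://get.nextflow.io | bash

# Put it somewhere on your PATH; ~/bin is an example location
mkdir -p ~/bin && mv nextflow ~/bin/
chmod +x ~/bin/nextflow
```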
- Set up the environment variables in `~/.bashrc` or `~/.bash_profile`:
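A minimal sketch of the kind of variables to set; the `SLURM_ACCOUNT` value and the cache locations under `$VSC_SCRATCH` are placeholders and assumptions, adjust them to your own setup:

```bash
# Credential account used for SLURM submissions (required, see the note further down)
export SLURM_ACCOUNT="<your-credential-account>"

# Keep Nextflow work files and container caches on scratch (example paths)
export NXF_WORK="$VSC_SCRATCH/work"
export APPTAINER_CACHEDIR="$VSC_SCRATCH/apptainer/cache"
export APPTAINER_TMPDIR="$VSC_SCRATCH/apptainer/tmp"

# Pin a Nextflow version that supports array jobs (>= 24.04.0)
export NXF_VER="24.04.0"
```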
NB: The current config is set up with array jobs, so make sure your Nextflow version is >= 24.04.0 (see the Nextflow documentation on array jobs). You can pin the version in the same file, as in the sketch above.
- Make the submission script (a sketch follows after the notes below).
NB: You should go to the cluster you want to run the pipeline on. You can check which clusters have the most free resources using the following command: `sinfo --clusters wice,genius`.
NB: You have to specify your credential account by setting `export SLURM_ACCOUNT="<your-credential-account>"`, otherwise the jobs will fail!
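A sketch of what such a submission script might look like; the filename `job.slurm`, the resource requests and the pipeline name are placeholders, and Nextflow is assumed to be available on your PATH:

```bash
#!/bin/bash -l
#SBATCH --account=<your-credential-account>
#SBATCH --clusters=genius        # or wice
#SBATCH --time=24:00:00          # example walltime for the head job
#SBATCH --cpus-per-task=2
#SBATCH --mem=8G

# The head job only orchestrates; Nextflow submits the individual tasks to SLURM itself
nextflow run <pipeline> -profile vsc_kul_uhasselt --cluster genius <your other parameters>
```

Submit it from the chosen cluster with `sbatch job.slurm`.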
The available cluster options are:
- genius
- wice
- superdome
NB: The vsc_kul_uhasselt profile is limited to a selected set of SLURM partitions. Should you require resources outside of these limits (e.g. GPUs), you will need to provide a custom config specifying an appropriate SLURM partition (e.g. `gpu*`), as sketched below.
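As an illustration only, such a custom config could look like the sketch below; the partition name and GPU request are assumptions, check `sinfo` for the partitions actually available to you:

```bash
# Write a hypothetical custom config and pass it to Nextflow with -c
cat > gpu.config <<'EOF'
process {
    queue          = 'gpu_v100'       // example partition name, replace with a real one
    clusterOptions = '--gres=gpu:1'   // request one GPU per task
}
EOF
```

You would then add `-c gpu.config` to your `nextflow run` command.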
Use the `--cluster` option to specify the cluster you intend to use when submitting the job:
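For example (the pipeline name and other parameters are placeholders):

```bash
nextflow run <pipeline> -profile vsc_kul_uhasselt --cluster wice <your other parameters>
```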
All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully, because it can get quite large; all of the main output files will be saved in the `results/` directory anyway.
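For instance, from the directory you launched the pipeline from:

```bash
# Remove the intermediate files once the run has finished successfully
rm -rf work/
```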