Finding a compromise between researchers’ needs, their data management skills, data access restrictions, and limited funding for research data management (RDM) is a complex but highly relevant and timely challenge.
HPC systems have particular hardware and software configurations that pose specific challenges for implementing reproducible data-processing workflows.
In this lightning talk, I will share my experience using DataLad, git-annex, and ReproMan to run software pipelines over hundreds of fMRI datasets on an HPC cluster.
In the complex realm of network engineering design, optimisation methods have been instrumental, drawing on a range of components across diverse systems and scenarios.