User documentation for the PLEIADES cluster at the University of Wuppertal
Note:
Your files on BeeGFS are not backed up! Make sure to regularly store important results and data on another device. In case of a catastrophic failure or event, we won’t be able to restore the data.
Many frequent accesses to files on `/beegfs` can put significant load on the metadata servers. As a consequence, the responsiveness of the shared filesystem degrades for all users. Because of this, there are a few things you should avoid when working on `/beegfs`:
- Avoid frequent directory listings, either explicitly via `ls` or implicitly through any `readdir` operation in your program or programming language. Each lookup results in a locked operation on the metadata server. This can happen, for example, if you frequently check for file status in your job scripts.
- Avoid using `/beegfs` as your working directory for frequent file I/O in your job. Please consider using the local `/tmp` storage instead. Every worker node is equipped with fast 2 TB SSDs for exactly this purpose.
- Avoid creating many small files on `/beegfs`. If possible, pack them into a single `.tar` file, since a single large file is easier for a parallel filesystem to digest than many small files.

We have a job script example that automatically cleans up the `/tmp` directory at the end of a job.
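A minimal sketch of such a job script might look like the following (the partition name, time limit, and destination path are illustrative assumptions, not the cluster's actual example):

```shell
#!/bin/bash
#SBATCH --partition=short
#SBATCH --time=01:00:00

# Create a per-job scratch directory on the node-local SSD.
SCRATCH="/tmp/${USER}_${SLURM_JOB_ID:-$$}"
mkdir -p "$SCRATCH"

# Remove the scratch directory when the script exits, even on failure.
trap 'rm -rf "$SCRATCH"' EXIT

cd "$SCRATCH"
# ... run your actual computation here, writing output into $SCRATCH ...
echo "result" > output.txt

# Pack the results into a single tar file so they reach /beegfs as one
# large file instead of many small ones.
tar -czf results.tar.gz output.txt
# cp results.tar.gz /beegfs/$USER/   # destination path is an assumption
```

The `trap ... EXIT` line is what guarantees the cleanup: it runs whether the job finishes normally or aborts partway through.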
If you left files in `/tmp` that you want to rescue or remove manually, the best approach is to book an interactive shell on the node via:
```shell
srun -p short -w wn21053 -n1 -t 60 --pty /bin/bash
```
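Once the interactive shell is open, you can locate the leftover files that belong to you. A generic sketch (not a site-specific tool; the directory name in the removal example is hypothetical):

```shell
# List entries directly under /tmp that are owned by the current user.
find /tmp -maxdepth 1 -user "$USER" 2>/dev/null

# Then remove anything you no longer need, e.g.:
# rm -rf /tmp/my_leftover_dir   # hypothetical name
```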