The cluster is accessible via `ssh` as well as Slurm. However, due to unfinished maintenance caused by COVID-19, some nodes are inaccessible indefinitely, so connect to infocube or an available node.
To `ssh` into a cluster node:

1. `ssh` into remote.tjhsst.edu using your TJCSL username and password
2. `ssh` into a cluster node: `ssh borg1` (or any available node)
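For example, a full login sequence might look like the following (`2021jdoe` stands in for your own TJCSL username):

```bash
# First hop: log into the remote access server with your TJCSL credentials
ssh 2021jdoe@remote.tjhsst.edu

# Second hop: from remote, log into a cluster node
ssh borg1
```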
Cluster files are not stored in your ras2/workstation AFS directories. Instead, all user files are stored in CephFS under the directory `/cluster`. For example, the cluster files for `2021jdoe` would be in the directory `/cluster/2021jdoe`. Whether you are on infocube or any cluster node, your cluster directory will be `/cluster/<username>`. In addition to the cluster nodes, you can access your cluster files on ras2, under the same directory.
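As a quick sanity check, you can list your cluster directory from infocube, any cluster node, or ras2 (a minimal sketch; `$USER` expands to your TJCSL username):

```bash
# List the contents of your cluster directory
ls -la /cluster/$USER
```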
Note that the `/cluster` directory is not your default directory; its files are kept separate from your default directory files. You may use `cp`, `mv`, or another utility to move files back and forth from anywhere on ras2 to your `/cluster` directory. This feature is currently unavailable on workstations. If you wish to copy files from a workstation over to your `/cluster` directory, you may use `scp` to copy your files from the workstation over to the target cluster node.
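As a sketch of both paths (the filename `project.tar.gz` and the node `borg1` are illustrative placeholders):

```bash
# From ras2: copy a file from your home directory into your cluster directory
cp ~/project.tar.gz /cluster/$USER/

# From a workstation: copy a file to your cluster directory via a cluster node
scp project.tar.gz $USER@borg1:/cluster/$USER/
```

If the cluster node is not directly reachable from your workstation, recent OpenSSH versions let you jump through the access server with `scp -J $USER@remote.tjhsst.edu`.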
With Slurm, users log into infocube and run some simple commands, specifying what they want to run, how many resources it should have, its priority, and other optional arguments; Slurm takes care of allocating cluster resources for them and provides job accounting so users know the status of their jobs. More information is available in our Slurm docs.
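As a minimal sketch of that workflow (the script name, job name, and resource values below are illustrative, not recommendations; see the Slurm docs for the options your job actually needs):

```bash
#!/bin/bash
# hello.sbatch -- a minimal Slurm batch script; all values are illustrative
#SBATCH --job-name=hello        # name shown in job accounting
#SBATCH --ntasks=1              # request a single task
#SBATCH --mem=1G                # request 1 GB of memory
#SBATCH --output=hello.%j.out   # write output to hello.<jobid>.out

echo "Hello from $(hostname)"
```

Submit it and check its status from infocube:

```bash
sbatch hello.sbatch   # submit the job; Slurm prints the job ID
squeue -u $USER       # list your queued and running jobs
```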