The cluster is headed by `infocube` and is accessible over `ssh` as well as through Slurm. However, due to unfinished maintenance caused by COVID-19, some nodes are inaccessible indefinitely (currently `borg1`, `borg2`, and `borg3` are available). To check node availability, run `sinfo` on `infocube` or an available node.

To `ssh` into a cluster node:
1. `ssh` into `remote.tjhsst.edu` using your TJCSL username and password
2. `ssh` into a cluster node: `ssh borg1` (or any available node)
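For example, a minimal sketch of the two-hop connection (the username `2021jdoe` is illustrative):

```bash
# First hop: the remote access server
ssh 2021jdoe@remote.tjhsst.edu

# Second hop: from remote.tjhsst.edu, connect to an available node
ssh borg1

# Once on a node, check which nodes are up
sinfo
```

If your local OpenSSH client supports it, `ssh -J 2021jdoe@remote.tjhsst.edu 2021jdoe@borg1` performs both hops in one command.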
The cluster does not mount your `ras1`/`ras2`/workstation AFS directories. Instead, all user files are stored in CephFS under the directory `/cluster`. For example, the cluster files for `2021jdoe` would be in the directory `/cluster/2021jdoe`. On `infocube` or any cluster node, your cluster directory will be `/cluster/<username>`.
In addition to the cluster nodes, you can access your cluster files on `ras1`/`ras2` under the same directory. On `ras`, your `/cluster` directory is not your default directory and is separate from your default directory's files. You may use `cp`, `mv`, or another utility to move files back and forth between anywhere on `ras` and your `/cluster` directory, as shown below.
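A minimal sketch on `ras` (the file and directory names are illustrative):

```bash
# Copy a project from your AFS home directory into your cluster directory
cp -r ~/project /cluster/2021jdoe/project

# Move results from CephFS back into your home directory
mv /cluster/2021jdoe/results ~/results
```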
This feature is currently unavailable on workstations. If you wish to copy files from a workstation to your `/cluster` directory, use `sftp` or `scp` to copy them from the workstation to the target cluster node.
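A minimal sketch from a workstation (the node, username, and paths are illustrative):

```bash
# Copy a directory from the workstation to your cluster directory
# through an available cluster node
scp -r ./project 2021jdoe@borg1:/cluster/2021jdoe/project
```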
Users log into `infocube` and run some simple commands, specifying what they want to run, how many resources it should have, its priority, and other optional arguments; SLURM takes care of allocating cluster resources for them and provides job accounting so users know the status of their jobs. More information is available in our Slurm docs.
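A minimal sketch of a batch job (the job name, resource values, and script contents are illustrative; see the Slurm docs for the options available on this cluster):

```bash
#!/bin/bash
#SBATCH --job-name=hello        # name shown in job accounting
#SBATCH --output=hello.%j.out   # stdout/stderr file (%j expands to the job ID)
#SBATCH --ntasks=1              # run a single task
#SBATCH --time=00:05:00         # wall-clock time limit

echo "Hello from $(hostname)"
```

Save it as, say, `hello.sh`, submit it with `sbatch hello.sh`, check its status with `squeue -u <username>`, and view accounting details with `sacct`.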