The default login shell is set to bash.
The basic PATH settings are loaded automatically; to personalize your startup environment, prepare a startup file such as .bashrc (for bash).
Please do not delete the following lines from the .bashrc in your home directory, as they are necessary for setting up the bash environment.
# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
You can change the login shell from the User Portal.
If you want to execute startup commands only on a particular system, you can branch on the output of the hostname command, as shown in the following examples.
For .bashrc – bash
case `hostname` in
    camphor*)
        # Processing for System A
        ;;
    laurel*)
        # Processing for Systems B, C, and Cloud
        ;;
    gardenia*)
        # Processing for System G
        ;;
esac
For .tcshrc – tcsh
switch(`hostname`)
    case camphor*:
        # Processing for System A
        breaksw
    case laurel*:
        # Processing for Systems B, C, and Cloud
        breaksw
    case gardenia*:
        # Processing for System G
        breaksw
endsw
Modules is software that lets you apply all of the environment settings needed to use compilers, libraries, and applications on each system at once. Please refer to Modules for details.
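For example, a typical session looks like the following (a minimal sketch using standard Environment Modules subcommands; intel is shown as an illustrative module name, taken from the compiler environments listed below).
## Check the modules currently loaded
$ module list
## Check the modules available on the system
$ module avail
## Load a module (intel is shown as an example)
$ module load intel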
Each login node automatically loads a different module environment at login; the batch processing environment and compiler environment shown in the following table are loaded.
The environment and versions to be loaded are reviewed when the fiscal year changes or when the system configuration changes.
Login Node | System Environment | Batch Processing Environment | Compiler Environment |
---|---|---|---|
camphor.kudpc.kyoto-u.ac.jp | SysA | slurm | intel, intelmpi, PrgEnvIntel |
laurel.kudpc.kyoto-u.ac.jp | SysB | slurm | intel, intelmpi, PrgEnvIntel |
cinnamon.kudpc.kyoto-u.ac.jp | SysC | slurm | intel, intelmpi, PrgEnvIntel |
gardenia.kudpc.kyoto-u.ac.jp | SysG | slurm | nvhpc, openmpi, PrgEnvNvidia |
If you want to switch from the login environment to the cloud system environment, use the following command.
$ module switch SysCL
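After switching, you can confirm that the new environment is in effect (a quick check using the standard module list subcommand).
## Check which environment modules are loaded after the switch
$ module list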
Notification emails from the system, such as Slurm error notifications, are delivered to local mailboxes on Systems A, B, C, and G. You can read this mail with the mutt command, but we recommend creating a .forward file in your home directory so that the mail is forwarded to your regular email address and you notice notifications promptly.
## Set up to forward email to foo@example.com
$ echo "foo@example.com" > ~/.forward
To display command output and compiler messages in Japanese, set the environment variable LANG to ja_JP.UTF-8, and also set the character encoding of your SSH client (e.g., PuTTY) to UTF-8.
## Set environment variable LANG (for bash)
$ export LANG=ja_JP.UTF-8
## Set environment variable LANG (for tcsh)
$ setenv LANG ja_JP.UTF-8
## Manual pages are now displayed in Japanese.
$ man man
man(1) man(1)
Name
man - Format and display online manual pages.
manpath - Determine the search path for manual pages for each user.
Format
man [-adfhktwW] [-m system] [-p string] [-C config_file] [-M path] [-P
pager] [-S section_list] name ...
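To avoid setting LANG by hand at every login, the setting can also be placed in your startup file (a minimal sketch; use the variant matching your login shell).
## In ~/.bashrc (for bash)
export LANG=ja_JP.UTF-8
## In ~/.tcshrc (for tcsh)
setenv LANG ja_JP.UTF-8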
Since the login nodes are shared by many users, CPU and memory usage is limited. The CPU time limit can be extended with the commands described below; extend it as necessary.
Item | Initial Value | Maximum Value |
---|---|---|
CPU Time / Process | 4 hours | 24 hours |
Number of CPU cores / user | 4 cores | same as initial value |
Memory size / user | 16GB | same as initial value |
If you are using bash, use the ulimit command.
Check the current settings
$ ulimit -a
...
cpu time (seconds, -t) 14400
...
Extend to maximum value
$ ulimit -t 86400 # Extend CPU TIME to the maximum value
$ ulimit -a # Check
...
cpu time (seconds, -t) 86400
...
If you are using tcsh, you can use the limit and unlimit commands to check the settings and extend to the maximum value.
Check the current settings
$ limit
cputime 4:00:00
...
Extend to maximum value
$ unlimit # Extend
$ limit # Check
cputime 24:00:00
...
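If you always need the extended CPU time limit, the command can likewise go in your startup file so that it runs at every login (a sketch, assuming the limits in the table above).
## In ~/.bashrc (for bash): raise the CPU time limit to the 24-hour maximum
ulimit -t 86400
## In ~/.tcshrc (for tcsh): raise the soft limits to their hard maximums
unlimit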
You can schedule tasks to run automatically on the server using cron.
The login nodes on which cron can be configured are camphor31 on System A, laurel31 on System B, cinnamon31 on System C, and gardenia11 on System G.
## Log in to camphor31, where cron can be set up.
$ ssh b59999@camphor31.kudpc.kyoto-u.ac.jp
## Confirm the cron settings.
$ crontab -l
## Set up cron (an editor opens).
$ crontab -e
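A crontab entry consists of five time fields (minute, hour, day of month, month, day of week) followed by the command to run. For example (the script path is hypothetical):
## Run a script every day at 02:00
0 2 * * * $HOME/bin/daily_job.sh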