For Users of the Previous Systems

This information is for users migrating from the previous system to the new system, which replaced it in fiscal 2022.

The host name (round robin) and the login procedure are unchanged from the previous system. However, if you want to log in to a login node directly, bypassing the round robin, please refer to Access, because the individual host names have changed.

If the following message appears and you cannot log in, you need to delete the old host key information from your known_hosts file.

@@@  WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!   @@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
c0:30:d6:93:b2:d8:06:4a:6f:9c:d5:00:cc:c5:69:58.
Please contact your system administrator.
Add correct host key in /home/xxx/.ssh/known_hosts to get rid of this
message.
Offending RSA key in /home/xxx/.ssh/known_hosts:3
RSA host key for laurel.kudpc.kyoto-u.ac.jp has changed and you have
requested strict checking.
Host key verification failed.

You can delete the known_hosts entry in either of the following ways.

  • Use the ssh-keygen command
    (Example) Delete the known_hosts entry for laurel
    $ ssh-keygen -R laurel.kudpc.kyoto-u.ac.jp
  • Edit the known_hosts file directly.
    1. Open the file %homepath%\.ssh\known_hosts (Windows) or /Users/(username)/.ssh/known_hosts (Mac, Linux) in an editor.
    2. Delete the line for the relevant host and save the file.

  • If you are using MobaXterm, delete the saved host key as follows.
    1. Exit MobaXterm.
    2. Open %appdata%\MobaXterm\MobaXterm.ini in an editor.
    3. Delete the entry for the relevant host in the [SSH_Hostkeys] section, e.g.:
       ssh-ed25519@22:laurel.kudpc.kyoto-u.ac.jp=0xd152edcd (rest omitted)
    4. Restart MobaXterm.
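The warning message also names the exact offending line (the ":3" in "Offending RSA key in /home/xxx/.ssh/known_hosts:3" above), so as an alternative to the methods above you can delete just that line. The sketch below demonstrates this on a throwaway file; in practice, operate on your own known_hosts and use the line number from your own warning message.

```shell
# Demo on a scratch copy; in practice operate on $HOME/.ssh/known_hosts
# and use the line number shown in your own warning message.
demo=$(mktemp)
printf 'hostA ssh-rsa AAA\nhostB ssh-rsa BBB\nhostC ssh-rsa CCC\n' > "$demo"
cp "$demo" "$demo.bak"     # keep a backup before editing
sed -i '2d' "$demo"        # delete offending line 2 (the ":2" in the warning)
cat "$demo"                # hostA and hostC remain
```

Keeping a backup copy lets you restore the file if you delete the wrong line.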

The module configuration, application locations, and environment variables have changed from the previous system. If you used a customized .bashrc on the previous system, please modify it as necessary.
You can also reset .bashrc to the default by copying /etc/skel/.bashrc to your home directory as follows.
If you cannot log in, please contact us via the Inquiries Form, and we will initialize your shell configuration files with administrator privileges.

  • Copying /etc/skel/.bashrc to your home directory:
    $ cp /etc/skel/.bashrc $HOME

Starting with the new system, SSH public keys are managed centrally in the user portal. Accordingly, the .ssh directory in your home directory ($HOME) has been moved under $HOME/DOTFILES_20221108/. If you no longer need it, please delete it.
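If you decide the saved directory is no longer needed, you can inspect and then remove it. The sketch below uses a scratch directory as a stand-in for $HOME so it is safe to run anywhere; substitute "$HOME" when doing this for real.

```shell
demo_home=$(mktemp -d)                     # stand-in for $HOME in this sketch
mkdir -p "$demo_home/DOTFILES_20221108/.ssh"
touch "$demo_home/DOTFILES_20221108/.ssh/authorized_keys"
ls -A "$demo_home/DOTFILES_20221108"       # inspect the contents first
rm -rf "$demo_home/DOTFILES_20221108"      # remove once confirmed unnecessary
```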

User data that had been saved in the previous system was migrated automatically.

/LARGE2 has been consolidated into /LARGE0, and /LARGE3 into /LARGE1. Links are in place so that existing /LARGE2 and /LARGE3 paths still resolve to /LARGE0 and /LARGE1 respectively; however, these links will be removed in the future, so please update your paths.
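Scripts that still reference the retired paths can be found and updated with standard tools. The sketch below operates on a throwaway file with an example directory layout; in practice, run the grep and sed over your own job scripts.

```shell
# Demo script containing old paths; the gr19999 layout is an example.
demo=$(mktemp)
printf '#!/bin/bash\ncd /LARGE2/gr19999/data\ncp result /LARGE3/gr19999/\n' > "$demo"
grep -nE '/LARGE2|/LARGE3' "$demo"                                # list lines to fix
sed -i -e 's|/LARGE2|/LARGE0|g' -e 's|/LARGE3|/LARGE1|g' "$demo"  # rewrite the paths
grep -nE '/LARGE0|/LARGE1' "$demo"                                # verify the rewrite
```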

In addition, quota management for the large volume storage has been changed from Group Quota to Project Quota. As a result, capacity is managed per path of the large volume storage group, not by the group to which a file belongs.

For details on the file system configuration of the new system, please refer to Use of Storage.

The CPU time and memory limits on each system's login nodes have been extended to avoid interruptions during file transfers to PCs.

| System | CPU time (standard) | CPU time (maximum) | Amount of memory (standard) |
| --- | --- | --- | --- |
| Previous system | 4 hours | 24 hours | 8GB |
| New system | 4 hours | 24 hours | 16GB |

The OS will be changed from CLE/RHEL 7 to RHEL 8.

Compilers will be provided for Intel, NVIDIA HPC SDK, and GNU. The Cray compiler will no longer be provided.

The job scheduler will be changed from PBS to Slurm.

| Purpose | PBS | Slurm |
| --- | --- | --- |
| Specify the queue to submit jobs to | -q QUEUENAME | -p QUEUENAME |
| Specify the execution group | -ug GROUPNAME | Not required |
| Specify the elapsed time limit | -W HOUR:MIN | -t HOUR:MIN |
| Specify the number of processes, threads per process, CPU cores per process, and memory size per process | -A p=X:t=X:c=X:m=X | --rsc p=X:t=X:c=X:m=X |
| Specify the standard output file name | -o FILENAME | Unchanged |
| Specify the standard error output file name | -e FILENAME | Unchanged |
| Merge standard error output | -j oe (into standard output) / eo (into standard error) | Unchanged |
| Send email | -m a (when a job is interrupted) / b (when started) / e (when ended) | --mail-type=BEGIN (when started) / END (when ended) / FAIL (when a job is interrupted) / REQUEUE (when re-executed) / ALL (all of these) |
| Specify the email address | -M MAILADDR | --mail-user=MAILADDR |
| Prohibit job re-execution on failure | -r n | --no-requeue |
| Purpose | PBS | Slurm |
| --- | --- | --- |
| Check the queues where jobs can be submitted | qstat -q | spartition |
| Submit a job to a queue | qsub | sbatch |
| Check job status | qstat | squeue |
| Cancel a submitted job | qdel | scancel |
| Check job details | qs | sacct -l |

| Purpose | PBS | Slurm |
| --- | --- | --- |
| Job ID | QSUB_JOBID | SLURM_JOBID |
| Name of the queue where the job was submitted | QSUB_QUEUE | SLURM_JOB_PARTITION |
| Directory where the job was submitted | QSUB_WORKDIR | SLURM_SUBMIT_DIR |
| Number of processes allocated to the job | QSUB_PROCS | SLURM_DPC_NPROCS |
| Number of threads allocated per process | QSUB_THREADS | SLURM_DPC_THREADS |
| Number of CPU cores allocated per process | QSUB_CPUS | SLURM_DPC_CPUS |
| Upper limit on the amount of memory allocated per process | QSUB_MEMORY | |
| Number of processes placed per node | QSUB_PPN | |
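A job script can read these variables at run time, for example to log the allocation it actually received. The sketch below uses only the variable names from the table above; the ":-" fallbacks are added so the snippet also runs outside a Slurm job for testing.

```shell
# Inside a Slurm job these variables are set automatically; the
# ":-" fallback defaults only make the snippet runnable outside a job.
jobid="${SLURM_JOBID:-unset}"
partition="${SLURM_JOB_PARTITION:-unset}"
submitdir="${SLURM_SUBMIT_DIR:-$PWD}"
nprocs="${SLURM_DPC_NPROCS:-1}"
echo "job=$jobid partition=$partition dir=$submitdir procs=$nprocs"
```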

You can convert job script commands and options used in the PBS environment to Slurm with the pbs2slurm command.

pbs2slurm input_script [output_script]

[b59999@camphor1 script]$ cat pbs.sh
#!/bin/bash
#======Option========
#QSUB -q gr19999b
#QSUB -A p=1:t=1:c=1:m=1G
#QSUB -W 12:00
#QSUB -r n
#QSUB -M kyodai.taro.1a@kyoto-u.ac.jp
#QSUB -m be
#====Shell Script====
mpiexec.hydra ./a.out

[b59999@camphor1 script]$ pbs2slurm pbs.sh slurm.sh

[b59999@camphor1 script]$ cat slurm.sh
#!/bin/bash
#======Option========
#SBATCH -p gr19999b
#SBATCH --rsc p=1:t=1:c=1:m=1G
#SBATCH -t 12:00
#SBATCH --no-requeue
#SBATCH --mail-user=kyodai.taro.1a@kyoto-u.ac.jp
#SBATCH --mail-type=BEGIN,END
#====Shell Script====
srun ./a.out

The pbs2slurm command supports conversion of the following options. Options not listed below must be modified manually.

| Before conversion | After conversion | Purpose |
| --- | --- | --- |
| #QSUB -q | #SBATCH -p | Specify the queue |
| #QSUB -A | #SBATCH --rsc | Specify resources |
| #QSUB -W | #SBATCH -t | Specify the elapsed time limit |
| #QSUB -N | #SBATCH -J | Specify the job name |
| #QSUB -o | #SBATCH -o | Specify the standard output destination |
| #QSUB -e | #SBATCH -e | Specify the standard error output destination |
| #QSUB -m | #SBATCH --mail-type | Specify when email is sent |
| #QSUB -M | #SBATCH --mail-user | Specify the email recipient |
| #QSUB -r n | #SBATCH --no-requeue | Prohibit job re-execution |
| #QSUB -J | #SBATCH -a | Specify an array job |
| mpiexec | srun | MPI execution (if there are options, they must be removed manually) |
| mpiexec.hydra | srun | MPI execution (if there are options, they must be removed manually) |