 ---
title: 'For Users of the Previous Systems'
 taxonomy:
     category:
         - docs
 ---
 
 [toc]
 
This page provides information for users migrating from the previous system to the new system, which replaced it in fiscal 2022.
 
 ## Host Name and How to Log In{#login}
 
The host name (round robin) and the login procedure are unchanged from the previous system.
If you want to log in to a login node directly without going through the round robin, please refer to [Access](/login), because the specific host names have changed.
 
### If you encounter an error when logging in{#login_error}
If the following message appears and you cannot log in, you need to delete the known_hosts information.
 
 ```nohighlight
 @@@  WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!   @@@
 IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
 Someone could be eavesdropping on you right now (man-in-the-middle attack)!
 It is also possible that a host key has just been changed.
 The fingerprint for the RSA key sent by the remote host is
 c0:30:d6:93:b2:d8:06:4a:6f:9c:d5:00:cc:c5:69:58.
 Please contact your system administrator.
 Add correct host key in /home/xxx/.ssh/known_hosts to get rid of this
 message.
 Offending RSA key in /home/xxx/.ssh/known_hosts:3
 RSA host key for laurel.kudpc.kyoto-u.ac.jp has changed and you have
 requested strict checking.
 Host key verification failed.
 ```
 
You can delete the known_hosts information by following the instructions below.
 
#### Terminal{#terminal}
* Use the ssh-keygen command<br>
* (Example) Delete the known_hosts entry for laurel
 ```nohighlight
 $ ssh-keygen -R laurel.kudpc.kyoto-u.ac.jp
 ```
* Edit the known_hosts file directly (a command-line alternative is sketched after this list).
1. Open the file `%homepath%\.ssh\known_hosts` (Windows) or `/Users/(username)/.ssh/known_hosts` (Mac, Linux) in an editor.
2. Delete the contents and save the file.
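Alternatively, on Linux or macOS you can delete only the offending line reported in the error message (line 3 in the example above). This is a minimal sketch assuming GNU sed; on macOS, `sed -i ''` is needed instead of `sed -i`:
```nohighlight
$ sed -i '3d' ~/.ssh/known_hosts    # delete line 3, the entry reported as "Offending RSA key ... known_hosts:3"
```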
 
 #### MobaXterm{#mobaxterm}
 1. Exit MobaXterm.
 2. Open `%appdata%\MobaXterm\MobaXterm.ini` in an editor.
3. Delete the entry for the relevant host in the `[SSH_Hostkeys]` section, for example:
 ```nohighlight
ssh-ed25519@22:laurel.kudpc.kyoto-u.ac.jp=0xd152edcd (remainder omitted)
 ```
 4. Start MobaXterm.
 
## How to initialize $HOME/.bashrc{#bashrc}
The module configuration, the location of applications, and environment variables have changed from the previous system.
If you used a customized .bashrc on the previous system, please modify the .bashrc as necessary.<br>
You can also initialize the .bashrc by copying /etc/skel/.bashrc to your home directory as follows.<br>
If you cannot log in, please let us know using the [Inquiries Form](https://www.iimc.kyoto-u.ac.jp/en/inquiry/?q=consult), and we will initialize the shell configuration file with administrator privileges.
* To copy /etc/skel/.bashrc to your home directory:
 ```nohighlight
 $ cp /etc/skel/.bashrc $HOME
 ```
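Before overwriting your customized .bashrc, you may want to compare it with the default or keep a backup. A minimal sketch (the backup file name is only an example):
```nohighlight
$ diff /etc/skel/.bashrc $HOME/.bashrc       # show how your .bashrc differs from the default
$ cp $HOME/.bashrc $HOME/.bashrc.previous    # optional: back up the old file before initializing
```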
 
## $HOME/.ssh directory{#ssh}
Starting with the new system, SSH public keys are managed centrally in the user portal. Accordingly, the .ssh directory that was in your home directory ($HOME) has been moved under $HOME/DOTFILES_20221108/. If you do not need it, please delete it.
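For example, if you have confirmed that the relocated directory is no longer needed, it can be removed as follows (review the contents first; this deletes them permanently):
```nohighlight
$ ls -la $HOME/DOTFILES_20221108/.ssh    # review the old SSH files before deleting
$ rm -rf $HOME/DOTFILES_20221108/.ssh    # remove them if they are no longer needed
```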
 
 ## Data migration{#mv_data}
 
 User data that had been saved in the previous system was migrated automatically.
 
<!--
The home directory capacity is 100GB. For fiscal 2022, the Personal/Group/Dedicated Cluster capacities carry over from the previous system. From fiscal 2023 onward, capacities follow the usage fee regulations for the supercomputer system.

Directory | Previous system capacity | New system capacity
---------|----------------|---------------
Home directory | 100GB | 100GB
Large volume storage (Personal course) | 3TB | 8TB

The large volume storage capacity for the Group course follows the formulas below.

**Systems A and B**

Type | Previous system capacity | New system capacity
----------|------------------|----------------
Quarter-priority | - | 6.4TB x number of contracted nodes
Semi-priority | 3.6TB x number of contracted nodes | 9.6TB x number of contracted nodes
Priority | 6.0TB x number of contracted nodes | 16.0TB x number of contracted nodes
Exclusive | 6.0TB x number of contracted nodes | 16.0TB x number of contracted nodes

**System C**

Type | Previous system capacity | New system capacity
----------|------------------|----------------
Quarter-priority | - | 6.4TB x number of contracted nodes
Priority | 24TB x number of contracted nodes | 16TB x number of contracted nodes

**System G**

Type | Previous system capacity | New system capacity
----------|------------------|----------------
Quarter-priority | - | 6.4TB x number of contracted nodes
Priority | - | 16TB x number of contracted nodes
-->
 ### Large Volume Storage (LARGE){#large}
/LARGE2 has been consolidated into /LARGE0, and /LARGE3 into /LARGE1. Links are currently configured so that the existing /LARGE2 and /LARGE3 paths still resolve to /LARGE0 and /LARGE1 respectively; however, this configuration will be removed in the future, so please update your paths.
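To find job scripts or configuration files that still reference the old paths, you could, for example, search your files with grep (the directory searched here is only an example):
```nohighlight
$ grep -rl -e '/LARGE2' -e '/LARGE3' $HOME    # list files under your home directory that still use the old paths
```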
 
In addition, quota management for the large volume storage has been changed from group quota to project quota.
As a result, capacity is managed per path of the large volume storage group, not per the group to which the files belong.
 
For details on the file system configuration of the new system, please refer to [Use of Storage](/filesystem).
 
 
 ## Process limit of login node{#process_limit}
 
The CPU time and memory limits for each system's login nodes have been extended in order to avoid interruptions during file transfers to PCs.
 
System | CPU time (standard) | CPU time (maximum) | Amount of memory (standard)
-------- | ------------- | ------------- | -------------
Previous system | 4 hours | 24 hours | 8GB
New system | 4 hours | 24 hours | 16GB
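As a rough check, the limits applied to your current login shell can be displayed with the ulimit command (actual enforcement on the login nodes may differ from these per-shell values):
```nohighlight
$ ulimit -t    # CPU time limit in seconds
$ ulimit -v    # virtual memory limit in kilobytes
```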
 
 ## Changes{#compatibility_incompatibility}
 
 ### OS
 The OS will be changed from CLE/RHEL 7 to RHEL 8.
 
 ### Compilers and Libraries{#compiler}
 
Intel, NVIDIA HPC SDK, and GNU compilers will be provided. The Cray compiler will no longer be provided.
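The compiler environments are expected to be switched with the module command; the commands below are generic module commands and do not show the specific module names used on this system:
```nohighlight
$ module avail    # list the available compiler and library environments
$ module list     # show the environment currently loaded
```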
 
 
 ### Batch Job Scheduler{#lsf}
 
 The job scheduler will be changed from PBS to Slurm.
 
 #### Comparisons of Job Script Options
 
 Purpose| PBS | Slurm 
 :--------------:|:-------------:|:-------------:
  Specify the queue to submit jobs|-q _QUEUENAME_ | -p _QUEUENAME_
  Specify the execution group | -ug _GROUPNAME_ | Not required
  Specify the elapsed time limit | -W _HOUR_ : _MIN_ | -t _HOUR_:_MIN_
 ・ Specify the number of processes <br>・ Specify the number of threads per process<br>・ Specify the number of CPU cores per process<br>・ Specify the memory size per process |-A p=_X_:t=_X_:c=_X_:m=_X_ | --rsc p=_X_:t=_X_:c=_X_:m=_X_ 
  Specify the standard output file name | -o _FILENAME_ | Not changed
  Specify standard error output file name| -e _FILENAME_| Not changed 
Merge standard output and standard error | -j oe (merge into standard output) / eo (merge into standard error) | Not changed
Send email | -m a (when a job is aborted) / b (when started) / e (when ended) | --mail-type=BEGIN (when started) / END (when ended) / FAIL (when a job is aborted) / REQUEUE (when re-queued) / ALL (all)
 Specify email address|-M _MAILADDR_ | --mail-user=_MAILADDR_
 Specify prohibition of job re-execution when failure occurs | -r n | --no-requeue
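As a quick reference, a minimal Slurm job script using the options above might look like the sketch below; the queue name, resource values, and program name are illustrative only:
```nohighlight
#!/bin/bash
#SBATCH -p gr19999b               # queue (partition) name - example value
#SBATCH -t 1:00                   # elapsed time limit (HOUR:MIN, as documented above)
#SBATCH --rsc p=4:t=1:c=1:m=1G    # 4 processes, 1 thread and 1 core per process, 1GB of memory per process
#SBATCH -o %x.%j.out              # standard output file name
srun ./a.out
```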
  
  
  
 #### Comparison of job-related commands
 
 Purpose| PBS | Slurm
 :-----------------------:|:-----------------------------:|:-----------------------------:
 Check the queue where jobs can be submitted | qstat -q | spartition
 Submit a job to the queue| qsub | sbatch
 Check job status |qstat | squeue
 Cancel a submitted job. | qdel | scancel
 Check job details|qs | sacct -l
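For example, a typical submit-check-cancel sequence with the Slurm commands looks like this (the script name and job ID are illustrative):
```nohighlight
$ sbatch slurm.sh       # submit the job script
$ squeue                # check the status of your jobs
$ scancel 12345         # cancel the job with job ID 12345
$ sacct -l -j 12345     # show detailed information for the job
```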
 
 
 
 
 #### Comparisons of Environment Variables 
 
 Purpose | PBS | Slurm
 :--------------------------:|:------------------------------------------:|:--------------------------------------:
  Job ID | QSUB_JOBID | SLURM_JOBID
  Name of the queue where the job was submitted| QSUB_QUEUE | SLURM_JOB_PARTITION
  Current directory where the job was submitted| QSUB_WORKDIR | SLURM_SUBMIT_DIR
  Number of processes allocated when executing a job|QSUB_PROCS | SLURM_DPC_NPROCS
  Number of threads allocated per process when executing a job| QSUB_THREADS | SLURM_DPC_THREADS
  Number of CPU cores allocated per process when executing a job| QSUB_CPUS | SLURM_DPC_CPUS
Upper limit for the amount of memory allocated per process when executing a job | QSUB_MEMORY | -
Number of processes placed per node when executing a job | QSUB_PPN | -
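These Slurm environment variables can be referenced inside a job script, for example to record where and how the job ran (a minimal sketch):
```nohighlight
echo "Job ID     : ${SLURM_JOBID}"
echo "Partition  : ${SLURM_JOB_PARTITION}"
echo "Submit dir : ${SLURM_SUBMIT_DIR}"
echo "Processes  : ${SLURM_DPC_NPROCS}"    # system-specific variable listed in the table above
```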
 
 ## Job Script Conversion{#pbs2slurm}
You can convert job scripts written for the PBS environment to Slurm with the **pbs2slurm** command.
 
 #### Format
 ```nohighlight
 pbs2slurm input_script [output_script]
 ```
 
 #### Examples
 ```nohighlight
 [b59999@camphor1 script]$ cat pbs.sh
 #!/bin/bash
 #======Option========
 #QSUB -q gr19999b
 #QSUB -A p=1:t=1:c=1:m=1G
 #QSUB -W 12:00
 #QSUB -r n
 #QSUB -M kyodai.taro.1a@kyoto-u.ac.jp
 #QSUB -m be
 #====Shell Script====
 mpiexec.hydra ./a.out
 
 [b59999@camphor1 script]$ pbs2slurm pbs.sh slurm.sh
 
 [b59999@camphor1 script]$ cat slurm.sh
 #!/bin/bash
 #======Option========
 #SBATCH -p gr19999b
 #SBATCH --rsc p=1:t=1:c=1:m=1G
 #SBATCH -t 12:00
 #SBATCH --no-requeue
 #SBATCH --mail-user=kyodai.taro.1a@kyoto-u.ac.jp
 #SBATCH --mail-type=BEGIN,END
 #====Shell Script====
 srun ./a.out
 ```
 
 #### Options for conversion
 The pbs2slurm command supports conversion of the following options.
 Options not included below should be modified individually.
 
 | Before conversion | After conversion | Purpose |
 |------- | ------ | -------|
 |#QSUB -q | #SBATCH -p | Specify queues |
 |#QSUB -A | #SBATCH --rsc | Specify resources|
 |#QSUB -W | #SBATCH -t | Specify elapsed time|
 |#QSUB -N | #SBATCH -J | Specify job name|
 |#QSUB -o | #SBATCH -o | Specify the destination for standard output|
 |#QSUB -e | #SBATCH -e | Specify the destination for standard error output|
 |#QSUB -m | #SBATCH --mail-type | Specify the timing of email sending|
 |#QSUB -M | #SBATCH --mail-user | Specify the recipient of the email|
|#QSUB -r n | #SBATCH --no-requeue | Prohibit job re-execution |
|#QSUB -J | #SBATCH -a | Specify array job|
|mpiexec | srun | MPI execution (if options are given, they must be removed manually)|
|mpiexec.hydra | srun | MPI execution (if options are given, they must be removed manually)|
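Note that options passed to mpiexec are not translated automatically. For example (the option shown is illustrative), a line such as the following must be edited by hand after conversion, since the process count is taken from #SBATCH --rsc:
```nohighlight
# before conversion (PBS script)
mpiexec.hydra -n 4 ./a.out
# after conversion, remove the option manually
srun ./a.out
```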