
20230914: various pages update

root authored on 2023-09-14 16:24:07
Showing 5 changed files
... ...
@@ -7,11 +7,15 @@ taxonomy:
 
 [toc]
 
-## Activation of Login Account
+
+## Activation of Login Account{#init}
+
 First-time users of the system are required to complete the User Portal start-up procedures after completing the application process.
 
 For details, please refer to the [Procedure to Start the Service](/misc/portal_init).
 
+
+
 ## How to access to the system{#login}
 
 Login to the supercomputer is limited to SSH (Secure SHell) key authentication. 
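Once the public key is registered, logging in from any standard SSH client can be sketched as follows (a sketch only: the key path is an assumption, and b59999 stands in for your own user ID; laurel is the SysB login node):

```nohighlight
$ ssh -i ~/.ssh/id_ed25519 b59999@laurel.kudpc.kyoto-u.ac.jp
```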
... ...
@@ -19,35 +23,41 @@ Login to the supercomputer is limited to SSH (Secure SHell) key authentication. 
 * Access method: SSH public key authentication
 * SSH public key :Please register from [User Portal](https://web.kudpc.kyoto-u.ac.jp/portal/).
 If you have registered the key in the previous system, it has been taken over and you do not need to register it.
-* Access point:
-  * File transfer server:hpcfs.kudpc.kyoto-u.ac.jp
-  * SysB/Cloud:laurel.kudpc.kyoto-u.ac.jp
-  * SysC: cinnamon.kudpc.kyoto-u.ac.jp
-  * SysG: gardenia.kudpc.kyoto-u.ac.jp
+* Connection Destination:
+  * SysA: camphor.kudpc.kyoto-u.ac.jp
+  * SysB/Cloud: laurel.kudpc.kyoto-u.ac.jp
+  * SysC: cinnamon.kudpc.kyoto-u.ac.jp
+  * SysG: gardenia.kudpc.kyoto-u.ac.jp
+  * File Transfer Server: hpcfs.kudpc.kyoto-u.ac.jp
 
 !! * Please be sure to attach a passphrase to the private key. If a private key with no passphrase is placed on the login node, it will be deleted automatically.
 !! * It is strictly prohibited to share the same account (user ID) with more than one person.
 
 For details, please refer to [Access](/login).
 
-## Login Environment{#login}
+## Login Environment{#env}
+
 Each login node has a different module environment that is automatically loaded. The batch processing and compiler environments shown in the table are loaded, respectively. Switching the system environment from any login node makes it possible to submit batch jobs to each other.
 
 Note that the table below is information for the year 2023.
 
-| Login Node | System | Batch Processing | compile environment | 
-| ---------- | ------ | ---------------- | ------------------- |
-|laurel.kudpc.kyoto-u.ac.jp|SysB|Slurm|intel,intelmpi,PrgEnvIntel|
-|cinnamon.kudpc.kyoto-u.ac.jp|SysC|Slurm|intel,intelmpi,PrgEnvIntel|
-|gardenia.kudpc.kyoto-u.ac.jp|SysG|Slurm|nvhpc,openmpi,PrgEnvNvidia|
+| Login Node                   | System | Batch Processing | Compile Environment          |
+| ---------------------------- | ------ | ---------------- | ---------------------------- |
+| camphor.kudpc.kyoto-u.ac.jp  | SysA   | slurm            | intel, intelmpi, PrgEnvIntel |
+| laurel.kudpc.kyoto-u.ac.jp   | SysB   | slurm            | intel, intelmpi, PrgEnvIntel |
+| cinnamon.kudpc.kyoto-u.ac.jp | SysC   | slurm            | intel, intelmpi, PrgEnvIntel |
+| gardenia.kudpc.kyoto-u.ac.jp | SysG   | slurm            | nvhpc, openmpi, PrgEnvNvidia |
+
 
-If you want to switch to the cloud system environment, you can do so with the following command.
+To switch to the cloud system environment, use the following command.
 ```nohighlight
 $ module switch SysCL
 ```
 
 See [Modules](/config/modules) for more information on the module command.
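As a quick sanity check after switching, the loaded environment can be listed with the module command (a sketch; the exact output depends on the login node):

```nohighlight
$ module switch SysCL   ## switch to the cloud system environment
$ module list           ## confirm that SysCL is now loaded
```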
 
+
+
 ## Use of Storage{#filesystem}
 
 The home directory is available to all users for data storage.
... ...
@@ -55,7 +65,9 @@ Large volume storage is available for users of Personal Course, Group Course, an
 Both storage areas can be accessed by all login nodes and computing nodes with the same PATH.
 
 * Home directory (/home):100G
-* Large volume storage (/LARGE0, /LARGE1):Several TB~several hundred TB (depending on the amount of resources applied for)
+* Large volume storage (/LARGE0, /LARGE1): Several TB ~ several hundred TB (set according to the amount of resources applied for)
+
+* High-speed storage (/FAST): Several hundred GB ~ several tens of TB (set according to the amount of resources applied for; 500 GB will be provided free of charge in FY2023)
 
 For details, please refer to [Use of Storage](/filesystem).
 
... ...
@@ -65,6 +77,7 @@ The Intel compiler is set by default on the cloud system.
 
 For details, please refer to [Compilers・Libraries](/compilers).
 
+
 ## Execution of the Program{#run}
 
 We provide a program execution environment using the Slurm job scheduler.
... ...
@@ -78,3 +91,4 @@ The list is available from [Software / Libraries](/software).
 ## Contact Information{#inquiry}
 
 If you have any inquiries, please contact us from [Inquiry form](https://www.iimc.kyoto-u.ac.jp/en/inquiry/?q=consult).
+
... ...
@@ -56,7 +56,7 @@ external_links:
 
 
 クラウドシステムの環境に切り替えたい場合は次のコマンドで切り替えることができます。
-```bash
+```nohighlight
 $ module switch SysCL
 ```
 
... ...
@@ -32,7 +32,7 @@ You can use it from dedicated client software (Windows, Mac, Linux).
 
 NiceDCV can be used with the following application servers.
 
-Access Point | Address
+Connection Destination | Address
 ---------- | ------------
 Application Server| app.kudpc.kyoto-u.ac.jp
 
... ...
@@ -20,117 +20,120 @@ This page explains how to use Academic Center for Computing and Media Studies, K
 For those who have completed registration for use of our supercomputer, we will send a Notification of the Registration Completion by email. 
 Once you receive the Notification of the Registration Completion, please follow [the Procedure to Start the Service](/misc/portal_init).
 
+
 ### Usage of User ID{#usage_usernum}
 The user ID notified in the “Notification of the Registration Completion” is used for the purposes shown in the table below. It may fall into both categories. You can confirm the primary center and computing resources from [HPCI online application system](https://www.hpci-office.jp/entry/) .
 
 | Category | Usage of user ID  |
 |------------------------|---------------------------------------------------------------|
-| Those who designate Kyoto University as the primary center  | Use this user ID as a **HPCI account** when WEB authentication is required, such as issuing a certificate. |
+| Those who designate Kyoto University as the primary center | Use this user ID as an **HPCI account** when web authentication is required, such as when issuing a certificate. |
 <!--
 | 京都大学の計算資源を利用の方 | 京都大学の計算資源内でのみ有効なログインIDです。HPCIの利用において意識する必要性は低いですが、直接SSH接続することも可能です。|
 -->
 ## How to use the system{#use_system}
 ### How to log in to the system{#system_login}
-Please refer to the [Manual provided by HPCI](https://www.hpci-office.jp/en/for_users/hpci_info_manuals) for how to issue client certificates and log in.
-<!--
-京都大学の計算資源を使用する場合のホスト名は、以下の通りです。
-なお、電子証明書発行後にログインを許可する登録処理を行いますので、15分程度時間を空けてログインしてください。
+Please refer to the [Manual provided by HPCI](https://www.hpci-office.jp/en/for_users/hpci_info_manuals) on how to issue client certificates and log in.
+The host names for using Kyoto University computing resources are as follows.
+After a client certificate is issued, we run a registration process that permits login, so please wait about 15 minutes before logging in.
 
-| システム名 | ホスト名 | 
+| System | Host | 
 |-------------------------------|-----------------------------------------|
-| システムA(Cray XC40) | camphor.kudpc.kyoto-u.ac.jp |
+| System A | camphor.kudpc.kyoto-u.ac.jp |
 HPCI uses client certificates to log in to resource providers via SSH (GSI-SSH) with GSI authentication (Grid Security Infrastructure).
 
-<!--
-### 京都大学の計算資源経由でHPCIを利用する場合{#hpci_via_kuresources}
-京都大学の計算資源には、GSI-SSHに必要となる gsissh コマンドおよび myproxy-logon コマンドを用意してあります。
 
-京都大学の計算資源を利用可能な方は [こちら](https://web.kudpc.kyoto-u.ac.jp/manual/ja/login) のページを参考にログインいただくと、
-HPCI用の環境構築を行わなくとも、京都大学の計算資源経由で他のシステム構成機関にログインすることが可能です。
-その場合は、以下のように myproxy-logon(代理証明書の取得)および gsissh コマンドを利用してください。
+### When using HPCI via Kyoto University's computing resources{#hpci_via_kuresources}
+Kyoto University's computing resources provide the gsissh and myproxy-logon commands required for GSI-SSH.
+
+Users who have access to Kyoto University's computing resources can log in to other participating institutions via those resources, without setting up an HPCI environment, by following the login instructions [here](https://web.kudpc.kyoto-u.ac.jp/manual/ja/login). 
+In that case, please use myproxy-logon (to obtain a proxy certificate) and the gsissh command as follows.
 
 ```nohighlight
-## 代理証明書の取得 (hpci00XXXX は 自身のHPCI-IDに置き換え)
+## Obtain a proxy certificate (replace hpci00XXXX with your own HPCI-ID)
 $ myproxy-logon -s portal.hpci.nii.ac.jp -l hpci00XXXX
 
-## 他の資源提供機関へのログイン{#login_of_other_facility}
+## Log in to other resource providers{#login_of_other_facility}
 $ gsissh host01.example.jp 
 ```
 
-### システムの利用方法{#using_system}
-システムの利用方法は、[システムへの接続方法](https://web.kudpc.kyoto-u.ac.jp/manual/ja/login) などをご覧ください。
-HPCIで利用できるシステムはシステムA(Cray XC40)となっています。
+### How to use the system{#using_system}
+Please refer to [Access](https://web.kudpc.kyoto-u.ac.jp/manual/ja/login) for how to use the system.
+The system available for HPCI is System A.
+
+#### Use of computing resources{#using_computing}
+To use HPCI's computing resources, the following queue name must be specified when submitting batch jobs.
+Please refer to [Batch Processing](https://web.kudpc.kyoto-u.ac.jp/manual/ja/run/batch) for how to use the batch system.
 
-#### 計算資源の利用{#using_computing}
-HPCIの計算資源を利用するには、バッチでのジョブ投入時に以下のキュー名を指定する必要があります。
-バッチシステムの詳しい利用方法は、[バッチ処理(システムA)](https://web.kudpc.kyoto-u.ac.jp/manual/ja/run/systema) をご覧ください。
-**キュー名に含まれるHPCI課題IDは初回採択時課題IDが利用されます。**
+**The HPCI proposal ID included in the queue name is the ID of the proposal as first approved.**
 
-| 分類 | システム | 種別 | キュー名 | ノード数(2022年度) | 備考|
+| Classification | System | Type | Queue Name | Number of Nodes (FY2023) | Notes |
 |-----------|---------------|-----------------|-----------------|-------------------|---------------------|
+| HPCI-JHPCN | A | All-year use | jha | 20 Nodes | Shared by HPCI-JHPCN users. Please refrain from occupying it for long periods, as it is shared among multiple proposals. |
+| HPCI-JHPCN | A | Intensive use | jhXXXXXXa | - | Replace jhXXXXXX with your proposal ID. The period of use will be notified individually to the proposal representatives. |
+<!--
 | HPCI | A | 通期利用 | hpa | 200 ノード | HPCIの利用者で共有します。複数の課題で共有して使用しますので、長期間の占有利用は控えてください。|
 | HPCI-JHPCN | A | 通期利用 | jha | 52 ノード | HPCI-JHPCNの利用者で共有します。複数の課題で共有して使用しますので、長期間の占有利用は控えてください。|
 | HPCI-JHPCN | A | 集中利用 | jhXXXXXXa | 64 ノード | 「jhXXXXXX」は課題IDに置き換えてください。利用期間は課題代表者に個別に通知します。|
+-->
 
-#### 課題ID(グループ)の指定{#group_assign}
-京都大学では、HPCIで複数課題に採択された場合でも同じのログインID(利用者番号)を使用します。
-バッチでのジョブ投入時に、キュー名およびグループ名を明示的に指定することで課題を切り替えてください。
+#### Assignment of Proposal ID (Group){#group_assign}
+At Kyoto University, the same login ID (user number) is used even when multiple HPCI proposals are approved.
+Please switch between proposals by explicitly specifying the queue name and group name when submitting batch jobs.
 
-下の例はバッチジョブの投入に必要となるジョブスクリプトのシステムAでのサンプルです。
-「#QSUB -ug 」でグループ名にhp189999を指定しています。グループ名のHPCI課題IDには初回採択時課題IDが利用されます。
+The example below is a sample job script for submitting a batch job.
+The queue name jha is specified with "#SBATCH -p".
 
 ```nohighlight
 $ cat sample.sh  
 #!/bin/bash 
-#============ PBS Options ============ 
-#QSUB -q hpa
-#QSUB -ug hp189999 
-#QSUB -W 2:00 
-#QSUB -A p=4:t=8:c=8:m=1800M  
+#============ Slurm Options ============ 
+#SBATCH -p jha
+#SBATCH -t 2:00:00
+#SBATCH --rsc p=4:t=8:c=8:m=1800M  
+#============ Shell Script ============ 
-aprun -n $QSUB_PROCS -d $QSUB_THREADS -N $QSUB_PPN ./a.out
+srun ./a.out
 ```
 
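A script like sample.sh above would then be submitted and monitored with the standard Slurm commands (a sketch; output not shown):

```nohighlight
$ sbatch sample.sh    ## submit the job script to the jha queue
$ squeue -u $USER     ## check the state of your queued and running jobs
```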
-### ストレージの利用{#use_storage}
-#### 京都大学のストレージ{#ku_storage}
-HPCIの課題で利用できる京都大学のストレージのパスは以下の通りです。
-/LARGE0, /LARGE1 または /LARGE2, /LARGE3 のどちらかの組み合わせを利用いただけます。
-利用可能なLARGE領域は課題代表者及び連絡責任者の方にメールで通知しています。
+### Use of Storage{#use_storage}
+#### Storage of Kyoto University{#ku_storage}
+The Kyoto University storage paths available to HPCI proposals are as follows.
+The /LARGE0 and /LARGE1 combination is available.
+The available LARGE areas are notified by e-mail to the proposal representative and the contact person.
 
-課題あたりに利用可能な容量は以下の通りです。
+The available capacity per proposal is as follows.
 
-| 課題 | 利用可能な容量 |
+| Proposal | Available Capacity |
 |--------------------|------------------------------------------------|
-|HPCIのシステムA利用課題| 資源提供通知の大容量ディスク欄に記載 (/LARGE0, /LARGE1 または /LARGE2, /LARGE3 に半分ずつ割当)|
-|HPCI-JHPCNのシステムA利用課題| 資源提供通知の大容量ディスク欄に記載 (/LARGE0, /LARGE1 または /LARGE2, /LARGE3 に半分ずつ割当)|
+| HPCI proposals using System A | Listed in the Large Disk column of the Resource Offer Notice (allocated half each to /LARGE0 and /LARGE1) |
+| HPCI-JHPCN proposals using System A | Listed in the Large Disk column of the Resource Offer Notice (allocated half each to /LARGE0 and /LARGE1) |
 
-/LARGE1および/LARGE3は、初期状態で/LARGE0 ならびに /LARGE2のバックアップ先となっています。
-/LARGE1および3はバックアップを解除することで利用可能になります。
-バックアップ設定の変更は、[お問い合わせフォーム](http://www.iimc.kyoto-u.ac.jp/ja/inquiry/?q=consult) よりご依頼ください。
+/LARGE1 is the backup destination for /LARGE0 by default. 
+/LARGE1 becomes available for data storage once the backup is removed.
+To change your backup settings, please contact us via the [Inquiry Form](http://www.iimc.kyoto-u.ac.jp/ja/inquiry/?q=consult).
 
-この他にホームディレクトリが、100GBまで利用できます。
-ストレージの詳しい利用方法は、[ファイルシステムの利用](https://web.kudpc.kyoto-u.ac.jp/manual/ja/filesystem) をご覧ください。
+In addition, a home directory of up to 100 GB is available.
+For more details on how to use storage, please refer to [Using the File System](https://web.kudpc.kyoto-u.ac.jp/manual/ja/filesystem).
 
-#### 共用ストレージ{#shared_storage}
-HPCI共用ストレージの利用が可能な課題の利用者向けのマウントポイントは以下の通りです。
-詳しい利用方法は、[HPCI共用ストレージ利用マニュアル](https://www.hpci-office.jp/pages/hpci_info_manuals) をご覧ください。
+#### Shared Storage{#shared_storage}
+The mount point for users of proposals that can use the HPCI shared storage is as follows.
+Please refer to the [HPCI Shared Storage User Manual](https://www.hpci-office.jp/pages/hpci_info_manuals) for details on how to use it.
 
-| マウントポイント |
+| Mount Point |
 | ----------------------- |
-| /gfarm/課題ID/利用者番号 |
+| /gfarm/Proposal ID/User ID |
 
-#### 利用状況の確認{#check_using_status}
-[利用者ポータル](https://web.kudpc.kyoto-u.ac.jp/portal/) にログインし、
-上部メニューの統計情報から、左メニューのHPCI統計をクリックすると利用状況が確認できます。
+#### Confirmation of usage status{#check_using_status}
+You can check your usage by logging in to the [User Portal](https://web.kudpc.kyoto-u.ac.jp/portal/), selecting Statistics in the upper menu, and then clicking HPCI Statistics in the left menu.
 
 ![](hpci_statistics_new.png)
 
-コア経過時間(秒)の値を3600で割って単位を「時間」にし、68(システムAのノード当たりコア数) で割っていただければ、
-利用可能枠のノード時間に対する利用実績を算出することができます。
-なお、このページではキューを利用可能な全ユーザの情報が表示されますので、それらを合計した値と課題割当時間を比較していただく必要があります。
+Divide the core elapsed time (in seconds) by 3600 to convert it to hours, then divide by 112 (the number of cores per node on System A) to obtain the actual usage against the node-hours of the available quota.
+Note that this page displays information for all users who can use the queue, so you need to compare the total of those values with the time allocated to the proposal.
 
 #### Information Sharing CMS{#info_cms}
-For HPCI users, we are operating a management system for disseminating information to users from each system component organization and for sharing documents within the projects.
+HPCI operates a management system for HPCI users to share information and documents from each participating institution.
 * [Information Sharing CMS](https://www.hpci-office.jp/pages/info_cms)
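The node-hour conversion described in this hunk (core-seconds ÷ 3600 ÷ 112 cores per node) can be checked with shell arithmetic; the figure 80,640,000 core-seconds below is purely illustrative and comes to 200 node-hours:

```nohighlight
$ echo $((80640000 / 3600 / 112))
200
```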
... ...
@@ -64,7 +64,7 @@ Tine zone of Device | UTC
 
 Item|Content
 -------| -----------------------------------
-Access point | s3.kudpc.kyoto-u.ac.jp
+Connection Destination | s3.kudpc.kyoto-u.ac.jp
 Access key | Please check the user portal.
 Secret key | Please check the user portal.
 Bucket name | Please check the user portal.
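The endpoint above is S3-compatible, so a generic client should work against it. A sketch using the AWS CLI, assuming HTTPS on the endpoint, credentials from the user portal configured in the default profile, and mybucket as a placeholder bucket name:

```nohighlight
$ aws --endpoint-url https://s3.kudpc.kyoto-u.ac.jp s3 ls s3://mybucket
```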
... ...
@@ -148,7 +148,7 @@ Secret access key | S3 secret key
 #### Connection Information
 Item|Content
 -------| -----------------------------------
-Access point | s3.kudpc.kyoto-u.ac.jp
+Connection Destination | s3.kudpc.kyoto-u.ac.jp
 Account name | Please check the user portal.
 Password |Please check the user portal.