
20230801: Various pages changed

root authored on 2023-08-31 17:16:32
Showing 8 changed files
@@ -25,12 +25,13 @@ $ ssh username@hostname
 ```
 3. If a prompt of the form [username@hostname ~] is displayed, the login is successful.
 
-### Access{#access}
+### Connection Destination{#access}
 
 The host names for each system are as follows.
 
 | System Name | Host Name | Note |
 | --- | --- | --- |
+| System A | camphor.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
 | System B/Cloud | laurel.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
 | System C | cinnamon.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
 | System G | gardenia.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
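The host table above can be dropped into a local `~/.ssh/config` so that each system is reachable by a short alias. A minimal sketch, assuming a hypothetical user ID `b12345` and the default key path (adjust both to your own account):

```nohighlight
# ~/.ssh/config -- user ID b12345 is a hypothetical example
Host camphor
    HostName camphor.kudpc.kyoto-u.ac.jp
    User b12345
    IdentityFile ~/.ssh/id_rsa
Host laurel
    HostName laurel.kudpc.kyoto-u.ac.jp
    User b12345
    IdentityFile ~/.ssh/id_rsa
```

With this in place, `ssh camphor` is equivalent to `ssh b12345@camphor.kudpc.kyoto-u.ac.jp`.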
@@ -68,8 +68,9 @@ DEK-Info: AES-128-CBC,7A30F993641D4093A373703A3F644D2D
 -->
 1. Start your browser and go to one of the following addresses.
 
-    Destination | Address
+    Connection Destination | Address
     ---------- | ------------
+    System A | [https://camphor31.kudpc.kyoto-u.ac.jp/](https://camphor31.kudpc.kyoto-u.ac.jp/)
     System B/C  | [https://laurel31.kudpc.kyoto-u.ac.jp/](https://laurel31.kudpc.kyoto-u.ac.jp/)
     System G  | [https://gardenia11.kudpc.kyoto-u.ac.jp/](https://gardenia11.kudpc.kyoto-u.ac.jp/)
     Application Server  | [https://app.kudpc.kyoto-u.ac.jp/](https://app.kudpc.kyoto-u.ac.jp/)
@@ -17,12 +17,13 @@ Generate a public/private key pair by following [Generating Key Pair and Registe
 ## How to transfer files{#transfer}
 We provide instructions for connecting using the scp and sftp commands. On Windows, you can also connect using [MobaXterm](/login/mobaxterm), which allows file transfers via a GUI.
 
-### Access
+### Connection Destination
 The host names for each system are as follows.
 
 | System Name | Host Name | Note |
 | --- | --- | --- |
 | File Transfer Server | hpcfs.kudpc.kyoto-u.ac.jp | **recommended**<br>Consists of two servers. Dedicated SFTP and RSYNC servers with no time limits. |
+| System A | camphor.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
 | System B/Cloud | laurel.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
 | System C | cinnamon.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
 | System G | gardenia.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
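As a quick illustration of the recommended route, a file can be copied through the File Transfer Server with scp. A sketch assuming a hypothetical user ID `b12345` and file name `data.tar.gz`:

```nohighlight
$ scp data.tar.gz b12345@hpcfs.kudpc.kyoto-u.ac.jp:~/
```

For large or long-running transfers, hpcfs is preferable to the login nodes because it has no time limits.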
@@ -62,7 +63,7 @@ Enter passphrase for key 'id_rsa':
 
 ### File transfers using the sftp command{#sftp}
 ```nohighlight
-$ sftp [Option] [Destination]
+$ sftp [Option] [Connection Destination]
 ```
 
 #### Example
@@ -33,7 +33,7 @@ MobaXterm is an extended terminal for Windows with SSH client, X11 server, netwo
 ![sshadd_2](sshadd_2.png?lightbox=80%&resize=500)
 6. Enter the passphrase for the private key on start-up and click OK.<br>Note: thereafter, the private key passphrase is requested each time MobaXterm starts.
 
-## How to Access{#login}
+## How to log in{#login}
 1. After clicking on the 'Sessions' icon, click on 'SSH'.
 2. Enter the host name of the system you are logging in to in the Remote host field.
 3. Check 'Specify username' and enter the supercomputer system user number (ID) in the box.
@@ -73,13 +73,14 @@ Log in to the system as described in [How to Access](#login). No special configu
 5. If successful, your home directory on the supercomputer will be displayed. You can upload to the supercomputer with the up arrow and download to your local machine with the down arrow.
 ![sftp_2](sftp_2.png?lightbox=80%&resize=500)
 
-## Access Point{#access}
+## Connection Destination{#access}
 The host names for each system are as follows.
 
-| System Name | Host Name | Note |
+| System | Host | Note |
 | --- | --- | --- |
-| File Transfer Server | hpcfs.kudpc.kyoto-u.ac.jp | Consists of two servers.<br>Dedicated SFTP and RSYNC servers with no time limits. |
+| System A | camphor.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
 | System B/Cloud  | laurel.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
 | System C  | cinnamon.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
 | System G | gardenia.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
 | Application Server | app.kudpc.kyoto-u.ac.jp | |
+| File Transfer Server | hpcfs.kudpc.kyoto-u.ac.jp | Consists of two servers.<br>Dedicated SFTP and RSYNC servers with no time limits. |
@@ -37,29 +37,31 @@ The host name for each system is as follows. Please log in with the login name (
 
 | System Name | Host Name | Notes |
 | --- | --- | --- |
-| File Transfer Server | hpcfs.kudpc.kyoto-u.ac.jp | Consists of two servers. Dedicated SFTP and RSYNC servers with no time limit. |
-| SysB/Cloud | laurel.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
-| SysC | cinnamon.kudpc.kyoto-u.ac.jp | Connect to the laurel login node with the Cinnamon environment loaded. |
+| SysA | camphor.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
+| SysB/Cloud | laurel.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
+| SysC | cinnamon.kudpc.kyoto-u.ac.jp | Consists of three login nodes. |
 | SysG | gardenia.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
-<!--
-| A | camphor.kudpc.kyoto-u.ac.jp | Consists of two login nodes. |
+| File Transfer Server | hpcfs.kudpc.kyoto-u.ac.jp | Consists of two servers. Dedicated SFTP and RSYNC servers with no time limit. |
 
 #### Fingerprint
 
 The fingerprint for each system is as follows. Please use it to confirm the fingerprint that is displayed when you first log in to the system via SSH.
 
-| System | Method | Fingerprint |
-| --- | --- | --- |
-| SysB/SysC/Cloud | RSA | **SHA256:** c3KPtVlPbgGW+jFM8VdGd4yob4HX/VPH7nR26JHS13M |
-| '' | ED25519 | **SHA256:** jt5nEfylQ+SjG5PUZtKqP1DXk56p+7ugwF8GHx8Nr/Q |
-| '' | ECDSA | **SHA256:** yJN7gkHzsrlRBwvYsTCiQkXnzEkdZmjMZLs9TO8YylU |
-| SysG | RSA | **SHA256:** 7GWj0iRCCMpswAWWyIZC+FMX3xlQVRyFiP9hIUYl3rU |
-| '' | ED25519 | **SHA256:** 8uh/6B14HEz77F1yAq95CyQHLqKqr4DG2Xys7eQr0qk |
-| '' | ECDSA | **SHA256:** PJX5f4jPqjTM4Awpod8fedTEXknIBamdexYK3fAzl6U |
-| File Transfer Server | RSA | **SHA256:** /ZeTSRgrxAo4Et1E9NouKs8xyhc/KAjgRpqaEgf37sM |
-| '' | ED25519 | **SHA256:** v0ABGIXX735Ak2By6zq1IuwPAPwLmJC8IwEhzAlm7ds |
-| '' | ECDSA | **SHA256:** RtD7xSYrSL/F6KZpZcjluDFlQ37CIVD6ij2GhEYOaO0 |
+| System | Method | Fingerprint |
+| --- | --- | --- |
+| SysA | RSA | **SHA256:** o/Ef0rC2uksvUax14XF6R9c3WHWypaSDfDjDJ0lkreQ |
+| '' | ED25519 | **SHA256:** yoBITHVW+ENaAAAlW+ZZDCUBnCNFEhBDsnWHjMHFKx0 |
+| '' | ECDSA | **SHA256:** 8j+LvvPha40b2ZwrN3J7s3fyzD+SxHU67/5MwKAUqVU |
+| SysB/SysC/Cloud | RSA | **SHA256:** c3KPtVlPbgGW+jFM8VdGd4yob4HX/VPH7nR26JHS13M |
+| '' | ED25519 | **SHA256:** jt5nEfylQ+SjG5PUZtKqP1DXk56p+7ugwF8GHx8Nr/Q |
+| '' | ECDSA | **SHA256:** yJN7gkHzsrlRBwvYsTCiQkXnzEkdZmjMZLs9TO8YylU |
+| SysG | RSA | **SHA256:** 7GWj0iRCCMpswAWWyIZC+FMX3xlQVRyFiP9hIUYl3rU |
+| '' | ED25519 | **SHA256:** 8uh/6B14HEz77F1yAq95CyQHLqKqr4DG2Xys7eQr0qk |
+| '' | ECDSA | **SHA256:** PJX5f4jPqjTM4Awpod8fedTEXknIBamdexYK3fAzl6U |
+| File Transfer Server | RSA | **SHA256:** /ZeTSRgrxAo4Et1E9NouKs8xyhc/KAjgRpqaEgf37sM |
+| '' | ED25519 | **SHA256:** v0ABGIXX735Ak2By6zq1IuwPAPwLmJC8IwEhzAlm7ds |
+| '' | ECDSA | **SHA256:** RtD7xSYrSL/F6KZpZcjluDFlQ37CIVD6ij2GhEYOaO0 |
+ 
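On first connection, ssh prints the server key's SHA256 fingerprint for you to compare against the table above. The same format can be reproduced locally with ssh-keygen, shown here on a throwaway key (any OpenSSH installation):

```shell
# Generate a throwaway ED25519 key, then print its SHA256 fingerprint
# in the same format that ssh shows on first connection.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/demo_key -q
ssh-keygen -lf /tmp/demo_key.pub
```

The second ssh-keygen call prints a line of the form `256 SHA256:<hash> <comment> (ED25519)`; the `SHA256:<hash>` part is what you compare against the table.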
 
 ## How to Login {#login}
 
@@ -80,20 +82,19 @@ Please refer to [File transfer with SSH](/login/transfer) for file transfer usin
 
 There are three supercomputer login nodes each in Systems B and C and the standard Cloud service, and two each in Systems A and G. They are operated with DNS round robin for load balancing. If a failure occurs on some of these nodes, login via DNS round robin may fail. In that case, please follow the procedure below and log in by specifying an individual host name.
 
-| System Name | Host Name |
+| System | Host |
 | ------ | ------ |
-| SysB/Cloud | `laurel31.kudpc.kyoto-u.ac.jp` |
+| SysA | `camphor31.kudpc.kyoto-u.ac.jp` |
+| '' | `camphor32.kudpc.kyoto-u.ac.jp` |
+| SysB/Cloud | `laurel31.kudpc.kyoto-u.ac.jp` |
 | '' | `laurel32.kudpc.kyoto-u.ac.jp` |
 | '' | `laurel33.kudpc.kyoto-u.ac.jp` |
-| SysC | `cinnamon32.kudpc.kyoto-u.ac.jp` |
+| SysC | `cinnamon31.kudpc.kyoto-u.ac.jp` |
 | '' | `cinnamon32.kudpc.kyoto-u.ac.jp` |
 | '' | `cinnamon33.kudpc.kyoto-u.ac.jp` |
-| SysG | `gardenia11.kudpc.kyoto-u.ac.jp` |
+| SysG | `gardenia11.kudpc.kyoto-u.ac.jp` |
 | '' | `gardenia12.kudpc.kyoto-u.ac.jp` |
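For example, if round-robin login to camphor fails, each of its nodes can be tried directly (the user ID `b12345` is a hypothetical example):

```nohighlight
$ ssh b12345@camphor31.kudpc.kyoto-u.ac.jp
$ ssh b12345@camphor32.kudpc.kyoto-u.ac.jp
```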
 
-
-
-
 <!--
 ### Failure notifications{#announce}
 Information on failures is announced on the [failure information page](https://kudpc.viewer.kintoneapp.com/public/hpc-problems). If a failure occurs on some login nodes, the host names of the unavailable nodes will be announced; please log in by specifying one of the other host names. Note that, depending on when the failure occurs, it may take some time before the announcement is posted.
@@ -77,6 +77,7 @@ When the fiscal year changes or the system configuration changes, we review the
 
 | login node | System Environment | Batch Processing Environment | Compile Environment |
 | ---------- | ------------------ | ---------------------------- | ------------------- |
+| camphor.kudpc.kyoto-u.ac.jp | SysA | slurm | intel, intelmpi, PrgEnvIntel |
 | laurel.kudpc.kyoto-u.ac.jp | SysB | slurm | intel, intelmpi, PrgEnvIntel |
 | cinnamon.kudpc.kyoto-u.ac.jp | SysC | slurm | intel, intelmpi, PrgEnvIntel |
 | gardenia.kudpc.kyoto-u.ac.jp | SysG | slurm | nvhpc, openmpi, PrgEnvNvidia |
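Assuming the compile-environment names in this table are selectable as modules (a sketch; the exact module names and commands are those documented for this system's module environment):

```nohighlight
$ module load PrgEnvIntel    # Intel compiler + Intel MPI environment
$ module list
```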
@@ -67,13 +67,22 @@ $ qgroup
 
 ## Initial value of computing resources and the maximum value that can be specified with the --rsc option{#resouce}
 
-| Option | Description | System B<br>Initial Value | <br>Maximum Value | System C<br>Initial Value | <br>Maximum Value | System G<br>Initial Value | <br>Maximum Value | Cloud<br>Initial Value | <br>Maximum Value |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| p | Number of processes | 1 | [Standard Resources](/run/resource#group)<br>(When c=1) | 1 | [Standard Resources](/run/resource#group)<br>(When c=1) | 1 | [Standard Resources](/run/resource#group)<br>(When c=1) | 1 | 1 |
-| t | Number of threads per process | 1 | 112(224) | 1 | 112(224) | 1 | 64(128) | 1 | 36(72) |
-| c | Number of cores per process | 1 | 112 | 1 | 112 | 1 | 64 | 1 | 36 |
-| m | Amount of memory per process<br>(Unit: M, G) | 4571M | 500G | 18392M | 2011G | 8000M | 500G | 14222M | 500G |
-| g | Number of GPUs | - | - | - | - | 1 | [Standard Resources](/run/resource#group) | - | - |
+| Option | Description | System A<br>Initial Value | <br>Maximum Value | System B<br>Initial Value | <br>Maximum Value | System C<br>Initial Value | <br>Maximum Value |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| p | Number of processes | 1 | [Standard Resources](/run/resource#group)<br>(When c=1) | 1 | [Standard Resources](/run/resource#group)<br>(When c=1) | 1 | [Standard Resources](/run/resource#group)<br>(When c=1) |
+| t | Number of threads per process | 1 | 112(224) | 1 | 112(224) | 1 | 112(224) |
+| c | Number of cores per process | 1 | 112 | 1 | 112 | 1 | 112 |
+| m | Amount of memory per process<br>(Unit: M, G) | 1142M | 128G | 4571M | 500G | 18392M | 2011G |
+
+<br>
+
+| Option | Description | System G<br>Initial Value | <br>Maximum Value | Cloud<br>Initial Value | <br>Maximum Value |
+| --- | --- | --- | --- | --- | --- |
+| p | Number of processes | 1 | [Standard Resources](/run/resource#group)<br>(When c=1) | 1 | 1 |
+| t | Number of threads per process | 1 | 64(128) | 1 | 36(72) |
+| c | Number of cores per process | 1 | 64 | 1 | 36 |
+| m | Amount of memory per process<br>(Unit: M, G) | 8000M | 500G | 14222M | 500G |
+| g | Number of GPUs | 1 | [Standard Resources](/run/resource#group) | - | - |
 
 
 * The value of t in () is the maximum value when hyper-threading is enabled. Please specify t=c×2 if hyper-threading is enabled.
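As a concrete reading of the first table, a System B job wanting 2 processes with 8 cores and 16G of memory each could request (a sketch; the `#SBATCH --rsc` directive form is an assumption based on the option syntax above, so check the job-submission pages for the exact wrapper):

```nohighlight
#SBATCH --rsc p=2:t=8:c=8:m=16G
```

All four values are within the System B maxima listed above (c ≤ 112, m ≤ 500G).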
@@ -84,7 +93,7 @@ $ qgroup
 
 * If you use System G, you can request resources with either the p/t/c/m options or the g option.
  * When you request resources with the p/t/c/m options, 1 GPU per 16 cores is automatically allocated.
  * When you request GPUs with the g option, the allocation is as follows.
-      * If g=1, the same parameter as p=1:c=16:m=128000M is automatically set as well as 1 GPU is allocated. 
+      * If g=1, parameters equivalent to p=1:c=16:m=128000M are set automatically and 1 GPU is allocated.
       * If g=2, parameters equivalent to p=2:c=16:m=128000M are set automatically and 2 GPUs are allocated. To use 2 GPUs in one process, specify the -n 1 option to the srun command in the job script. (Example: srun -n 1 ./a.out) Conversely, to increase the number of processes, also adjust the number of cores per process with the -c option of srun. (Example: srun -n 9 -c 1 ./a.out)
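A minimal System G job script following the g-option rule above might look like this (a sketch; the `#SBATCH --rsc` directive form is an assumption, and `a.out` is a placeholder binary):

```nohighlight
#!/bin/bash
#SBATCH --rsc g=2
srun -n 1 ./a.out    # one process using both allocated GPUs
```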
 
 <!--
@@ -86,6 +86,7 @@ Please note that the /tmp area is automatically deleted at the end of the job. T
 
 | System Name | Capacity per core |
 | ---------- | ----------------- |
+| System A | 2.4G |
 | System B | 8.9G |
 | System C | 8.9G |
 | System G | 15.6G |
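Since the /tmp capacity is per core, the space usable by a job is roughly the number of allocated cores times the per-core figure in the table. A quick check for a 16-core System B job:

```shell
# /tmp available to a job ~= allocated cores x capacity per core
# (16 cores on System B, 8.9G per core, from the table above)
awk 'BEGIN { printf "%.1fG\n", 16 * 8.9 }'
# prints 142.4G
```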