1 | 1 |
new file mode 100644 |
... | ... |
@@ -0,0 +1,169 @@ |
1 |
+--- |
|
2 |
+title: 'Logging In with FastX' |
|
3 |
+media_order: 'fastx_login_200.png,fastx_login_1022.png,fastx_login_101.png,fastx_login_102.png,fastx_login_103.png,fastx_login_104.png,fastx_login_105.png,fastx_login_106.png,fastx_login_107.png,fastx_login_108.png,fastx_login_109.png,fastx_login_110.png,fastx_login_201.png,fastx_login_202.png,fastx_login_203.png,fastx_login_204.png,fastx_login_205.png,fastx_login_301.png' |
|
4 |
+taxonomy: |
|
5 |
+ category: |
|
6 |
+ - docs |
|
7 |
+visible: true |
|
8 |
+published: true |
|
9 |
+--- |
|
10 |
+ |
|
11 |
+[toc] |
|
12 |
+ |
|
13 |
+## What is FastX?{#fastx} |
|
14 |
+ |
|
15 |
+FastX is an X Window System solution that connects to the supercomputer over HTTPS (443/TCP).
|
16 |
+Its data compression reduces communication traffic, so GUI-based applications can be used comfortably even from remote locations.
|
17 |
+ |
|
18 |
+A web browser such as Google Chrome or Firefox is required to use the FastX remote access service.
|
19 |
+There is no need to install dedicated client software or a browser plug-in.
|
20 |
+ |
|
21 |
+<!-- |
|
22 |
+専用のクライアントソフト(Windows版)からの利用のほか、ブラウザから利用することも可能です。この場合はクライアントソフトのインストール不要でご利用できます。 |
|
23 |
+--> |
|
24 |
+ |
|
25 |
+## Prerequisites{#before} |
|
26 |
+ |
|
27 |
+* Generate a key pair by following the procedure in [Generating Key Pair and Registering Public Key](/login/pubkey), and then register the public key from the [User Portal](https://web.kudpc.kyoto-u.ac.jp/portal/). Also, **the private key must be saved in PEM format (the OpenSSH default before version 7.8).**
|
28 |
+* Compatible browsers: Edge, Firefox, Chrome
|
29 |
+ * Bidirectional copy-pasting is only supported by Chrome. With other browsers, you can only copy and paste from the server side to the client side. |
|
30 |
+ |
|
31 |
+ |
|
32 |
+### Format of the Private Key{#format} |
|
33 |
+ |
|
34 |
+FastX does not support ECDSA or ED25519 keys, so please use an RSA key.
|
35 |
+A private key saved in the OpenSSH 7.8 or later default format cannot be loaded.
|
36 |
+In the OpenSSH 7.8 and later default format, the private key has the following contents.
|
37 |
+ |
|
38 |
+```nohighlight |
|
39 |
+-----BEGIN OPENSSH PRIVATE KEY----- |
|
40 |
+************************************* |
|
41 |
+************************************* |
|
42 |
+************************************* |
|
43 |
+-----END OPENSSH PRIVATE KEY----- |
|
44 |
+``` |
|
45 |
+ |
|
46 |
+To create a private key in the format used before OpenSSH 7.8 (PEM format) with the ssh-keygen command, use the `-m pem` option as follows.
|
47 |
+ |
|
48 |
+```nohighlight |
|
49 |
+$ ssh-keygen -t rsa -b 3072 -m pem |
|
50 |
+``` |
|
51 |
+ |
|
52 |
+A private key in the format used before OpenSSH 7.8 (PEM format) has the following contents.
|
53 |
+ |
|
54 |
+```nohighlight |
|
55 |
+-----BEGIN RSA PRIVATE KEY----- |
|
56 |
+Proc-Type: 4,ENCRYPTED |
|
57 |
+DEK-Info: AES-128-CBC,7A30F993641D4093A373703A3F644D2D |
|
58 |
+ |
|
59 |
+************************************* |
|
60 |
+************************************* |
|
61 |
+************************************* |
|
62 |
+-----END RSA PRIVATE KEY----- |
|
63 |
+``` |
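
If you already have a key saved in the OpenSSH 7.8+ default format, it can be rewritten in PEM format in place with `ssh-keygen -p`. The sketch below generates a throwaway key only so the example is self-contained; in practice, point `KEYFILE` at your real private key instead, and omit `-P ''`/`-N ''` so you are prompted for your passphrase.

```shell
# Sketch: convert an existing private key to PEM format in place.
# KEYFILE here is a throwaway key generated for the demo; in practice
# use KEYFILE=~/.ssh/id_rsa (or wherever your key is stored).
KEYFILE=$(mktemp -u)
ssh-keygen -q -t rsa -b 3072 -N '' -f "$KEYFILE"
# -p rewrites the key file; -m pem selects the legacy PEM format.
ssh-keygen -p -m pem -P '' -N '' -f "$KEYFILE"
# The first line of the file now identifies the format.
head -n 1 "$KEYFILE"
```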
|
64 |
+ |
|
65 |
+## How to Log in{#procedure} |
|
66 |
+<!-- |
|
67 |
+### ブラウザの場合{#procedure_browser} |
|
68 |
+--> |
|
69 |
+1. Start your browser and go to one of the following addresses. |
|
70 |
+ |
|
71 |
+ Destination | Address |
|
72 |
+ ---------- | ------------ |
|
73 |
+ System B/C | [https://laurel31.kudpc.kyoto-u.ac.jp/](https://laurel31.kudpc.kyoto-u.ac.jp/) |
|
74 |
+ System G | [https://gardenia11.kudpc.kyoto-u.ac.jp/](https://gardenia11.kudpc.kyoto-u.ac.jp/) |
|
75 |
+ Application Server | [https://app.kudpc.kyoto-u.ac.jp/](https://app.kudpc.kyoto-u.ac.jp/) |
|
76 |
+<!-- |
|
77 |
+ camphor31 | [https://camphor31.kudpc.kyoto-u.ac.jp/](https://camphor31.kudpc.kyoto-u.ac.jp/) |
|
78 |
+ camphor32 | [https://camphor32.kudpc.kyoto-u.ac.jp/](https://camphor32.kudpc.kyoto-u.ac.jp/) |
|
79 |
+ laurel31 | [https://laurel31.kudpc.kyoto-u.ac.jp/](https://laurel31.kudpc.kyoto-u.ac.jp/)
|
80 |
+ laurel32 | [https://laurel32.kudpc.kyoto-u.ac.jp/](https://laurel32.kudpc.kyoto-u.ac.jp/)
|
81 |
+ laurel33 | [https://laurel33.kudpc.kyoto-u.ac.jp/](https://laurel33.kudpc.kyoto-u.ac.jp/)
|
82 |
+ gardenia11 | [https://gardenia11.kudpc.kyoto-u.ac.jp/](https://gardenia11.kudpc.kyoto-u.ac.jp/) |
|
83 |
+ gardenia12 | [https://gardenia12.kudpc.kyoto-u.ac.jp/](https://gardenia12.kudpc.kyoto-u.ac.jp/) |
|
84 |
+ gardenia13 | [https://gardenia13.kudpc.kyoto-u.ac.jp/](https://gardenia13.kudpc.kyoto-u.ac.jp/) |
|
85 |
+ gardenia14 | [https://gardenia14.kudpc.kyoto-u.ac.jp/](https://gardenia14.kudpc.kyoto-u.ac.jp/) |
|
86 |
+ |
|
87 |
+ なお、https://laurel.kudpc.kyoto-u.ac.jp/ や https://camphor.kudpc.kyoto-u.ac.jp/ 、https://gardenia.kudpc.kyoto-u.ac.jp/ (DNSラウンドロビンアドレス)にアクセスしても接続できますが、ブラウザのDNSキャッシュの更新タイミングで接続が切れる可能性があります。 |
|
88 |
+--> |
|
89 |
+ |
|
90 |
+2. **(First time only)** Click "Manage Private Keys". A small window prompting you to upload the private key opens. Click "+" and select your private key in **PEM format**.
|
91 |
+ ![](01_pubkey_management.png?lightbox=100%&resize=600) |
|
92 |
+ ![](02_add_pubkey.png?lightbox=100%&resize=600) |
|
93 |
+ |
|
94 |
+3. **(First time only)** Check that the name of the uploaded private key is displayed in the blue frame and click 'Done'. |
|
95 |
+ ![](03_select_pubkey.png?lightbox=100%&resize=600) |
|
96 |
+ |
|
97 |
+4. Enter the user ID in the blue frame and click 'SSH Login'. |
|
98 |
+ ![](04_login.png?lightbox=100%&resize=600) |
|
99 |
+ |
|
100 |
+5. Enter the passphrase in the blue frame and click 'Submit'. |
|
101 |
+ ![](05_login_input_password.png?lightbox=100%&resize=600) |
|
102 |
+ |
|
103 |
+6. To launch a new desktop screen or terminal, click on '+'. |
|
104 |
+ ![](06_new_session.png?lightbox=100%&resize=600) |
|
105 |
+ |
|
106 |
+7. To use the desktop screen, double-click 'GNOME' in the blue frame; to use the terminal, double-click 'GNOME terminal' in the green frame. |
|
107 |
+ ![](07_choose_app.png?lightbox=100%&resize=600) |
|
108 |
+ |
|
109 |
+8. If a new tab opens and the desktop screen or terminal is displayed, the login has succeeded.
|
110 |
+Please note that the password for the GNOME desktop lock screen is your User Portal password (not the passphrase for the private key).
|
111 |
+ ![](08_gnome_desktop.png?lightbox=100%&resize=600) |
|
112 |
+ ![](09_gnome_terminal.png?lightbox=100%&resize=600) |
|
113 |
+ |
|
114 |
+9. By default, the keyboard layout is set to 'Japanese'. To switch to another layout, pull down the tab at the top of the screen and click the keyboard icon.
|
115 |
+ ![](10_select_keyboard.png?lightbox=100%&resize=600) |
|
116 |
+ |
|
117 |
+10. Change Layout to the desired language. |
|
118 |
+ ![](11_change_keyboard.png?lightbox=100%&resize=600) |
|
119 |
+ |
|
120 |
+<!-- |
|
121 |
+### クライアントソフトの場合(Windows){#procedure_clientsoft} |
|
122 |
+ |
|
123 |
+1. Windowsの場合は[Pageant](/install/pageant)を起動し、秘密鍵を登録しておきます。 |
|
124 |
+2. FastX アイコンをクリックして、FastX を起動します。 |
|
125 |
+3. FastX の起動ウィンドウが開いたら、左上の「+」アイコンをクリックします。 |
|
126 |
+ ![](fastx_login_206.png) |
|
127 |
+4. Edit Connection ウインドウで、`ssh`を選択し、「Host」に `laurel.kudpc.kyoto-u.ac.jp` , 「User」に利用者番号、「Name」に`laurel` を入力し、「OK」をクリックします。 |
|
128 |
+ ![](fastx_login_207.png) |
|
129 |
+5. 登録された`laurel`をダブルクリックします。 |
|
130 |
+ ![](fastx_login_208.png) |
|
131 |
+6. laurelに接続成功すると、以下の画面が現れますので、左上の「+」アイコンをクリックします。 |
|
132 |
+ ![](fastx_login_209.png) |
|
133 |
+7. デスクトップを使う場合は「GNOME Desktop」をダブルクリック、ターミナルを使う場合は「GNOME terminal」をダブルクリックします。 |
|
134 |
+ ![](fastx_login_210.png) |
|
135 |
+8. desktopまたはターミナルが表示されればログイン成功です。 |
|
136 |
+ |
|
137 |
+### クライアントソフトの場合(Mac){#procedure_clientsoft_mac} |
|
138 |
+ |
|
139 |
+ログインの操作はWindows版と同様です。 |
|
140 |
+なお、起動時に下記の表示が出た場合は、FastXアイコンをFinderで表示し、Controlキーを押しながらアイコンをクリックして、ショートカットメニューから「開く」を選択してください。 |
|
141 |
+![](fastx_login_200.png?lightbox=100%&resize=600) |
|
142 |
+参考情報:[https://support.apple.com/ja-jp/guide/mac-help/mh40616/mac](https://support.apple.com/ja-jp/guide/mac-help/mh40616/mac) |
|
143 |
+--> |
|
144 |
+ |
|
145 |
+## Notes{#notice} |
|
146 |
+ |
|
147 |
+### The Number of Simultaneous Sessions{#max_num_session}
|
148 |
+ |
|
149 |
+When a single user opens multiple sessions (desktops or terminals) on the same login node, only one license is consumed; however, sessions opened on different login nodes each consume a separate license. Each user may use up to 3 licenses. If you try to open more sessions than that, a message indicating a lack of licenses is displayed.
|
150 |
+ |
|
151 |
+### Session Timeout{#timeout} |
|
152 |
+ |
|
153 |
+With FastX, a session is kept alive after you disconnect, and you can reconnect to it later. Note, however, that a session left disconnected for 2 days is automatically deleted.
|
154 |
+ |
|
155 |
+<!-- |
|
156 |
+### GNOME Desktopからのログアウトについて{#desktop} |
|
157 |
+ |
|
158 |
+GNOME Desktopからのログアウトは、画面右上の電源マークを押し、出てきたメニューの中からユーザ名を押して、「Log Out」からログアウトしてください。セッションが不要になった場合は、なるべくログアウトをお願いします。なお、電源オプションからのログアウトはできません。 |
|
159 |
+ |
|
160 |
+![](fastx_login_301.png?lightbox=100%&resize=600) |
|
161 |
+--> |
|
162 |
+ |
|
163 |
+## References{#fyi} |
|
164 |
+### Client Software{#client} |
|
165 |
+Client software is distributed on [the developer's official website](https://www.starnet.com/download/fastx-client). |
|
166 |
+Please use it if necessary. |
|
167 |
+ |
|
168 |
+Select **SSH** as the communication protocol.
|
169 |
+Also, password authentication is not available, so an SSH agent is required (such as [Pageant](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html), which is included with PuTTY for Windows, or ssh-agent for macOS and Linux).
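
On macOS or Linux, loading a key into ssh-agent might look like the following sketch. It generates a throwaway key only so the commands are self-contained; in practice you would `ssh-add` the private key registered for the supercomputer (e.g. `ssh-add ~/.ssh/id_rsa`) and enter its passphrase when prompted.

```shell
# Start an agent for this shell session.
eval "$(ssh-agent -s)"
# Throwaway demo key; in practice, ssh-add your real key file instead.
KEY=$(mktemp -u)
ssh-keygen -q -t rsa -b 3072 -N '' -f "$KEY"
ssh-add "$KEY"
# Confirm the key is loaded.
ssh-add -l
```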
... | ... |
@@ -74,6 +74,7 @@ DEK-Info: AES-128-CBC,7A30F993641D4093A373703A3F644D2D |
74 | 74 |
---------- | ------------ |
75 | 75 |
システムB/C | [https://laurel31.kudpc.kyoto-u.ac.jp/](https://laurel31.kudpc.kyoto-u.ac.jp/) |
76 | 76 |
システムG | [https://gardenia11.kudpc.kyoto-u.ac.jp/](https://gardenia11.kudpc.kyoto-u.ac.jp/) |
77 |
+ アプリケーションサーバ | [https://app.kudpc.kyoto-u.ac.jp/](https://app.kudpc.kyoto-u.ac.jp/) |
|
77 | 78 |
<!-- |
78 | 79 |
camphor31 | [https://camphor31.kudpc.kyoto-u.ac.jp/](https://camphor31.kudpc.kyoto-u.ac.jp/) |
79 | 80 |
camphor32 | [https://camphor32.kudpc.kyoto-u.ac.jp/](https://camphor32.kudpc.kyoto-u.ac.jp/) |
... | ... |
@@ -1,21 +1,21 @@ |
1 | 1 |
--- |
2 |
-title: 'For Users of the Previous System' |
|
2 |
+title: 'For Users of the Previous Systems' |
|
3 | 3 |
taxonomy: |
4 | 4 |
category: |
5 | 5 |
- docs |
6 | 6 |
--- |
7 | 7 |
|
8 |
-This information is for users to migrate the previous system to the new system replaced in fiscal 2016. |
|
9 |
- |
|
10 | 8 |
[toc] |
11 | 9 |
|
10 |
+This information is for users migrating from the previous system to the new system installed in fiscal 2022.
|
11 |
+ |
|
12 | 12 |
## Host Name and How to Log In{#login} |
13 | 13 |
|
14 |
-There are no changes from the previous system in the host name (round robin) and how to log in. If you want to log in to the login node directly without going through the round robin, please refer to [how to connect to the system](/login#fqdn-1) because the specific host name has been changed ( IP address is not changed). |
|
14 |
+There are no changes from the previous system in the host name (round robin) and how to log in. |
|
15 |
+If you want to log in to the login node directly without going through the round robin, please refer to [Access](/login) because the specific host name has been changed. |
|
15 | 16 |
|
16 | 17 |
### If you encounter an error when logging in{#login_error}
17 |
- |
|
18 |
-If you use the Portforwarder or PuTTY terminal and cannot connect to the supercomputer with the following message, you must delete the known_hosts information. |
|
18 |
+If the following message appears and you cannot log in, you need to delete the known_hosts information. |
|
19 | 19 |
|
20 | 20 |
```nohighlight |
21 | 21 |
@@@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @@@ |
... | ... |
@@ -33,178 +33,204 @@ requested strict checking. |
33 | 33 |
Host key verification failed. |
34 | 34 |
``` |
35 | 35 |
|
36 |
-Please delete the known_host information with the following instruction. |
|
37 |
- |
|
38 |
- |
|
39 |
-#### Windows |
|
40 |
- |
|
41 |
-** if you use Portforwarder ** |
|
42 |
- |
|
43 |
-The known_hosts file exists in the directory the Portforwarder has been installed. |
|
44 |
-All you have to do is to delete the known_hosts file. |
|
45 |
- |
|
46 |
-** if you use PuTTY ** |
|
47 |
- |
|
48 |
-The known_hosts information is in the registry. |
|
49 |
-You can delete the known_hosts information by the accessory tool of PuTTY. |
|
50 |
- |
|
51 |
-* Open a command prompt window, then use cd command to move to the directory PuTTY has been installed. |
|
52 |
-* Type "putty -cleanup", then the known_hosts is deleted. |
|
53 |
- |
|
54 |
- |
|
55 |
- |
|
56 |
-#### Mac |
|
57 |
- |
|
58 |
-You can delete the known_hosts information with one of the following methods. |
|
59 |
- |
|
60 |
- * ssh-keygen command: type the following commands on the terminal |
|
61 |
- |
|
62 |
- ````nohighlight |
|
63 |
- ssh-keygen -R camphor.kudpc.kyoto-u.ac.jp |
|
64 |
- ssh-keygen -R laurel.kudpc.kyoto-u.ac.jp |
|
65 |
- ```` |
|
66 |
- |
|
67 |
-* edit known_hosts file |
|
68 |
- |
|
69 |
- Open /Users/(user name)/.ssh/known_hosts by an editor app, then delete the content. |
|
70 |
- |
|
71 |
-#### Linux |
|
72 |
- |
|
73 |
-You can delete the known_hosts information with one of the following methods. |
|
74 |
- |
|
75 |
-* ssh-keygen command: type the following commands one the terminal |
|
76 |
- |
|
77 |
- ````nohighlight |
|
78 |
- ssh-keygen -R camphor.kudpc.kyoto-u.ac.jp |
|
79 |
- ssh-keygen -R laurel.kudpc.kyoto-u.ac.jp |
|
80 |
- ```` |
|
81 |
- |
|
82 |
-* edit known_hosts file |
|
83 |
- |
|
84 |
- Open the known_hosts file by an editor app. Usually the file exists on /home/(user name)/.ssh directory. Then, delete the content. |
|
85 |
- |
|
86 |
-## Logging In With Exceed onDemand{#eod} |
|
87 |
- |
|
88 |
-When logging in to the system with Exceed onDemand, port number of port forwarding has been changed. |
|
89 |
- |
|
90 |
-For how to log in with Exceed onDemand, see [Logging In With Exceed onDemand](/login/eod) |
|
91 |
- |
|
92 |
-System|Previous port number|New port number |
|
93 |
--|-|- |
|
94 |
-A|5500|5501 |
|
95 |
-B,C|5500|5500 (Not changed) |
|
96 |
-Private Cluster|5500|5509 |
|
97 |
- |
|
98 |
- |
|
99 |
- |
|
100 |
-## Data migration {#mv_data} |
|
36 |
+You can delete the known_hosts information with the following instructions.
|
101 | 37 |
|
38 |
+#### Terminal{#terminal} |
|
39 |
+* Use the ssh-keygen command<br> |
|
40 |
+* (Example) Delete the known_hosts entry for laurel
|
41 |
+```nohighlight |
|
42 |
+$ ssh-keygen -R laurel.kudpc.kyoto-u.ac.jp |
|
43 |
+``` |
|
44 |
+* Edit the known_hosts file directly.
|
45 |
+1. Open the file `%homepath%\.ssh\known_hosts` (Windows) or `~/.ssh/known_hosts` (macOS, Linux) in an editor.
|
46 |
+2. Delete the contents and save the file.
|
47 |
+ |
|
48 |
+#### MobaXterm{#mobaxterm} |
|
49 |
+1. Exit MobaXterm. |
|
50 |
+2. Open `%appdata%\MobaXterm\MobaXterm.ini` in an editor. |
|
51 |
+3. Delete the information for the relevant host in [SSH_Hostkeys]. |
|
52 |
+```nohighlight |
|
53 |
+ssh-ed25519@22:laurel.kudpc.kyoto-u.ac.jp=0xd152edcd(remainder omitted)
|
54 |
+``` |
|
55 |
+4. Start MobaXterm. |
|
56 |
+ |
|
57 |
+## How to initialize $HOME/.bashrc{#bashrc} |
|
58 |
+The module configuration, application locations, and environment variables have changed from the previous system.
|
59 |
+If you used a customized .bashrc on the previous system, please modify your .bashrc as necessary.<br>
|
60 |
+You can also initialize the .bashrc by copying /etc/skel/.bashrc to your home directory as follows.<br> |
|
61 |
+If you cannot log in, please let us know using the [Inquiries Form](https://www.iimc.kyoto-u.ac.jp/en/inquiry/?q=consult), and we will initialize the shell configuration file with administrator privileges. |
|
62 |
+* To copy /etc/skel/.bashrc to your home directory:
|
63 |
+```nohighlight |
|
64 |
+$ cp /etc/skel/.bashrc $HOME |
|
65 |
+``` |
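
If you want to keep your customizations for reference, you may back up the current file before overwriting it, as in this sketch (run in your home directory on the login node):

```shell
# Keep a backup of the current .bashrc, then restore the system default.
cp "$HOME/.bashrc" "$HOME/.bashrc.bak"   # skip if you have no existing .bashrc
cp /etc/skel/.bashrc "$HOME"
```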
|
102 | 66 |
|
103 |
-User data that had been saved in the previous system was migrated automatically and the absolute path was not changed. |
|
67 |
+## $HOME/.ssh directory{#ssh} |
|
68 |
+Starting with the new system, SSH public keys are managed centrally in the User Portal. Accordingly, the .ssh directory in the home directory ($HOME) has been moved under $HOME/DOTFILES_20221108/. If you do not need it, please delete it.
|
104 | 69 |
|
70 |
+## Data migration{#mv_data} |
|
105 | 71 |
|
106 |
-In accordance with the system replacement, the capacity of the home directory and the large volume disk (LARGE) of the system E is increased. |
|
72 |
+User data that had been saved in the previous system was migrated automatically. |
|
107 | 73 |
|
74 |
+<!-- |
|
75 |
+ホームディレクトリの容量は100GBです。パーソナル/グループ/専用クラスタの容量は、2022年度は前システムの容量を継承します。2023年度以降は大型計算機システム利用負担金規程の通りです。 |
|
108 | 76 |
|
109 |
-Directory|Capacity of the previous system|Capacity of new system |
|
77 |
+ディレクトリ|前システムの容量|新システムの容量 |
|
110 | 78 |
---------|----------------|--------------- |
111 |
-Home directory |30GB |100GB |
|
112 |
-Large volume disk(Personal Course) |1TB|2TB |
|
79 |
+ホームディレクトリ |100GB |100GB |
|
80 |
+大容量ストレージ(パーソナルコース) |3TB|8TB |
|
113 | 81 |
|
114 |
-The capacity of the large volume disk follows the following formula. |
|
82 |
+グループコースの大容量ストレージの容量は、以下の計算式に従います。 |
|
115 | 83 |
|
116 |
-**System A,B** |
|
84 |
+**システムA,B** |
|
117 | 85 |
|
118 |
-Type |Capacity of the previous system|Capacity of new system |
|
86 |
+タイプ |前システムの容量 |新システムの容量 |
|
119 | 87 |
----------|------------------|---------------- |
120 |
-Third priority|1.2TB x contract number of nodes|3.6TB x contract number of nodes |
|
121 |
-Second priority |2.0TB x contract number of nodes|6.0TB x contract number of nodes |
|
122 |
-Top priority |2.0TB x contract number of nodes|6.0TB x contract number of nodes |
|
88 |
+準々優先 | - | 6.4TB x 契約ノード数 |
|
89 |
+準優先 |3.6TB x 契約ノード数|9.6TB x 契約ノード数 |
|
90 |
+優先 |6.0TB x 契約ノード数|16.0TB x 契約ノード数 |
|
91 |
+占有 |6.0TB x 契約ノード数|16.0TB x 契約ノード数 |
|
123 | 92 |
|
124 |
-**System C** |
|
93 |
+**システムC** |
|
125 | 94 |
|
126 |
-Type |Capacity of the previous system|Capacity of new system |
|
95 |
+タイプ |前システムの容量 |新システムの容量 |
|
127 | 96 |
----------|------------------|---------------- |
128 |
-Third priority|1.2TB x contract number of nodes|14.4TB x contract number of nodes |
|
129 |
-Second priority |2.0TB x contract number of nodes|24TB x contract number of nodes |
|
97 |
+準々優先 | - | 6.4 x 契約ノード数 |
|
98 |
+優先 |24TB x 契約ノード数|16TB x 契約ノード数 |
|
130 | 99 |
|
131 |
-For the configuration of the file system of the new system, see [Using File System](/filesystem). |
|
132 |
- |
|
133 |
- |
|
134 |
-## Process Limit of Login Nodes{#process_limit} |
|
135 |
- |
|
136 |
-The limit of CPU time and memory amount in the login nodes are extended to reduce a possibility of unintentional abortion of file transaction, etc. |
|
100 |
+**システムG** |
|
137 | 101 |
|
102 |
+タイプ |前システムの容量 |新システムの容量 |
|
103 |
+----------|------------------|---------------- |
|
104 |
+準々優先 | - | 6.4 x 契約ノード数 |
|
105 |
+優先 | - |16TB x 契約ノード数 |
|
106 |
+--> |
|
107 |
+### Large Volume Storage (LARGE){#large} |
|
108 |
+/LARGE2 has been consolidated into /LARGE0, and /LARGE3 into /LARGE1. Links are in place so that existing /LARGE2 and /LARGE3 paths still resolve to /LARGE0 and /LARGE1 respectively; however, these links will be removed in the future, so please update your paths.
|
138 | 109 |
|
139 |
-System | CPU Time(normal) | CPU Time(maximum) | Memory(normal) |
|
140 |
-Previous system | 1 hour | 20 hours | 2GB |
|
141 |
-New system | 4 hour | 24 hours | 4GB |
|
110 |
+In addition, quota management for large volume storage has been changed from Group Quota to Project Quota. |
|
111 |
+As a result, capacity is managed per directory path of the large volume storage, not by the group to which each file belongs.
|
142 | 112 |
|
143 |
-## System Compatibility{#compatibility_incompatibility} |
|
113 |
+For details on the file system configuration of the new system, please refer to [Use of Storage](/filesystem). |
|
144 | 114 |
|
145 | 115 |
|
146 |
-<!-- in preparation |
|
147 |
-### Application Software{#application} |
|
148 |
-The new system has made the following changes to the application software. |
|
116 |
+## Process limit of login node{#process_limit} |
|
149 | 117 |
|
118 |
+The CPU time and memory limits on each system's login nodes have been extended to avoid interruptions during file transfers to PCs.
|
150 | 119 |
|
151 |
-#### **Newly provided application software** |
|
120 |
+System | CPU time (standard)| CPU time (maximum) | Amount of memory (standard) |
|
121 |
+-------- | ------------- | ------------- | ------------- |
|
122 |
+Previous system | 4 hours | 24 hours | 8GB |
|
123 |
+New system |4 hours | 24 hours | 16GB |
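
You can check the limits currently in effect for your login shell with the `ulimit` builtin. A sketch (the actual values depend on the system's configuration):

```shell
# Show the CPU-time limit (seconds) and virtual-memory limit (KB)
# in effect for the current shell; each prints "unlimited" if unset.
ulimit -t
ulimit -v
```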
|
152 | 124 |
|
125 |
+## Changes{#compatibility_incompatibility} |
|
153 | 126 |
|
154 |
-* [Allinea MAP/DDT](/compilers/allinea-map-ddt) |
|
127 |
+### OS |
|
128 |
+The OS will be changed from CLE/RHEL 7 to RHEL 8. |
|
155 | 129 |
|
156 | 130 |
### Compilers and Libraries{#compiler} |
157 | 131 |
|
158 |
-* The compiler continue to provide Cray, Intel, PGI and GNU. |
|
159 |
- |
|
160 |
-* In the new system, ACML library is not available. |
|
161 |
- |
|
162 |
-* Since the new system does not support GPU, GPU-related CUDA C, CUDA Fortran and OpenACC are not available. |
|
163 |
- |
|
132 |
+Intel, NVIDIA HPC SDK, and GNU compilers will be provided. The Cray compiler will no longer be provided.
|
164 | 133 |
|
165 | 134 |
|
166 | 135 |
### Batch Job Scheduler{#lsf} |
167 | 136 |
|
137 |
+The job scheduler will be changed from PBS to Slurm. |
|
138 |
+ |
|
168 | 139 |
#### Comparisons of Job Script Options |
169 | 140 |
|
170 |
-Purpose| LSF | PBS |
|
141 |
+Purpose| PBS | Slurm |
|
171 | 142 |
:--------------:|:-------------:|:-------------: |
172 |
-Specify the queue for submitting a job.|-q _QUEUENAME_ |Not changed |
|
173 |
-Specify the execution group. | -ug _GROUPNAME_ | Not changed |
|
174 |
-Specify running elapse time for batch-requests | -W _HOUR_ : _MIN_ | Not changed |
|
175 |
-\* Establish per-request Vnodes limits <br>\* Establish per-vnode thread limit <br>\* Establish per-vnode CPUS limit <br>\* Establish per-vnode/per-cpu normal_page memory limits |-A p=_X_:t=_X_:c=_X_:m=_X_ | Not changed |
|
176 |
- Direct stdout output to the stated destination | -o _FILENAME_ | Not changed |
|
177 |
- Direct stderr output to the stdout destination| -e _FILENAME_| Not changed |
|
178 |
-Merge stderr output and stdout output |-o | **-j oe(output to STDOUT) / eo(output to STDERR)** |
|
179 |
-Send mail |-B/-N| **-m a(in job abortion) / b(in beginning job) / e(in ending job)** |
|
180 |
-Send mail for the batch request to the stated user |-u _MAILADDR_|**-M** _MAILADDR_ |
|
181 |
- Declare that batch request is not re-runnable |-rn | **-r n (A space character is needed between r and n.)** |
|
143 |
+ Specify the queue to submit jobs|-q _QUEUENAME_ | -p _QUEUENAME_ |
|
144 |
+ Specify the execution group | -ug _GROUPNAME_ | Not required |
|
145 |
+ Specify the elapsed time limit | -W _HOUR_ : _MIN_ | -t _HOUR_:_MIN_ |
|
146 |
+・ Specify the number of processes <br>・ Specify the number of threads per process<br>・ Specify the number of CPU cores per process<br>・ Specify the memory size per process |-A p=_X_:t=_X_:c=_X_:m=_X_ | --rsc p=_X_:t=_X_:c=_X_:m=_X_ |
|
147 |
+ Specify the standard output file name | -o _FILENAME_ | Not changed |
|
148 |
+ Specify standard error output file name| -e _FILENAME_| Not changed |
|
149 |
+Summarize standard error output | -j oe(Output to standard output) / eo(Output to standard error) | Not changed |
|
150 |
+Send Email|-m a(when a job interrupted) / b(When started) / e(When ended) | --mail-type=BEGIN(When started)/END(when ended)/FAIL(When a job interrupted)/REQUEUE(when re-executed)/ALL(All) |
|
151 |
+Specify email address|-M _MAILADDR_ | --mail-user=_MAILADDR_ |
|
152 |
+Specify prohibition of job re-execution when failure occurs | -r n | --no-requeue |
|
182 | 153 |
|
183 | 154 |
|
184 | 155 |
|
185 |
-#### Comparisons of Job Commands |
|
156 |
+#### Comparison of job-related commands |
|
186 | 157 |
|
187 |
-Purpose| LSF | PBS |
|
158 |
+Purpose| PBS | Slurm |
|
188 | 159 |
:-----------------------:|:-----------------------------:|:-----------------------------: |
189 |
-Check available queues | qstat|**qstat -q** |
|
190 |
-Submit a job | qsub |Not changed |
|
191 |
-Check job status |qjobs | **qstat** |
|
192 |
-Cancel a submitted job | qkill| **qdel** |
|
193 |
-View a user’s own Job |qs | Not changed |
|
160 |
+Check the queue where jobs can be submitted | qstat -q | spartition |
|
161 |
+Submit a job to the queue| qsub | sbatch |
|
162 |
+Check job status |qstat | squeue |
|
163 |
+Cancel a submitted job. | qdel | scancel |
|
164 |
+Check job details|qs | sacct -l |
|
165 |
+ |
|
166 |
+ |
|
194 | 167 |
|
195 | 168 |
|
196 | 169 |
#### Comparisons of Environment Variables |
197 | 170 |
|
198 |
-Purpose | LSF| PBS |
|
171 |
+Purpose | PBS | Slurm |
|
199 | 172 |
:--------------------------:|:------------------------------------------:|:--------------------------------------: |
200 |
-Job request identifier | LSB_JOBID |**QSUB_JOBID** |
|
201 |
-Batch queue name | LSB_QUEUE | **QSUB_QUEUE** |
|
202 |
-Current directory of job submission | LSB_SUBCWD |**QSUB_WORKDIR** |
|
203 |
-Number of assigned processes when submitting a job|LSB_PROCS| **QSUB_PROCS** |
|
204 |
-Number of threads per assigned process when submitting a job | LSB_THREADS| **QSUB_THREADS** |
|
205 |
-Number of CPUs per assigned process when submitting a job | LSB_CPUS | **QSUB_CPUS** |
|
206 |
-Memory limit per assigned process |LSB_MEMORY | **QSUB_MEMORY** |
|
207 |
-Number of processes per node when executing jobs | LSB_PPN | **QSUB_PPN** |
|
208 |
- |
|
173 |
+ Job ID | QSUB_JOBID | SLURM_JOBID |
|
174 |
+ Name of the queue where the job was submitted| QSUB_QUEUE | SLURM_JOB_PARTITION |
|
175 |
+ Current directory where the job was submitted| QSUB_WORKDIR | SLURM_SUBMIT_DIR |
|
176 |
+ Number of processes allocated when executing a job|QSUB_PROCS | SLURM_DPC_NPROCS |
|
177 |
+ Number of threads allocated per process when executing a job| QSUB_THREADS | SLURM_DPC_THREADS |
|
178 |
+ Number of CPU cores allocated per process when executing a job| QSUB_CPUS | SLURM_DPC_CPUS |
|
179 |
+ Upper limit for the amount of memory allocated per process when executing a job |QSUB_MEMORY| - |
|
180 |
+ Number of processes placed per node when executing a job | QSUB_PPN | - |
|
181 |
+ |
|
182 |
+## Job Script Conversion{#pbs2slurm} |
|
183 |
+You can convert job script commands and options used in the PBS environment for Slurm with the **pbs2slurm** command. |
|
184 |
+ |
|
185 |
+#### Format |
|
186 |
+```nohighlight |
|
187 |
+pbs2slurm input_script [output_script] |
|
188 |
+``` |
|
189 |
+ |
|
190 |
+#### Examples |
|
191 |
+```nohighlight |
|
192 |
+[b59999@camphor1 script]$ cat pbs.sh |
|
193 |
+#!/bin/bash |
|
194 |
+#======Option======== |
|
195 |
+#QSUB -q gr19999b |
|
196 |
+#QSUB -A p=1:t=1:c=1:m=1G |
|
197 |
+#QSUB -W 12:00 |
|
198 |
+#QSUB -r n |
|
199 |
+#QSUB -M kyodai.taro.1a@kyoto-u.ac.jp |
|
200 |
+#QSUB -m be |
|
201 |
+#====Shell Script==== |
|
202 |
+mpiexec.hydra ./a.out |
|
203 |
+ |
|
204 |
+[b59999@camphor1 script]$ pbs2slurm pbs.sh slurm.sh |
|
205 |
+ |
|
206 |
+[b59999@camphor1 script]$ cat slurm.sh |
|
207 |
+#!/bin/bash |
|
208 |
+#======Option======== |
|
209 |
+#SBATCH -p gr19999b |
|
210 |
+#SBATCH --rsc p=1:t=1:c=1:m=1G |
|
211 |
+#SBATCH -t 12:00 |
|
212 |
+#SBATCH --no-requeue |
|
213 |
+#SBATCH --mail-user=kyodai.taro.1a@kyoto-u.ac.jp |
|
214 |
+#SBATCH --mail-type=BEGIN,END |
|
215 |
+#====Shell Script==== |
|
216 |
+srun ./a.out |
|
217 |
+``` |
|
218 |
+ |
|
219 |
+#### Options for conversion |
|
220 |
+The pbs2slurm command supports conversion of the following options. |
|
221 |
+Options not listed below must be modified manually.
|
222 |
+ |
|
223 |
+| Before conversion | After conversion | Purpose | |
|
224 |
+|------- | ------ | -------| |
|
225 |
+|#QSUB -q | #SBATCH -p | Specify queues | |
|
226 |
+|#QSUB -A | #SBATCH --rsc | Specify resources| |
|
227 |
+|#QSUB -W | #SBATCH -t | Specify elapsed time| |
|
228 |
+|#QSUB -N | #SBATCH -J | Specify job name| |
|
229 |
+|#QSUB -o | #SBATCH -o | Specify the destination for standard output| |
|
230 |
+|#QSUB -e | #SBATCH -e | Specify the destination for standard error output| |
|
231 |
+|#QSUB -m | #SBATCH --mail-type | Specify the timing of email sending| |
|
232 |
+|#QSUB -M | #SBATCH --mail-user | Specify the recipient of the email| |
|
233 |
+|#QSUB -r n | #SBATCH --no-requeue | Prohibit job re-execution |
|
234 |
+|#QSUB -J | #SBATCH -a | Specify array job| |
|
235 |
+|mpiexec | srun | MPI execution (if options are present, they must be removed manually)|
|
236 |
+|mpiexec.hydra | srun | MPI execution (if options are present, they must be removed manually)|
... | ... |
@@ -21,6 +21,7 @@ Option | Description |
21 | 21 |
--rsc p=_PROCS_:t=_THREADS_:c=_CORES_:m=_MEMORY_ <br> or <br> --rsc g=_GPU_ | Specify the amount of job-allocated resources. For more details, [click here](/run/resource#resouce) | tssrun -p gr19999b --rsc p=4:t=8:c=8:m=2G ./a.out <br> or <br> tssrun -p gr19999b --rsc g=1 ./a.out |
22 | 22 |
--x11 | Execute GUI program on computing nodes | tssrun -p gr19999b --x11 xeyes |
23 | 23 |
|
24 |
+* The part *gr19999b* in the example must be changed to your own queue name. |
|
24 | 25 |
* When you enter the command, several messages are displayed; then the program begins execution and the results are displayed.
25 | 26 |
* If the computing nodes for interactive execution are congested, execution may not start immediately after the message is displayed. |
26 | 27 |
* Interactive execution with queue for personal and group courses is also available. |
... | ... |
@@ -153,8 +153,7 @@ Available /tmp capacity in the cloud system is obtained by `Number of processes |
153 | 153 |
|
154 | 154 |
For example, if you submit a job with 4 processes (8 cores per process), you will be allocated **3,008GB** from `4 x 8 x 94`. |
155 | 155 |
|
156 |
-## Supplemental information on storage access for cloud system |
|
157 |
-{#supplemental} |
|
156 |
+## Supplemental information on storage access for cloud system{#supplemental} |
|
158 | 157 |
|
159 | 158 |
### Access to home directories and large volume storage{#storage} |
160 | 159 |
In the cloud system, the files can be accessed with the same PATH since the home directory ($HOME) and large volume storage (/LARGE) are mounted in the same way as on-premise systems such as system B. |