This page explains how to use the supercomputer system of the Academic Center for Computing and Media Studies, Kyoto University (ACCMS) through HPCI.
When your registration for use of our supercomputer is complete, we will send you a Registration Completion Notice by email. Once you receive the Registration Completion Notice, please follow the Procedure to Start the Service.
The user number notified in the “Registration Completion Notice” is used for the purposes shown in the table below; it may fall into both categories. You can confirm your primary center and computing resources from the HPCI online application system.
| Category | Usage of user number |
| --- | --- |
| Those who designate Kyoto University as the primary center | Use this user number as an HPCI account when web authentication is required, such as when issuing a certificate. |
| Those who use computing resources of Kyoto University | This login ID is valid only within the computing resources of Kyoto University. Although it is rarely needed within HPCI, it can be used to connect directly with SSH. |
Please refer to the manual provided by HPCI for how to issue electronic certificates and log in. The host names for the computing resources of Kyoto University are listed below. Because we perform a registration process to permit login after the electronic certificate is issued, please wait about 15 minutes before logging in.
| System name | Host name |
| --- | --- |
| System A (Cray XC40) | camphor.kudpc.kyoto-u.ac.jp |
HPCI uses an electronic certificate to log in to a resource provider over SSH (GSI-SSH) with GSI (Grid Security Infrastructure) authentication.
The gsissh and myproxy-logon commands required for GSI-SSH are installed on the computing resources of Kyoto University.
Those who use the computing resources of Kyoto University can therefore log in here and reach the other system components without building their own HPCI environment. In that case, use the myproxy-logon command (to obtain a proxy certificate) and the gsissh command as follows.
```
## Obtain proxy certificate (replace hpci00XXXX with your own HPCI-ID)
$ myproxy-logon -s portal.hpci.nii.ac.jp -l hpci00XXXX
## Log in to other resource providers
$ gsissh host01.example.jp
```
For how to use the system, please refer to Access. The system available through HPCI is System A (Cray XC40).
To use the computing resources through HPCI, specify the following queue names when submitting batch jobs. For details on how to use the batch system, please refer to Batch Processing (For System A). The HPCI project ID included in the queue name is the project ID at the time of initial adoption.
| Classification | System | Type | Queue name | Number of nodes (FY2022) | Remarks |
| --- | --- | --- | --- | --- | --- |
| HPCI | A | Full-year use | hpa | 200 nodes | Nodes are shared among HPCI users. Please avoid occupying resources exclusively for a long time, because they are shared by multiple projects. |
| HPCI-JHPCN | A | Full-year use | jha | 52 nodes | Nodes are shared among HPCI-JHPCN users. Please avoid occupying resources exclusively for a long time, because they are shared by multiple projects. |
| HPCI-JHPCN | A | Intensive use | jhXXXXXXa | 64 nodes | Replace "jhXXXXXX" with the project ID. The period of use is notified individually to the project representative. |
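As a small illustration of the naming rule in the table above, the intensive-use queue name can be derived from an HPCI-JHPCN project ID by appending the System A suffix "a" (the project ID "jh220001" below is a made-up example, not a real project):

```python
def intensive_queue_name(project_id: str) -> str:
    """Build the intensive-use queue name on System A.

    Per the table above, the queue name is the HPCI-JHPCN
    project ID followed by the system suffix "a".
    """
    return project_id + "a"

# "jh220001" is a hypothetical project ID for illustration only.
print(intensive_queue_name("jh220001"))  # jh220001a
```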
At Kyoto University, the same user ID (user number) is used even if multiple HPCI projects are adopted. To switch between projects, specify the queue name and the group name when submitting batch jobs.
Below is a sample job script for submitting a batch job on System A. "hp189999" is specified as the group name with "#QSUB -ug". Use the project ID at the time of initial adoption as the group name for an HPCI project.
```
$ cat sample.sh
#!/bin/bash
#============ PBS Options ============
#QSUB -q hpa
#QSUB -ug hp189999
#QSUB -W 2:00
#QSUB -A p=4:t=8:c=8:m=1800M
#============ Shell Script ============
aprun -n $QSUB_PROCS -d $QSUB_THREADS -N $QSUB_PPN ./a.out
```
The storage areas of Kyoto University available to HPCI projects are as follows. Each project uses one combination: either /LARGE0 and /LARGE1, or /LARGE2 and /LARGE3. The assigned LARGE area is notified by email to the project representative and the contact person in charge.
The available capacity per project is as follows.
| HPCI project | Available storage capacity |
| --- | --- |
| Storage capacity on System A for HPCI projects | Listed in the "Large Volume Disk" column of the HPCI resource notification (allocated half each to /LARGE0 and /LARGE1, or to /LARGE2 and /LARGE3) |
| Storage capacity on System A for HPCI-JHPCN projects | Listed in the "Large Volume Disk" column of the HPCI resource notification (allocated half each to /LARGE0 and /LARGE1, or to /LARGE2 and /LARGE3) |
In the initial state, /LARGE1 and /LARGE3 are the backup destinations for /LARGE0 and /LARGE2, respectively. /LARGE1 and /LARGE3 can be used directly by canceling the backup; to change the backup setting, please make a request from the Inquiry Form.
In addition to the above LARGE storage areas, you can use the home directory (up to 100 GB) as storage. For details on how to use the storage areas, please see Using File System.
The mount point for users of HPCI projects that have been approved to use the HPCI shared storage is as follows. For details on how to use this shared storage, please see the manuals for use of the HPCI resources.
| Mount point |
| --- |
| /gfarm/\<project ID\>/\<user number\> |
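For scripting, the mount-point layout above can be sketched as a small helper. The project ID "hp189999" is the sample used elsewhere on this page; the user number "b59999" is a made-up example:

```python
from pathlib import Path

def hpci_storage_path(project_id: str, user_number: str) -> Path:
    """Build the HPCI shared storage mount point:
    /gfarm/<project ID>/<user number>."""
    return Path("/gfarm") / project_id / user_number

# "b59999" is a hypothetical user number for illustration only.
print(hpci_storage_path("hp189999", "b59999"))  # /gfarm/hp189999/b59999
```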
To check your HPCI usage status, log in to the User Portal, click Statistics Information at the top of the page, and then click HPCI Statistics in the menu on the left.
Divide the core elapsed time (in seconds) by 3600 to convert it to hours, then divide by 68 (the number of cores per node of System A) to obtain the node-time used against your allocation. This page displays the information of all users who can use the queue, so compare the total value with the project's allocated time.
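The conversion described above can be written out as a short sketch (the input value below is an arbitrary example, not real usage data):

```python
CORES_PER_NODE_A = 68  # cores per node of System A (Cray XC40), per this page

def node_hours(core_elapsed_seconds: float) -> float:
    """Convert the portal's core elapsed time (seconds) into node-hours:
    core-seconds -> core-hours (divide by 3600) -> node-hours (divide by 68)."""
    return core_elapsed_seconds / 3600 / CORES_PER_NODE_A

# Example: 4 nodes fully occupied for 10 hours
# -> 4 * 68 cores * 10 h * 3600 s = 9,792,000 core-seconds
print(node_hours(4 * 68 * 10 * 3600))  # 40.0
```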
For HPCI users, we operate a management system for disseminating information from each system component organization and for sharing documents within projects.