---
title: 'Quick Start'
taxonomy:
    category:
        - docs
---

[toc]

## Activation of Login Account{#init}

First-time users of the system must complete the User Portal start-up procedures after finishing the application process. For details, please refer to the [Procedure to Start the Service](/misc/portal_init).

## How to Access the System{#login}

Login to the supercomputer is limited to SSH (Secure Shell) public key authentication.

* Access method: SSH public key authentication
* SSH public key: Please register it from the [User Portal](https://web.kudpc.kyoto-u.ac.jp/portal/). If you registered a key on the previous system, it has been carried over and you do not need to register it again.
* Connection destinations:
    * SysA: camphor.kudpc.kyoto-u.ac.jp
    * SysB/Cloud: laurel.kudpc.kyoto-u.ac.jp
    * SysC: cinnamon.kudpc.kyoto-u.ac.jp
    * SysG: gardenia.kudpc.kyoto-u.ac.jp
    * File Transfer Server: hpcfs.kudpc.kyoto-u.ac.jp

!! * Be sure to protect your private key with a passphrase. If a private key without a passphrase is found on a login node, it will be deleted automatically.
!! * Sharing a single account (user ID) among multiple people is strictly prohibited.

For details, please refer to [Access](/login).
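As a minimal sketch of the connection itself: assuming your user ID is `b59999` (a placeholder) and your registered private key is stored at `~/.ssh/id_rsa` (also an assumption), logging in to SysB/Cloud looks like this. You will be asked for the key's passphrase.

```nohighlight
# Log in to the SysB/Cloud login node with SSH public key authentication.
# Replace b59999 with your own user ID; the key path is an example.
$ ssh -i ~/.ssh/id_rsa b59999@laurel.kudpc.kyoto-u.ac.jp
```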
## Login Environment{#env}

Each login node automatically loads a different module environment; the batch processing and compiler environments shown in the table below are loaded on each node, respectively. By switching the loaded environment, you can submit batch jobs to any system from any login node. Note that the table below shows the information for the year 2023.

| Login Node | System | Batch Processing | Compiler Environment |
|---------- | ---------------- | ------------- | --------------- |
| camphor.kudpc.kyoto-u.ac.jp | SysA | slurm | intel, intelmpi, PrgEnvIntel |
| laurel.kudpc.kyoto-u.ac.jp | SysB | slurm | intel, intelmpi, PrgEnvIntel |
| cinnamon.kudpc.kyoto-u.ac.jp | SysC | slurm | intel, intelmpi, PrgEnvIntel |
| gardenia.kudpc.kyoto-u.ac.jp | SysG | slurm | nvhpc, openmpi, PrgEnvNvidia |

To switch to the cloud system environment, run the following command:

```nohighlight
$ module switch SysCL
```

See [Modules](/config/modules) for more information on the module command.
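For example, you can confirm which environment is currently loaded and switch to another one; the target name `SysB` below follows the table above and is shown as an assumption:

```nohighlight
$ module list          # show the currently loaded modules
$ module switch SysB   # switch from the current environment to SysB
```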
## Use of Storage{#filesystem}

The home directory is available to all users for data storage. Large volume storage is available to users of the Personal Course, Group Course, and Private Cluster Course. Both storage areas can be accessed from all login nodes and computing nodes with the same path.

* Home directory (/home): 100 GB
* Large volume storage (/LARGE0, /LARGE1): several TB to several hundred TB (set according to the amount of resources applied for)
* High-speed storage (/FAST): several hundred GB to several tens of TB (set according to the amount of resources applied for; trial offer of 400 GB to 1,000 GB in FY2024)

For details, please refer to [Use of Storage](/filesystem).
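As a quick sanity check, the standard `df` command can confirm that these areas are mounted and show their capacity (whether /LARGE0 is available to you depends on your course):

```nohighlight
$ df -h /home /LARGE0   # show capacity and usage of the mounted storage areas
```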
## Compiling the Program{#compiler}

The Intel compiler is set by default on the cloud system. For details, please refer to [Compilers / Libraries](/compilers).
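As a minimal sketch, assuming a C source file `sample.c` and a Fortran source file `sample.f90` (both placeholders), compilation with the default Intel environment looks like this. The classic driver names `icc` and `ifort` are assumptions; newer Intel oneAPI modules may provide `icx` and `ifx` instead.

```nohighlight
$ icc sample.c -o sample_c       # compile a C program with the Intel C compiler
$ ifort sample.f90 -o sample_f   # compile a Fortran program with the Intel Fortran compiler
```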
## Execution of the Program{#run}

We provide a program execution environment using the Slurm job scheduler. For details, please refer to [Execution of the Program](/run).
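As a minimal sketch, a Slurm batch script could look like the following; the partition (queue) name `gr19999b` is a placeholder, and the options actually required depend on your course and system:

```nohighlight
#!/bin/bash
#SBATCH -p gr19999b   # partition (queue) name -- placeholder, replace with your own queue
#SBATCH -t 10:00      # walltime limit (here, 10 minutes)
srun ./sample_c       # run the compiled program
```

Save this as `sample.sh`, submit it with `sbatch sample.sh`, and check its status with `squeue`.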
## Available Software

The list is available from [Software / Libraries](/software).

## Contact Information{#inquiry}

If you have any inquiries, please contact us via the [Inquiry form](https://www.iimc.kyoto-u.ac.jp/en/inquiry/?q=consult).