---
title: 'Interactive Processing (GPU Server)'
published: true
taxonomy:
    category:
        - docs
external_links:
    process: true
    no_follow: true
    target: _blank
    mode: active
---

[toc]

## GPU Server

In order to improve the visualization environment of the supercomputer, we have been offering a server equipped with GPUs on an experimental basis since December 2018.<br>
The GPU server provided by the Academic Center for Computing and Media Studies, Kyoto University (ACCMS) is built to be compatible with the system B/C environment, and it is intended to be used from system B/C.<br>
It can also be used from system A, but the available applications are limited to those that run on system B/C, and some applications compiled for system A will not work. Please understand beforehand that there is a risk of problems.

### System configuration

The GPU servers provided by the ACCMS are as follows.

#### GPU node 1 (gp-0001)

Performance specifications |  
:-------------------:|:----------------------------------------------------------------:
CPU | Intel(R) Xeon(R) Gold 6140 2.30GHz (18 cores) x 2
Memory | 512 GB
Equipped GPU | NVIDIA Quadro P4000 (8 GB GDDR5) x 2

#### GPU node 2 (gp-0002)

Performance specifications |  
:-------------------:|:----------------------------------------------------------------:
CPU | Intel(R) Xeon(R) Silver 4110 2.10GHz (8 cores) x 2
Memory | 512 GB
Equipped GPU | NVIDIA Quadro V100 (32 GB HBM2) x 2

## Flow of usage

The GPU server can be used from the login nodes of systems A, B, and C. Please log in to a login node by referring to [Access](https://web.kudpc.kyoto-u.ac.jp/manual/en/login#flow). <br>
No application procedure is required to use the GPU server; any supercomputer user can use it.

## Commands for running applications on the GPU server

To run an application on the GPU server, log in to a login node and execute the application with the xrun command. <br>
By giving the xrun command the -gpu option and the command you want to execute, the program is run on the GPU server. <br>

### When using from system B/C

Command | Description | Remarks
:-----------:|:----------------------------------------------:|:-----------------------------------------------------------------------:|
xrun -gpu | Execute a GUI program on the GPU server | Available only with X server software such as [Exceed onDemand](https://web.kudpc.kyoto-u.ac.jp/grav-draft/en/login/eod).

* Notes on the entire systems
    * After entering the command, execution of the program starts after several messages are displayed.
    * Unlike normal xrun (without -gpu), the same processing as a new login is performed before the program starts on the GPU server. Please note that $HOME/.bash_profile, .bashrc, .tcshrc, etc. will be evaluated again. Also, since the module environment is reset, the environment variables at the time of command execution cannot be reproduced exactly; there may be differences in the order and contents of PATH and LD_LIBRARY_PATH.
    * If the acceptance limit of the GPU server is exceeded, a busy message is shown and the program ends. Please wait a while and try again.

#### Example of execution

```nohighlight
$ module load mathematica
$ xrun -gpu mathematica
[VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
[VGL]    10.11.0.9, the IP address of your SSH client.
(X application launches.)
```
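
The application to launch must be provided as a module on system B/C (mathematica in the example above). If you are unsure of the exact module name, one way to check before calling xrun is the standard module command on the login node; the following is only an illustrative sketch, and the module names shown on your system may differ.

```nohighlight
$ module avail              # list the modules provided on system B/C
$ module load mathematica   # load the application you want to run
$ xrun -gpu mathematica     # launch it on the GPU server
```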
### When using from system A

Command | Description | Remarks
:-----------:|:----------------------------------------------:|:-----------------------------------------------------------------------:|
xrun -gpu -module ModuleName | Execute a GUI program on the GPU server | Available only with X server software such as [Exceed onDemand](https://web.kudpc.kyoto-u.ac.jp/grav-draft/ja/login/eod).

* Notes on system A
    * Since the GPU server is built to be compatible with system B/C, its use from system A is an **experimental provision**.
    * After entering the command, execution of the program starts after several messages are displayed.
    * Environment variables set on the login node and the loaded module environment are **not inherited** by the GPU server.
    * When executing a program, you need to specify the module to be loaded with the -module option. Please note that the module name specified here must be a **module name available on system B/C**.
* Notes on the entire systems
    * Unlike normal xrun (without -gpu), the same processing as a new login is performed before the program starts on the GPU server. Please note that $HOME/.bash_profile, .bashrc, .tcshrc, etc. will be evaluated again. Also, since the module environment is reset, the environment variables at the time of command execution cannot be reproduced exactly; there may be differences in the order and contents of PATH and LD_LIBRARY_PATH.
    * If the acceptance limit of the GPU server is exceeded, a busy message is shown and the program ends. Please wait a while and try again.

#### Example of execution

```nohighlight
$ xrun -gpu -module mathematica mathematica
[VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
[VGL]    10.11.0.11, the IP address of your SSH client.
(X application launches.)
```

### Command options

When executing a program on the GPU server, the following options are available with the xrun command.

Option | Required / Not required | Description
:-----------------------------:|:---------------------------:|:-------------------------------------------------:
-gpu | Required | Declaration for executing the program on the GPU server
-module ModuleName | Optional (**System A**)<br>Not available (**System B/C**) | When executing a program on the GPU server from system A, it is necessary to **specify a system B/C module**.
-W HOUR:MINUTES | Optional | Specifies the upper limit of the elapsed time (hours:minutes).<br>The default elapsed time limit is 1 hour (1:00), and up to 24 hours (24:00) can be specified.

## Restrictions on running programs using the GPU server

When executing a program using the GPU server, the following restrictions apply.<br>

* Node sharing<br>
The GPU server is currently composed of one node.<br>
Regardless of the course under contract, all users share the node. When the acceptance limit of the GPU server is exceeded, a busy message is shown and the program ends.<br>
* Number of available CPU cores<br>
The number of CPU cores each user can use simultaneously is **4 cores**, and this cannot be changed.<br>
* Amount of memory available<br>
The amount of memory each user can use simultaneously is **32 GB**, and this cannot be changed.<br>
* Program elapsed time limit<br>
When executing a program using the GPU server, the program is forcibly terminated if the elapsed time exceeds the default limit (1 hour).<br>
You can increase the limit up to the maximum (24 hours) by specifying the -W option, as shown in the example below.
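
As a concrete illustration of the -W option described in the Command options table, a session from system B/C might be started as follows to raise the elapsed-time limit from the default 1 hour to the 24-hour maximum. The application name is only an example; any program provided as a system B/C module can be used in its place.

```nohighlight
$ module load mathematica
$ xrun -gpu -W 24:00 mathematica   # request the maximum elapsed time of 24 hours
```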