Remote Development Guide on Tencent Cloud GPU Instances: Driver, CUDA, cuDNN Installation and PyCharm/Jupyter Integration
This guide walks researchers through selecting a Tencent Cloud GN7 GPU instance, installing the NVIDIA driver, CUDA 10.2, and cuDNN, setting up PyTorch and Jupyter, and configuring remote development with PyCharm for efficient, cost-effective AI development on a Tesla T4 GPU server.
This article provides a step‑by‑step guide for researchers and developers who want to use Tencent Cloud GPU instances for deep learning and other AI workloads.
1. Instance selection – The author recommends the GN7 series (e.g., GN7.5XLARGE80) with a Tesla T4 GPU, 20 vCPUs, 80 GB RAM, and Ubuntu 18.04, and suggests disabling the automatic GPU driver installation so the driver can be installed manually.
2. Driver installation – After creating the instance, the NVIDIA driver is installed via the official apt repository:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda
After a reboot, the driver can be verified with nvidia-smi, which shows the driver version, CUDA version, GPU model, and memory.
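For a scripted sanity check, nvidia-smi can also be queried for specific fields (the fields below are standard nvidia-smi query options; exact values will depend on the driver version installed):

```shell
# Confirm the kernel module is loaded and the GPU is visible;
# prints one CSV line per GPU with its name, driver version, and total memory
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
```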
3. CUDA Toolkit installation – The CUDA 10.2 toolkit is installed from the NVIDIA website using the network‑deb method. The PATH is updated:
echo 'export PATH=/usr/local/cuda/bin:$PATH' | sudo tee /etc/profile.d/cuda.sh
source /etc/profile
Sample CUDA programs from /usr/local/cuda/samples are compiled with make -j16 -k and run to confirm the environment.
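For instance, the deviceQuery sample shipped with the toolkit gives a compact confirmation that the toolkit and driver agree (copying the samples to a home directory avoids building as root):

```shell
# Copy the bundled samples to a writable location, build one, and run it
cp -r /usr/local/cuda/samples ~/cuda-samples
cd ~/cuda-samples/1_Utilities/deviceQuery
make
./deviceQuery   # a healthy install ends with "Result = PASS"
```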
4. cuDNN installation – cuDNN 7.6.5 deb packages are downloaded from the NVIDIA developer site, transferred to the server and installed with:
sudo dpkg -i libcudnn7*.deb
Verification is performed by building the provided cuDNN samples.
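Assuming the libcudnn7-doc package (which carries the samples) was among the installed debs, the check amounts to:

```shell
# cuDNN 7.x installs its samples under /usr/src
cp -r /usr/src/cudnn_samples_v7/ ~/
cd ~/cudnn_samples_v7/mnistCUDNN
make clean && make
./mnistCUDNN    # prints "Test passed!" when cuDNN is working
```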
5. Remote development with PyCharm – The guide shows how to create a remote interpreter in PyCharm, configure SSH connection, synchronize project files via SFTP, and run or debug Python scripts on the GPU server. Tips include using the built‑in terminal for SSH and rsync for large data transfers:
rsync -avtP ~/data ubuntu@your-server-ip:~/
Debugging is demonstrated with breakpoints and variable inspection.
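A host alias in ~/.ssh/config on the local workstation (the alias name gpu below is arbitrary, and the key path is an assumption) keeps both the PyCharm SSH configuration and rsync invocations short:

```
# ~/.ssh/config on the local machine
Host gpu
    HostName your-server-ip
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
```

With this entry in place, the transfer above becomes rsync -avtP ~/data gpu:~/ and PyCharm's SSH configuration can reference the same host settings.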
6. PyTorch installation and verification – On the GPU instance, PyTorch is installed with:
sudo apt install python3-pip
sudo pip3 install numpy torch torchvision
Running python3 -c "import torch; print(torch.__version__)" confirms the installation.
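Beyond the version string, a short script can confirm that PyTorch actually sees the GPU and can compute on it (the device name will differ on instance types other than the T4-backed GN7):

```python
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # e.g. "Tesla T4" on a GN7 instance
    print("GPU:", torch.cuda.get_device_name(0))
    # Run a small matrix multiply on the GPU as an end-to-end check
    x = torch.rand(1024, 1024, device="cuda")
    print("Matmul OK, sum =", (x @ x).sum().item())
```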
7. Jupyter Notebook setup – Jupyter is installed via pip, a workspace directory is created, and the server is started with jupyter-notebook --no-browser --ip=0.0.0.0 --port=8887 ~/jupyter_workspace. A security-group rule allowing inbound TCP on port 8887 is required. Access from a local browser uses the instance's public IP and the password set beforehand with jupyter-notebook password.
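The steps above amount to a handful of commands on the instance (the workspace path and port mirror the ones used in the article):

```shell
pip3 install --user jupyter
mkdir -p ~/jupyter_workspace
jupyter-notebook password   # set the login password once, interactively
jupyter-notebook --no-browser --ip=0.0.0.0 --port=8887 ~/jupyter_workspace
```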
Integration of Jupyter with PyCharm is also covered: adding the Jupyter server URL in PyCharm settings allows creating and running notebooks directly inside the IDE.
The article concludes that a local PyCharm IDE combined with a Tencent Cloud GPU instance provides a powerful, flexible, and cost‑effective environment for AI research and development.
Tencent Cloud Developer
Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.