# Rocky SSH Container
## Setup
### SSH Keys
Place your SSH public keys in the `docker_build/ssh-keys/` directory:
```bash
cp ~/.ssh/id_ed25519.pub docker_build/ssh-keys/
```
The container will automatically add all `.pub` files from this directory to `/root/.ssh/authorized_keys`.
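For reference, the key-install step amounts to appending each public key to `authorized_keys`. Below is a minimal sketch of that logic; the actual script in `docker_build/` is authoritative, and the in-container path `/root/ssh-keys` is an assumption:
```bash
#!/bin/bash
# Hypothetical sketch of the key-install step, not the shipped entrypoint.
# Assumes the host's docker_build/ssh-keys/ was copied to /root/ssh-keys at build time.
mkdir -p /root/.ssh && chmod 700 /root/.ssh
for key in /root/ssh-keys/*.pub; do
    [ -e "$key" ] && cat "$key" >> /root/.ssh/authorized_keys
done
chmod 600 /root/.ssh/authorized_keys
```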
## Building Containers
### Base Development Container
```bash
# From the dev_env directory
podman build -t rocky_dev:latest -f docker_build/Dockerfile .
```
### GPU-Enabled Container
The GPU container builds on top of the base container using a multi-stage build, so the base image must be built first:
```bash
# First build the base container (from dev_env directory)
podman build -t rocky_dev:latest -f docker_build/Dockerfile .
# Then build the GPU version
podman build -t rocky_dev_gpu:latest -f docker_build/Dockerfile.gpu .
```
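To confirm both images are in the local store before moving on (the GPU build depends on the base image being present):
```bash
# Both rocky_dev and rocky_dev_gpu should be listed
podman images | grep rocky_dev
```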
## GPU Support
The GPU-enabled container includes:
- NVIDIA Container Toolkit for GPU access
- GPU test script at `/usr/local/bin/gpu-test.sh` (see the sketch after this list)
- Environment variables configured for NVIDIA GPU visibility
- Workspace directory at `/workspace` for GPU workloads
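The exact `gpu-test.sh` is baked into the image; as a rough idea of what such a check does, a hypothetical sketch might look like this (the specific queries are assumptions, not the shipped script):
```bash
#!/bin/bash
# Hypothetical GPU smoke test -- not the script shipped in the image.
set -e
if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi not found; was the container started with GPU devices?" >&2
    exit 1
fi
nvidia-smi                                              # driver and device summary
nvidia-smi --query-gpu=name,memory.total --format=csv   # one line per visible GPU
```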
### Running with GPU Support
```bash
# Run GPU-enabled container
podman run -it --device nvidia.com/gpu=all rocky_dev_gpu:latest
# Test GPU inside container
gpu-test.sh
nvidia-smi
```
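If podman rejects `--device nvidia.com/gpu=all` with an unknown-device error, the CDI spec usually needs to be generated on the host first (this assumes the NVIDIA Container Toolkit is installed on the host):
```bash
# Generate the CDI spec that podman's nvidia.com/gpu device names resolve against
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```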
## Podman
The `podman_launch_devenv.py` helper wraps the common container lifecycle steps:
```bash
python3 podman_launch_devenv.py              # default invocation
python3 podman_launch_devenv.py run          # start a dev container
python3 podman_launch_devenv.py run -p 2222  # start, mapping host port 2222
python3 podman_launch_devenv.py list         # list containers
python3 podman_launch_devenv.py cleanup      # clean up containers
```
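Once a container is up with `-p 2222`, it should be reachable over SSH on that host port, assuming your key was in `docker_build/ssh-keys/` at build time:
```bash
ssh -p 2222 root@localhost
```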
## Kubernetes
```bash
kubectl apply -f rocky-ssh-deployment.yaml            # deploy the StatefulSet and Service
kubectl get pods -l app=rocky-dev -o wide             # pod status and node placement
kubectl get svc rocky-dev-svc                         # service details
kubectl delete pod rocky-dev-0                        # recreate a single pod
kubectl scale statefulset rocky-dev --replicas=10     # scale to 10 replicas
kubectl delete -f rocky-ssh-deployment.yaml           # tear down
```
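After scaling, `kubectl rollout status` waits until all replicas report ready:
```bash
kubectl rollout status statefulset/rocky-dev
```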
### Kubernetes GPU Deployment
```bash
kubectl apply -f rocky-ssh-gpu-deployment.yaml        # deploy the GPU StatefulSet
kubectl get pods -l app=rocky-dev-gpu -o wide         # pod status and node placement
kubectl describe pod rocky-dev-gpu-0 | grep nvidia    # confirm GPU resources were assigned
kubectl exec -it rocky-dev-gpu-0 -- nvidia-smi        # run nvidia-smi inside the pod
kubectl scale statefulset rocky-dev-gpu --replicas=4  # scale to 4 replicas
kubectl delete -f rocky-ssh-gpu-deployment.yaml       # tear down
```
## Local Registry
```bash
podman run -d -p 5000:5000 --name registry registry:2               # start a local registry
podman tag localhost/rocky_dev:latest localhost:5000/rocky_dev:latest
podman push localhost:5000/rocky_dev:latest --tls-verify=false      # push over plain HTTP
```
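To confirm the push landed, query the registry's standard v2 catalog endpoint:
```bash
# Should list "rocky_dev" among the repositories
curl -s http://localhost:5000/v2/_catalog
```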
## Access
```bash
# Direct shell
kubectl exec -it rocky-dev-0 -- /bin/bash
# SSH with agent forwarding (run the port-forward and the ssh command in separate terminals)
kubectl port-forward rocky-dev-0 2222:22
ssh-agent bash -c 'ssh-add ~/macm4-resident && ssh -A -p 2222 root@localhost'
# External access (forward on all interfaces)
kubectl port-forward --address 0.0.0.0 rocky-dev-0 9999:22
```
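For repeated use, an entry in `~/.ssh/config` saves retyping the port and user; the `rocky-dev` host alias below is illustrative, not something defined by the repo:
```bash
# Append an illustrative host alias for the port-forwarded pod
cat >> ~/.ssh/config <<'EOF'
Host rocky-dev
    HostName localhost
    Port 2222
    User root
    ForwardAgent yes
    IdentityFile ~/macm4-resident
EOF
```
While the port-forward is running, `ssh rocky-dev` then connects directly.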
## Features
### Development Tools
- C/C++ development: gcc, gcc-c++, make, cmake
- Python 3 with pip and development headers
- Rust toolchain with cargo tools (cargo-edit, bacon, evcxr_jupyter)
- Node.js v22 via nvm
- Claude Code CLI tool
### System Utilities
- SSH server with key-based authentication
- tmux, vim, nano editors
- htop, bmon for system monitoring
- git, wget, tree, bat
- Network tools: nc, net-tools, wireguard-tools
### GPU Computing (GPU version only)
- NVIDIA GPU support via container toolkit
- GPU test utilities
- Dedicated /workspace directory for ML/GPU workloads