GPU Container
A GPU container job gives you direct, interactive access to GPU hardware inside an isolated compute environment. It is the most general-purpose job type on CosmicAC: you define the environment, and the hardware is dedicated entirely to your workload.
What a GPU Container Actually Is
Despite the name, a CosmicAC GPU container is not a Docker container in the traditional sense. Each GPU container job runs inside a KubeVirt virtual machine, a full VM that Kubernetes schedules and manages. The term container describes the job type and the interface it exposes, not the underlying isolation mechanism.
KubeVirt runs each VM inside a non-privileged pod, so your workload never requires elevated container privileges and stays isolated from the host.
How GPU Hardware Is Exposed
CosmicAC passes GPU hardware directly to your VM through secure device plugins, making it available to scheduled workloads without requiring privileged pod access.
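In KubeVirt, device-plugin-backed GPU passthrough is declared under the VM spec's `devices.gpus` field. The sketch below is a minimal illustration of that mechanism only; the VM name, memory request, and the `deviceName` resource string are example values, not CosmicAC's actual configuration:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: gpu-container-example        # illustrative name
spec:
  domain:
    devices:
      gpus:
        # deviceName must match a resource advertised by the cluster's
        # GPU device plugin; this particular value is only an example.
        - deviceName: nvidia.com/GA102GL_A10
          name: gpu1
    resources:
      requests:
        memory: 16Gi
```

Because the device plugin advertises the GPU as a schedulable resource, Kubernetes can place the VM on a node with a free GPU without the pod itself being privileged.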
Shell Access
Once a GPU container job is running, you reach it through a shell session. CosmicAC connects your CLI directly to the agent running inside the VM over Hyperswarm SSH, a peer-to-peer SSH tunnel. This tunnel carries your interactive session for the lifetime of the connection to your GPU container.
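In practice the flow looks like the sketch below. The `cosmic` command name and its subcommands are assumptions for illustration only, not the documented CLI; see the how-to guides linked under What's Next for the actual commands.

```
# Hypothetical session (command and subcommand names are illustrative):
$ cosmic jobs list            # find your running GPU container job
$ cosmic shell <job-id>       # open a Hyperswarm SSH session into the VM
root@gpu-vm:~$ nvidia-smi     # you now have a shell inside the VM
```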
The General-Purpose Environment Model
GPU containers give you full control over your environment. You install packages, run scripts, and configure your setup as needed inside the VM.
This contrasts with job types that do not need that level of control, such as Managed Inference, which handles the serving infrastructure for you.
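For example, a typical first session might set up a project environment from scratch. The commands below are a generic sketch of that workflow (package names and paths are examples, not CosmicAC requirements):

```shell
# Inside the VM you control the OS, so system packages are fair game, e.g.:
#   sudo apt-get update && sudo apt-get install -y git tmux build-essential
# Create an isolated Python environment for your workload:
python3 -m venv /tmp/workload-env          # example path
. /tmp/workload-env/bin/activate
python -m pip --version                    # the venv ships its own pip
python -c 'import sys; print(sys.prefix)'  # confirms the venv is active
```

Anything you install this way lives inside the VM and disappears with the job, so script your setup if you expect to recreate it.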
What's Next
- Getting Started: GPU Container Job — Create your first GPU container job through the web interface.
- How to Create a GPU Container — Create a GPU container job using the CLI.
- How to Access a GPU Container — Open a shell session and start working inside the container.
- GPU Types — Available GPU hardware configurations and vRAM options.