# Install
This is the shortest production-like path from zero to a running self-hosted Sandbox0.
## Prerequisites

- Kubernetes 1.35+
- Helm 3.8+
- `kubectl` matching your cluster minor version
- A default StorageClass
Kubernetes 1.35+ is required because Sandbox0 pause/resume depends on Kubernetes in-place pod resource updates. Pausing a sandbox must reduce the sandbox pod's resources without recreating the pod; otherwise process state cannot be preserved across pause/resume.
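For illustration only: in-place resize means a patch like the one below can shrink a running pod's requests and limits through the pod `resize` subresource without restarting its containers. The container name and values here are hypothetical, not part of the Sandbox0 schema:

```json
{
  "spec": {
    "containers": [
      {
        "name": "sandbox",
        "resources": {
          "requests": { "cpu": "50m", "memory": "128Mi" },
          "limits": { "cpu": "50m", "memory": "128Mi" }
        }
      }
    ]
  }
}
```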
## 0) Create a Local Kind Cluster
Create the cluster:
```bash
kind create cluster --config kind-config.yaml
```
kind-config.yaml:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: sandbox0
nodes:
  - role: control-plane
    image: kindest/node:v1.35.0
    kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
          extraArgs:
            enable-aggregator-routing: "true"
    extraPortMappings:
      # internal-gateway HTTP port
      - containerPort: 30080
        hostPort: 30080
      # registry port for template image push
      - containerPort: 30500
        hostPort: 30500
```
## 1) Install infra-operator
```bash
helm repo add sandbox0 https://charts.sandbox0.ai
helm repo update
helm install infra-operator sandbox0/infra-operator \
  --namespace sandbox0-system \
  --create-namespace
```
Verify operator + CRD:
```bash
kubectl get pods -n sandbox0-system
kubectl get crd sandbox0infras.infra.sandbox0.ai
```
## 2) Choose a Deployment Mode and Apply a Sample
Pick one official sample from the source repository based on your target mode:

- `single-cluster/minimal.yaml`
- `single-cluster/volumes.yaml`
- `single-cluster/network-policy.yaml`
- `single-cluster/fullmode.yaml`
- `multi-cluster/control-plane.yaml`
- `multi-cluster/data-plane.yaml`
Default recommended mode: fullmode (Linux nodes only). For local macOS/Windows testing, use `single-cluster/minimal.yaml` instead.

Apply the one you selected:
```bash
kubectl apply -f https://raw.githubusercontent.com/sandbox0-ai/sandbox0/main/infra-operator/chart/samples/single-cluster/fullmode.yaml
```
Watch status:
```bash
kubectl get sandbox0infra -n sandbox0-system -w
```
Expected `status.phase` lifecycle:

- `Installing`
- `Upgrading` (when reconciling changes)
- `Ready`
## 3) Initial Admin Credentials
If `spec.initUser.passwordSecret` is not provided, the operator generates a Secret named `admin-password`.
```bash
ADMIN_PASSWORD="$(kubectl get secret admin-password -n sandbox0-system -o jsonpath='{.data.password}' | base64 -d)"
printf 'username: %s\npassword: %s\n' 'admin@example.com' "$ADMIN_PASSWORD"
```
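The `base64 -d` step is needed because Kubernetes stores Secret data base64-encoded. A standalone sketch of that round trip, using a sample value rather than a real cluster:

```shell
# Kubernetes stores Secret values base64-encoded; `base64 -d` reverses that.
encoded="$(printf 'hunter2' | base64)"   # shape of what the jsonpath query returns
printf '%s' "$encoded" | base64 -d       # prints the original password: hunter2
```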
## 4) Install s0
macOS and Linux:
```bash
curl -fsSL https://raw.githubusercontent.com/sandbox0-ai/s0/main/scripts/install.sh | bash
```
Windows PowerShell:
```powershell
irm https://raw.githubusercontent.com/sandbox0-ai/s0/main/scripts/install.ps1 | iex
```
Or with Go:
```bash
go install github.com/sandbox0-ai/s0/cmd/s0@latest
```
Manual release archives are available from GitHub Releases. See the s0 README for platform-specific manual install steps.
## 5) Configure the API URL and Create a Token
For the local kind setup above, the API endpoint is `http://localhost:30080`: the kind config maps host port `30080`, and the sample `Sandbox0Infra` exposes `internal-gateway` on the same NodePort.
Export the base URL, log in with the initial admin account, then create an API token:
```bash
export SANDBOX0_BASE_URL="http://localhost:30080"
s0 auth login
unset SANDBOX0_TOKEN && export SANDBOX0_TOKEN="$(s0 apikey create --name test-apikey --role admin --expires-in 30d --raw)"
```
After that, continue with Get Started to make your first Sandbox0 API request.
## Advanced: Deploy on GCP
For Google Cloud, use GKE Standard rather than GKE Autopilot.
Recommended path:
- Create a GKE Standard regional cluster with Linux nodes.
- Pin Kubernetes to an explicit 1.35+ version. Do not rely on the GKE default version.
- Keep a regular node pool for host-level system components.
- Add a GKE Sandbox (gVisor) node pool for sandbox workloads.
- Install `infra-operator`.
- Apply the GCP gVisor sample.
- Expose `internal-gateway` through a `LoadBalancer`.
- Verify the deployment by logging in with `s0` and claiming a sandbox.
### 1) Pick a Supported GKE Version
Sandbox0 requires Kubernetes 1.35+.
Before creating the cluster, query the versions available in your region:
```bash
gcloud container get-server-config \
  --zone us-east1-b \
  --format='yaml(validMasterVersions)'
```
Choose any available 1.35.x version from `validMasterVersions`, then reuse it below as `${GKE_VERSION}`.
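If you want to script the choice, something like the following picks the newest 1.35.x entry. The version list in the here-string is illustrative sample data, not real `gcloud` output:

```shell
# Illustrative: replace the sample text with real `gcloud container get-server-config` output.
versions='validMasterVersions:
- 1.36.0-gke.100
- 1.35.1-gke.1616000
- 1.35.0-gke.200'
GKE_VERSION="$(printf '%s\n' "$versions" | grep -o '1\.35\.[0-9][0-9a-z.-]*' | sort -V | tail -n 1)"
echo "$GKE_VERSION"   # → 1.35.1-gke.1616000
```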
### 2) Create a Regional Cluster
This creates a three-zone regional cluster in us-east1, with one regular Linux node per zone:
```bash
export GKE_VERSION="1.35.1-gke.1616000"
gcloud container clusters create sandbox0-gke-gvisor \
  --region us-east1 \
  --node-locations us-east1-b,us-east1-c,us-east1-d \
  --machine-type e2-standard-2 \
  --disk-size 30 \
  --num-nodes 1 \
  --cluster-version "${GKE_VERSION}"
```
Get kubeconfig and confirm the nodes:
```bash
gcloud container clusters get-credentials sandbox0-gke-gvisor --region us-east1
kubectl get nodes -o wide
```
Keep a regular node pool for core system services. In the current GKE gVisor sample, components such as `internal-gateway`, `manager`, `storage-proxy`, PostgreSQL, and the builtin registry run on the regular pool, while sandbox workloads run on the gVisor pool.
### 3) Add a gVisor Node Pool
Add one gVisor node per zone:
```bash
gcloud container node-pools create gvisor-pool \
  --cluster sandbox0-gke-gvisor \
  --region us-east1 \
  --machine-type e2-standard-2 \
  --disk-size 30 \
  --num-nodes 1 \
  --sandbox type=gvisor
```
Verify both pools:
```bash
kubectl get nodes -L cloud.google.com/gke-nodepool,sandbox.gke.io/runtime
```
Expected result:
- `default-pool` nodes have no `sandbox.gke.io/runtime` label
- `gvisor-pool` nodes show `sandbox.gke.io/runtime=gvisor`
### 4) Install infra-operator
```bash
helm repo add sandbox0 https://charts.sandbox0.ai
helm repo update
helm install infra-operator sandbox0/infra-operator \
  --namespace sandbox0-system \
  --create-namespace
```
Verify operator + CRD:
```bash
kubectl get pods -n sandbox0-system
kubectl get crd sandbox0infras.infra.sandbox0.ai
```
### 5) Apply the GCP gVisor Sample
Use the official gVisor sample, then switch `internal-gateway` to `LoadBalancer`:
```bash
kubectl apply -f https://raw.githubusercontent.com/sandbox0-ai/sandbox0/main/infra-operator/chart/samples/single-cluster/fullmode-gke-gvisor.yaml
```
```bash
kubectl patch sandbox0infra fullmode -n sandbox0-system --type merge -p \
  '{"spec":{"services":{"internalGateway":{"service":{"type":"LoadBalancer","port":80}}}}}'
```
Wait for the deployment to finish reconciling:
```bash
kubectl get sandbox0infra -n sandbox0-system -w
kubectl get svc -n sandbox0-system
```
Expected result:
- `Sandbox0Infra` becomes `Ready`
- `fullmode-internal-gateway` is exposed as `LoadBalancer` on port `80`
- `fullmode-netd` and `fullmode-k8s-plugin` run on `gvisor-pool`
- sandbox pods in `tpl-default` run on `gvisor-pool`
This sample uses the native GKE node label `sandbox.gke.io/runtime=gvisor` for sandbox placement. You do not need to add a separate custom node label just to target gVisor nodes.
### 6) Expose the API Publicly
After the LoadBalancer patch above, GKE allocates a public IP for internal-gateway.
Check the service:
```bash
kubectl get svc fullmode-internal-gateway -n sandbox0-system -o wide
```
Wait until `EXTERNAL-IP` is no longer `<pending>`, then export the public base URL directly from the service:
```bash
export SANDBOX0_BASE_URL="http://$(kubectl get svc fullmode-internal-gateway -n sandbox0-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
printf '%s\n' "$SANDBOX0_BASE_URL"
```
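GKE normally populates `.ip` here, but some load balancers (notably on other clouds) report `.hostname` instead. A sketch of a fallback, with the two `kubectl` jsonpath lookups stubbed by sample values:

```shell
# Sample values; in practice each comes from
# kubectl get svc ... -o jsonpath='{.status.loadBalancer.ingress[0].ip}' (or .hostname).
ip=""
hostname="lb-1234.example.com"
export SANDBOX0_BASE_URL="http://${ip:-$hostname}"
echo "$SANDBOX0_BASE_URL"   # → http://lb-1234.example.com
```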
For production on GCP, this is a better default than exposing a node IP through NodePort.
### 7) Verify with s0
Retrieve the generated admin password:
```bash
ADMIN_PASSWORD="$(kubectl get secret admin-password -n sandbox0-system -o jsonpath='{.data.password}' | base64 -d)"
printf 'username: %s\npassword: %s\n' 'admin@example.com' "$ADMIN_PASSWORD"
```
Log in from the public endpoint:
```bash
s0 auth login \
  --api-url "${SANDBOX0_BASE_URL}" \
  --email admin@example.com \
  --password "${ADMIN_PASSWORD}"
```
Claim a sandbox to verify the deployment is actually usable:
```bash
s0 sandbox create --api-url "${SANDBOX0_BASE_URL}" -t default
```
Optional deeper check:
```bash
s0 sandbox list --api-url "${SANDBOX0_BASE_URL}"
s0 sandbox exec <sandbox-id> --api-url "${SANDBOX0_BASE_URL}" -- sh -lc 'echo ready && uname -s'
```
If `s0 sandbox create` returns a sandbox ID and the sandbox reaches `running`, the GCP deployment is working end to end.
## Important Notes
- Sandbox0 requires different runtime behavior for different components. On GKE, keep a regular node pool for system services and a gVisor node pool for sandbox workloads. `template` pods can run with `runtimeClassName: gvisor`, and the GCP gVisor sample places sandbox workloads onto nodes labeled `sandbox.gke.io/runtime=gvisor`.
- Use `sandboxNodePlacement` to place sandbox workloads, `netd`, and `k8s-plugin` onto the same sandbox node pool.
- Do not set `services.netd.runtimeClassName: gvisor`. `netd` and `k8s-plugin` depend on host features such as `hostNetwork` and `hostPath`. This is why GKE Autopilot is not a good fit for full Sandbox0 deployments.
- Even when `netd` and `k8s-plugin` are scheduled onto gVisor nodes, they still run on the node's default host runtime because `services.netd.runtimeClassName` remains unset.
- If you hit GCP `SSD_TOTAL_GB` quota limits while creating node pools, reduce `--disk-size` explicitly instead of relying on the larger default boot disk size.
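As a sketch only: the exact schema belongs to the Sandbox0Infra CRD, but based on the field names used in this guide, a `sandboxNodePlacement` selector targeting the native GKE gVisor label might look like this (the field layout is an assumption; verify it against the CRD reference before use):

```yaml
# Hypothetical shape; confirm field names against the Sandbox0Infra CRD.
spec:
  sandboxNodePlacement:
    nodeSelector:
      sandbox.gke.io/runtime: gvisor
```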
## Why Not Autopilot
Autopilot is not suitable for the full fullmode deployment because Sandbox0 host-level components need capabilities that Autopilot restricts. In practice, `netd` and `k8s-plugin` need node-level access, while `template` sandbox pods run on the gVisor pool.
## Next Steps
- **Configuration**: enable `storageProxy`, `netd`, and production storage/database settings.
- **Get Started**: use `SANDBOX0_BASE_URL` and `SANDBOX0_TOKEN` to make your first request.