# Install

This is the shortest production-like path from zero to a running self-hosted Sandbox0.

## Prerequisites

  • Kubernetes 1.35+
  • Helm 3.8+
  • kubectl matching your cluster minor version
  • A default StorageClass

Kubernetes 1.35+ is required because Sandbox0 pause/resume depends on Kubernetes in-place pod resource updates: pausing a sandbox must reduce the pod's resources without recreating the pod, since recreating it would destroy the process state that pause/resume is meant to preserve.
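Since pause/resume hinges on this version floor, it is worth failing fast before installing anything. A minimal preflight sketch (require_minor is a hypothetical helper, not part of any Sandbox0 tooling; note that some managed clusters report a minor version like "35+", which would need trimming first):

```shell
# Hypothetical preflight: succeed only when the cluster's major.minor
# version satisfies the 1.35+ floor.
require_minor() {
  major="${1%%.*}"                       # "1.35" -> "1"
  minor="${1#*.}"; minor="${minor%%.*}"  # "1.35" -> "35"
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge "$2" ]; }
}

# Real usage would feed the server version, e.g. taken from
# `kubectl version -o json` (serverVersion.major / serverVersion.minor).
if require_minor "1.35" 35; then
  echo "cluster version ok"
else
  echo "Kubernetes 1.35+ required" >&2
fi
```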

## 0) Create a Local Kind Cluster

Create the cluster:

```bash
kind create cluster --config kind-config.yaml
```

kind-config.yaml:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: sandbox0
nodes:
  - role: control-plane
    image: kindest/node:v1.35.0
    kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
          extraArgs:
            enable-aggregator-routing: "true"
    extraPortMappings:
      # internal-gateway HTTP port
      - containerPort: 30080
        hostPort: 30080
      # registry port for template image push
      - containerPort: 30500
        hostPort: 30500
```

## 1) Install infra-operator

```bash
helm repo add sandbox0 https://charts.sandbox0.ai
helm repo update
helm install infra-operator sandbox0/infra-operator \
  --namespace sandbox0-system \
  --create-namespace
```

Verify operator + CRD:

```bash
kubectl get pods -n sandbox0-system
kubectl get crd sandbox0infras.infra.sandbox0.ai
```

## 2) Choose a Deployment Mode and Apply a Sample

Pick one official sample from the source repository based on your target mode. The default recommended mode is fullmode (Linux nodes only); for local macOS/Windows testing, use single-cluster/minimal.yaml instead.

Apply the one you selected:

```bash
kubectl apply -f https://raw.githubusercontent.com/sandbox0-ai/sandbox0/main/infra-operator/chart/samples/single-cluster/fullmode.yaml
```

Watch status:

```bash
kubectl get sandbox0infra -n sandbox0-system -w
```

Expected status.phase lifecycle:

  1. Installing
  2. Upgrading (when reconciling changes)
  3. Ready
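For scripting, the same lifecycle can be awaited without -w. A sketch, assuming the sample's resource is named fullmode (wait_ready is a hypothetical helper that polls whatever phase-printing command you hand it):

```shell
# Hypothetical helper: poll a command that prints status.phase until it
# reports Ready, or give up after N attempts spaced 5s apart.
wait_ready() {
  cmd="$1"; tries="${2:-60}"
  while :; do
    [ "$($cmd)" = "Ready" ] && return 0
    tries=$((tries - 1))
    [ "$tries" -le 0 ] && return 1
    sleep 5
  done
}

# Real usage (resource name taken from the fullmode sample):
# wait_ready "kubectl get sandbox0infra fullmode -n sandbox0-system -o jsonpath={.status.phase}"
```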

## 3) Initial Admin Credentials

If spec.initUser.passwordSecret is not provided, the operator generates a Secret named admin-password containing the initial admin password.

```bash
ADMIN_PASSWORD="$(kubectl get secret admin-password -n sandbox0-system -o jsonpath='{.data.password}' | base64 -d)"
printf 'username: %s\npassword: %s\n' 'admin@example.com' "$ADMIN_PASSWORD"
```
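The pipeline works because Secret values are stored base64-encoded under .data; here is the decode half in isolation, with a stand-in value rather than a real credential:

```shell
# Stand-in for what the jsonpath query returns: the password, base64-encoded.
encoded="cy0wLWRlbW8tcGFzcw=="
decoded="$(printf '%s' "$encoded" | base64 -d)"
echo "$decoded"   # → s-0-demo-pass
```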

## 4) Install s0

macOS and Linux:

```bash
curl -fsSL https://raw.githubusercontent.com/sandbox0-ai/s0/main/scripts/install.sh | bash
```

Windows PowerShell:

```powershell
irm https://raw.githubusercontent.com/sandbox0-ai/s0/main/scripts/install.ps1 | iex
```

Or with Go:

```bash
go install github.com/sandbox0-ai/s0/cmd/s0@latest
```

Manual release archives are available from GitHub Releases. See the s0 README for platform-specific manual install steps.

## 5) Configure the API URL and Create a Token

For the local kind setup above, the API endpoint is http://localhost:30080.

That is because the kind config maps host port 30080, and the sample Sandbox0Infra exposes internal-gateway on the same NodePort.
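For reference, the NodePort side of that wiring looks roughly like the fragment below. This is a sketch, not verbatim from the sample; the field names are inferred from the LoadBalancer patch used in the GCP section, so check the sample YAML itself for the authoritative shape:

```yaml
# Sketch only: expose internal-gateway on the NodePort that the kind
# extraPortMappings forward. Field names are assumptions.
spec:
  services:
    internalGateway:
      service:
        type: NodePort
        nodePort: 30080
```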

Export the base URL, log in with the initial admin account, then create an API token:

```bash
export SANDBOX0_BASE_URL="http://localhost:30080"
s0 auth login
unset SANDBOX0_TOKEN && export SANDBOX0_TOKEN="$(s0 apikey create --name test-apikey --role admin --expires-in 30d --raw)"
```

After that, continue with Get Started to make your first Sandbox0 API request.

## Advanced: Deploy on GCP

For Google Cloud, use GKE Standard rather than GKE Autopilot.

Recommended path:

  1. Create a GKE Standard regional cluster with Linux nodes.
  2. Pin Kubernetes to an explicit 1.35+ version. Do not rely on the GKE default version.
  3. Keep a regular node pool for host-level system components.
  4. Add a GKE Sandbox (gVisor) node pool for sandbox workloads.
  5. Install infra-operator.
  6. Apply the GCP gVisor sample.
  7. Expose internal-gateway through a LoadBalancer.
  8. Verify the deployment by logging in with s0 and claiming a sandbox.

### 1) Pick a Supported GKE Version

Sandbox0 requires Kubernetes 1.35+.

Before creating the cluster, query the versions available in your region:

```bash
gcloud container get-server-config \
  --zone us-east1-b \
  --format='yaml(validMasterVersions)'
```

Choose any available 1.35.x version from validMasterVersions, then reuse it below as ${GKE_VERSION}.
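Picking the newest 1.35 entry can be scripted with a version-aware sort. A sketch over stand-in output (these version strings are illustrative, not a promise of what your region offers):

```shell
# Stand-in for the validMasterVersions list returned by gcloud.
versions='1.34.8-gke.100
1.35.0-gke.900
1.35.1-gke.1616000'

# Keep only the 1.35.x entries and take the highest by version sort.
GKE_VERSION="$(printf '%s\n' "$versions" | grep '^1\.35\.' | sort -V | tail -n 1)"
echo "$GKE_VERSION"   # → 1.35.1-gke.1616000
```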

### 2) Create a Regional Cluster

This creates a three-zone regional cluster in us-east1, with one regular Linux node per zone:

```bash
export GKE_VERSION="1.35.1-gke.1616000"
gcloud container clusters create sandbox0-gke-gvisor \
  --region us-east1 \
  --node-locations us-east1-b,us-east1-c,us-east1-d \
  --machine-type e2-standard-2 \
  --disk-size 30 \
  --num-nodes 1 \
  --cluster-version "${GKE_VERSION}"
```

Get kubeconfig and confirm the nodes:

```bash
gcloud container clusters get-credentials sandbox0-gke-gvisor --region us-east1
kubectl get nodes -o wide
```

Keep a regular node pool for core system services. In the current GKE gVisor sample, components such as internal-gateway, manager, storage-proxy, PostgreSQL, and the builtin registry run on the regular pool, while sandbox workloads run on the gVisor pool.

### 3) Add a gVisor Node Pool

Add one gVisor node per zone:

```bash
gcloud container node-pools create gvisor-pool \
  --cluster sandbox0-gke-gvisor \
  --region us-east1 \
  --machine-type e2-standard-2 \
  --disk-size 30 \
  --num-nodes 1 \
  --sandbox type=gvisor
```

Verify both pools:

```bash
kubectl get nodes -L cloud.google.com/gke-nodepool,sandbox.gke.io/runtime
```

Expected result:

  1. default-pool nodes have no sandbox.gke.io/runtime label
  2. gvisor-pool nodes show sandbox.gke.io/runtime=gvisor

### 4) Install infra-operator

```bash
helm repo add sandbox0 https://charts.sandbox0.ai
helm repo update
helm install infra-operator sandbox0/infra-operator \
  --namespace sandbox0-system \
  --create-namespace
```

Verify operator + CRD:

```bash
kubectl get pods -n sandbox0-system
kubectl get crd sandbox0infras.infra.sandbox0.ai
```

### 5) Apply the GCP gVisor Sample

Use the official gVisor sample, then switch internal-gateway to LoadBalancer:

```bash
kubectl apply -f https://raw.githubusercontent.com/sandbox0-ai/sandbox0/main/infra-operator/chart/samples/single-cluster/fullmode-gke-gvisor.yaml
```

```bash
kubectl patch sandbox0infra fullmode -n sandbox0-system --type merge -p \
  '{"spec":{"services":{"internalGateway":{"service":{"type":"LoadBalancer","port":80}}}}}'
```

Wait for the deployment to finish reconciling:

```bash
kubectl get sandbox0infra -n sandbox0-system -w
kubectl get svc -n sandbox0-system
```

Expected result:

  1. Sandbox0Infra becomes Ready
  2. fullmode-internal-gateway is exposed as LoadBalancer on port 80
  3. fullmode-netd and fullmode-k8s-plugin run on gvisor-pool
  4. sandbox pods in tpl-default run on gvisor-pool

This sample uses the native GKE node label sandbox.gke.io/runtime=gvisor for sandbox placement. You do not need to add a separate custom node label just to target gVisor nodes.
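Expressed as a node selector, the placement is a one-liner on that native label. The exact layout under sandboxNodePlacement is an assumption here; consult the gVisor sample for the authoritative field names:

```yaml
# Sketch: steer sandbox workloads to the gVisor pool via the native GKE label.
# The sandboxNodePlacement layout is assumed, not copied from the sample.
spec:
  sandboxNodePlacement:
    nodeSelector:
      sandbox.gke.io/runtime: gvisor
```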

### 6) Expose the API Publicly

After the LoadBalancer patch above, GKE allocates a public IP for internal-gateway.

Check the service:

```bash
kubectl get svc fullmode-internal-gateway -n sandbox0-system -o wide
```

Wait until EXTERNAL-IP is no longer <pending>, then export the public base URL directly from the service:

```bash
export SANDBOX0_BASE_URL="http://$(kubectl get svc fullmode-internal-gateway -n sandbox0-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
printf '%s\n' "$SANDBOX0_BASE_URL"
```

For production on GCP, this is a better default than exposing a node IP through NodePort.

### 7) Verify with s0

Retrieve the generated admin password:

```bash
ADMIN_PASSWORD="$(kubectl get secret admin-password -n sandbox0-system -o jsonpath='{.data.password}' | base64 -d)"
printf 'username: %s\npassword: %s\n' 'admin@example.com' "$ADMIN_PASSWORD"
```

Log in from the public endpoint:

```bash
s0 auth login \
  --api-url "${SANDBOX0_BASE_URL}" \
  --email admin@example.com \
  --password "${ADMIN_PASSWORD}"
```

Claim a sandbox to verify the deployment is actually usable:

```bash
s0 sandbox create --api-url "${SANDBOX0_BASE_URL}" -t default
```

Optional deeper check:

```bash
s0 sandbox list --api-url "${SANDBOX0_BASE_URL}"
s0 sandbox exec <sandbox-id> --api-url "${SANDBOX0_BASE_URL}" -- sh -lc 'echo ready && uname -s'
```

If s0 sandbox create returns a sandbox ID and the sandbox reaches running, the GCP deployment is working end to end.

### Important Notes

  • Sandbox0 requires different runtime behavior for different components. On GKE, keep a regular node pool for system services and a gVisor node pool for sandbox workloads.
  • template pods can run with runtimeClassName: gvisor, and the GCP gVisor sample places sandbox workloads onto nodes labeled sandbox.gke.io/runtime=gvisor.
  • Use sandboxNodePlacement to place sandbox workloads, netd, and k8s-plugin onto the same sandbox node pool.
  • Do not set services.netd.runtimeClassName: gvisor.
  • netd and k8s-plugin depend on host features such as hostNetwork and hostPath. This is why GKE Autopilot is not a good fit for full Sandbox0 deployments.
  • Even when netd and k8s-plugin are scheduled onto gVisor nodes, they still run on the node's default host runtime because services.netd.runtimeClassName remains unset.
  • If you hit GCP SSD_TOTAL_GB quota limits while creating node pools, reduce --disk-size explicitly instead of relying on the larger default boot disk size.
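To make the runtime split concrete: a pod running under gVisor on a GKE Sandbox node combines runtimeClassName with the native node label, using standard Kubernetes fields. Sandbox0 wires this up for template pods itself; the manifest below is only an illustration, not something to apply by hand:

```yaml
# Illustration: the two fields that put a pod under gVisor on a GKE
# Sandbox node. The pod name and image are arbitrary examples.
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-demo
spec:
  runtimeClassName: gvisor
  nodeSelector:
    sandbox.gke.io/runtime: gvisor
  containers:
    - name: app
      image: alpine:3.20
      command: ["sleep", "infinity"]
```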

### Why Not Autopilot

Autopilot is not suitable for the full fullmode deployment because Sandbox0 host-level components need capabilities that Autopilot restricts. In practice, netd and k8s-plugin need node-level access, while template sandbox pods run on the gVisor pool.

## Next Steps

  • Configuration: enable storageProxy, netd, and production storage/database settings
  • Get Started: use SANDBOX0_BASE_URL and SANDBOX0_TOKEN to make your first request