Query Terraform state. Gate changes. Track applies.
Gate risky changes in CI. Answer questions without parsing state files. Track every apply as a transaction.
Quickstart
Install, authenticate, run your first query.
$ curl -sSL https://get.stategraph.com/install.sh | sh
Prefer another method? See installation docs.
$ export STATEGRAPH_API_BASE=https://stategraph.example.com
$ export STATEGRAPH_API_KEY=your-api-key-here
$ stategraph mql query \
    "SELECT r.type, count(*) AS n FROM resources r GROUP BY r.type ORDER BY n DESC"
[
  { "type": "aws_instance", "n": 47 },
  { "type": "aws_security_group", "n": 23 },
  { "type": "aws_s3_bucket", "n": 18 }
]
Query infrastructure like a database
MQL queries the normalized state graph. No downloading or parsing .tfstate.
Self-documenting schema
mql schema returns every table and column available for querying. The CLI tells you what you can ask.
$ stategraph mql schema
{
  "tables": [
    {
      "name": "instances",
      "columns": [
        { "name": "address", "type": "string" },
        { "name": "type", "type": "string" },
        { "name": "attributes", "type": "json" },
        ...
      ]
    },
    { "name": "resources", ... },
    { "name": "states", ... }
  ]
}
Query across states
JOIN instances to states for cross-workspace queries. Answer "how many resources per workspace?" without parsing files.
$ stategraph mql query \
    "SELECT s.workspace, count(*) AS total FROM instances i JOIN states s ON i.state_id = s.id GROUP BY s.workspace ORDER BY total DESC"
[
  { "workspace": "production", "total": 312 },
  { "workspace": "staging", "total": 187 },
  { "workspace": "development", "total": 94 }
]
Know what breaks before you break it
Dependency distance and total impact for any resource address.
Check blast radius
Every resource that depends on the one you're changing. TSV output for piping to wc -l or awk.
$ stategraph states instances blast-radius \
    --state $STATE_ID "aws_vpc.main"
# TSV: resource, address, index, distance
aws_subnet.public    aws_subnet.public[0]    0  1
aws_subnet.private   aws_subnet.private[0]   0  1
aws_instance.web     aws_instance.web[0]     0  2
aws_lb.public        aws_lb.public           0  2

$ stategraph states instances blast-radius \
    --state $STATE_ID "aws_vpc.main" | wc -l
23
Gate changes on impact
No plan required. Check any resource address against a threshold. Run it in CI before apply, in a cron job, or ad-hoc from your terminal.
# No plan needed — just the address
$ count=$(stategraph states instances blast-radius \
    --state $STATE_ID "aws_vpc.main" | wc -l)
$ if [ "$count" -gt 10 ]; then
    echo "BLOCKED: aws_vpc.main affects $count resources"
    exit 1
  else
    echo "PASS: aws_vpc.main affects $count resources"
  fi
BLOCKED: aws_vpc.main affects 23 resources
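The same gate can be factored into a small function and reused across addresses in a CI script. This is a sketch: `gate_change` is a name invented here, and the count is passed in explicitly so the logic is runnable on its own — in practice it comes from the blast-radius command piped through wc -l.

```shell
# Hypothetical helper: gate a change on an impact count. In CI the count
# would come from:
#   count=$(stategraph states instances blast-radius --state $STATE_ID "$addr" | wc -l)
gate_change() {
  addr=$1; count=$2; threshold=${3:-10}
  if [ "$count" -gt "$threshold" ]; then
    echo "BLOCKED: $addr affects $count resources (threshold $threshold)"
    return 1
  fi
  echo "PASS: $addr affects $count resources"
}

gate_change "aws_vpc.main" 23 || true   # prints the BLOCKED line
gate_change "aws_s3_bucket.logs" 4      # prints the PASS line
```

The non-zero return on the blocked path is what fails the pipeline step.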
Summarize a state
Edge, instance, module, and provider counts in one command. Drill into resource types to see instance distribution.
$ stategraph states summary --state $STATE_ID
# TSV
EDGES  INSTANCES  MODULES  PROVIDERS  RESOURCES
847    312        14       3          89

$ stategraph states resources summary --state $STATE_ID
# TSV: resource type, instance count
aws_instance        47
aws_security_group  23
aws_s3_bucket       18
aws_iam_role        14
...
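Because the summaries are TSV, standard tools compose directly. A sketch that turns the per-type counts into a percentage breakdown — the printf mocks the command output here so the pipeline is self-contained; in practice you would pipe the resources summary command in instead:

```shell
# Turn "type<TAB>count" TSV into each type's share of total instances.
# The printf stands in for: stategraph states resources summary --state $STATE_ID
printf 'aws_instance\t47\naws_security_group\t23\naws_s3_bucket\t18\n' |
awk -F'\t' '
  { name[NR] = $1; count[NR] = $2; total += $2 }
  END {
    for (i = 1; i <= NR; i++)
      printf "%s\t%.1f%%\n", name[i], 100 * count[i] / total
  }
'
```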
Find unmanaged resources. Generate import blocks.
Scan your cloud account, compare against Terraform state, generate HCL to close the gap.
Scan for coverage gaps
Compare live cloud resources against your Terraform state. Totals, managed counts, and every unmanaged resource grouped by service. AWS, GCP, and Azure.
$ stategraph tenant gaps analyze \
    --tenant $TENANT_ID --provider aws
{
  "summary": {
    "total_aws_resources": 1847,
    "managed_by_stategraph": 1621,
    "unmanaged": 226
  },
  "unmanaged_resources": [
    { "service": "S3", "resource_type": "bucket", "region": "us-east-1" },
    { "service": "EC2", "resource_type": "instance", "region": "us-west-2" },
    ...
  ]
}
Generate import blocks
Pipe unmanaged resources into gaps import to generate Terraform import blocks and resource definitions. Review the output, then apply.
# Save unmanaged resources, then generate import blocks
$ stategraph tenant gaps analyze \
    --tenant $TENANT_ID --provider aws \
    | jq '.unmanaged_resources' > unmanaged.json
$ stategraph tenant gaps import \
    --tenant $TENANT_ID unmanaged.json
{
  "import_blocks": "import {\n  to = aws_s3_bucket.logs\n  id = ...\n}",
  "generated_hcl": "resource \"aws_s3_bucket\" \"logs\" { ... }",
  "supported_count": 198,
  "unsupported_count": 28
}
Every apply, tracked
stategraph apply is a drop-in replacement for terraform apply. Every run becomes a transaction automatically.
Drop-in replacement
Replace terraform with stategraph. Same flags, same workflow. Plans, applies, and state changes are recorded as transactions with no extra steps.
# Before
$ terraform plan
$ terraform apply

# After — same flags, automatic tracking
$ stategraph plan
$ stategraph apply
Review the audit trail
Every transaction records who changed what, when, and which states were affected. Tag with pipeline metadata for traceability.
$ stategraph tx list --tenant $TENANT_ID
[
  {
    "id": "tx_01jk9m2n3p4q",
    "state": "completed",
    "created_by": "ci-pipeline",
    "created_at": "2025-01-15T14:32:00Z",
    "tags": { "pipeline": "gh-actions", "pr": "142" }
  }
]

# Drill into a specific transaction
$ stategraph tx logs list --tx tx_01jk9m2n3p4q
One plan, multiple states
Plan and apply across multiple root modules atomically. One command. Parallel execution. ACID multi-state transactions.
$ stategraph tf mtx --out plan.out networking compute

Stategraph will perform the following actions:

  # aws_subnet.public (State 0) will be created
  # aws_instance.web (State 1) will be created
  # aws_lb.public (State 0) will be created

Plan: 3 to add, 0 to change, 0 to destroy.

$ stategraph tf apply plan.out

aws_subnet.public [State 0]: Creating...
aws_instance.web [State 1]: Creating...
aws_subnet.public [State 0]: Creation complete
aws_lb.public [State 0]: Creating...
aws_instance.web [State 1]: Creation complete
aws_lb.public [State 0]: Creation complete

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
name: Terraform Apply

on:
  pull_request:
    paths: ['**.tf']

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Stategraph CLI
        run: curl -sSL https://get.stategraph.com/install.sh | sh
      - name: Plan and apply
        env:
          STATEGRAPH_API_KEY: ${{ secrets.STATEGRAPH_API_KEY }}
        run: |
          stategraph plan --out plan.out
          stategraph apply plan.out
Full state lifecycle
Create, import, list, query, and delete states. Soft-delete preserves history for auditing.
Import from any backend
Import .tfstate files from S3, GCS, or local disk. Tag on import with key-value metadata. Queryable via MQL immediately.
$ stategraph states import \
    --tenant $TENANT_ID \
    --name "networking" \
    --workspace "production" \
    --tag environment=prod \
    --tag team=platform \
    terraform.tfstate
{
  "id": "st_01jk9m2n3p4q",
  "name": "networking",
  "workspace": "production",
  "created_at": "2025-01-15T14:32:00Z"
}
List, query, and delete
List all states for a tenant. Query instances with filters and pagination (-i to iterate all pages). List modules and their resource counts.
$ stategraph states list --tenant $TENANT_ID | \
    jq '.[] | "\(.name) (\(.workspace))"'
"networking (production)"
"compute (production)"
"data (staging)"

$ stategraph states modules list --state $STATE_ID
# TSV: module name, resource count, instance count
module.vpc          12  24
module.ecs_cluster   8  16
module.rds           6  12
Built for scripting
JSON is the default where structure matters. TSV is used where piping is the common path. Exit codes are stable and meaningful.
# Iterate all tenants you have access to
$ stategraph user tenants list | while read id name; do
    stategraph states list --tenant "$id"
  done

# Export MQL results to CSV
$ stategraph mql query "SELECT * FROM instances WHERE type = 'aws_instance'" | \
    jq -r '.[] | [.address, .type, .provider] | @csv' > instances.csv

# Gap analysis summary in one line
$ stategraph tenant gaps analyze --tenant $TENANT_ID --provider aws | \
    jq '"Total: \(.summary.total_aws_resources), Managed: \(.summary.managed_by_stategraph)"'
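Stable exit codes mean failures can drive control flow directly. A runnable sketch of the pattern — the `stategraph` function below is a stub invented here so the branch logic executes on its own (the specific status 3 is arbitrary, not a documented code); in a real pipeline, delete the stub and let the CLI's actual exit status decide:

```shell
# Stub standing in for the real CLI; simulates a failing invocation.
stategraph() { return 3; }
STATE_ID=${STATE_ID:-st_example}   # placeholder state id for the sketch

stategraph states summary --state "$STATE_ID" || rc=$?
if [ "${rc:-0}" -ne 0 ]; then
  echo "stategraph exited with status $rc"
fi
```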
# Environment variables
$ export STATEGRAPH_API_BASE=https://stategraph.example.com
$ export STATEGRAPH_API_KEY=your-api-key-here

# Or pass per-command
$ stategraph --api-base https://stategraph.example.com user whoami
CLI vs API
The CLI uses the same API surface as the platform. Same data, same operations, different interface. Use CLI for terminal workflows and scripting. Use API for custom integrations and long-running services.
CLI
- CI/CD pipelines
- Cron jobs and scheduled tasks
- Ad-hoc queries from terminal
- Shell scripts
- One-off operations
API
- Custom dashboards
- Slack/Teams bots
- Webhooks
- Long-running services
- JIRA/ServiceNow integrations
See CLI documentation for the full command reference and API documentation for REST endpoints.
Get started with the CLI
Install the CLI. Configure two environment variables. Start querying.