# Pilot Onboarding
## Prerequisites

- Admin access to the Akili API (`admin` role via SSO)
- Tenant details: slug, display name, pricing tier
- Source database connection details (host, port, credentials)
## Onboarding Steps

### 1. Create tenant

```sh
curl -X POST https://api.akili.systems/api/v1/tenants \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"slug": "acme-corp", "display_name": "Acme Corporation", "tier": "standard"}'
```

This automatically provisions:
- Structured store schema with RLS policies
- S3 storage bucket (object store)
- Event bus topics
- Analytics engine catalog (lakehouse federation)
- Ingestion connector destination (lakehouse, tenant-namespaced)
Wait for the response — provisioning takes 10-30 seconds.
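Since provisioning is asynchronous, a small polling helper can wait for the tenant to come up instead of sleeping a fixed interval. A sketch in Python; the `GET /api/v1/tenants/{slug}` endpoint and its `status` field are assumptions about the API's shape, not documented behavior — adjust to the real response:

```python
import json
import time
import urllib.request

API = "https://api.akili.systems/api/v1"

def fetch_tenant(slug: str, token: str) -> dict:
    """Fetch tenant state. Assumes GET /tenants/{slug} exists (hypothetical)."""
    req = urllib.request.Request(
        f"{API}/tenants/{slug}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_until_ready(fetch, attempts: int = 12, delay: float = 5.0) -> dict:
    """Poll until the tenant reports a terminal status (assumed field: 'status')."""
    for _ in range(attempts):
        tenant = fetch()
        if tenant.get("status") == "ready":
            return tenant
        if tenant.get("status") == "failed":
            raise RuntimeError(f"provisioning failed: {tenant}")
        time.sleep(delay)
    raise TimeoutError("tenant not ready after polling")
```

Usage: `wait_until_ready(lambda: fetch_tenant("acme-corp", admin_token))`.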
### 2. Create SSO users

In the SSO admin panel (https://auth.akili.systems/if/admin/):

- Create user accounts for the tenant's team
- Assign users to the appropriate groups:
  - `akili-developers`: can create and deploy data products
  - `akili-viewers`: read-only access to data products
- Set the `tenant_id` attribute on the user's group (for SSO-to-tenant mapping)
### 3. Register external data sources

```sh
# Register a PostgreSQL source
curl -X POST https://api.akili.systems/api/v1/connections \
  -H "Authorization: Bearer $TENANT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "production_db",
    "connector_type": "postgresql",
    "config_encrypted": "{\"host\":\"db.acme.com\",\"port\":5432,\"database\":\"production\",\"user\":\"readonly\",\"password\":\"...\",\"ssl_mode\":\"require\"}"
  }'
```

Test and discover:

```sh
# Test the connection
curl -X POST https://api.akili.systems/api/v1/connections/{connection_id}/test \
  -H "Authorization: Bearer $TENANT_TOKEN"

# Discover available tables
curl https://api.akili.systems/api/v1/connections/production_db/discover \
  -H "Authorization: Bearer $TENANT_TOKEN"
```
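Note that `config_encrypted` carries the source config as a JSON string nested inside the JSON body, which is easy to get wrong when escaping by hand. A hedged Python sketch that builds the doubly-encoded payload programmatically (the field names mirror the curl example above; `register_connection` assumes the endpoint returns JSON):

```python
import json
import urllib.request

API = "https://api.akili.systems/api/v1"

def connection_payload(name, host, port, database, user, password):
    """Build the registration body. json.dumps handles the inner
    encoding that the curl example does with backslash escapes."""
    config = {
        "host": host, "port": port, "database": database,
        "user": user, "password": password, "ssl_mode": "require",
    }
    return {
        "name": name,
        "connector_type": "postgresql",
        "config_encrypted": json.dumps(config),
    }

def register_connection(token: str, payload: dict) -> dict:
    """POST the payload to /connections and return the parsed response."""
    req = urllib.request.Request(
        f"{API}/connections",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```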
### 4. Create a data product

Write the six YAML manifest files. See the Sales Analytics Example for a complete working set.

Required files:

| File | Purpose |
|---|---|
| `product.yaml` | Name, namespace, archetype, description |
| `inputs.yaml` | Source connections and tables |
| `output.yaml` | Output format, partitioning, retention |
| `quality.yaml` | Data quality checks |
| `serving.yaml` | How the data should be served |
| `compute.yaml` | Compute requirements |

Register, upload, and deploy:

```sh
# Register the product
curl -X POST https://api.akili.systems/api/v1/products \
  -H "Authorization: Bearer $TENANT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "daily_order_summary",
    "namespace": "sales",
    "archetype": "aggregate",
    "description": "Daily aggregated order metrics by region"
  }'

# Upload manifests
curl -X POST https://api.akili.systems/api/v1/products/{product_id}/manifests \
  -H "Authorization: Bearer $TENANT_TOKEN" \
  -F "product=@product.yaml" \
  -F "inputs=@inputs.yaml" \
  -F "output=@output.yaml" \
  -F "quality=@quality.yaml" \
  -F "serving=@serving.yaml" \
  -F "compute=@compute.yaml"

# Deploy (dry run first)
curl -X POST https://api.akili.systems/api/v1/products/{product_id}/deploy \
  -H "Authorization: Bearer $TENANT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"version": "1.0.0", "dry_run": true}'

# Deploy for real
curl -X POST https://api.akili.systems/api/v1/products/{product_id}/deploy \
  -H "Authorization: Bearer $TENANT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"version": "1.0.0", "dry_run": false}'
```
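The dry-run-then-deploy sequence can be wrapped in a guard that only issues the real deploy when the dry run comes back clean. A sketch in Python; the `errors` list on the deploy response is an assumption about the API's shape, not documented behavior:

```python
def should_deploy(dry_run_response: dict) -> bool:
    """Gate the real deploy on a clean dry run.

    Assumes the deploy endpoint reports validation problems in an
    'errors' list (hypothetical field; adjust to the actual response).
    """
    return not dry_run_response.get("errors")

def deploy(run, version: str = "1.0.0") -> dict:
    """`run` performs one deploy call given a request body dict and
    returns the parsed JSON response (e.g. a thin wrapper around the
    curl call above). Dry-runs first, then deploys for real."""
    dry = run({"version": version, "dry_run": True})
    if not should_deploy(dry):
        raise ValueError(f"dry run reported errors: {dry.get('errors')}")
    return run({"version": version, "dry_run": False})
```

This keeps a failed validation from ever reaching the real deploy call.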
### 5. Trigger ingestion

```sh
# Initial sync
curl -X POST https://api.akili.systems/api/v1/connections/{connection_id}/sync \
  -H "Authorization: Bearer $TENANT_TOKEN"

# Check sync status
curl "https://api.akili.systems/api/v1/ingestion?tenant_id={tenant_id}" \
  -H "Authorization: Bearer $TENANT_TOKEN"
```
### 6. Query data

```sh
curl -X POST https://api.akili.systems/api/v1/query \
  -H "Authorization: Bearer $TENANT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT * FROM sales.daily_order_summary ORDER BY date DESC LIMIT 10"}'
```
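The same query can be issued from Python with only the standard library. A sketch; the response is assumed to be JSON, but since its exact shape (row/column fields) isn't specified here, the helper returns the parsed body as-is:

```python
import json
import urllib.request

QUERY_URL = "https://api.akili.systems/api/v1/query"

def query_body(sql: str) -> bytes:
    """Encode the request body expected by the query endpoint."""
    return json.dumps({"sql": sql}).encode()

def run_query(sql: str, token: str) -> dict:
    """POST a SQL statement and return the parsed JSON response."""
    req = urllib.request.Request(
        QUERY_URL,
        data=query_body(sql),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage: `run_query("SELECT * FROM sales.daily_order_summary LIMIT 10", tenant_token)`.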
### 7. Monitor

- Grafana: https://grafana.akili.systems (log in with SSO)
- Data quality: check quality scores in the product detail view
- Pipeline status: check ingestion state via the API
## Troubleshooting

| Issue | Resolution |
|---|---|
| Connection test fails | Verify network access from the cluster to the source DB. Check firewall rules. |
| Ingestion stuck in "running" | Check the ingestion connector sync job status. Logs are in the ingestion namespace. |
| Quality checks failing | Review the check SQL in `quality.yaml`. Check source data freshness. |
| Deploy fails | Run with `dry_run: true` first. Check manifest validation errors. |
| Cross-tenant data visible | Verify RLS policies. Check `SET LOCAL app.tenant_id` in logs. |
## Resource Limits

| Tier | Products | Connections | Concurrent Executions | Storage |
|---|---|---|---|---|
| Free | 5 | 3 | 2 | 10 GB |
| Standard | 50 | 20 | 5 | 100 GB |
| Enterprise | Unlimited | Unlimited | 20 | 1 TB |
## Next Steps

- Sales Analytics Example — complete working data product
- Write Manifests — field-by-field manifest reference
- End-to-End Tutorial — full declare, deploy, query lifecycle