Pricing
Flexible, usage-based pricing for teams of all sizes.
Run Apache Airflow without the overhead. Astro's usage-based model means you only pay for the compute resources you actually use. Clusters, deployments, and workers scale with your workload.
Developer
Deployments starting at $0.35/hr.
- Pay-as-you-go (billed monthly)
- AI-powered DAG authoring & debugging
- Flexible, scale-to-zero compute
- Hibernating deployments
- API access
- Deployment rollbacks
Team
For teams running production pipelines that require Airflow support & observability.
Try for Free
Deployments starting at $0.42/hr.
- Pay-as-you-go (billed monthly)
- Annual agreements
Everything in Developer, plus:
- End-to-end observability & data quality
- Network isolation with Dedicated clusters
- High availability deployments
- Audit logging (7-day retention)
- 24x5 support availability
Business
Contact us for pricing information.
- Annual agreements
Everything in Team, plus:
- SSO enforcement
- CI/CD enforcement
- Audit logging (90-day retention)
- 24x7 support availability (1 hour SLA)
Enterprise
For teams managing Airflow at scale that require enterprise security & governance.
Get a Quote
Contact us for pricing information.
- Annual agreements
Everything in Business, plus:
- Remote execution agents
- Custom RBAC
- SCIM provisioning
- IP access list
- Organization dashboards
- Manage deployments across multiple clusters and regions
- Air-gapped deployment support
- Enterprise SSO integration
- Deployment isolation
- Dynamic resource management
- In-place Airflow upgrades
- Houston API access
- Professional services installation assistance
- 24/7 committer-led support
How Astro Pricing Works
Astro's transparent pricing model has three primary components that work together to give you control over your costs. You only pay for what you use, when you use it.
Available on your preferred cloud marketplace.
Purchase Astro directly through AWS, Azure, or GCP marketplaces and consolidate your cloud spending. Existing marketplace commitments and enterprise discount programs apply.
AWS Marketplace
Available with flexible pay-as-you-go or annual agreements.
Try Free with AWS →
Azure Marketplace
Available with flexible pay-as-you-go or annual agreements.
View in Marketplace →
Google Cloud Marketplace
Available with annual agreements.
View in Marketplace →
Snowflake Marketplace
Integrate Astro with your Snowflake environment.
View in Marketplace →
FAQs
What if I need to run individual tasks on bigger workers?
You might have a large number of tasks that require low amounts of CPU and memory, but a small number of tasks that are resource intensive — e.g., machine learning tasks.
To address this use case, we recommend using worker queues. Worker queues allow you to configure different groups of workers for different groups of tasks. That way, you’ll only be charged for the larger worker type if and when a task that requires that worker type actually runs.
Specifically, you can:
- Create a default queue with a small worker type. For example, A5.
- Create a second queue called "large-task" with a larger worker type. For example, A10.
- Set the Minimum Worker Count for the "large-task" queue to 0 if your resource-intensive tasks run infrequently.
- In your DAG, assign the larger task to the "large-task" queue.
To learn more about worker queues, see Worker queues in Astronomer documentation.
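The queue assignment in the last step can be sketched as a minimal DAG. This is an illustration, not a complete pipeline; it assumes a worker queue named "large-task" has already been created in your Deployment:

```python
from pendulum import datetime

from airflow.decorators import dag, task


@dag(start_date=datetime(2024, 1, 1), schedule=None, catchup=False)
def mixed_workload():
    @task  # runs on the default queue (small workers, e.g. A5)
    def light_task():
        return "light work"

    @task(queue="large-task")  # routed to the larger workers, e.g. A10
    def heavy_task():
        # resource-intensive work, such as model training
        return "heavy work"

    light_task() >> heavy_task()


mixed_workload()
```

Because the operator-level `queue` argument routes each task individually, only the tasks assigned to "large-task" trigger the larger (and more expensive) worker type.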
What if I need additional ephemeral storage for workers?
All Astro workers include an amount of ephemeral storage by default: 10 GiB for Celery workers, and 0.25 GiB for Kubernetes Executor and Kubernetes Pod Operator workers. You can configure additional ephemeral storage at a rate of $0.0002 per GiB per hour.
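As a back-of-the-envelope check, the extra-storage charge works out like this (a sketch of the stated rate; the `default_gib` included allowance is 10 for Celery workers and 0.25 for Kubernetes workers):

```python
def extra_storage_cost(storage_gib, hours, default_gib=10.0, rate_per_gib_hour=0.0002):
    """Cost of ephemeral storage configured beyond the included default."""
    extra_gib = max(0.0, storage_gib - default_gib)
    return extra_gib * rate_per_gib_hour * hours


# A Celery worker configured with 20 GiB, running for a 730-hour month:
# (20 - 10) GiB x $0.0002/GiB-hr x 730 hr = $1.46
extra_storage_cost(20, 730)
```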
How will I be charged for the Kubernetes Executor and Kubernetes Pod Operator?
In Airflow, the Kubernetes Executor (KE) and the KubernetesPodOperator (KPO) allow you to run a single task in an isolated Kubernetes Pod. Astro measures the total amount of CPU and memory allocated across your KE/KPO infrastructure at any given time, and bills for the number of A5 workers necessary to accommodate that total, rounded up to the nearest A5. One A5 worker corresponds to 1 CPU and 2 GiB memory.
For example:
If you are running 4 tasks concurrently, with each being allocated 0.25 CPU and 0.5 GiB memory, then you will be charged for 1 A5 for the duration of the infrastructure running those tasks.
Similarly, if you are running 3 tasks concurrently, with each being allocated 0.25 CPU and 0.5 GiB memory, then you would still be charged for 1 A5. In this case the total amount allocated is 0.75 CPU and 1.5 GiB memory, which rounds up to a single A5.
If you have 5 concurrent tasks that are each allocated 2 CPU and 4 GiB memory, that is a total of 10 CPU cores and 20 GiB memory and maps to 10 A5s.
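The rounding in these examples can be sketched as a small helper (an illustration of the rule as described above, not Astro's actual billing code):

```python
import math

# One A5 worker = 1 CPU, 2 GiB memory.
A5_CPU = 1.0
A5_MEM_GIB = 2.0


def billed_a5_workers(tasks):
    """tasks: list of (cpu, memory_gib) allocations for concurrently running pods."""
    total_cpu = sum(cpu for cpu, _ in tasks)
    total_mem = sum(mem for _, mem in tasks)
    # Bill enough A5s to cover both the CPU total and the memory total.
    return max(math.ceil(total_cpu / A5_CPU), math.ceil(total_mem / A5_MEM_GIB))


billed_a5_workers([(0.25, 0.5)] * 4)  # → 1
billed_a5_workers([(0.25, 0.5)] * 3)  # → 1
billed_a5_workers([(2, 4)] * 5)       # → 10
```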
To ensure reliability, Astro allocates the full limit requested by each task. If a task does not specify limits, the Deployment defaults are used.
In addition, ephemeral storage limits greater than 0.25 GiB per pod are charged at a rate of $0.0002 per GiB per hour.
What networking costs are passed through from the Cloud Provider?
This varies slightly by cloud:
AWS: Data Transfer within and between AWS regions, and out to the Internet (inclusive of Data Processing by NAT Gateway and PrivateLink VPC Endpoints). Includes Site-to-Site VPN uptime charges if configured for private connectivity to data sources.
GCP: Data Transfer within and between GCP regions, and out to the Internet (inclusive of Data Processing). Includes Cloud VPN and Private Service Connect endpoint uptime charges if configured for private connectivity to data sources.
Azure: Peered and Non-Peered Data Transfer within and between Azure regions, and out to the Internet (inclusive of Data Processing by Load Balancer and Private Link). Includes VPN Gateway uptime charges if configured for private connectivity to data sources.
Get started free.