Version: 0.33

Astronomer Software platform resources

Configuring platform resources

By default, Astronomer requests around 10.7 CPUs and 23.5Gi of memory, with limits of around 21.3 CPUs and 44Gi:

| Pod | Request CPU | Request Mem | Limit CPU | Limit Mem | Storage |
|---|---|---|---|---|---|
| astro-ui | 100m | 256Mi | 500m | 1024Mi | N/A |
| houston | 250m | 512Mi | 800m | 1024Mi | N/A |
| prisma | 250m | 512Mi | 500m | 1024Mi | N/A |
| commander | 250m | 512Mi | 500m | 1024Mi | N/A |
| registry | 250m | 512Mi | 500m | 1024Mi | 100Gi |
| install | 100m | 256Mi | 500m | 1024Mi | N/A |
| nginx | 500m | 1024Mi | 1 | 2048Mi | N/A |
| grafana | 250m | 512Mi | 500m | 1024Mi | N/A |
| prometheus | 1000m | 4Gi | 1000m | 4Gi | 100Gi |
| elasticsearch client replica-1 | 1 | 2Gi | 2 | 4Gi | N/A |
| elasticsearch client replica-2 | 1 | 2Gi | 2 | 4Gi | N/A |
| elasticsearch data replica-1 | 1 | 2Gi | 2 | 4Gi | 100Gi |
| elasticsearch data replica-2 | 1 | 2Gi | 2 | 4Gi | 100Gi |
| elasticsearch master replica-1 | 1 | 2Gi | 2 | 4Gi | 20Gi |
| elasticsearch master replica-2 | 1 | 2Gi | 2 | 4Gi | 20Gi |
| elasticsearch master replica-3 | 1 | 2Gi | 2 | 4Gi | 20Gi |
| kibana | 250m | 512Mi | 500m | 1024Mi | N/A |
| fluentd | 250m | 512Mi | 500m | 1024Mi | N/A |
| kubeState | 250m | 512Mi | 500m | 1024Mi | N/A |
| **Total** | 10.7 | 23.5Gi | 21.3 | 44Gi | 460Gi |
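As a sanity check, the totals row can be reproduced by summing the per-pod figures. The snippet below is an illustrative sketch with the table values hard-coded (it is not part of the platform):

```python
# Per-pod (request_cpu_m, request_mem_mi, limit_cpu_m, limit_mem_mi, storage_gi),
# copied from the table above. Replicated pods are listed once per replica.
pods = {
    "astro-ui":    (100, 256, 500, 1024, 0),
    "houston":     (250, 512, 800, 1024, 0),
    "prisma":      (250, 512, 500, 1024, 0),
    "commander":   (250, 512, 500, 1024, 0),
    "registry":    (250, 512, 500, 1024, 100),
    "install":     (100, 256, 500, 1024, 0),
    "nginx":       (500, 1024, 1000, 2048, 0),
    "grafana":     (250, 512, 500, 1024, 0),
    "prometheus":  (1000, 4096, 1000, 4096, 100),
    "es-client-1": (1000, 2048, 2000, 4096, 0),
    "es-client-2": (1000, 2048, 2000, 4096, 0),
    "es-data-1":   (1000, 2048, 2000, 4096, 100),
    "es-data-2":   (1000, 2048, 2000, 4096, 100),
    "es-master-1": (1000, 2048, 2000, 4096, 20),
    "es-master-2": (1000, 2048, 2000, 4096, 20),
    "es-master-3": (1000, 2048, 2000, 4096, 20),
    "kibana":      (250, 512, 500, 1024, 0),
    "fluentd":     (250, 512, 500, 1024, 0),
    "kubeState":   (250, 512, 500, 1024, 0),
}

req_cpu = sum(p[0] for p in pods.values()) / 1000   # cores
req_mem = sum(p[1] for p in pods.values()) / 1024   # Gi
lim_cpu = sum(p[2] for p in pods.values()) / 1000   # cores
lim_mem = sum(p[3] for p in pods.values()) / 1024   # Gi
storage = sum(p[4] for p in pods.values())          # Gi

print(req_cpu, req_mem, lim_cpu, lim_mem, storage)
# → 10.7 23.5 21.3 44.0 460
```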

Changing values

You can change the requests and limits of any of the components above in your config.yaml or values.yaml. Values set in config.yaml override those in values.yaml.

For example, to change the resources allocated to the Software UI (astro-ui), add the following fields to your config.yaml:

```yaml
#################################
## Changing Software UI resources
#################################

astronomer:
  astroUI:
    resources:
      requests:
        cpu: "200m"
        memory: "256Mi"
      limits:
        cpu: "700m"
        memory: "1024Mi"
```

Once all the changes are made, run `helm upgrade` to switch your platform to the new configuration:

```shell
helm upgrade <platform-release-name> -f config.yaml --version=<platform-version> astronomer/astronomer -n <your-namespace>
```

Be sure to specify the platform namespace, not an Airflow namespace.

Infrastructure cost estimates

These estimates apply to our general recommendation for an Astronomer Software installation in a US East region, sized to run a starting set of Airflow Deployments reliably.

AWS

| Component | Item | Hourly Cost (Annual Upfront Pricing) |
|---|---|---|
| Compute | 6 m5.xlarge or 3 m5.2xlarge (24 vCPU, 96 GiB) | $0.68 |
| EKS control plane | $0.20/hr x 24hr x 365 | $0.20 |
| Database | db.t2.medium Postgres, Multi-AZ at $0.29/hr x 24hr x 365 | $0.05 |
| **Total** | | $0.93 |

GCP

| Component | Item | Hourly Cost (Annual Upfront Pricing) |
|---|---|---|
| Compute | 6 n2-standard-8 at $0.311/hr | $0.31 |
| Database | Cloud SQL for PostgreSQL with 2 cores and 14.4GB at $0.29/hr | $0.29 |
| **Total** | | $0.60 |

For more information, reference the GCP Pricing Calculator.

Azure

| Component | Item | Hourly Cost (Annual Upfront Pricing) |
|---|---|---|
| Compute | 3 x D8s v3 (8 vCPU, 32 GiB) | $0.95 |
| Database | 1 x Gen 5 (2 vCore), 25 GB Storage, LRS redundancy | $0.18 |
| **Total** | | $1.13 |

For more information, reference the Azure Price Calculator.

Configuring Deployment resources

Most of the key components that you need to run Airflow can be controlled via the sliders in our UI. However, you may find some discrepancies between the numbers in the UI and what exists in Kubernetes at any given moment. Below is a summary of the less-visible resources that get provisioned with each Airflow Deployment you create on Astronomer. All of these resources exist within the namespace created for your Airflow Deployment.

Note: These resources are provisioned in addition to the scheduler, webserver, and worker resources that you've set in the Software UI. If you're running the Kubernetes executor or the KubernetesPodOperator, resources are created dynamically, but never exceed the resource quota set by your Extra Capacity slider.

| Component | Description | AU | CPU/Memory |
|---|---|---|---|
| Resource Quotas | The resource quota on the namespace is double the AU (CPU/memory) provisioned for that namespace. This has little to do with what actually gets consumed, but it's necessary because, in certain cases, restarting Airflow requires two webservers and two schedulers to exist simultaneously for a brief moment while the new environment spins up. Without the 2x quota cap, the restart of these services would fail. Any amount added by the Extra Capacity slider increases this quota by that same amount. | N/A | N/A |
| PgBouncer | All Airflow Deployments ship with a PgBouncer pod to handle pooling connections to the metadata database. | 2 AU | 200m CPU / 768Mi |
| StatsD | All Airflow Deployments ship with a StatsD pod to handle metrics exports. | 2 AU | 200m CPU / 768Mi |
| Redis | All Airflow Deployments running the Celery executor ship with a Redis pod as a backend queueing system for the workers. | 2 AU | 200m CPU / 768Mi |
| Flower | All Airflow Deployments running the Celery executor ship with a Flower pod as a frontend interface for monitoring Celery workers. | 2 AU | 200m CPU / 768Mi |
| PgBouncer Metrics Exporter Sidecar | Reads data from the PgBouncer internal stats database and exposes it to Prometheus. This data powers the database-related graphs in our UI and Grafana. | 1 AU | 100m CPU / 384Mi |
| Scheduler Log Trimmer Sidecar (scheduler-gc) | The scheduler emits logs that are not useful to send to Elasticsearch. Those files are written locally in the pod, so we deploy a trimmer to ensure that the pod does not overflow its ephemeral storage limits. | 1 AU | 100m CPU / 384Mi |
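The 2x quota rule plus the Extra Capacity contribution can be sketched as a quick calculation, assuming 1 AU = 100m CPU / 384Mi memory as in the table above. The function name and inputs are illustrative, not an Astronomer API:

```python
AU_CPU_M = 100   # millicores per AU (assumption from the table above)
AU_MEM_MI = 384  # Mi of memory per AU

def namespace_quota(deployment_au: int, extra_capacity_au: int = 0) -> dict:
    """Approximate the namespace resource quota for an Airflow Deployment.

    The quota is double the AU provisioned for the core components (so a
    restart can briefly run two webservers/schedulers side by side), plus
    whatever the Extra Capacity slider adds.
    """
    quota_au = 2 * deployment_au + extra_capacity_au
    return {
        "cpu_millicores": quota_au * AU_CPU_M,
        "memory_mi": quota_au * AU_MEM_MI,
    }

# e.g. a Deployment provisioned at 20 AU with 10 AU of extra capacity:
print(namespace_quota(20, extra_capacity_au=10))
# → {'cpu_millicores': 5000, 'memory_mi': 19200}
```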

Note that, if the platform is deployed with Elasticsearch disabled, the workers are deployed as StatefulSets to store the logs. This means that we also need to deploy an additional log-trimming sidecar on each pod to prevent the pod from overflowing its ephemeral storage limits.
