Commit

Minor changes + Update README.md file
rafidka committed Apr 25, 2024
1 parent cfc6ef0 commit e227b38
Showing 6 changed files with 39 additions and 29 deletions.
24 changes: 24 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,9 +1,33 @@
 ## aws-mwaa-docker-images
 
+## Overview
+
 This repository contains the Docker Images that [Amazon MWAA](https://aws.amazon.com/managed-workflows-for-apache-airflow/)
 will use in future versions of Airflow. Eventually, we will deprecate [aws-mwaa-local-runner](https://github.com/aws/aws-mwaa-local-runner)
 in favour of this package. However, at this point, this repository is still under development.
 
+## Using the Airflow Image
+
+Currently, Airflow v2.9.0 is supported. Support for future versions, in parity with Amazon MWAA, will be added.
+
+To experiment with the image using a vanilla Docker setup, follow these steps:
+
+0. Ensure you have:
+   - Python 3.11 or later.
+   - Docker and Docker Compose.
+1. Clone this repository.
+2. This repository makes use of Python virtual environments. To create them, execute the following command from the root of the package:
+   ```
+   python3 create_venvs.py
+   ```
+3. Build and run the Airflow v2.9.0 Docker image using:
+   ```
+   cd <aws-mwaa-docker-images path>/images/airflow/2.9.0
+   ./run.sh
+   ```
+
+Airflow should now be up and running. You can access the Airflow web server at http://localhost:8080.
+
 ## Security
 
 See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
1 change: 0 additions & 1 deletion images/airflow/2.9.0/.gitignore
@@ -1 +0,0 @@
-run.sh
12 changes: 5 additions & 7 deletions images/airflow/2.9.0/docker-compose.yaml
@@ -4,13 +4,11 @@ x-airflow-common: &airflow-common
   image: amazon-mwaa/airflow:2.9.0
   restart: always
   environment:
-    # AWS credentials
-    AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
-    AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
-    AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
-    AWS_REGION: ${AWS_REGION}
-    AWS_DEFAULT_REGION: ${AWS_REGION}
-
+    AWS_ACCESS_KEY_ID: "FAKE_AWS_ACCESS_KEY_ID"
+    AWS_SECRET_ACCESS_KEY: "FAKE_AWS_SECRET_ACCESS_KEY"
+    AWS_SESSION_TOKEN: "FAKE_AWS_SESSION_TOKEN"
+    AWS_REGION: "us-west-2"
+    AWS_DEFAULT_REGION: "us-west-2"
     # Core configuration
     MWAA__CORE__REQUIREMENTS_PATH: "/usr/local/airflow/requirements/requirements.txt"
 
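The docker-compose change above hard-codes fake credential values so the local stack starts without any real AWS access. If the containers do need to reach real AWS services, one option — sketched below as an assumption, not anything this commit ships — is to restore host-environment pass-through and export the variables in your shell before `docker compose up`:

```yaml
# Sketch only (not part of this commit): pass the host's AWS credentials
# through to the containers instead of the fake placeholder values.
x-airflow-common: &airflow-common
  image: amazon-mwaa/airflow:2.9.0
  restart: always
  environment:
    AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
    AWS_REGION: ${AWS_REGION}
    AWS_DEFAULT_REGION: ${AWS_REGION}
```

With this pattern, Docker Compose substitutes each `${VAR}` from the invoking shell's environment at startup, so nothing sensitive is committed to the file.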
7 changes: 7 additions & 0 deletions images/airflow/2.9.0/run.sh
@@ -0,0 +1,7 @@
+#!/bin/bash
+set -e
+
+# Build the Docker image
+./build.sh
+
+docker compose up
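The `set -e` at the top of the new run.sh matters: if `./build.sh` fails, the script aborts instead of running `docker compose up` against a missing or stale image. A minimal, self-contained demonstration of that behavior:

```shell
# Run a tiny script under `set -e`: the failing `false` aborts it,
# so the echo after it never executes and the captured output is empty.
out=$(bash -c 'set -e; false; echo "should not print"' || true)
if [ -z "$out" ]; then
  echo "set -e aborted before the second command"
fi
```

Running the snippet prints the confirmation line, showing that the command after the failure never ran.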
18 changes: 0 additions & 18 deletions images/airflow/2.9.0/run.sh.template

This file was deleted.

6 changes: 3 additions & 3 deletions images/airflow/generate-dockerfiles.py
@@ -2,13 +2,13 @@
 Generate a Dockerfile based on the Jinja2-templated Dockerfile.j2 file.
 Dockerfile is very limited in nature, with just primitive commands. This
-usually results in Dockerfiles becoming lengthy, repetetive, and error prone,
+usually results in Dockerfiles becoming lengthy, repetitive, and error prone,
 resulting in quality degradation. To work around this limitation, we use the Jinja2
-templating engine which offers a lot of futures, e.g. if statements, for loops,
+template engine, which offers a lot of features, e.g. if statements, for loops,
 etc., and enable integration with Python (via data variables) resulting in a
 way more powerful Dockerfile.
-When exectued, this script takes the Dockerfile.j2 and pass it to Jinja2 engine
+When executed, this script takes the Dockerfile.j2 and passes it to the Jinja2 engine
 to produce a Dockerfile. The reader is referred to the code below for a better
 understanding of the working mechanism of this.
 """
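The docstring above can be made concrete with a tiny render. The template fragment below is hypothetical (it is not the repository's actual Dockerfile.j2), and it requires the third-party `jinja2` package:

```python
from jinja2 import Template  # third-party: pip install jinja2

# Hypothetical fragment in the spirit of Dockerfile.j2: a for-loop emits one
# RUN line per package, something plain Dockerfile syntax cannot express.
TEMPLATE = Template(
    "FROM amazon-mwaa/airflow:2.9.0\n"
    "{% for pkg in packages %}"
    "RUN pip install {{ pkg }}\n"
    "{% endfor %}"
)

def render_dockerfile(packages):
    """Render the Jinja2 template into a plain Dockerfile string."""
    return TEMPLATE.render(packages=packages)

if __name__ == "__main__":
    print(render_dockerfile(["boto3", "watchtower"]))
```

A plain Dockerfile would need one hand-written `RUN` line per package; the loop generates them from a data variable instead, which is the "integration with Python" the docstring describes.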
