The goal: an API that sends messages to an SQS queue to be processed by workers.
api request -> AWS API Gateway -> SQS -> Celery
Why manage an api for yourself when Chalice and AWS can handle it quite easily for you? Yes, the cloud is someone else’s computer, but I don’t want to be woken up because a server is down. I would rather be notified because a behemoth like Amazon is down.
Create Chalice Application
I use python3 for everything. 3 > 2, so python3 is obviously better. We will also use the default project names to keep it simple. A lot of this is taken from the Chalice README.
- Create virtualenv for program
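A minimal setup might look like this (assuming python3 and the default project name chalice_project):

```shell
python3 -m venv venv
. venv/bin/activate
pip install chalice
chalice new-project chalice_project
```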
Since Chalice is for Amazon, you need to set up your Amazon credentials.
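If you have not configured credentials before, a ~/.aws/config file along these lines works (placeholder keys; us-east-1 is an assumed region):

```ini
[default]
aws_access_key_id=YOUR-ACCESS-KEY
aws_secret_access_key=YOUR-SECRET-KEY
region=us-east-1
```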
Then create app.py inside the chalice_project directory with the following contents:
Also add the following to requirements.txt
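Something like this (Chalice itself lives in the virtualenv, not in requirements.txt):

```
boto3
```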
Notice a few things here. We need to create an SQS queue, and we need our AWS credentials. For testing it is fine to leave those in there, but I would HIGHLY recommend using AWS KMS keys for the data. This is just to get things working.
You can also use your AWS “master” account, but below I will show the policies that I used to make it work.
Go to your AWS dashboard (or use CloudFormation) and make your SQS FIFO queue.
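If you prefer the CLI, the equivalent is roughly this (the queue name is a placeholder, and FIFO queue names must end in .fifo):

```shell
aws sqs create-queue \
    --queue-name chalice_project.fifo \
    --attributes FifoQueue=true
```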
Now it is time to deploy your chalice app!
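From inside the project directory:

```shell
cd chalice_project
chalice deploy
```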
Any credentials or permissions you are missing for that user are made clear here. If you are using your master account, it should work just fine. Chalice creates the Lambda function, creates permissions, and builds the API Gateway. Really, it is setting up the API Gateway -> Lambda method execution for you. Anything I do not have to do manually is a win.
In addition, it uploads any dependencies you installed via pip. Amazing!
Something else to notice: we added ‘api_key_required=True, authorizer=authorizer’ to the route decorator, so yes, we will need to auth, TWICE! Why not, security rocks. You can remove those from the decorator and have the endpoint be unauthenticated, but what’s the fun in that!
Adding api key to endpoint
You can probably do this with CloudFormation or the AWS API, but for this tutorial we will use the console.
- In your AWS Console (web interface) go to Services and choose API Gateway
- Go to API Keys
- Under actions, select “Create API Key”
- Give it a name. I let it autogenerate, but if I am very paranoid I will generate my own.
- Hit Save
- Go to “Usage Plans”
- Hit Create
- Give it a name (Basic is fine for this demo)
- Usage plans let you rate limit, throttle, etc… based on api key
- After hitting save, select your newly created Usage Plan
- Select “Add API Stage”
- An API Stage lets you deploy code to different “Stages”. You could have a stage for dev, pre-prod, prod, hamster, etc..
- If you wanted a new stage, it’s as easy as “chalice deploy --stage penguin”
- Under api you should have your api selected
- Under stage, if it’s the default you can enter “api”, or choose whatever stage you want it to be.
- Hit that tiny checkmark beside it
There, now you have an api key to hit this API. We also chose IAM authentication. That means we have to use AWS Signature Version 4 signing. Don’t worry, smart people already made a python module for this.
And then to test the api, we have this snazzy little python program. Let’s name it snazzy.py.
You can get the api path by going to your api, selecting Stages, then your stage; the URL at the top is your base URL (the end of chalice deploy also prints it).
Now, if things are good, we are almost done. We need to enable CloudWatch logs for the api.
AWS Documentation. This is where I got my info.
- Under services, select IAM
- Select Roles
- You should see something like chalice_project-dev, select that
- Under trust relationship, select “Edit trust relationship”
- It should look something like this:
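For API Gateway to assume the role, the trust policy needs the apigateway.amazonaws.com service principal alongside Lambda’s:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "apigateway.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```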
- Select “Update Trust Policy”
- Back at the policy summary for chalice_project-dev, select “Attach Policy”
- Choose “AmazonAPIGatewayPushToCloudWatchLogs” and select “Attach Policy”
- Back at the policy summary page, copy the Role ARN
- Go to services and select API Gateway
- Select your API
- Select Stages
- Select your stage (api is default)
- Under logs select “Enable Cloudwatch logs” and hit save changes
- Under Settings on the left-hand side, under “CloudWatch log role ARN”, paste the ARN you copied earlier and hit Save
If all goes well, there should be no errors, and now your application is logging to CloudWatch!
Now you can test your api, and look at cloudwatch to see if there are any issues. Just run:
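Assuming the virtualenv with requests and aws-requests-auth is active:

```shell
python snazzy.py
```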
It should hit the sendtosqs endpoint and put a message on the queue! WOW!
- Create another virtual environment somewhere
- NOTE: there may be funny things with pycurl. This is the worst part of everything. YMMV; you may have to use google to figure out why it won’t install. Again, so stupid.
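A sketch of that setup, assuming the SQS transport extra (which is what pulls in pycurl, hence the note above):

```shell
python3 -m venv celery_env
. celery_env/bin/activate
pip install "celery[sqs]"
```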
- Create a file called tasks.py
The point here is not to do actual work; it’s to show that it is reading SQS messages and executing child tasks that run in parallel.
Run the program with:
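From the directory containing tasks.py:

```shell
celery -A tasks worker --loglevel=info
```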
If things work right, you should see messages coming in!
If you do everything with an account that has full access to everything, there should be zero issues. However, that is NOT how you want to do it in production.
- Here is the policy json that worked for me to let a user I created deploy, update, and delete a chalice app.
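A sketch of such a policy; the action list is an assumption, and it is broader than strict least privilege (tighten the resources for real use):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:*",
        "apigateway:*",
        "iam:GetRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:PutRolePolicy",
        "iam:DeleteRolePolicy",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
```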
The nice thing about IAM authentication is that you can make a user that can only hit certain endpoints. (There are also OAuth and custom authorizers you can use with API Gateway.)
- I created a user, then a group to contain that user. I then created and attached the following policy:
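A sketch, with placeholder region, account id, API id, and stage:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:abc123/api/GET/sendtosqs"
    }
  ]
}
```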
- This user could only hit the sendtosqs endpoint using GET.