AWS is the most popular cloud provider, and Elastic Beanstalk offers a simple way to deploy applications with just a few commands. It's well-suited for deploying containerized applications, like our churn prediction service packaged in a Docker container.
In this session, we explored how to deploy a machine learning model to the cloud using AWS Elastic Beanstalk, leveraging Docker to containerize the application for easy deployment and scalability.
- The churn prediction service is packaged into a Docker container.
- This container is deployed to AWS Elastic Beanstalk.
- A marketing service sends requests to the Elastic Beanstalk environment.
- Elastic Beanstalk forwards these requests to the Docker container.
- The container processes the request and sends the response back to Elastic Beanstalk.
- Elastic Beanstalk relays the response to the requesting service.
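To make this flow concrete, here is a minimal sketch of what the service inside the container might look like. It is not the course's exact code: the file name churn_serving.py, the model file model.bin, the /predict route, and the feature and response fields are all assumptions for illustration.

```python
# churn_serving.py -- illustrative sketch of the containerized churn service
# (file name, model file, route, and field names are assumptions, not the course's exact code)
import pickle

from flask import Flask, request, jsonify

# Hypothetical model artifact baked into the Docker image:
# a DictVectorizer and a trained classifier saved together with pickle.
with open('model.bin', 'rb') as f_in:
    dv, model = pickle.load(f_in)

app = Flask('churn')

@app.route('/predict', methods=['POST'])
def predict():
    customer = request.get_json()          # customer features sent by the marketing service
    X = dv.transform([customer])           # turn the JSON record into a feature matrix
    y_pred = model.predict_proba(X)[0, 1]  # probability of churn
    return jsonify({
        'churn_probability': float(y_pred),
        'churn': bool(y_pred >= 0.5),
    })

if __name__ == '__main__':
    # For local experiments; inside the container a WSGI server such as gunicorn would serve the app.
    app.run(debug=True, host='0.0.0.0', port=9696)
```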
Elastic Beanstalk automatically scales the application based on traffic. If the churn prediction service receives a high volume of requests, EB automatically adds more instances of the service to handle the load without interruption (horizontal scaling). Similarly, when traffic decreases, EB scales down the number of instances to optimize resource utilization.
- Install the AWS EB CLI:
pipenv install awsebcli --dev
Note: This installs the EB CLI as a development dependency, as it's not needed within the container itself.
- Initialize the EB application:
eb init -p docker -r eu-north-1 churn-serving
This command configures the EB application with the following parameters:
  - `-p docker`: specifies the platform as Docker.
  - `-r eu-north-1`: sets the region to `eu-north-1`. You can choose a different region based on your account information.
  - `churn-serving`: defines the name of the application.
- Verify the EB Configuration:
less .elasticbeanstalk/config.yml
This command allows you to review the generated configuration file.
- Test Locally:
eb local run --port 9696
This command starts the application locally on the specified port. You can use the `predict.py` script to test the service locally.
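For example, a small client along these lines can be pointed at the locally running service; the route and field names follow the sketch earlier and are assumptions rather than the course's exact test script.

```python
# local_test.py -- hypothetical client for the locally running service
import requests

url = 'http://localhost:9696/predict'

# Illustrative customer record; the real feature names come from the training data.
customer = {
    'contract': 'month-to-month',
    'tenure': 12,
    'monthlycharges': 29.85,
}

response = requests.post(url, json=customer).json()
print(response)

if response.get('churn'):
    print('Send a promotional email to this customer')
```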
- Deploy to AWS:
eb create churn-serving-env
This command creates the Elastic Beanstalk environment and deploys the application.
- Access the Deployed Service:
After successful deployment, Elastic Beanstalk provides the endpoint URL for the service. Update the predict-test
script with the new host address. Note that you'll no longer need the port number when accessing the deployed service.
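In practice, the only change in the test script is the host; the address below is a placeholder for the URL that Elastic Beanstalk reports for your environment.

```python
# Point the client at the deployed environment instead of localhost.
# The hostname is a placeholder -- use the URL printed by `eb create` / shown in the EB console.
host = 'churn-serving-env.xxxxxxxxxx.eu-north-1.elasticbeanstalk.com'
url = f'http://{host}/predict'  # no port number needed for the deployed service
```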
- Security Considerations:
The deployed service is publicly accessible. In a production environment, it's crucial to implement appropriate security measures to restrict access to authorized users or networks.
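As one purely illustrative, app-level option (not covered in the course), you could require an API key on every request; in practice you would more likely restrict access at the network level, for example with security groups, a private VPC, or an authenticated gateway in front of the service.

```python
# Hypothetical API-key check added to the Flask app (illustrative only, not from the course).
import os

from flask import Flask, request, abort

app = Flask('churn')

# Assumed to be provided via the environment configuration of the deployment.
API_KEY = os.environ.get('CHURN_API_KEY')

@app.before_request
def check_api_key():
    # Reject any request that does not carry the expected key in its headers.
    if not API_KEY or request.headers.get('X-Api-Key') != API_KEY:
        abort(401)
```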
- Terminate the Service:
eb terminate churn-serving-env
This command terminates the Elastic Beanstalk environment and removes all associated resources.
- Remember to configure your AWS credentials before running the EB CLI commands.
Now that we've seen how to deploy our app on AWS, let's find out how to deploy it on Heroku.
- First of all, create your web service with Flask (example file: churn_prediction.py).
- Then create a file named requirements.txt and list your dependencies there. Example:
numpy
flask
gunicorn
Note: pickle does not need to be listed, since it is part of the Python standard library.
- Create another file named Procfile and declare the command that runs your app. Example:
web: gunicorn churn_serving:app
Note that `churn_serving` here is the name of the main Python file (without the .py extension) that contains the Flask `app` object, so adjust it to match your own file name (e.g., churn_prediction); compare the service sketch in the AWS section above.
- Create your Heroku account, go to the dashboard, and open the Deploy tab.
- Follow the instructions to deploy using Heroku Git.
- Great, your app is now publicly available.
I've put my heroku app files in this repository: https://github.com/amindadgar/customer-churn-app
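Once deployed, the service can be exercised the same way as before; the app name below is a placeholder for whatever you chose on Heroku, and the route and fields follow the earlier sketches.

```python
# Hypothetical smoke test against the Heroku deployment.
import requests

url = 'https://<your-heroku-app>.herokuapp.com/predict'  # Heroku serves HTTPS, no port needed

customer = {
    'contract': 'month-to-month',
    'tenure': 12,
    'monthlycharges': 29.85,
}

print(requests.post(url, json=customer).json())
```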
The notes are written by the community. If you see an error here, please create a PR with a fix.