
Turn your ML model into a web service in under 10 minutes with AWS CodeStar


As a data scientist, developing a model that makes the right predictions feels incredibly rewarding on its own. An HDF5 file on your machine, however, is often not very helpful for your company or for anyone with the same problem you just solved. The next step is therefore often to create a web service for your model that you can access via an API. One option would be to write a Flask application that you can host on your server. Unfortunately, this approach is often complex and doesn’t scale well. While there are many tools around to help with the set-up and management of virtual servers, everyone who has tried setting up an EC2 instance knows the hassles that come with it. Sticking with AWS, the next option would be running your model on Lambda, exposing it through API Gateway, and so on. Since at least four different services plus your code need to be managed, this method might be easier but can still be quite complex. Fortunately, Amazon recognized this issue and introduced a solution in 2017: AWS CodeStar.

CodeStar streamlines the app creation and deployment process by connecting multiple AWS services in a mostly intuitive and easy-to-use way. As an example, we will deploy my implementation of Rob Renalds’ Gibberish Detector, a Markov-chain-based tool that detects whether a string contains real words or just random “gibberish”. I trained the model on German text and stored it as a Python pickle file.
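To make the idea concrete, here is a minimal sketch of the kind of scoring such a detector performs: it averages the log probabilities of consecutive character transitions and compares the result to a threshold learned from real and scrambled training text. All names and the model format here are illustrative assumptions, not the actual implementation:

import math

ACCEPTED_CHARS = "abcdefghijklmnopqrstuvwxyz "

def avg_transition_prob(text, log_prob_matrix):
    # Average log probability of the character bigrams in `text`,
    # exponentiated so the score lands in (0, 1].
    chars = [c for c in text.lower() if c in ACCEPTED_CHARS]
    log_prob, transitions = 0.0, 0
    for a, b in zip(chars, chars[1:]):
        log_prob += log_prob_matrix[a][b]  # log P(b follows a), learned from a corpus
        transitions += 1
    return math.exp(log_prob / max(transitions, 1))

def is_gibberish(text, log_prob_matrix, threshold):
    # Strings scoring below the learned threshold look like random characters.
    return avg_transition_prob(text, log_prob_matrix) < threshold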

By the end of this post, we will have a working web service that takes in one variable through a GET request, runs the model via Python code and returns the resulting prediction as JSON.

Step 1: Creating your CodeStar project
Over 30 templates make it easy to set up your web service or app

Since our goal is to establish a web service, we can choose the first Python template. As a result, our service will run “serverless” on Lambda. The following three steps are relatively self-explanatory:

First, we decide on a project name, which AWS also converts into a URL-friendly project id. This project id will later be part of the HTTP endpoint.

Next, we choose our preferred Git repository. If you decide on GitHub, you have the option to change the repository name and description and to set it either to public or private. CodeStar will then create the repository for you with all the necessary files in it, but more on those later.

Finally, to work its magic, CodeStar needs permission to manage all the different tools in our pipeline on your behalf.
Our AWS CodePipeline

Step 2: Connect to your source repository

One of the great things about CodeStar is that the code is all managed via Git, and every update you push will update the Lambda function and automatically be deployed. Since AWS automatically creates the repository for you, all that needs to be done to start coding is a git clone your_repository_url .


AWS CodeStar creates all the necessary files for you

Step 3: Create GET parameter in AWS API Gateway

To make changes to the API parameters, we need to open our project in AWS API Gateway. The fastest way is through the sidebar in our dashboard: Project -> Project Resources -> AWS API Gateway.


AWS API Gateway before our changes

The first step is to add a Method Request . For this exercise, we add one string parameter, called string . Since we need this input to run our model, we can mark this parameter as Required . API Gateway also requires us to have a fitting Request Validator in place. Because we are only using URL parameters, the validator Validate query string parameters and headers will do fine. Here is how the resulting page should look:


AWS API Gateway configuration for one URL parameter

Step 4: Write the Lambda function

CodeStar builds from your Git repository. As a result, you can write code in your favorite IDE and push once you are done. The created repository contains the following items by default:

index.py: This file contains the code for your Lambda function (a skeleton is sketched below).
README.md: The readme file contains basic information about the next steps to take and links to the official documentation.
template.yml: The structure of your ‘serverless’ AWS architecture.
buildspec.yml: This file contains additional commands that are executed during the build process. A standard pre-build command is the execution of a unit test.
tests/: Contains the file test_handler.py with the unit test mentioned above.
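For orientation, the generated index.py is little more than a handler stub. The skeleton below shows its general shape; the exact contents of the generated file may differ, so treat this as an illustrative assumption:

import json

# Rough shape of the generated handler stub (exact contents may differ).
def handler(event, context):
    # The API Gateway proxy integration expects this response format.
    return {
        'statusCode': 200,
        'body': json.dumps({'output': 'Hello World'}),
        'headers': {'Content-Type': 'application/json'}
    }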

First, we have to make our model file accessible to the function. The easiest way is to add the file into our Git repository. AWS Lambda has relatively generous storage limits, which should be sufficient for most use cases. Once uploaded, Lambda can access the file the usual way, using open .
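A minimal sketch of that, assuming the pickle was committed as gib_model.pki (the filename is a placeholder). Loading the model at module level means warm Lambda containers deserialize it only once:

import pickle

# Load the model once at import time; warm invocations of the Lambda
# container reuse the deserialized object instead of re-reading the file.
# 'gib_model.pki' is a placeholder for whatever filename you committed.
with open('gib_model.pki', 'rb') as f:
    MODEL = pickle.load(f)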

Finally, we can write our Python code into index.py, which will become our Lambda function. With our set-up from steps 3 and 4, we can access the URL GET parameter easily through the event parameter:

req_name = event['queryStringParameters']['string']
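Putting the pieces together, the finished handler might look roughly like the sketch below. It reuses the avg_transition_prob helper and the module-level MODEL from the earlier sketches and assumes the pickle stores the transition matrix under 'mat' and the threshold under 'thresh'; all of these names are assumptions:

import json

def handler(event, context):
    # API Gateway passes URL parameters under 'queryStringParameters'.
    req_name = event['queryStringParameters']['string']
    # Score the string with the Markov-chain model (helper sketched earlier).
    score = avg_transition_prob(req_name, MODEL['mat'])
    return {
        'statusCode': 200,
        'body': json.dumps({
            'string': req_name,
            'is_gibberish': score < MODEL['thresh'],
        }),
        'headers': {'Content-Type': 'application/json'},
    }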

You can find the full code on GitHub. After implementing the main function, we have to update the unit test. Remember, if the unit test fails, the service will not be deployed. For that reason, we update everything accordingly.
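For illustration, the updated tests/test_handler.py could look like this; the event shape and response keys follow the assumptions made in the sketches above:

import json
import index

def test_handler_returns_prediction():
    # Simulate the event API Gateway sends for ?string=hallo+welt
    event = {'queryStringParameters': {'string': 'hallo welt'}}
    result = index.handler(event, None)
    assert result['statusCode'] == 200
    body = json.loads(result['body'])
    assert 'is_gibberish' in body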

All we have to do now is push the model file and code to the project repository, using the usual commands: git add . , git commit , and git push . As soon as the changes are online, CodeStar will automatically update its codebase and build and deploy everything. You can find the status on the right-hand side of the dashboard.


How it should look

Final words

If you followed along, congratulations: you just made your machine learning model publicly available in less than 10 minutes! You can find the endpoint for your API on the dashboard; add the parameters to it and voilà. Due to the integration of AWS CodePipeline, it is easy to keep your model updated, and the connection to Amazon CloudWatch gives you many insights into what happens to your function once it’s out in the wild.
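As a quick smoke test, you could call the endpoint from Python; the URL below is a placeholder for the endpoint shown on your dashboard:

import requests

# The URL is a placeholder; use the endpoint from your CodeStar dashboard.
resp = requests.get(
    'https://abc123.execute-api.eu-west-1.amazonaws.com/Prod/',
    params={'string': 'hallo welt'},
)
print(resp.json())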

Making your machine learning models public via Lambda is just one of many great things you can do with CodeStar. The next time you get lost setting up an AWS workflow that involves the usual 5+ services, take a look; maybe CodeStar can help you reduce your time to production as well.

