
Using the Amazon Echo to Support Continuous Integration Builds – Part 2


By Austin Parker | 1.19.17

Gene Roddenberry’s original Star Trek has always captured my imagination to a certain extent. The idea of a utopian, space-faring society, predominantly concerned with missions of discovery and exploration? Sounds pretty good to me.

Of course, where would a discussion about Star Trek be without a musing on the technology? How many metaphorical ships have been launched in response to Majel Barrett’s shipboard computer? I would definitely draw a straight, bright line between the ship’s computer and modern voice assistants such as Siri, Alexa, or whatever they call the Google Home voice.

Here at Apprenda, we’re always looking for ways to integrate new technologies into our workflows, both to improve efficiency and to think about problems from a different angle.

Digital assistants, such as Alexa, provide a convenient voice-based interface for engineers, product owners, and other stakeholders to consume project data in a self-service fashion.

I recently built an interface and Alexa Skill to let us get information about test deployments during standups. In the prior entry in this series, I demonstrated how to build a C# application that scrapes information from TeamCity and exposes it via HTTP.

In this post, I’ll demonstrate the microservices we use to convert this information into responses from Alexa.

The High Level Design

For running our Alexa skill, we’d like to use AWS Lambda. It’s free for up to a million requests a month, which is far more than we’ll possibly need for a primarily internal service. Lambda also has a convenient integration with the Alexa developer portal and tools.

As I mentioned in part one, we’re pulling data from an HTTP service that’s part of a larger internal service. Placing the endpoint for this service on the public internet isn’t really an option! So, how to get data out of it?

Since we don’t need real-time resolution of these test deployments (given that they generally only run a few times a day and can take some time to perform), we’ll use a small Golang application that runs on a schedule in order to exfiltrate our data to an AWS S3 bucket that the Lambda pulls from.

[Diagram: high-level architecture of the Alexa skill data flow]

Getting Data Out To S3

Our data is pretty straightforward; we can represent it as a simple text file in JSON. With that in mind, I created a simple Golang application that I’ll run via Docker. The code for this is below:

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    url := "service url" // the internal report endpoint from part one

    // Pull the deployment report from the internal HTTP service.
    res, err := http.Get(url)
    if err != nil {
        log.Fatal(err)
    }
    defer res.Body.Close()

    fmt.Println("Uploading report to S3.")

    // Credentials come from the environment (set in the Dockerfile below).
    creds := credentials.NewStaticCredentials(os.Getenv("AWS_ACCESS_KEY"), os.Getenv("AWS_SECRET_ACCESS_KEY"), "")

    sesh := session.New(&aws.Config{
        Credentials: creds,
        Region:      aws.String("us-east-1"),
    })

    // Stream the response body straight into the S3 object.
    uploader := s3manager.NewUploader(sesh)
    s3res, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String("bucket-name"),
        Key:    aws.String("deploymentreport"),
        Body:   res.Body,
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("Uploaded file to", s3res.Location)
}

The corresponding Dockerfile is equally straightforward (the onbuild variant of the base image automatically copies in and compiles the application source):

FROM golang:onbuild
ENV AWS_ACCESS_KEY MyAccessKey
ENV AWS_SECRET_ACCESS_KEY MySecretKey

Remember – never commit AWS keys to a git repository! Consider using key management to store secrets.
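For example, rather than baking the keys into the image via ENV as above, a slightly safer option is to inject them when the container starts. This is just a sketch; the values are placeholders:

docker run --rm \
    -e AWS_ACCESS_KEY=MyAccessKey \
    -e AWS_SECRET_ACCESS_KEY=MySecretKey \
    publish_srv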

For my purposes, we can simply build and copy the Docker image to another host: docker build -t publish_srv . && docker save -o publish_img publish_srv. Copy the tarfile to your Docker host however you prefer, and load it via docker load -i path/to/img.

I chose to use cron on my Docker host to docker run publish_srv at a regular interval; a sample crontab entry is shown below. Other options exist as well: you could leave the container and application running constantly and schedule the execution of the task at some defined period.
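As a rough sketch, assuming the image name above and a once-a-day schedule (adjust the interval to taste), the crontab entry might look like this:

# Publish the deployment report every morning at 8:00; --rm cleans up the container afterwards
0 8 * * * docker run --rm publish_srv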

The Joy Of The Cloud

“Wait, why use S3? Why not publish results to some sort of document store, or a relational database?”

Why not use S3? It’s dirt-cheap for something that is pushed only a few times a day and whose results are only a few KB in size. Consider that PUT requests are billed at $0.005 per 1,000; at a handful of uploads a day, that’s on the order of a hundred PUTs a month, or a small fraction of a cent.

One of the biggest challenges when transitioning to cloud-native is breaking the mental model of trying to fit all of your pegs into database-shaped holes. A point to Amazon here as well; S3 is incredibly easy to use from Lambda.

Lambda functions have their execution role’s temporary credentials available in the environment during execution, which means you don’t have to fiddle with secrets management in Lambda functions. That doesn’t mean you can’t, obviously, but why would you need to in this case?
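To illustrate, here’s roughly what the read side looks like with the same Go SDK. This is just a sketch, not the actual skill backend (that’s covered in the next post and may use a different runtime), and the bucket and key names simply mirror the publisher above. Note that no static credentials are passed in: the SDK falls back to the credentials the execution role exposes in the environment.

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    // No static credentials here: the default credential chain picks up
    // the execution role's temporary credentials from the environment.
    sesh := session.New(&aws.Config{Region: aws.String("us-east-1")})

    // Download the report into an in-memory buffer.
    buf := aws.NewWriteAtBuffer([]byte{})
    downloader := s3manager.NewDownloader(sesh)
    _, err := downloader.Download(buf, &s3.GetObjectInput{
        Bucket: aws.String("bucket-name"),
        Key:    aws.String("deploymentreport"),
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(string(buf.Bytes()))
}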

Being able to utilize S3 as a go-between for internal providers of data and external consumers of data grants us the ability to begin extending and refactoring legacy applications and services into cloud-native patterns.

In fact, for many internal applications, S3 or other scalable cloud storage might wind up being the only data store you actually need.

Let’s Review

Last time, we enhanced our legacy service to pull data from TeamCity and make it available via HTTP. We’ve also designed a simple architecture to exfiltrate that information to the cloud, where it can be consumed by a Lambda service on AWS.

To move the data out to S3, we created a simple Go microservice and deployed it via Docker. The only thing left to do is wire up the Alexa skill in Lambda and start talking! Next time, I’ll go over how you can do just that. Find part one of my series here, and feel free to move on to part 3!

To see the integration live in action, check out the demo below!

Austin Parker

Austin Parker is a Software Engineer at Apprenda who is primarily concerned with developer productivity, automation, and the cloud. Outside of that, he’s mostly concerned with cat videos. You can find him on Twitter @austinlparker.
