Always use single character variable names so that the reader can imagine whatever name they want.
Contributing to Async
It's not a big deal, but I recently made a contribution to one of my favorite JavaScript projects and wanted to share it.
I added the cargoQueue type to Async:
https://github.com/caolan/async/commit/db49b8922b541d011d1e457b658ef8435539fdb4
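For anyone unfamiliar with it, cargoQueue combines the batching behavior of cargo with the concurrency of queue. Here is a quick, hedged example of how it can be used; the worker and the numbers below are purely illustrative:

import async from 'async'

// The worker receives batches of up to `payload` tasks, and up to
// `concurrency` batches can be processed at the same time.
const q = async.cargoQueue(function (tasks, callback) {
  console.log('processing a batch of', tasks.length, 'tasks')
  callback()
}, 2 /* concurrency */, 10 /* payload */)

for (let i = 0; i < 50; i++) {
  q.push({ i })
}

q.drain(() => console.log('all tasks processed'))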
Tao of Leo #33
A committee that makes fair decisions will not choose the best solution, but the one everyone hates equally.
Tao of Leo #30
The ideal solution is not the one most people like, it’s the one most people do not object to.
Try / Finally with AWS Step Functions
AWS Step Functions has some built-in features for catching and handling errors but, surprisingly, it doesn't have semantics for the "finally" concept that usually accompanies them.
In my scenario I am creating an ephemeral Kinesis stream in my State Machine, which I then stream a large number of records into while executing one lambda. I then process those records more slowly in a series of subsequent lambda functions. Once completed, I delete the ephemeral Kinesis stream.
The problem with this approach is that if there is an unexpected error anywhere in one of my steps, it can cause the whole Step Function to fail and end up orphaning the Kinesis stream. Therefore I needed a way to reduce the likelihood of this problem with a try/finally pattern.
To accomplish this, first imagine we have this step function:
StartAt: ConfigureIterator
States:
  ConfigureIterator:
    Type: Pass
    Result:
      limit: 500
    ResultPath: $.iterator
    Next: InitializeIterator
  InitializeIterator:
    Type: Task
    Resource: iterator
    InputPath: $.iterator
    ResultPath: $.iterator
    Next: ConfigureXmlStream
  ConfigureXmlStream:
    Type: Pass
    Result:
      gz: true
      root: item
    ResultPath: $.options
    Next: XmlStream
  XmlStream:
    Type: Task
    Resource: xmlstream
    ResultPath: $.xml
    Next: SendItemsToApi
  SendItemsToApi:
    Type: Task
    Resource: items2api
    ResultPath: $.iterator
    Next: IterateNext
  IterateNext:
    Type: Choice
    Choices:
      - Variable: $.iterator.state
        StringEquals: done
        Next: Cleanup
    Default: SendItemsToApi
  Cleanup:
    Type: Pass
    Result: done
    ResultPath: $.iterator.state
    Next: IteratorDone
  IteratorDone:
    Type: Task
    Resource: iterator
    InputPath: $.iterator
    ResultPath: $.iterator
    Next: Done
  Done:
    Type: Pass
    End: true
In the InitializeIterator step we are creating our ephemeral Kinesis stream. In the XmlStream step we are streaming items from a large XML document into JSON objects, which are then written to the stream. Next, in the SendItemsToApi step we are reading items out of the Kinesis stream, doing some formatting and validation on those items, and then sending each item to a REST endpoint for storage and/or other actions. Finally, in the IteratorDone step we are destroying the Kinesis stream.
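The iterator resource itself isn't shown in this post, but as a minimal sketch (not the actual implementation; the streamName field and the shard count are assumptions), a lambda that creates and tears down an ephemeral Kinesis stream with the AWS SDK for JavaScript might look like this:

import AWS from 'aws-sdk'

const kinesis = new AWS.Kinesis()

// Hypothetical sketch only -- not the actual `iterator` lambda from this post.
export function iterator (event, context, callback) {
  const StreamName = event.streamName || 'ephemeral-feed-stream' // assumed field
  if (event.state === 'done') {
    // Called from IteratorDone: tear the ephemeral stream down.
    kinesis.deleteStream({ StreamName }, err => callback(err, event))
  } else {
    // Called from InitializeIterator: create the ephemeral stream.
    kinesis.createStream({ StreamName, ShardCount: 1 }, err =>
      callback(err, Object.assign({}, event, { streamName: StreamName })))
  }
}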
You could imagine a variety of other scenarios where one would need to clean up resources allocated in a previous step. In this particular scenario we need to ensure that the IteratorDone step is called regardless of any errors that may happen between it and the InitializeIterator step.
To do this, we first wrap the XmlStream and SendItemsToApi steps in a Parallel block with a single branch. The reason we want to do this is so that these steps can be treated like a single block, where an error in any state can be caught and handled in a single Catch clause.
The three steps wrapped in a Parallel block now look like this:
Main:
  Type: Parallel
  Branches:
    - StartAt: XmlStream
      States:
        XmlStream:
          Type: Task
          Resource: xmlstream
          ResultPath: $.xml
          Next: SendItemsToApi
        SendItemsToApi:
          Type: Task
          Resource: items2api
          ResultPath: $.iterator
          Next: IterateNext
        IterateNext:
          Type: Choice
          Choices:
            - Variable: $.iterator.state
              StringEquals: done
              Next: Cleanup
          Default: SendItemsToApi
  Next: Cleanup
  ResultPath: $.main
  Retry:
    - ErrorEquals: [ 'States.ALL' ]
      MaxAttempts: 3
  Catch:
    - ErrorEquals: [ 'States.ALL' ]
      ResultPath: $.error
      Next: Cleanup
It's important to note here that the result of the block is an array of results, where each index in the array is the result object from the last step of each branch. So in this case we will have an array with a single object in it: [ { iterator: ... } ]. If you don't specify a ResultPath it will replace the entire context object $, which is undesirable in this case since we still need to access the iterator object in a later step.
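To make that concrete, after the Main state completes successfully the context object would look roughly like this (field contents are illustrative only):

{
  "iterator": { ... },
  "options": { "gz": true, "root": "item" },
  "main": [ { "iterator": { "state": "done", ... } } ]
}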
It's also important to note that we are storing the caught exception into the $.error field, which we will rethrow later, after cleanup.
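For reference, the object Step Functions stores at $.error when a lambda failure is caught looks something like the following; the Cause is itself a JSON string describing the original error, and the exact contents vary:

{
  "Error": "SomeErrorType",
  "Cause": "{\"errorMessage\":\"something broke\",\"errorType\":\"SomeErrorType\",\"stackTrace\":[...]}"
}

The remaining cleanup, rethrow, and completion states look like this: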
Cleanup:
  Type: Pass
  Result: done
  ResultPath: $.iterator.state
  Next: IteratorDone
IteratorDone:
  Type: Task
  Resource: iterator
  InputPath: $.iterator
  ResultPath: $.iterator
  Next: Finally
Finally:
  Type: Task
  Resource: throwOnError
  Next: Done
Done:
  Type: Pass
  End: true
So now, if an error occurs while processing our XML file or sending items to the API, the state machine will retry a couple of times and then ultimately capture the error and move to the Cleanup phase. We've added a new Finally step, which throws an exception if there is a value stored in $.error. This allows the Step Function to complete in an Error state rather than a Success state, so we can trigger alarms through CloudWatch.
Here is the code for the throwOnError lambda:
import { log, parse, handler } from 'mya-input-shared'

// Rebuilds an Error from the serialized name, message, and stack trace
// found in the caught error's Cause.
function RehydratedError (message, name, stack) {
  const tmp = Error.apply(this, arguments)
  this.name = tmp.name = name
  this.message = tmp.message = message
  Object.defineProperty(this, 'stack', {
    get: () => [`${this.name}: ${this.message}`].concat(stack).join('\n at ')
  })
  return this
}

RehydratedError.prototype = Object.create(Error.prototype, {
  constructor: {
    value: RehydratedError,
    writable: true,
    configurable: true
  }
})

export const throwOnError = handler((event, context, callback) => {
  const { feed, error } = event
  if (error) {
    // The Cause field is a JSON string describing the original lambda error.
    const Cause = error.Cause || '{}'
    parse(Cause, (err, cause) => {
      if (err) return callback(err)
      const { errorMessage, errorType, stackTrace } = cause
      err = new RehydratedError(
        errorMessage || 'An unknown error occurred.',
        errorType || 'UnknownError',
        stackTrace || '')
      // Log the rehydrated error for this feed and finish via the shared callback helper.
      log.error('feed_error', err, { feed }, callback)
    })
  } else {
    // No error was caught upstream; pass the event through untouched.
    callback(null, event)
  }
})
Tao of Leo #27
When you take microservices too far you end up with COM.
Iterating with AWS Step Functions
One interesting challenge I immediately encountered when attempting to work with AWS Lambda and Step Functions was the need to process large files. Lambda functions have a couple of limitations, namely memory and a 5-minute timeout. If you have some operation you need to perform on a very large dataset, it may not be possible to complete it in a single execution of a lambda function. There are several ways to solve this problem; in this article I would like to demonstrate how to create an iterator pattern in an AWS Step Function as a way to loop over a large set of data and process it in smaller parts.
In order to iterate, we have created an Iterator Task, which is a custom Lambda function. It accepts three values as inputs in order to operate: index, step and count.
Here is the code for this example step function:
{ "Comment": "Iterator Example", "StartAt": "ConfigureCount", "States": { "ConfigureCount": { "Type": "Pass", "Result": 10, "ResultPath": "$.count", "Next": "ConfigureIterator" }, "ConfigureIterator": { "Type": "Pass", "Result": { "index": -1, "step": 1 }, "ResultPath": "$.iterator", "Next": "Iterator" }, "Iterator": { "Type": "Task", "Resource": "arn:aws:lambda:{region}:{accountId}:function:iterator", "ResultPath": "$.iterator", "Next": "IterateRecords" }, "IterateRecords": { "Type": "Choice", "Choices": [ { "Variable": "$.iterator.continue", "BooleanEquals": true, "Next": "ExampleWork" } ], "Default": "Done" }, "ExampleWork": { "Type": "Pass", "Result": { "success": true }, "ResultPath": "$.result", "Next": "Iterator" }, "Done": { "Type": "Pass", "End": true } } }
ConfigureCount
In this step we need to configure the number of times we want to iterate. In this case I have set the number of iterations to 10 and put it into a variable called $.count. In a more complete example this may be the number of files you want to iterate over. For example, in my real-world scenario I am receiving a substantial CSV file, which is then broken into many smaller CSV files, all stored in S3; the number of smaller files is then set into the count variable here. The large CSV file can be read entirely in a single lambda execution, streaming sections into smaller files and never loading the entire file into memory at the same time, but it cannot be processed entirely in a single function. Thus we split it and then iterate over the smaller parts.
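That splitting step isn't shown in this post, but a rough sketch of the idea might look like the following. The bucket, key layout, and part size are assumptions, and it assumes a Node runtime that supports async iteration over readline:

import AWS from 'aws-sdk'
import readline from 'readline'

const s3 = new AWS.S3()

// Stream a large CSV out of S3 line by line, writing every `linesPerPart`
// lines to a smaller object, and return how many parts were written so the
// state machine can store it as $.count.
export async function splitCsv (event) {
  const { bucket, key, linesPerPart = 10000 } = event
  const input = s3.getObject({ Bucket: bucket, Key: key }).createReadStream()
  const rl = readline.createInterface({ input })

  let part = 0
  let batch = []
  const flush = async () => {
    if (!batch.length) return
    await s3.putObject({
      Bucket: bucket,
      Key: `parts/part-${part++}.csv`, // assumed key scheme
      Body: batch.join('\n')
    }).promise()
    batch = []
  }

  for await (const line of rl) {
    batch.push(line)
    if (batch.length >= linesPerPart) await flush()
  }
  await flush()

  return { count: part }
}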
ConfigureIterator
Here we set the index and step variables into the $.iterator field, which the iterator lambda uses to determine whether or not it should continue iterating.
Iterator
This is the iterator itself, a small lambda function that simply increments the current index by the step size and calculates the continue field based on the current index and count.
export function iterator (event, context, callback) {
  let index = event.iterator.index
  let step = event.iterator.step
  let count = event.count

  // Advance the index by the step size and report whether to keep iterating.
  index += step

  callback(null, {
    index,
    step,
    count,
    continue: index < count
  })
}
The reason we want to support a step size is that we may have multiple workers operating on the data in parallel. In this example we have a single worker, but in other cases we may need more in order to complete the overall work in a timely fashion.
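As a quick illustration, running the iterator above locally with a step of 2 and two different starting indexes shows how two workers would split the work between them. This simulation is not part of the state machine, and the import path is assumed:

import { iterator } from './iterator' // assumed path to the lambda shown above

function simulate (startIndex, step, count) {
  const visited = []
  let state = { index: startIndex, step, count, continue: true }
  while (true) {
    let next
    iterator({ iterator: state, count }, null, (err, result) => { next = result })
    if (!next.continue) break
    visited.push(next.index)
    state = next
  }
  return visited
}

console.log(simulate(-2, 2, 10)) // worker A -> [ 0, 2, 4, 6, 8 ]
console.log(simulate(-1, 2, 10)) // worker B -> [ 1, 3, 5, 7, 9 ]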
IterateRecords
From there we move immediately into a Choice state. This state simply looks at the $.iterator.continue field; if it is not true, our iteration is over and we exit the loop. If iteration is not over, we move to the worker tasks, which may use the $.iterator.index field to determine which unit of work to operate on.
ExampleWork
In this example this is just a Pass state, but in a real example this may represent a series of Tasks or Activities which process the data for this iteration. When completed, the last step in the series should point back to the Iterator state.
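For instance, a worker task might use the iterator index to pick which of the pre-split files to process. A hedged sketch, where the bucket name and key scheme are assumptions, could look like this:

import AWS from 'aws-sdk'

const s3 = new AWS.S3()

// Hypothetical worker: processes the CSV part matching the current
// iterator index, then reports success for this iteration.
export function exampleWork (event, context, callback) {
  const key = `parts/part-${event.iterator.index}.csv` // assumed key scheme
  s3.getObject({ Bucket: 'my-feed-bucket', Key: key }, (err, data) => {
    if (err) return callback(err)
    // ... process data.Body for this chunk ...
    callback(null, { success: true, processed: key })
  })
}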
It's also important to note that all states in this chain must use the ResultPath field to bucket their results in order to preserve the state of the iterator field throughout these states. Do not override the $.iterator or $.count fields while doing work or you may end up in an infinite loop or error condition.
Done
This state simply signifies the end of the step function.