Build Business Logic in Minutes, Not Weeks

Many backend systems, while difficult and expensive to construct, are very similar to one another in a fundamental way: they manage state over time as it responds to external events.

A framework that models these systems as "networked state machines," which are easy to create and to edit, makes it simple to build a backend suitable for launching a simple courier service in minutes, not weeks.


For example… 

Task Rabbit connects customers who need something done with couriers who will do it. Customers and couriers are connected by requests describing what needs to be done.

[Architecture diagram]

Requests can be in exactly one of four states:

  1. Pending Response when they are just created by a customer
  2. In Progress when a courier accepts the task
  3. Pending Acceptance when a courier submits the task for inspection
  4. Completed when the customer is satisfied

This can easily be modeled by a state machine (see below). Note that couriers may drop a request in progress, which sets the state of the request back to Pending Response, for another courier to pick up. Customers may also reject a submitted request, in effect saying that the courier has more to do. Additional states may be necessary for other features, but these four should get the job done.

The state machine might look like this, with the initial state, Pending Response, outlined.

[Request state machine diagram]
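For illustration only, here is a rough in-memory sketch (plain Python, not the platform's own format) of those transitions; the event names mirror the JSON definition at the end of this post.

    # Illustrative sketch of the request lifecycle; not the hosted platform's format.
    TRANSITIONS = {
        "Pending Response": {"acceptRequest": "In Progress"},
        "In Progress": {"decline": "Pending Response",
                        "submit": "Pending Acceptance"},
        "Pending Acceptance": {"acceptSubmission": "Completed",
                               "rejectSubmission": "In Progress"},
        "Completed": {},
    }

    class Request:
        def __init__(self):
            self.state = "Pending Response"

        def handle(self, event):
            next_state = TRANSITIONS[self.state].get(event)
            if next_state is None:
                raise ValueError(f"{event!r} is not valid in state {self.state!r}")
            self.state = next_state

    r = Request()
    r.handle("acceptRequest")  # courier accepts -> "In Progress"
    r.handle("submit")         # courier submits -> "Pending Acceptance"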

This state machine is the central piece of business logic for a Task Rabbit-like service. When a customer creates a request, they are in effect creating an instance of this state machine. A list of state machines which are in the Pending Response state (and in a certain location!) can be watched by all the active couriers (via some sort of app). When a courier accepts a request, a message is sent to the state machine, and the machine sends a text to the customer in response.

At this point, the machine looks like this:

[Request state machine diagram, now in the In Progress state]

And it has sent a text message right away:

[Screenshot of the text message: "Your request has been accepted!"]

Each action the courier or the customer takes advances the state machine, which triggers other actions on other services (text messages, push notifications, email, storing records, triggering billing; whatever you can imagine). In addition, these machines can maintain internal state as well, acting as records in a database which you can query for whatever purpose.

All of this functionality is described by a few lines of JSON (or constructed via a simple GUI: drag and drop states, and connect them with transition arrows), shown below, and hosted on Marion Technologies' platform right now. The images you saw above are live; you can watch the individual state machines in your application progress, interact with and modify them on the fly, and analyze them. They can communicate with and create other state machines. Use them for constructing APIs, for coordinating client applications, or for storing and processing evented information.


In this system, the only code you need to write is for the client applications (web or iOS or Android), which communicate with these machines via a simple API, and a login system to track your users and assign them an authentication key that allows them to create requests via the API.
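As a sketch of what that client code could look like, here is a hypothetical exchange using Python's requests library; the endpoint, payload fields, and response shape are illustrative assumptions, not the platform's actual API.

    import requests

    API = "https://api.example.com"           # placeholder endpoint, not the real one
    AUTH = {"Authorization": "Bearer <key>"}  # the per-user authentication key

    # Customer creates a request: an instance of the Request machine.
    resp = requests.post(API + "/machines/Request", headers=AUTH,
                         json={"customerCell": "+15551234567",
                               "description": "Pick up dry cleaning"})
    request_id = resp.json()["id"]  # hypothetical response field

    # Courier accepts it: sending the acceptRequest event advances the machine
    # to "In Progress" and triggers the text message to the customer.
    requests.post(API + "/machines/Request/" + request_id + "/events",
                  headers=AUTH, json={"event": "acceptRequest"})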

Even billing could be handled by a separate networked machine, one for each customer, that tracks requests made and bills the user on a monthly basis.

The key point is: you're not writing code for business logic. You just need to connect your users' clients to the networked machines you've defined. We host them, we maintain them, and we even handle the external services like Stripe for billing, Twilio for texting, Mailgun for emailing, and Mixpanel for analytics. You just need to describe what should happen; our machines handle the details.

If you'd like to hear more, sign up here, and I'll be in touch.



The JSON Describing a Request
    {
        "name": "Request",
        "initialStateName": "Pending Response",
        "states": [
            {
                "name": "Pending Response",
                "actions": {
                    "event": [
                        [["if", "eq", "acceptRequest"],
                         ["text", ".customerCell", "Your request has been accepted!"],
                         "In Progress"]]
                }
            },
            {
                "name": "In Progress",
                "actions": {
                    "event": [
                        [["if", "eq", "decline"], "Pending Response"],
                        [["if", "eq", "submit"],
                         ["text", ".customerCell", "Your request has been submitted!"],
                         "Pending Acceptance"]]
                }
            },
            {
                "name": "Pending Acceptance",
                "actions": {
                    "event": [
                        [["if", "eq", "acceptSubmission"], ["set", "rating", "..rating"],
                         ["text", ".courierCell", "Your submission has been accepted"],
                         "Completed"],
                        [["if", "eq", "rejectSubmission"], ["set", "rejectReason", "..reason"],
                         ["text", ".courierCell", "Your submission has been rejected"],
                         "In Progress"]]
                }
            },
            {
                "name": "Completed"
            }]
    }

Thanks to Jamie Quint, Raja Hamid, and Ben Gundersen for reading over drafts and offering some much appreciated advice.


Problem Solving with Constraints

I regularly encounter seemingly Sisyphean tasks when I work on a new piece of software with few constraints and unknown usage patterns.


Scheduling recurring actions shouldn't be a tough problem. Or so I thought as I sat for the nth hour in yet another cafe in Manhattan. It turns out that it's not so difficult, given certain constraints.

My mission was to schedule text messages to go out at certain intervals. My problem was not knowing what the most common intervals would be. The first order of business is deciding what your intervals are, and how you should describe them. Here I list a few for your convenience:

  1. Once (e.g. The 31st of August, 2017)
  2. Daily at 1:30 PM
  3. Weekly (e.g. every Monday at 2 PM)
  4. Monthly (e.g. first Friday of every month)
  5. Yearly (e.g. the first day of the year, or July 4th of every year at 12:00 PM)
  6. Every a-day, b-day, ..., n-day. (e.g. Mondays and Fridays)
  7. Every n weeks
  8. Every hour
  9. Every other week, on Mondays and Fridays

There are many more possibilities as well, but you get the idea. It soon becomes apparent that you must either define a proper grammar for describing intervals (which can quickly become "parse a not-insignificant subset of English"), or choose the few most important ones for your use case and stick with them for now.

I chose to support intervals of days and weeks to begin with. I'd allow a time as granular as an hour for each interval, and that would be it. This included, for example, "daily at 9:00 PM" and "Every Monday at 6:00 PM".
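A rough sketch of that constrained representation (the function and argument names are just illustrative) shows how little is needed to compute the next send time:

    from datetime import datetime, timedelta

    # Illustrative only: supports "daily at H:00" and "weekly on <weekday> at H:00".
    def next_occurrence(now, hour, weekday=None):
        """weekday is None for daily, or 0-6 (Monday-Sunday) for weekly."""
        candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
        if weekday is None:                       # daily
            if candidate <= now:
                candidate += timedelta(days=1)
        else:                                     # weekly
            days_ahead = (weekday - candidate.weekday()) % 7
            candidate += timedelta(days=days_ahead)
            if candidate <= now:
                candidate += timedelta(days=7)
        return candidate

    now = datetime(2013, 8, 14, 22, 30)
    next_occurrence(now, 21)             # daily at 9:00 PM -> Aug 15, 21:00
    next_occurrence(now, 18, weekday=0)  # every Monday at 6:00 PM -> Aug 19, 18:00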

Once you get there, the rest falls into place.


This is yet another example of a problem that appeared almost intractable until I imposed a few simple constraints on it. You'll always be able to go back later and make the system more flexible.


Tmux Simply Explained

If you work in the terminal regularly, and particularly if you work on a number of ongoing software projects, and particularly if you try to do everything code-related in the terminal, you should be using tmux [wiki] to mux your ts.

It's not immediately clear (and some would say, not even eventually clear) what tmux does, how it's useful, and what the terms regularly tossed around in its daily use (client, pane, window, server, etc.) mean.

It's a terminal multiplexer. From the website,

What is a terminal multiplexer? It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal. And do a lot more.

As motivation for your using it, let me explain how I generally use it. I'll incorporate the terminology as I go. When I start a new project, I create a session with the name of my project. For example, tmux new-session -s cryptopals (most tmux commands can be shortened, but I'll use the unaliased names to make things clearer; in this case, I could've typed tmux new -s cryptopals). tmux will then launch a client, and I'll be presented with a new shell inside of a window.

Then I'll usually open a few more windows with ctrl-b : new-window <enter> (try it; it'll make sense; you're basically opening a tmux command line with ctrl-b :, and then typing in whatever command you'd like to execute, with whichever options you'd like to specify). I'll also split some windows into multiple panes with ctrl-b % or ctrl-b : split-window -h (which splits the window into side-by-side panes; -v splits it top to bottom). You can even adjust the size of panes; cf. this series of posts for more info on tmux. At this point, you may be wondering if you can bind your own keys to tmux actions: of course you can. For instance, my tmux escape sequence is actually ctrl-z, and I split windows vertically with ctrl-z - and horizontally with ctrl-z \. Much nicer, and more cooperative with my emacs keybindings. You can find my old tmux config here.
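For reference, that sort of remapping takes only a few lines in ~/.tmux.conf; this is a minimal sketch, not my actual config:

    # ~/.tmux.conf (minimal sketch)
    unbind-key C-b            # drop the default ctrl-b prefix
    set-option -g prefix C-z  # use ctrl-z as the prefix instead
    bind-key C-z send-prefix  # hit ctrl-z twice to send a literal ctrl-z through

    bind-key - split-window -v    # prefix -  : split top/bottom
    bind-key '\' split-window -h  # prefix \  : split side by side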

After making windows, splitting them into panes, etc., I might get some work done on the project. When I'm done for the day, I'll detach the client from the session by hitting ctrl-z : detach-client (ctrl-z d). All the programs running in the session continue to run in the background. This can be extremely useful. Note that if you kill the tmux server or your computer loses power, you will lose your sessions and all unsaved work. You can also use teamocil to automate setting up sessions if you get tired of configuring them after killing them.

When I'm ready to get back to work, I attach to my session by firing up bash and typing tmux attach -t cryptopals (-t targets the session you wish to attach to; you can have multiple sessions running on the tmux server [think of it that way, anyway], each with independent windows and panes and programs). You can list all running sessions by typing tmux list-sessions (or tmux ls).

That's my typical usage of tmux, and the basic understanding you need to have of it to use it gainfully. From here, you can probably start configuring it to your liking and reading more on it on your own. The man page is rather useful, and there is an abundance of literature on the web on this subject as well.

It's also worth noting that multiple clients can connect to the same session, either on the same machine (not so useful) or over your network (very useful; you can pair-program, or whatever you'd like).

If you liked this article, or have any questions, ping me on Twitter @ihodes.


User-Centric Design for the Thinking Developer: Intent

You've encountered software that seemed designed to raise your blood pressure and cause hair loss, feelings of loss and loss of temper. In fact, if you're a developer, it's likely you've created some of that software. I know I have.

However, there's a simple way to get at least halfway toward usable software. Write a story. And then, another few.

Name your user. Let's call him Jason. Now write out what Jason wants to do, and how he might try to do it. Don't just think about it; it's key that the story is written or typed out. Pretend he's using your application and accomplishing his goal in a pain-free, efficient, and fun way.

Now make that application.


Too often the purpose of the application comes first, then the implementation, then the user interface is bolted on with little attention paid to the user experience. But the user and her purpose are the software's raison d'être.

An additional benefit to this approach is that it often simplifies the actual implementation of the product. With a clear objective and possible points of confusion outlined early on, a first version of your application can be built and expected to work reasonably well.


Your software is only a vehicle for your users' intent. Figure out the intent; the code will follow.


Fintech Hackathon and Global FX

I was part of a fun little excursion into the world of fintech (née “financial technology”) this weekend. At the FinTech hackathon in New York, Ben Gundersen, an extremely talented guy and a wizard with JS, and I wrote a simple visualization tool we call “GlobalFX”. The idea was to create an interesting and somewhat useful way to view foreign exchange rates over time.

First, a quick shout out to some awesome companies who helped us out. Some wonderful people (thanks, Francesca and Matt!) at 10Gen got us in touch with the MongoHQ folks who, at 11 PM EST on a Saturday, spun up a 20GB MongoDB instance for us. Right before that, Tammer at Quandl basically gave us unlimited API calls. We also used Oanda (who have incredibly fast data serving ability), and intend to integrate them even more fully with the application in the coming days.

The Problem & Solution

We came into the event knowing that we wanted to make a novel, useful visualization. In retrospect, that’s a rather tall order. I think we ended up doing that, but I’ll leave that judgement to you, gentle reader.

The problem? Visualizing the value of currencies relative to each other, over time. ForEx traders have very few comparative visual analysis tools, making exploratory data analysis (EDA) rather difficult. As someone who is interested in data as a medium for driving discovery, EDA is very important to me.

We chose to make a dynamic global choropleth map of the value of currencies relative to a selected country, over time. That’s quite a mouthful, so here’s a screenshot to show you what that might look like.

[Screenshot: choropleth map with China selected]

Here, China is selected. To the north, where it is a very vibrant red, you can see that Russia is doing particularly poorly. This indicates that Russia’s ruble has declined in value relative to the Chinese renminbi from the start date of the data (here set to 2012-01-01) to the current date of the animation (2012-03-30). Conversely, Iran’s currency (bright green) has done quite well.

Oh yeah. That’s right: it’s an animation. The values change before your eyes, as the globe rotates slowly (or you drag it, zoom in on it, or keep it in place by hovering over it). You can check it out here, though I don’t know how much longer our Mongo instances will be running (and thus, how much longer this will work).

The Technology

Our backend is an extraordinarily simple Python application using Flask to serve our content and run an API for returning data from our MongoHQ DB (their RESTful interface wasn’t working for us, but that was probably a result of our brains failing/it being 6 AM when we tried to use it). The rest of our data came directly from Oanda, and we just pulled that in with AJAX, as it was in the right format for us right away. The much more interesting frontend is a JavaScript application using the usual suspects (jQuery, Underscore, and Bootstrap), and the magical, wonderful d3.js to map our GeoJSON and make visualizing the data the way we do possible in fewer than 24 hours.
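Roughly, the data-serving side amounted to something like the sketch below; the route, database, and collection names are made up for illustration, not the code we actually shipped.

    # Minimal sketch of a Flask + MongoDB rate-serving endpoint (names are illustrative).
    from flask import Flask, jsonify, request
    from pymongo import MongoClient

    app = Flask(__name__)
    db = MongoClient("mongodb://localhost:27017")["globalfx"]  # was MongoHQ at the hackathon

    @app.route("/api/rates")
    def rates():
        base = request.args.get("base", "USD")
        date = request.args.get("date")
        # One document per (base currency, date) pair, holding rates for every country.
        doc = db.rates.find_one({"base": base, "date": date}, {"_id": 0})
        return jsonify(doc or {})

    if __name__ == "__main__":
        app.run()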

The data processing came in the form of transforming a ton of CSVs from Quandl into a few GB of documents we could quickly serve from Mongo. We used pandas to make a few throwaway scripts handling the ETL of our data. It wasn’t the most interesting thing, but it made our app usable and fast. I was a bit jealous of the magic happening on Ben’s monitor, though…
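The shape of those scripts was roughly this; the file layout and column names are hypothetical stand-ins.

    # Sketch of the CSV-to-Mongo ETL (directory, column, and collection names are hypothetical).
    import glob
    import pandas as pd
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["globalfx"]

    for path in glob.glob("quandl_csvs/*.csv"):
        df = pd.read_csv(path, parse_dates=["Date"])
        df["date"] = df["Date"].dt.strftime("%Y-%m-%d")
        records = df[["date", "currency", "rate"]].to_dict("records")
        if records:
            db.rates.insert_many(records)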


Overall, we had a lot of fun. We didn’t win the whole thing, but we got a nice prize from Oanda, and met a lot of great people both working on projects and from the sponsor companies. Thanks to all the sponsors and organizers who put on this great event, especially Nick for making it happen, and Novus for sponsoring a lot of it and being such great people in general. We all had a great time: I know I’ll be there next year.