
5 mundane monad examples in everyday use


1 - Java exceptions

In the old days of C, programmers had to check for errors after every line of code that could fail. Imagine a program that reads 4 characters from a text file and treats them as a number representing a year:

FILE* f = fopen("year.txt", "r");
if (f == NULL) {
    fprintf(stderr, "fopen has failed\n");
    return -1;
}

char data[5];
size_t fread_result = fread(data, 1, 4, f);
if (fread_result < 4) {
    fprintf(stderr, "fread has failed\n");
    fclose(f);
    return -2;
}

data[4] = '\0';
long year = strtol(data, NULL, 10);
if (year == 0) {
    fprintf(stderr, "strtol has failed\n");
    fclose(f);
    return -3;
}

fclose(f);
return year;

Can you spot the boilerplate pattern? Each step of the algorithm can fail: file opening (fopen) can fail, file reading (fread) can fail, and year string parsing (strtol) can fail. Each function call requires a check on the return value, a log message, and an error code to return. Additionally, cleanup code (fclose) is needed in many cases.

Here’s roughly the same program using the exception handling syntax in Java:

DataInputStream f = null;
try {
    f = new DataInputStream(new FileInputStream("year.txt"));
    byte[] data = new byte[4];
    f.readFully(data);
    int year = Integer.parseInt(new String(data));
    return year;
} catch (Exception e) {
    System.err.println(e.getMessage());
    throw new Exception("The year reading has failed.", e);
} finally {
    if (f != null) f.close();
}

This code is logically similar, but the happy path is now linear and concise. Each step can still fail: file opening (new FileInputStream) can fail, file reading (readFully) can fail, and year string parsing (Integer.parseInt) can fail. The failures are “magically” transferred to the catch block to be handled in one place. In addition, the cleanup code is gathered into a single finally block.

This is done by the try/catch “monad”, which is available in most popular languages. It is in heavy day-to-day use in languages like Java, Python, PHP, and C#.

Let’s refactor the example steps to see the monadic properties:

DataInputStream open(String name) throws Exception {
    return new DataInputStream(new FileInputStream(name));
}

byte[] read(DataInputStream f) throws Exception {
    byte[] data = new byte[4];
    f.readFully(data);
    return data;
}

int parse(byte[] data) throws Exception {
    return Integer.parseInt(new String(data));
}

The composed program try block would look like this:

DataInputStream f = open("year.txt");
byte[] data = read(f);
int year = parse(data);
return year;

or simply:

return parse(read(open("year.txt")));

We have a way to compose functions normally, as we are used to, but the control flow is now complicated by exception-handling rules: every time a method throws an exception, execution jumps into the corresponding catch block.

Java demonstrates another property of type-safe monads: once we enter the exception monad, we cannot leave it by accident. If a function declares throws Exception, the composing code must either catch the exception or re-declare throws Exception. In other words, the compiler checks that all errors are handled.
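The same composition pattern reads almost identically in Python's try/except. Here is a minimal runnable sketch; the helper names and the on-the-spot sample file are invented for illustration, not taken from the code above:

```python
# Hypothetical Python rendition of the same three fallible steps.
def read4(f):
    data = f.read(4)
    if len(data) < 4:                  # fread-style short-read check
        raise ValueError("short read")
    return data

def parse_year(data):
    return int(data)                   # raises ValueError on bad input

# Create a sample input file so that the sketch is runnable.
with open("year.txt", "w") as out:
    out.write("2024")

try:
    with open("year.txt") as f:        # open() can raise OSError
        year = parse_year(read4(f))
    print(year)                        # -> 2024
except (OSError, ValueError) as e:
    print("The year reading has failed:", e)
```

As in the Java version, the happy path stays linear, and any failure jumps straight to the except clause.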

2 - Python list comprehensions

Let’s imagine we are implementing Twitter. Given some posted messages we would like to find the hash tags in them, and then find users who have subscribed to the hash tags. Here are the primitives:

def messages(time):
    return [
        'enjoy my #apple #pie',
        '#vanilla #linux is awesome'
    ]

def hash_tags(message):
    return filter(lambda w: w.startswith('#'), message.split())

def subscribers(tag):
    name = tag.replace('#', '@')
    return [name + 'lover', name + 'hater']

We can compose these primitives using list comprehensions like so:

ss = [s for m in messages(0)
      for t in hash_tags(m)
      for s in subscribers(t)]
print(ss)

The same behaviour can be refactored as a series of “flat map” calls:

def flat_map1(f, xs):
    return (y for x in xs for y in f(x))

ms1 = flat_map1(messages, [0])
ts1 = flat_map1(hash_tags, ms1)
ss1 = flat_map1(subscribers, ts1)
print(list(ss1))

In this case flat_map1 returns a generator expression, so each step of the algorithm becomes “lazy” and returns a generator (iterable) until it gets materialized with the help of the list function.

To understand “flat map” a bit more we could reimplement it using itertools:

def flat_map2(f, xs):
    from itertools import chain
    return chain.from_iterable(map(f, xs))

ss2 = flat_map2(subscribers,
      flat_map2(hash_tags,
      flat_map2(messages, [0])))
print(list(ss2))

Here each object of the sequence is mapped with f and the results are flattened with chain.from_iterable.

This simple but powerful generator-expression syntax is the composition mechanism here: it takes one generator and wraps it into another composite generator. The real control flow is convoluted and differs from the nice visible one, because the iterables are not actually iterated until some later point in the code (such as the list calls above). This makes it possible to pass those “loops” around, evaluate a few iterations here and there, and even work with potentially infinite iterators (“streams”) without blocking forever.
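To see why the laziness matters in practice, here is a sketch in Python 3 where a flat map consumes an infinite stream; islice pulls only the first few results, so nothing blocks (the names and sample data are made up for illustration):

```python
from itertools import chain, count, islice

def flat_map(f, xs):
    # Lazily map f over xs and flatten the results one level.
    return chain.from_iterable(map(f, xs))

# count() is an infinite "stream"; the pipeline stays lazy,
# so we can still take just the first five tags.
tags = flat_map(lambda n: ['#tag%da' % n, '#tag%db' % n], count())
print(list(islice(tags, 5)))  # -> ['#tag0a', '#tag0b', '#tag1a', '#tag1b', '#tag2a']
```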

3 - JavaScript Promise

Let’s say we have a web site that works in several geographical regions, and we want to load a list of regions from a server. This list is changed very rarely, so we would like to cache it locally in order to save network trips to the server later. We could describe the logic of these operations as functions returning promises: loadFromCache, saveToCache, download.

function loadFromCache() {
	return new Promise((resolve, reject) =>
		setTimeout(() => {
			let regionsJSON = localStorage.getItem("regions");
			if (regionsJSON) resolve(JSON.parse(regionsJSON));
			else reject("not found in the cache");
		}, 100)
	);
}
function saveToCache(regions) {
	return new Promise((resolve, reject) =>
		setTimeout(() => {
			localStorage.setItem("regions", JSON.stringify(regions));
			resolve(true);
		}, 500)
	);
}
function download() {
	return new Promise((resolve, reject) =>
		// pretend that we have fetched this data from a server
		setTimeout(() => {
			resolve(["us", "in", "jp", "uk", "de"]);
		}, 1000)
	);
}

Using setTimeout we simulate that each operation is asynchronous and takes some time to complete.

Now we can compose these operations like so:

loadFromCache()
	.catch((reason) => 
		download()
			.then(saveToCache)
			.then(loadFromCache)
	)
	.then(regions => {
		console.log(regions);
	});

At first the code tries to load from the cache. If it succeeds, the regions list is returned immediately. Otherwise we download the list from the server, save it to the cache, and then try to load from the cache again.

The syntax irregularity could be improved a bit by adding a dummy promise variable:

let begin = Promise.resolve(true);

begin
	.then(loadFromCache)
	.catch((reason) => 
		begin
			.then(download)
			.then(saveToCache)
			.then(loadFromCache)
	)

Unfortunately each new promise chain indents the control flow one level to the right, but the code looks a bit clearer than the classic “callback hell” solution, because you don’t have to name and “pass through” the inputs and outputs of each operation.

The operations chain now looks linear, and that is exactly what this “monad” is trying to achieve. The actual control flow is different: the visible code only constructs the chain and defers the execution to later stages. The operations are composed using the a.then(b) syntax and can be wrapped further and built upon.

4 - C# await

Let’s build up on the previous monad example using the C# async/await feature. This time it is an example that downloads and caches a text file. First we rewrite the building blocks: LoadFromCache, SaveToCache and Download. These operations now return Task objects (analogous to promises):

Task<string> LoadFromCache() {
    return Task.Delay(100)
        .ContinueWith(task => AppSettings.Default.Text);
}
Task SaveToCache(string text) {
    return Task.Delay(500)
        .ContinueWith(task => {
            AppSettings.Default.Text = text;
            AppSettings.Default.Save();
        });
}
HttpClient _client = new HttpClient();
Task<string> Download() {
    return _client.GetStringAsync("http://www.wtfpl.net/txt/copying/");
}

Task.Delay(ms) is used again to emphasize that operations are not immediate, and will take some time to complete.

Here’s the assembled code:

async Task<string> LoadAsync() {
    string text = await LoadFromCache();
    if (String.IsNullOrEmpty(text)) {
        text = await Download();
        await SaveToCache(text);
    }
    return text;
}

But wait, this looks like plain old synchronous code. What’s going on? The compiler does the magic of rewriting the code so that it does what we intended. await does not block the thread; it is not the same as the Task.Wait() method. await actually instructs the compiler to wrap the subsequent lines of code as a subtask that depends on the result of the awaited task, and to link the two tasks with ContinueWith.

As a result we get linearization of the visual program control flow. The operations can be chained almost seamlessly with addition of await. But the real control flow is complicated and depends on hidden things such as a thread dispatcher strategy.

5 - Swift Optional chaining

Remember the hell of null pointer exceptions? How many of those have been fixed by adding a simple null-check? Probably thousands.

In many popular languages we would have to write this kind of code:

Order order = findOrder(orderID);
if (order != null) {
    if (order.billingAddress != null) {
        print(order.billingAddress.postalCode);
    }
}

This can be translated to the following Swift code:

let orderOptional = findOrder(orderID)
if let order = orderOptional {
    if let billingAddress = order.billingAddress {
        print(billingAddress.postalCode)
    }
}

And Optional chaining makes it possible to simplify the code to just:

let order = findOrder(orderID)
print(order?.billingAddress?.postalCode ?? "")

The ?. operator safely accesses properties of potentially nil values. Visually it is very concise and pleasant. The code gets expanded into a chain of if statements. This syntax is even more powerful, because we can call methods along the way and interleave optional and non-optional values in the chain.

The simple “syntax sugar” of this monad example can be implemented with standard facilities in Swift while keeping it type-safe at compile time and generic, although probably less performant. The code above could then look like so:

let order = findOrder(orderID)
order.safeAccess({ $0.billingAddress })
    .safeAccessFinal({ $0.postalCode }, "") {
        print($0)
    }

Here the safeAccess method replaces the ?. links that return intermediate Optional<T> values, and the safeAccessFinal method replaces the last access, which does some processing on the result value (with a default value passed for the nil case).
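For comparison, the same nil-safe bind can be sketched in Python over plain dicts (the bind helper and the sample data are invented for illustration):

```python
# A hypothetical bind helper mimicking Swift's ?. chaining.
def bind(value, f):
    return None if value is None else f(value)

order = {"billingAddress": {"postalCode": "90210"}}

code = bind(bind(order, lambda o: o.get("billingAddress")),
            lambda a: a.get("postalCode"))
print(code or "")   # -> 90210

# A missing link anywhere in the chain yields None instead of an error.
empty = bind(bind({}, lambda o: o.get("billingAddress")),
             lambda a: a.get("postalCode"))
print(empty or "")  # -> prints an empty string
```

Each bind short-circuits on None, exactly the way each ?. short-circuits on nil.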

Conclusion

Some conservative languages (like C and Go) prefer not to introduce any non-trivial control-flow manipulations, which for a long time were a perk of esoteric and functional programming languages. In recent years, however, several popular modern languages have embraced some of these monad applications in their syntax.

As with many theoretical mathematical concepts, it is not always clear whether the theory makes any practical sense. Although the theory of monads and their mathematical form is not explained here, these examples should give an impression that the concept has well-recognized practical applications.
