Mesolithic Architecture - Is Hivekit a monolith?

Photo of Old Man of Storr, Skye by Luke Ellis-Craven

My co-founder at Hivekit, Wolfram, wrote a piece last week arguing that having an on-premise solution is well worth the extra work it brings. It picked up some interesting discussion on the orange site, where a lot of people made excellent points.

I particularly liked a comment from physicsguy:

Building an on-premise deployment requires a maturity of company that is quite rare for a startup to have:

It’s much easier to do on-prem if you do a monolith, or at least a small number of services….

Usually when you hear a system described as a monolith, it’s considered to be a negative, and is contrasted with a microservice architecture. In a monolith, all the components of a system are composed in a single piece, and are generally assumed to be tightly coupled.

Interestingly though, physicsguy isn’t using ‘monolith’ in a negative sense here - there’s been more of an understanding recently that a full-blown microservices architecture isn’t necessarily the right choice in all situations. And one of the potential downsides is complexity of deployment.


A few years ago I attended a software conference where the organizers had the interesting idea of getting an actual, real-life architect of buildings (Maurice Mitchell) to address a room full of software architects. The talk was fascinating and his description of how he sees his discipline really stuck with me:

Architecture is about thresholds. Barriers, walls, shores, transitions between types of space.

Software architecture too is all about seams. Sometimes it’s about trying to match the seams in our code with the seams in the real world business domain. Sometimes we introduce new seams to protect our code, so that when some part of it inevitably changes, those changes wash up against our strict, clean API seawalls and don’t cause cascading changes throughout the whole codebase.


Programs at any scale are made up of many different parts working together. A microservice architecture takes relatively small (‘micro’) pieces of your system and inserts some of the strongest kinds of barriers we know how to make in between them. There are a lot of reasons why this can be a good idea but one is how the added inconvenience makes you behave. Because there’s quite a lot of ceremony involved in talking to other parts of the system (compared to just calling a function), you start to be more careful about when and how you do it, and that care causes your code to naturally become more cohesive.

Discipline is a set of rules to create controlled behaviour, especially with the aim of producing good character. In this case, the use of microservices forces a set of rules on you for how you communicate between components and what data you have access to, and that controls your behaviour as you code.

The discipline that microservices enforce can be useful, but there are downsides too. One is a more complex deployment scenario, which makes on-premise deployment a lot harder, and on-premise support was something we definitely wanted to offer.

Taking Responsibility

Hivekit is a developer platform that provides APIs and SDKs to track people and vehicles, stream updates and execute logic based on spatial events.

For Hivekit, we don’t use a microservice architecture. In fact, we really liked Go’s ‘single binary, no dependencies’ deployment philosophy, which fits perfectly with the requirement to allow on-premise deployments. However, to build a truly scalable system, you still have to distribute your code across multiple nodes, and different parts will need to scale differently.

Our approach to this problem was to split the functionality up into ‘responsibilities’, but to include all the code for all responsibilities in our single binary. Within that binary, the different responsibilities are islands of functionality, each of which is turned on or off based on the node’s configuration.
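A minimal sketch of what this pattern can look like in Go. The responsibility names mirror the ones discussed in this post, but the `Responsibility` interface, the `enabledSet` helper, and the idea of passing a comma-separated list of names are illustrative assumptions, not Hivekit’s actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// Responsibility is the common interface each island of functionality
// implements. Every binary contains every responsibility; configuration
// decides which ones actually run on a given node.
type Responsibility interface {
	Name() string
	Start() error
}

type HandlesClientConnections struct{}

func (HandlesClientConnections) Name() string { return "HandlesClientConnections" }
func (HandlesClientConnections) Start() error {
	fmt.Println("accepting client connections")
	return nil
}

type HandlesRealms struct{}

func (HandlesRealms) Name() string { return "HandlesRealms" }
func (HandlesRealms) Start() error {
	fmt.Println("managing realms")
	return nil
}

// enabledSet parses a comma-separated list of responsibility names
// (as it might appear in a config file or env var) into a lookup set.
func enabledSet(csv string) map[string]bool {
	set := map[string]bool{}
	for _, name := range strings.Split(csv, ",") {
		if n := strings.TrimSpace(name); n != "" {
			set[n] = true
		}
	}
	return set
}

func main() {
	all := []Responsibility{HandlesClientConnections{}, HandlesRealms{}}
	// In development this list would contain every name; in production,
	// only the responsibilities assigned to this node.
	enabled := enabledSet("HandlesClientConnections,HandlesRealms")
	for _, r := range all {
		if enabled[r.Name()] {
			r.Start()
		}
	}
}
```

The same binary ships everywhere; only the configuration string changes per node.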

Photograph of a gabion by Monika P from pixabay.

Our responsibilities draw seams around the things that the full Hivekit system needs to do. For example, we have a responsibility to receive client connections, check that their messages are well-formed, and pass them on to the appropriate handler. We call that responsibility HandlesClientConnections.

Another responsibility is for managing and tracking the objects and areas that belong to a particular tenant in a logical grouping. Since we call that logical grouping a ‘realm’, the responsibility is called HandlesRealms.

Users of Hivekit can create rules that automatically take effect when a defined situation occurs. One rule might be to alert a cyclist moving into a dangerous traffic area, or to send someone to check on a moisture sensor that hasn’t reported in for a while. The code to detect and execute these rules is wrapped up in the HandlesInstructions responsibility.
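One hypothetical shape such a rule could take: a trigger predicate over a position, plus an action to run when it fires. Hivekit’s real rule model will differ; the `Point`, `Rule`, and `insideBox` names below are invented purely to illustrate “detect a situation, execute logic”:

```go
package main

import "fmt"

// Point is a bare lat/lng position report from a tracked object.
type Point struct{ Lat, Lng float64 }

// Rule pairs a trigger condition with the action to execute when it fires.
type Rule struct {
	Name    string
	Trigger func(Point) bool
	Action  func(Point)
}

// insideBox builds a trigger that fires when a point lies within a bounding box.
func insideBox(minLat, minLng, maxLat, maxLng float64) func(Point) bool {
	return func(p Point) bool {
		return p.Lat >= minLat && p.Lat <= maxLat &&
			p.Lng >= minLng && p.Lng <= maxLng
	}
}

// Evaluate runs the rule's action if its trigger matches the position.
func (r Rule) Evaluate(p Point) {
	if r.Trigger(p) {
		r.Action(p)
	}
}

func main() {
	alertCyclist := Rule{
		Name:    "dangerous-traffic-area",
		Trigger: insideBox(52.51, 13.37, 52.53, 13.40),
		Action: func(p Point) {
			fmt.Printf("alert: cyclist entered danger zone at %.2f,%.2f\n", p.Lat, p.Lng)
		},
	}
	alertCyclist.Evaluate(Point{Lat: 52.52, Lng: 13.38}) // fires
	alertCyclist.Evaluate(Point{Lat: 48.85, Lng: 2.35})  // no-op
}
```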

One nice effect of this division is that in development we can run a single process that does all the things, while in production we can scale the stateless parts separately from the stateful parts.

Our realm processors, for example, manage the data changes for a realm in memory. These can be distributed to different nodes, so that heavy traffic in one doesn’t affect others. When we detect that a new realm needs its data managed, we assign it to the least busy node that has realm processing responsibilities.

Our deployments can scale each of the responsibilities separately by running new nodes with the appropriate configuration.


We still need discipline. In fact, without the process boundaries that microservices force on you, you need to pay more attention, not less, to making sure that your components stay cohesive, and that they don’t start poking into data they shouldn’t have access to.

Good design, a distributed test suite and a type system can help you avoid making mistakes like that. You still need to think about where to put the barriers - though ours are weaker than in a microservice design, they still protect you from evolving the system in a direction that doesn’t scale.

For us, the boundaries between components and responsibilities are still there and still important. But whether those boundaries become full process-over-the-network boundaries or simple goroutine boundaries is a choice we can make whenever we create a deployment, rather than when we write the code.