Polylith in a Nutshell
Polylith is a software architecture that simplifies our backend services and tools by enabling us to construct them as “modular monoliths” using composable building blocks.
People often wonder what Polylith brings to the table and if it's worth looking into.
To see if Polylith might be right for you, ask yourself whether these statements are true or not for the systems you work with:
- I can easily split a service in two.
- I can easily share code between services without creating libraries.
- I have no code duplication in my entire system.
- I can easily change, find, refactor, and reason about all my code, even across services.
- My tests run fast, and they only run tests that are affected by the changes.
There aren’t many developers who can do all the above, but if you can, congratulations! We still think that Polylith can help you be happier and more productive, so keep reading!
Let's say we have two nicely structured services, where each box represents some source code that lives in its own namespace with a single responsibility:

If we want to use the code that lives in the red box in both service A and B…

...we have four alternatives:
- Duplicate the code
- Create a library
- Create a service
- Use a monorepo
If we take a closer look at service A, we realise that the red box depends on the green box that depends on the purple box:

Now we not only need to copy the red box but all three of them, and paste them into service B:

The second alternative is to create a library:

Because the three boxes were connected, we have to include all of them in the library, which violates the single-responsibility principle.
Creating a library also harms the development experience, because changes to the code no longer take effect immediately, as we need to build a library every time we make a change.

If we have many services and perhaps many teams, the risk is that we don't get round to updating the version in all the services, which means that we don't use the latest code everywhere.
The longer we wait to upgrade all libraries to the latest version, the harder it becomes and the higher the risk of introducing bugs.
The third alternative is to extract the code we want to share into a new service that lives in its own repository:

Now we add some code to call service C from A and B. This seems to solve our problem, albeit at the price of increased complexity, as we now have one extra service to maintain, and simple function calls have been replaced by network calls.
But let’s see what happens if we add a blue box to service C, which may happen in the future:

Now this piece of code can't easily be shared by the other two services, and we are thus unfortunately back to square one.
The fourth option is to use a monorepo, where the code is shared among several src directories instead of just one. Polylith uses this idea but takes composability one stage further, which we’ll soon cover.
This is what a Polylith system looks like, where all boxes (called bricks) can freely access one another ¹ via their exposed interfaces:

An interface in the Polylith world is just a namespace with the name interface ². Since bricks now all have access to each other, we have the freedom to combine them in any way, and they will automatically "connect" ³:

Libraries and bricks are used in a similar way, so it's sufficient to refer to them by name (here illustrated with dashed lines). But there are also significant differences: libraries are versioned and compressed, while bricks are plain source code that can be easily changed.
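To make the interface idea concrete, here is a minimal sketch in Python (for which Polylith tooling also exists). The component name and functions are invented for illustration, and the two namespaces are collapsed into one file for readability:

```python
# Sketch of a hypothetical "user" component. In a real workspace,
# `interface` and `core` would live in separate namespaces, e.g.
# components/user/src/user/interface.py and components/user/src/user/core.py.

# --- user.core: the implementing namespace, hidden from other bricks ---
_users = {42: "Ada"}

def _lookup(user_id):
    return _users.get(user_id)

# --- user.interface: the only namespace other bricks may import ---
def fetch_name(user_id):
    """The public contract: delegate straight to the implementation."""
    return _lookup(user_id)

print(fetch_name(42))  # Ada
```

Other bricks only ever call `fetch_name` via the interface, so the implementation behind it can be swapped without touching any caller.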
If we want to divide service B into two, we can easily do that ⁴:

Another superpower is the development project from which we can work with all our bricks ⁵ and where code changes are immediately reflected in all services:

We have now reached our goal of being able to work with all the code in an efficient way, while also enabling the code to be easily split up and shared ⁶ between services ⁷.
The next great advantage is the ability to test the code incrementally.
The Polylith architecture can support incremental testing if we build tools ⁸ for it. Let’s take a look at how this works.
If only the red brick is changed…

…we can calculate which services ⁶ and bricks are affected by checking how the bricks depend on each other:

Because the red brick is changed, we know that we need to execute the tests for the red, yellow, and blue bricks, from service A, because they are all directly or indirectly affected by the change. The green and purple bricks are not affected and we can therefore skip testing them.
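The core of this calculation — changed bricks plus all their transitive dependents — can be sketched in a few lines. This is only an illustration of the idea, not the implementation used by any Polylith tool, and the brick names simply mirror the example above:

```python
# Given brick dependencies (brick -> bricks it uses), find everything
# affected by a change: the changed bricks plus all transitive dependents.

def affected(changed, deps):
    # Invert the graph: brick -> bricks that depend on it.
    dependents = {}
    for brick, uses in deps.items():
        for used in uses:
            dependents.setdefault(used, set()).add(brick)
    # Walk outwards from the changed bricks.
    result = set(changed)
    stack = list(changed)
    while stack:
        brick = stack.pop()
        for dep in dependents.get(brick, ()):
            if dep not in result:
                result.add(dep)
                stack.append(dep)
    return result

# The example above: yellow and blue use red; green uses purple.
deps = {
    "yellow": {"red"},
    "blue": {"red"},
    "green": {"purple"},
}
print(sorted(affected({"red"}, deps)))  # ['blue', 'red', 'yellow']
```

Green and purple never show up in the result, which is exactly why their tests can be skipped.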
Incremental test runs are faster because we usually don’t need to run the entire test suite. This applies both when we run tests locally and in the CI build.
The whole idea with Polylith is to make our life easier, both when developing the code and when changing how things are executed in production.
The introduction of new high-level concepts with well-defined responsibilities in combination with a standardized directory structure also contributes to making the code easier to reason about.
All of this aims to make us happier and more efficient as developers, while saving both time and money.
As we go through the basic building blocks of Polylith, we will use a slightly different representation, where the green bricks are called components, and the blue ones are bases:

¹ We are only allowed to access interfaces, not the implementing namespace(s). All bricks can access each other, except the green components that can't access the blue bases. These constraints are guaranteed through the use of a Polylith tool.
² The default name is interface, but it can be changed to any valid namespace name, e.g. ifc. Interface sub-namespaces are also allowed, like interface.mysubns.
³ The included bricks for a project are listed by name in a config file, which may be implemented differently depending on the chosen programming language and tooling. An interesting detail is that a brick defines which libraries it uses, but not which bricks; those are instead defined in the projects, which makes bricks loosely coupled and interchangeable ⁵.
⁴ We also need to add some code to B2 to make a service out of it.
⁵ As long as we have just one component per interface, which is true 95% of the time, then these components can be directly changed from the development environment. In the other 5% of cases, they can be edited in other ways, either by switching them into development by using profiles, or by directly changing the file (but then we don’t get the refactoring support from the IDE).
⁶ The introduction of interfaces allows us to replace one component with another, as long as they satisfy the same contract/interface (often just a set of functions). The use of profiles makes it possible to switch between components that implement the same interface, from the development environment. How a service is exposed/called can easily be changed by replacing the base, e.g. from REST to a lambda function.
⁷ Polylith supports any type of artifact, such as services and tools.
Now let's introduce the basic building blocks of Polylith, like components, bases, projects, and more.
Functions are the smallest building blocks in Polylith from which everything is created. Most communication within a Polylith system is done by using simple function calls as a way to connect the different high-level building blocks that a Polylith system consists of.
The simplicity of functions makes them fantastic building blocks for code:
1. Encapsulation: functions hide their implementation and only expose their signature.
2. Simplicity: functions have a single responsibility and don't mix nouns with verbs, which makes them fundamentally untangled.
3. Statelessness: functions are just code; they don't carry state or belong to an instance.
4. Purity: functions can be pure (i.e. have no side effects), which makes them easy to understand, reuse, test, and parallelise.
These properties make functions (especially pure functions) inherently composable and testable units of code and a perfect foundation for a software architecture like Polylith.
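A tiny illustration of these properties, with invented function names: each function below is pure, so each can be tested in isolation, and composing them is just nesting calls.

```python
# Pure functions: same input always gives the same output, no hidden state.

def net_price(gross, discount):
    # Apply a fractional discount to a gross price.
    return gross * (1 - discount)

def with_vat(price, rate=0.25):
    # Add value-added tax at the given rate.
    return price * (1 + rate)

# Composition is just function application; no wiring or state needed.
total = with_vat(net_price(100.0, 0.10))
print(total)  # 112.5
```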

A library is a piece of code that lives in its own namespace, which allows us to pick the libraries we want to use without getting into name clashes (but sometimes dependency hell!).
We rely on the tooling we already use, which hides the complexity of resolving dependencies on other libraries and caching files on disk.

Components are high-level building blocks (bricks) which remove the need for layers (horizontal, vertical slice, or onion) in our architecture.
A component can represent a part of our domain (e.g. cart, invoice, order, user, etc.), be part of our infrastructure (e.g. authentication, database, log, etc.), or be an integration point to a third-party system (e.g. crm-api, payment-api, sms-api, etc.).

A base is a special type of building block (brick) that only exposes its functionality via a public API, e.g. REST, Lambda, GraphQL, gRPC, command-line, etc.
A base exposes a collection of endpoints via its API, and delegates the implementation of each endpoint to an appropriate component.
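As an illustrative sketch (not a real Polylith API), a base can be as thin as a table mapping endpoints to component interface functions; the route and component names below are made up:

```python
# A base: a thin routing layer that delegates every endpoint to a
# component interface. In a real workspace this function would be
# imported, e.g. `from cart.interface import get_cart`.
def get_cart(cart_id):
    return {"id": cart_id, "items": []}

ROUTES = {
    # endpoint -> component interface function
    "GET /cart": get_cart,
}

def handle(endpoint, *args):
    """The base owns only routing; all logic lives in components."""
    return ROUTES[endpoint](*args)

print(handle("GET /cart", 7))  # {'id': 7, 'items': []}
```

Swapping the base (say, from REST to a command-line tool) changes only this routing layer; the components behind it stay the same.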

Brick is the common name for a component or base, which are our building blocks (together with libraries).

A project specifies which libraries and bricks should be included in an artifact, like a service, lambda function or command line tool. This allows for optimal code reuse of components across multiple projects.
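To sketch the idea (in the Clojure tooling this lives in each project's deps.edn; the shape and brick names below are purely illustrative), two projects can pick from the same pool of bricks:

```python
# Two hypothetical projects assembled from the same pool of bricks.
projects = {
    "rest-api": {"bricks": ["rest-base", "user", "cart", "log"]},
    "cli-tool": {"bricks": ["cli-base", "user", "log"]},
}

# The same components are reused, unchanged, in both artifacts.
shared = set(projects["rest-api"]["bricks"]) & set(projects["cli-tool"]["bricks"])
print(sorted(shared))  # ['log', 'user']
```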

A development project is the place we use to work with all our libraries, components, and bases. It gives us a “monolithic development experience” with full code navigation, debugging, and refactoring across our entire codebase, and the possibility to work with our entire system in a single REPL.

A workspace is the place in a Polylith codebase where we store all our building blocks and configure our projects.
Now, let's dig deeper into the Polylith architecture to better understand how it solves these challenges.