However good your management and delivery practices are, without a solid technical foundation you will quickly run into trouble. We have a strong heritage of robust technical practices covering the entire development cycle: infrastructure, DevOps, CI/CD, release engineering, observability, automation, tooling, and working with legacy systems.
Our consultants and associates pioneered DevOps and Continuous Delivery, Behaviour-Driven Development and Deliberate Discovery, test automation, DORA’s Accelerate metrics, and the methods, tools and techniques necessary for delivery at scale.
Here are some of the methods we introduce into the teams and organizations we work with, which lead to measurable improvements in time to value, quality and stability.
Continuous Delivery
Continuous Delivery (CD) is a collection of patterns, practices and techniques for modern software development, distilled by some of the pioneers of agile methods from the experiences of many teams across hundreds of projects, companies and business verticals.
Members of our team were deeply involved in this movement, building and leading delivery teams and developing several of these techniques.
At its core, Continuous Delivery says to:
- automate the code delivery lifecycle
- adopt TDD, BDD, automated acceptance testing
- automate build, test and deployment through Continuous Integration pipelines
- work on the main code branch, avoid feature branches
- use feature toggles where necessary
- define and implement a release and rollback / fix-forward strategy
- fail fast and fix in lower environments
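To make the pipeline and fail-fast points above concrete, here is a minimal sketch of a fail-fast pipeline runner. The stage names and `make` commands are placeholders we have assumed for illustration; a real pipeline would use your project's own build, test and deployment tooling inside your CI system of choice.

```python
import subprocess
import sys

# Hypothetical pipeline stages, in the order they should run.
# The commands are placeholders, not tied to any particular CI tool.
STAGES = [
    ("build", ["make", "build"]),
    ("unit-tests", ["make", "test"]),
    ("acceptance-tests", ["make", "acceptance"]),
    ("deploy-to-staging", ["make", "deploy-staging"]),
]

def run_pipeline() -> None:
    for name, command in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: stop at the first broken stage so the problem
            # is found and fixed in a lower environment, not in production.
            print(f"Stage '{name}' failed; stopping the pipeline.")
            sys.exit(result.returncode)
    print("All stages passed; ready to release.")

if __name__ == "__main__":
    run_pipeline()
```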
Domain-Driven Design
Domain-Driven Design (DDD) takes the position that modelling a domain is instrumental to solving problems in it. By understanding a business domain—not just the language, but its processes, idioms, structures, and other tacit knowledge—and making this explicit, you create building blocks that make it easy to reason in that domain.
Rather than one Universal Domain Model or Enterprise Data Dictionary, DDD proposes a separate model for each bounded context, the domain in which the model makes sense. For instance, the domain model of a Customer for a fraud team is necessarily different from that of a delivery team. The fraud team needs to know purchase history, credit risk, prior aliases, and other current and historical details. The delivery team just needs a delivery address and whether the customer has a preferred safe place!
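As a rough sketch of that Customer example, each bounded context gets its own model containing only the details it needs. The class and field names here are our own illustrative assumptions, not a prescribed design:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import date

# Fraud context: needs history and risk details about the customer.
@dataclass
class FraudCustomer:
    customer_id: str
    purchase_history: list[tuple[date, float]]
    credit_risk_score: float
    known_aliases: list[str]

# Delivery context: only needs to know where and how to deliver.
@dataclass
class DeliveryCustomer:
    customer_id: str
    delivery_address: str
    safe_place: str | None = None  # e.g. "porch", or None if not set
```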
Domain-Driven Design is based on the following principles:
- Model the business domain directly in the code
- Only model what you need; avoid over-designing and ‘baking ignorance in’
- Identify the domain and context boundary within which each model is valid
- Create explicit mappings for sharing between bounded contexts
- Model business processes using domain events
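To make the last principle concrete, here is a minimal sketch of modelling a business process with domain events. The event names and fields are illustrative assumptions rather than a prescribed schema:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime

# Each domain event records one business fact that has already happened,
# named in the language of the domain.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    customer_id: str
    placed_at: datetime

@dataclass(frozen=True)
class OrderDispatched:
    order_id: str
    dispatched_at: datetime

@dataclass(frozen=True)
class OrderDelivered:
    order_id: str
    delivered_at: datetime

# The order fulfilment process is then the sequence of events that
# have occurred for a given order.
def current_status(events: list[object]) -> str:
    status = "unknown"
    for event in events:
        if isinstance(event, OrderPlaced):
            status = "placed"
        elif isinstance(event, OrderDispatched):
            status = "dispatched"
        elif isinstance(event, OrderDelivered):
            status = "delivered"
    return status
```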
Data Mesh
Data Mesh is the data equivalent of Domain-Driven Design. In DDD, the domain model appears directly in the structure and algorithms in the code, and each bounded context contains its own data model. In many real-world implementations, these bounded context models are tied together by a single huge data store, be it a traditional database, a data warehouse, a data lake, or some combination of these.
The Data Mesh model, proposed by Zhamak Dehghani, says that each subdomain should have its own sovereign data store and be the ‘golden source’ for any operational or analytical business data it generates. Domains share data only through event streams of data ‘facts’ rather than exposing the bare data model through a traditional CRUD API.
This represents a significant shift in thinking, both on a theoretical and practical level, from the way organizations traditionally share or expose data. It has implications for data governance, lineage, provenance and regulatory aspects such as monitoring and observability of data usage.
Zhamak summarizes the Data Mesh approach in terms of four significant shifts:
- Serving over ingesting: making data facts available rather than consuming the raw data
- Discovering and using over extracting and loading: making data sources discoverable through a central, highly available data service locator
- Publishing events as streams over flowing data around via centralized pipelines: applications have bilateral, peer-to-peer relationships rather than bottlenecking on a central ‘enterprise bus’
- Ecosystem of data products over centralized data platform: teams publish their own data-as-a-product, with centralized governance and policy but federated implementation
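Here is a hedged sketch of the ‘serving’ and ‘data products’ shifts: an orders domain exposes an append-only stream of immutable facts for other domains to read, rather than a CRUD API over its internal tables. The names and the in-memory stream are assumptions purely for illustration; in practice this would sit on a streaming platform with governance, discovery and access controls around it.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone

# An immutable data 'fact' generated by the orders domain.
@dataclass(frozen=True)
class OrderFact:
    order_id: str
    event_type: str      # e.g. "placed", "dispatched", "delivered"
    occurred_at: datetime

# The domain's data product: an append-only stream of facts.
# Consumers read from an offset; they never update or delete records.
@dataclass
class OrdersDataProduct:
    _facts: list[OrderFact] = field(default_factory=list)

    def publish(self, fact: OrderFact) -> None:
        self._facts.append(fact)

    def read(self, from_offset: int = 0) -> list[OrderFact]:
        return self._facts[from_offset:]

# A consuming domain (e.g. analytics) reads the facts it needs,
# without reaching into the orders domain's internal data model.
orders = OrdersDataProduct()
orders.publish(OrderFact("o-123", "placed", datetime.now(timezone.utc)))
orders.publish(OrderFact("o-123", "dispatched", datetime.now(timezone.utc)))
print([f.event_type for f in orders.read()])
```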
CUPID
CUPID is an acronym coined by Daniel Terhorst-North as a modern counterpart to the SOLID principles from the Clean Code movement. Daniel believes the SOLID principles are outdated, so he proposed a set of five properties, or characteristics, of code that make it joyful to work with.
CUPID says that code should be:
- Composable: plays well with others
- Unix philosophy: does one thing well
- Predictable: does what you expect
- Idiomatic: feels natural
- Domain-based: the code models the problem domain in language and structure
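As a small, hedged illustration of these properties (the domain and function names are our own assumptions), a function that does one thing, behaves predictably, and speaks the language of the domain also composes naturally with others:

```python
from __future__ import annotations
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class OrderLine:
    description: str
    quantity: int
    unit_price: Decimal

# Does one thing, is named in the language of the domain, and has no
# surprises: the same lines always produce the same total.
def order_total(lines: list[OrderLine]) -> Decimal:
    return sum((line.unit_price * line.quantity for line in lines), Decimal("0"))

# Because it is a plain function over plain values, it composes easily,
# for example with a simple filter over the same domain type.
def total_for_large_quantities(lines: list[OrderLine], minimum: int) -> Decimal:
    return order_total([line for line in lines if line.quantity >= minimum])
```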