Software Design

Content tagged "Software Design".

No, Your Domains and Bounded Contexts Don’t Map 1 on 1

Mathias Verraes:

In DDD, we reason like this: The engineers need to build, maintain, and evolve secure and performant systems that serve the company. To do that, the engineers need an understanding of the domains and of the software systems. To achieve that, we leave the domains as the organisation sees them, and we draw our own Bounded Contexts to serve our need for understanding. The Bounded Contexts exist primarily for the engineers, and for the engineers’ communication with domain experts and other business functions.

Classnames

Paul Robert Lloyd:

This small website provides a list of words that you can refer to when naming something like an HTML class, custom CSS property or JavaScript function. Each word links to a page on Wordnik, an online dictionary that does the hard work of providing multiple definitions and listing related words.

Eventual Business Consistency

Kent Beck:

The fundamental, inescapable problem? What is in the system is a flawed reflection of what is going on in reality. We want what is in the system to be as close as possible to reality, but we also need to acknowledge that consistency between the system & reality will only ever be approached, not achieved. The system will record changes in reality eventually, but by then we may have made decisions that need to be undone.

Scaling the Practice of Architecture, Conversationally

Andrew Harmel-Law:

The moves in software delivery towards ever-increasing team autonomy have, in my mind at least, heightened the need for more architectural thinking combined with alternative approaches to architectural Decision Making.

Ensuring our software teams experience true autonomy raises a key problem: how might a small group of architects feed a significant number of hungry, value-stream-aligned teams? Why? Because in this environment Architects now need to be in many, many more places at once, doing all that traditional “architecture”.

The Rule: anyone can make an architectural decision.

The Qualifier: before making the decision, the decision-taker must consult two groups: The first is everyone who will be meaningfully affected by the decision. The second is people with expertise in the area the decision is being taken.

The Strong and Weak Forces of Architecture

Good technical design decisions are very dependent on context. Teams that regularly work together on common goals are able to communicate regularly and negotiate changes quickly. These teams exhibit a strong force of alignment, and can make technology and design decisions that harness that strong force. As we zoom out in a larger organisation an increasingly weak force exists between teams and divisions that work independently and have less frequent collaboration. Recognising the differences in these strong and weak forces allows us to make better decisions and give better guidance for each level, allowing for more empowered teams that can move faster.

Islands Architecture

The general idea of an “Islands” architecture is deceptively simple: render HTML pages on the server, and inject placeholders or slots around highly dynamic regions. These placeholders/slots contain the server-rendered HTML output from their corresponding widget. They denote regions that can then be “hydrated” on the client into small self-contained widgets, reusing their server-rendered initial HTML.

Designing Data-Intensive Applications 📚

Designing Data-Intensive Applications by Martin Kleppmann

This book surveys data storage and distributed systems and is a fantastic primer for all software developers.

It starts with naive approaches to storing data, quickly builds up to how transactions work, and works up to the complexities of building distributed systems.

I particularly enjoyed the chapter on stream processing and event sourcing. It contrasts stream processing to batch processing and highlights many of the challenges of these approaches and explores options for addressing them.

Forgetting Data in Event Sourced Systems

GDPR’s right to be forgotten means we have to be able to erase a person’s data from our systems. Event sourced systems work from an immutable log of events, which makes erasure difficult. You probably want to think hard before storing data you may need to delete in an immutable event log, but sometimes that choice has already been made and you need to make it work, so let’s dig in.

Erasing user data from current state projections

This is relatively straightforward. A RightToBeForgottenInvoked event is added to the event store for the person. All projectors that depend on personal data listen for this event and prune or scrub the appropriate data for the person from their projections.
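As a sketch of what that might look like (the projection, event shapes, and names here are illustrative, not taken from any particular framework), a projector can treat the erasure event like any other:

```ruby
# Illustrative projector that maintains a current-state projection of
# email addresses and scrubs a person's data when they invoke their
# right to be forgotten. Event shapes here are hypothetical.
class EmailProjector
  def initialize
    @emails = {}
  end

  def handle(event)
    case event[:type]
    when "EmailAddressProvided"
      # Normal projection work: record the person's email address.
      @emails[event[:person_id]] = event[:email]
    when "RightToBeForgottenInvoked"
      # Prune this person's data from the projection.
      @emails.delete(event[:person_id])
    end
  end

  def email_for(person_id)
    @emails[person_id]
  end
end
```

Any projection holding personal data needs an equivalent handler; projections that hold no personal data can ignore the event entirely.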

Erasing data from the event stream itself

This case is trickier. We need to rewrite history in a way that doesn’t break things. Let’s look at an option for erasing data without rebuilding the event stream. This approach is also applicable to projections that are themselves immutable change logs.

We can store personal data outside of the events themselves, in a separate storage layer. Each event instead stores a key for retrieving the data from this layer, and event consumers request the data when they need it. Given this data is personal, the storage layer should probably encrypt it at rest.
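A sketch of what an event body might then carry (the field names here are hypothetical):

```ruby
# Instead of embedding the email address itself, the event body carries
# a data_id pointing into the secure storage layer. Field names are
# illustrative.
event = {
  type: "EmailAddressProvided",
  person_id: "person-123",
  email_data_id: "data-456" # consumers pass this to SecureStorage#get
}
```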

Once a RightToBeForgottenInvoked event is added to the event store, all data for that person can be erased from the storage layer. All subsequent requests to the secure storage layer for that person’s data will return null objects rather than the actual data. This should make life easier for consumers and save you from null checking yourself to death all over the place.
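As a sketch of what “returning null objects” could look like (the EmailAddress type here is hypothetical), the storage layer hands back an object with the same interface as the real value, so consumers never branch on nil:

```ruby
# A real value and its null-object counterpart share an interface, so
# code downstream of the storage layer treats them identically. Names
# here are illustrative.
class EmailAddress
  def initialize(value)
    @value = value
  end

  def to_s
    @value
  end

  def erased?
    false
  end
end

class NullEmailAddress
  def to_s
    "[erased]"
  end

  def erased?
    true
  end
end
```

A consumer rendering `email.to_s` works unchanged whether the person has been erased or not; only code that genuinely cares can check `erased?`.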

Let’s see what this secure storage layer might look like.

Sketch of a secure storage layer

Our secure storage layer stores data that is scoped to a person and has a type (so we can return null objects). The store allows all data for a specific person to be erased.

Let’s start with two main models: a Person1 and a Data model.

      Data                 Person
  ┌──────────┐        ┌───────────────┐
  │    id    │   ┌───>│      id       │
  ├──────────┤   │    ├───────────────┤
  │person_id │───┘    │encryption_key │
  ├──────────┤        ├───────────────┤
  │   type   │        │   is_erased   │
  ├──────────┤        └───────────────┘
  │ciphertext│
  └──────────┘

The interface to the secure storage layer is outlined below.

class SecureStorage
  def add(person_id, data_id, type, data)
    # Find the Person model for person_id (lazily create one if needed).
    #
    # Encrypt the data using the person's encryption_key and store the
    # ciphertext in the data table using the client-supplied data_id and type.
    #
    # Clients will store this data_id in an event body and use it to retrieve
    # the data later.
  end

  def erase_data_for_person(person_id)
    # Mark the corresponding record in the person table as erased
    # and delete the encryption key.
  end

  def get(data_id)
    # Look up the row from the data table first so we know which person
    # the data belongs to.
    data = Data.find(data_id)
    person = Person.find_non_erased(data.person_id)
    if person
      # Decrypt the ciphertext using the key on the person model and
      # return the data.
    else
      # Return a null object for the data's type.
    end
  end
end

Where does that leave us?

After a person has invoked their right to be forgotten, all current state projections will be updated to erase that person’s data. The secure storage layer will return null objects for any data belonging to the person, which means event processors won’t see that data as they build their projections. The event store will still contain the RightToBeForgottenInvoked event for the person, so consumers can handle it explicitly if required.


  1. This could be expanded to be more general but we’ll stick with person for the purpose of this post. ↩︎

Monads Are a Solution to a Problem

Max Kreminski:

Monads are a solution to a specific problem: the problem of repetitive code. If you write enough code in a functional programming language, you start to notice that you’re writing a lot of suspiciously similar code to solve a bunch of superficially different problems. Wouldn’t it be nice if you could just write this code once and then reuse it, instead of rewriting it slightly differently every time? I’m omitting a lot of detail here, but this is effectively what monads allow you to do.
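A sketch of that idea in Ruby (the Maybe class below is illustrative, not from the linked post): the “if nil, short-circuit” logic that would otherwise be repeated at every step is written exactly once.

```ruby
# A tiny Maybe: the repetitive nil-handling lives in and_then, once,
# instead of being rewritten slightly differently at every call site.
class Maybe
  def initialize(value)
    @value = value
  end

  # If the value is absent, skip the block; otherwise apply it and
  # re-wrap the result.
  def and_then
    @value.nil? ? self : Maybe.new(yield(@value))
  end

  def value_or(default)
    @value.nil? ? default : @value
  end
end

# Each step can now assume its input is present.
result = Maybe.new("  42  ")
  .and_then { |s| s.strip }
  .and_then { |s| s.to_i }
  .value_or(0)
# result == 42; starting from Maybe.new(nil) would yield the default
```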

Stacking Theory for Systems Design

Jesper L. Andersen:

In recent years, I have adopted a method for system design, which I think yields good results. For lack of a better word, I overloaded “stack” yet again, and use it as a metaphor for this design.

The baseline is level 0 in the stack, and now we try to move the system upwards in the stack by adding another level. It is important to stress that transitioning is a best effort method. We make an attempt at increasing the operational level of the system, but if we can’t we stay put at the current level.

DDD is Not for Perfectionists

Jan Stenberg quoting Eric Evans:

Evans points out that a microservice, which should be autonomous, can be a good bounded context, but emphasizes that this does not mean that a service always is a bounded context, something developers sometimes assume.

Some interesting points in here about using multiple bounded contexts within a system to make it easier to work on.

Sometimes a model is a little incomplete, lacking the ability to handle all the cases it’s meant to handle. Instead of creating a model that is able to handle more cases but feels awkward, an alternative may be to create a function that deals with cases not handled by the model. Such a function works with lots of if-then-else statements, staying away from any high-level concepts to avoid creating yet another model. An abstraction that is leaking or confusing should not be used at all. Evans notes that it’s better to use if-then-else instead of creating an illusion of an elegant model. Such a model may even prevent finding a working model. He believes that this is a great example of a trade-off, aiming for a good but not perfect design.

I think this is a pragmatic approach. Finding an all-encompassing model can be paralysing.

Eventsourcing: Why Are People Into That?

Robert Reppel:

… smaller subsystems communicate via agreed-upon contracts (commands and events). Each has their own data store. Eventsourcing and the CQRS pattern are used to reduce coupling (and therefore the potential for side effects) to a minimum.

What makes the combination of event sourcing, Domain Driven Design and CQRS so attractive is that it can greatly simplify building software which keeps subsystems cleanly separated and independently maintainable as more features are added over time, akin to what we have learned to do for cars, spacecraft and toasters.

Event-based Architecture at Airtime

Nathan Peck:

Airtime uses AWS Simple Notification Service (SNS) and AWS Simple Queue Service (SQS) to power the pub/sub event backbone that connects producer services to consumer services.

A loosely-coupled, event-based architecture based on AWS services.

Choosing Boring Technology

Dan McKinley:

If you think about innovation as a scarce resource, it starts to make less sense to be on the front lines of innovating on databases. Or on programming paradigms.

The point isn’t really that these things can’t work. Of course they can work. But exciting new technology takes a great deal more attention to work than boring, proven technology does.

Some fantastic points about what your job is when building software and the trade-offs that are constantly in front of you.

Designing Data-Driven Interfaces

Truth Labs:

“Dashboard”, “Big Data”, “Data visualization”, “Analytics” — there’s been an explosion of people and companies looking to do interesting things with their data. I’ve been lucky to work on dozens of data-heavy interfaces throughout my career and I wanted to share some thoughts on how to arrive at a distinct and meaningful product.

I’ve been doing a lot of data-driven interface work over the last few years and the advice in this article is spot on.

Moving Past the Scaling Myth

Michael Feathers:

It’s funny. I can’t count the number of times I’ve seen organizations with large monolithic software applications move toward SOA. It’s easy to think that we’ve learned a lesson in the industry and that services are just the new “best practice.” Surely we would’ve started with them if we were just starting development now. Actually, I think the truth is a bit more jarring. Different architectures work at different scales. Maybe we need to recognize that and understand when to leap from one to another.

The process space has similar issues. If we cargo-cult anything from the small team space into the large organization space, we’re falling prey to a blind spot. We’re missing an opportunity to re-think things and figure out what works best for small teams, interacting teams, and large-scale development. They may be quite different things.

I agree that the rules are different for different organisation sizes. I like Michael’s comparison of moving across organisation sizes to state transitions in physics.

Fighting spam with Haskell

Simon Marlow:

Haskell isn’t a common choice for large production systems like Sigma, and in this post, we’ll explain some of the thinking that led to that decision. We also wanted to share the experiences and lessons we learned along the way. We made several improvements to GHC (the Haskell compiler) and fed them back upstream, and we were able to achieve better performance from Haskell compared with the previous implementation.

Anatomy of a Rails Service Object

Dave Copeland:

We’ve given up on “fat models, skinny controllers” as a design style for our Rails apps—in fact we abandoned it before we started. Instead, we factor our code into special-purpose classes, commonly called service objects. We’ve thrashed on exactly how these classes should be written, so this post is going to outline what I think is the most successful way to create a service object.

Some good advice for building service objects in Rails.
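A minimal sketch of the general shape such a class tends to take (the names and result type here are illustrative, not taken from Dave’s post): a single public entry point, injected collaborators, and a rich result object rather than a bare boolean.

```ruby
# Illustrative service object. One public method does the work; the
# mailer collaborator is injected so the class is easy to test.
class RegisterUser
  Result = Struct.new(:user, :error) do
    def success?
      error.nil?
    end
  end

  def initialize(mailer:)
    @mailer = mailer
  end

  # The one public method; everything else is an implementation detail.
  def call(email:)
    return Result.new(nil, "email is required") if email.to_s.empty?

    user = { email: email } # stands in for persistence
    @mailer.deliver_welcome(user)
    Result.new(user, nil)
  end
end
```

Returning a result object instead of true/false lets the caller ask what happened without the service leaking exceptions or nils.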

I Prefer This Over That

Elisabeth Hendrickson:

I prefer:

  • Recovery over Perfection
  • Predictability over Commitment
  • Safety Nets over Change Control
  • Collaboration over Handoffs

Ultimately all these statements are about creating responsive systems.

When we design processes that attempt to corral reality into a neat little box, we set ourselves up for failure. Such systems are brittle. We may feel in control, but it’s an illusion. The real world is not constrained by our imagined boundaries. There are surprises just around the corner.

Monoids without tears

Scott Wlaschin:

But now we have something much more abstract, a set of generalized requirements that can apply to all sorts of things:

  • You start with a bunch of things, and some way of combining them two at a time.
  • Rule 1 (Closure): The result of combining two things is always another one of the things.
  • Rule 2 (Associativity): When combining more than two things, which pairwise combination you do first doesn’t matter.
  • Rule 3 (Identity element): There is a special thing called “zero” such that when you combine any thing with “zero” you get the original thing back.

With these rules in place, we can come back to the definition of a monoid. A “monoid” is just a system that obeys all three rules. Simple!

To sum up, a monoid is basically a way to describe an aggregation pattern – we have a list of things, we have some way of combining them, and we get a single aggregated object back at the end.
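In Ruby terms (a sketch, not from Scott’s post), integer addition is the classic instance, and it’s exactly what makes a fold like inject safe:

```ruby
# Integer addition forms a monoid: combining two integers yields an
# integer (closure), grouping doesn't matter (associativity), and 0 is
# the identity element.
numbers = [1, 2, 3, 4]

# Those three rules let us fold a whole list down to one aggregated
# value, starting from the identity element.
sum = numbers.inject(0) { |acc, n| acc + n } # => 10

# String concatenation is a monoid too, with "" as the identity.
sentence = ["monoids", "without", "tears"]
  .inject("") { |acc, word| acc + word + " " }
  .strip
```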

The first of three posts that give a simple definition of what monoids are and the benefits they provide.

Kent Beck's Design Rules

Martin Fowler:

Kent Beck came up with his four rules of simple design while he was developing ExtremeProgramming in the late 1990’s. I express them like this.

  • Passes the tests
  • Reveals intention
  • No duplication
  • Fewest elements

Start with a Monolith

The Microservices train is leaving the station, baby, and everyone is getting on board. Lots of folks have written about Microservices and their benefits, but a recent project experience has left me more interested in when you should use the approach.

Here are two posts which jibe with some of what I’ve recently felt.

Eric Lindvall of Papertrail:

When you’re starting out, and when you’re small, the speed at which you can make changes and improvements makes all the difference in the world. Having a bunch of separate services with interfaces and contracts just means that you have to make the same change in more places and have to do busywork to share code.

What can you do to reduce the friction required to push out that new feature or fix that bug? How can you reduce the number of steps that it takes to get a change into the hands of your users? Having code in a single repository, using an established web framework like Rails or Django can help a lot in reducing those steps. Don’t be scared of monolithic web apps when you’re small. Being small can be an advantage. Use it.

Adrian Cockcroft ex. Netflix:

I joined Netflix in ‘07 and the architecture then was a monolithic development; a two week Agile sort of train model sprint if you like. And every two weeks the code would be given to QA for a few days and then Operations would try to make it work, and eventually … every two weeks we would do that again; go through that cycle. And that worked fine for small teams and that is the way most people should start off. I mean if you’ve got a hand full of people who are building a monolith, you don’t know what you are doing, you are trying to find your business model, and so it’s the ability to just keep rapidly throwing code at the customer base is really important.

Once you figure out how… Once you’ve got a large number of customers, and assuming that you are building some Web-based, SaaS-based kind of service, you start to get a bigger team, you start to need more availability.

Large projects with long-term timelines seem like good candidates for using the Microservices approach1.

On the other hand, new products or services may not be the right situation to immediately dive in with a Microservices approach. It’s likely that the idea itself is being fleshed out and investing anywhere outside of that core goal is ultimately waste. Carving process boundaries throughout your domain in this early turbulent stage is going to slow you down when you inevitably need to move them.

Pushing infrastructure style functionality—such as logging or email delivery—out into services makes sense, but waiting to see how things develop seems worthwhile when it comes to the core domain. Initially focussing on understanding the domain and investing in getting changes out to production as quickly as possible is likely more important than developing loads of cross-process plumbing.

A monolithic application isn’t such a bad place to start. The trick, as always, is to know when to change that plan.


  1. In fact, a brownfield or system refresh project seems like an ideal situation to test the waters of implementing them. These projects have a runway long enough to justify the investment required to put all of the required communication, deployment, and monitoring plumbing in place. ↩︎

RSS feed for content about Software Design.