You Create the Complexity of Tomorrow!

In his book “Facts and Fallacies of Software Engineering”, Robert L. Glass states that a 25% increase in problem complexity results in a 100% increase in the complexity of the software solution. Reason enough, I would say, to focus on mitigating complexity.

In software development, the primary source of complexity comes from requirements and constraints. Requirements determine what to build and are therefore a main contributor to the complexity the development team is facing. The same goes for constraints, such as memory and/or CPU-cycle limitations in embedded systems, which can add considerable complexity. In many cases, the influence of the development team on this ‘externally’ imposed complexity is limited. Still, by providing feedback to stakeholders and engaging in discussions about alternatives, complexity imposed by requirements and constraints can be mitigated.

However, this complexity, externally imposed by requirements and constraints, is not the only complexity the development team is facing. A secondary complexity imposed on the team is the complexity of the existing software into which the new requirements need to be built.

Code produced today is the legacy of tomorrow

Today’s software development is incremental and lasts for many years, maybe even decades. This implies that design and implementation decisions taken today will strongly influence future development. The complexity induced by these decisions is the secondary source of complexity the team is facing. In other words, the complexity the team creates today will be faced in the future. The good news about this complexity is that the developers are in full control of it.

As a software developer you create the complexity of tomorrow!

“Complexity is anything related to the structure of a software system that makes it hard to understand and modify it”, says John Ousterhout in his book “A Philosophy of Software Design”. Two important aspects of complexity of a software system are dependencies and obscurity.

Reduce dependencies

Because dependencies are known to be an important aspect of complexity, one should focus on reducing dependencies in the software as much as possible. It is no coincidence that design paradigms like ‘low coupling & high cohesion’ are the basis for the SOLID design principles, which aim to reduce dependencies in the software such that engineers can change entities of the software without having to change others. Applying these design principles properly mitigates the complexity of the software.

Reduce obscurity

Not understanding the intention of the software, or, more specifically, of the code, increases complexity as well. This is exactly what should be addressed by creating so-called ‘Clean Code’. Clean Code is code that works and is understandable by other human beings. Code that is hard or nearly impossible for other human beings to understand is called Bad Code. In many cases Bad Code is associated with big, complex functions containing deeply nested constructions and a high cyclomatic complexity. However, one should take into account that seemingly simple, small pieces of code can be obscure as well. Examples are misleading variable names and non-obvious constructions for simple operations.

Once, I saw a little piece of code:

for (i = 1; i <= m; i++)
    number = number + 1;

Thinking about what was happening here, and knowing that m was an unsigned integer,

number = number + m;

would do the trick as well.

It is a seemingly simple little piece of code, easy to understand. Still, I would call it obscure, simply because you start to wonder: why? Why is it programmed this way? Why is a simple addition programmed as a loop? This seemingly simple piece of code raises questions and therefore creates complexity.

Spaghetti code

Software with a high number of unnecessary dependencies is often referred to as “spaghetti code”. You can see where the term comes from: all the different spaghetti strands, mixed up with each other, visualize the dependencies between the different software entities. You can imagine that, due to this obscure mess of dependencies, complexity ramps up.

It is not without reason that the first value statement in the “Manifesto for Software Craftsmanship” reads: “Not only working software, but also well-crafted software.”

As a software engineer, take your responsibility and develop well-crafted software to mitigate complexity!

The gaps between the intended, implemented and understood design

Designing software is the process of creating and defining the structure of the software to accomplish a certain set of requirements. Typically the design consists of different decomposition levels, in which the software is decomposed into different entities that interact with each other. As such, one could conclude that we have one design for the software, comprising different decomposition levels. What many people do not realize is that there are different types of design: the intended design and the implemented design.

The intended and implemented design.

The intended design is the design as it is meant to be implemented: the intention is that this design is realized as such in the actual code. Typically, the intended design is the design as documented in a tool like Enterprise Architect.

The implemented design is the design as it is actually implemented in the code. In an ideal situation this implemented design equals the intended design. However, this never seems to be the case in practice. Differences will always exist between the intended design and how it is actually realized in the code: the implemented design.

Figure 1 illustrates two components (the cubes), each containing a number of functions (black dots). The lines are interfaces between the functions.

Figure 1

Let’s suppose that this intended design is implemented exactly in this way; the implemented design equals the intended design.

Whenever a change is required and a certain function needs data from a function in the other component, the communication should be implemented by means of the well-defined interface between the two components. However, it might be decided to call the function in the other component directly, without using that interface. If this happens, an unintended interface between the components is created (visualized by an additional call between the components). If this happens multiple times, we end up with the situation reflected in Figure 2, illustrating the gap between the intended design and the implemented design.

Figure 2

It is obvious that one needs to keep the gap between the intended and implemented design as small as possible.

One can think of several causes for the gap between the intended and implemented design: an inexperienced engineer not being aware of the intended design, an engineer implementing a hack due to time pressure, or the documentation of the intended design not being updated after a necessary change was applied in the code.

The understood design

In “Who needs an Architect?”[1] Martin Fowler stated:

“The expert developers on the software will have some common understanding of how the thing works.
And it is that common understanding which is effectively the architecture.”

Taking into account that architecture is your highest level of design, this puts a new perspective on the design of a piece of software. Besides the intended design and the implemented design, there apparently is such a thing as the understood design: the common understanding, by the experts, of how the software works.


During my career, I’ve seen many situations in which all three design types were inconsistent with each other. Depending on the size of the gap, specifically between the understood design and the implemented design, we experienced unexpected side effects, rework, and even unstable software. In some cases, in which the gap between the implemented design and the other design types was huge, the software was not maintainable anymore. Engineers did not dare to ‘touch’ the code anymore, afraid they would break it.

Therefore, it is important to mitigate the gaps between the intended, implemented and understood design as much as possible: by documenting and maintaining the intended design, by sticking to the defined architectural and design rules when implementing, and by running static design analysis with reverse-engineering tools like Lattix to get insight into the implemented design.


GOOD, CHEAP & FAST in software development.

“We offer 3 kinds of services: GOOD, CHEAP and FAST, but you can pick only two.” A well-known statement in project management, meaning that you cannot have all three of them during your development: you will have to balance and make choices.

GOOD & CHEAP won’t be FAST, FAST & GOOD won’t be CHEAP and CHEAP & FAST won’t be GOOD. True?

GOOD clearly refers to quality, FAST refers to speed and CHEAP refers, of course, to money. If you want to develop something of high quality, it will cost you a lot of money and/or time.

Let’s put the GOOD-FAST-CHEAP triangle in the perspective of software development and see whether it holds there. I would say it does not. I would reformulate it: the only way to be FAST and to be CHEAP is to be GOOD.

Being FAST.

Let’s have a closer look at being fast. Being fast is mainly determined by the level of complexity: the more complex the problem to be solved, the longer it takes. This needs no further explanation. Complexity comes from two directions. The first is external: the requirements. Requirements determine the complexity of the product and therefore strongly influence the costs and duration of the needed development. The second direction complexity comes from is internal. The level of the internal quality of the software (What is Software Quality?) determines the complexity to deal with. Internal quality of software is the quality of the architecture/design and the quality of the code. When you have a complex design, possibly caused by increasing technical debt (Help…, my software rots!), it is more complex to add features and adapt the software. If your code is not readable, it will take more time to understand it well enough to adapt it. As you can see, low quality leads to higher complexity and thus to being slower.

However, when you start a green-field software development (from scratch, with no existing software to be reused), there is no internal complexity yet. In such a case it is possible to be fast and cheap without being good (low quality). But internal complexity will increase, and speed will decrease accordingly. So even being fast without being good only lasts for a short period, as visualized in the picture below.

Since software evolves and imperfections accumulate during development, your technical debt will grow, and with it the internal complexity, resulting in becoming slow.

Being CHEAP.

OK, now you might be convinced that you need GOOD to be FAST. So, according to the GOOD-FAST-CHEAP triangle, you will not be CHEAP.

Hmmm…, is that the case? Of course software development is not cheap, but costs in software development are determined mainly by labor: engineers developing and maintaining the software are the main cost item of a software development project. This implies a direct relation between duration and costs. The longer development takes, the more it costs. From this we can simply conclude that being FAST in software development equals being CHEAP, for which, as explained above, low complexity and thus GOOD is needed.

Therefore, in software development the only way to be FAST and to be CHEAP is to be GOOD.

An Alternative Flow

credits: Nederlandse Spoorwegen

This week I wanted to create a user account with the NS (Nederlandse Spoorwegen, Dutch Railways) via their website. After entering the necessary details everything seemed to progress well, until I received a so-called ‘confirmation mail’ in which I had to click a link to finalize the process. This click directed me back to the website of the NS to fill out some additional information. However, I was confused that they needed even more information, and I closed the web window before filling it out. When clicking the link in the confirmation mail again, I got an error message informing me that this was an invalid link. Multiple ‘re-clicks’ did not improve the situation, and I realized I had made a mistake in closing that web window. Frustrated as I was, I decided to call the help desk, but after waiting for more than 15 minutes in the queue I decided to search for an alternative flow, hoping I would find one.

As it is impossible to test every possible execution path through the code, it is important to think about clever test scenarios. These scenarios need to be chosen such that they cover as much as possible, to maximize the chance of finding the defects that are present.

In defining these test cases, three kinds of scenarios can be considered:

  • Happy Flows
  • Alternative Flows
  • Sad Flows

Happy flows

Test scenarios performing the intended usage of the software are called happy flows. Normally, these test scenarios would cover the majority of the software’s usage.

Alternative flows

An alternative flow is a test scenario testing a flow other than the intended usage of the software that, however, will result in the completion of the scenario’s goal. By means of an alternative flow, an alternative execution path through the code is taken to achieve the goal.

Sad flows

Sad flows are test scenarios testing error situations in which the intended goal of the flow is not achieved. An example would be to provide invalid input to the software. A sad flow tests how the software reacts to error situations.

Still, when defining the different flows for testing a user story, it seems most difficult to think of alternative flows. Which flows of steps or actions, not specified, will still deliver the desired result to the customer? This brings me back to my, so far, failed user account request with the NS.

Apparently I was in the situation in which the happy flow for requesting a user account with the NS no longer worked, as I got the error message about an invalid link. And of course logging in with my e-mail address as user name, which I had already entered in the first step of the registration, was not feasible either, simply because I had not yet set a password.

When I realized I had not yet set a password, I thought of trying the always-present ‘forgotten password’ link: a savior whenever you cannot log in. Surprisingly, clicking this ‘forgotten password’ link and entering my e-mail address resulted in a new confirmation mail with a link to finalize the user account creation process. And guess what? Yes! This link worked and showed me the web page asking for the additional user account information. After entering this additional information, including setting the password, my account was created. I was relieved I had found an alternative flow to create my user account.

This alternative flow would look like:

  1. Request creation of a user account.
  2. Fill out needed information.
  3. Wait for confirmation mail.
  4. Click the link in the confirmation mail.
  5. Do not fill out additional information but immediately close the ‘confirmation web window’.
  6. Open login page.
  7. Request to reset the password via the ‘forgotten password’ link.
  8. Enter your e-mail address.
  9. Wait for confirmation mail.
  10. Click the link in the newly received confirmation mail.
  11. Fill out additional requested information including a password.
  12. Finalize the process.

I wonder whether the NS considered this alternative flow in their testing…

We need to discuss technical debt……

Last week an engineer whom I did not know approached me for a short talk. He explained that he is working on a piece of embedded software containing significant levels of technical debt. He would like a certain percentage of the team effort to be spent on handling technical debt; even 10% would already be appreciated.

However, due to time pressure to deliver functionality, the product owner does not want to grant the percentage for handling technical debt. The engineer’s question was whether I had some tips and tricks that could help him. He had read some parts of my book “What is Software Quality?”.

I pointed out that we, as a software community, need to stand up for our profession. It is our responsibility to mitigate technical debt, in a context of finding the right balance between short term and long term. We need to engage in the dialogue on the subject with our stakeholders: product owners, project managers and management in general. We need to point out the consequences of technical debt and why it needs to be addressed.

First of all, we need to take a look at ourselves: what can we do to mitigate technical debt? Developing software as it should be developed: applying good development practices with craftsmanship and producing clean code, while demonstrating the discipline needed to do so despite time pressure. And even then, technical debt is inevitable and will creep into our software. So we need to do something additional.

Considering technical debt itself, we can distinguish between small TD and big TD. Examples of small TD are a compiler warning that is not yet fixed, or a function or method with too high a cyclomatic complexity. Small TD can be handled by the boy-scout rule: leave the code cleaner than you found it. Whenever you alter a piece of code, get rid of the small TD in that piece of code. We should always apply the boy-scout rule and, in my opinion, we do not need to ask for ‘permission’ to do so. It is part of our craftsmanship.

An example of big TD is a required redesign of a module that will take a significant amount of effort. In this case the technical debt is so big that it cannot be solved instantly. We need to register big TD on, for example, a Technical Debt Backlog (TDB). This TDB then needs to be considered on a regular basis in the context of the features planned for the next release. Which TDB items need to be addressed for the implementation of the prioritized User Stories? Preferably, TDB items are ‘connected’ to one or more User Stories, so that whenever a User Story is prioritized, the ‘connected’ TDB items are prioritized as well.

To be able to discuss and prioritize big TD with the stakeholders, it is important that the stakeholders understand what technical debt is and what consequences it has. Therefore, they need to be educated by us, the software professionals. In my book I try to explain technical debt in such a way that it can also be understood by people without a software background.

Engaging with the stakeholders and explaining and discussing the consequences of technical debt is necessary: using metaphors, explaining the complexity of execution paths, visualizing the size of software, showing the vast diversity of technical debt and pointing out the long-term consequences of technical debt on development speed and efficiency. Personally, I like to talk about ‘a sustainable pace of development’ instead of ‘development speed’, analogous to Formula 1, where they talk about ‘race pace’ instead of ‘race speed’. This is for a reason: focus on speed in Formula 1 increases tire wear, just like focus on speed in software development increases software wear. In both cases velocity declines. Let’s take our responsibility and start discussing technical debt with our stakeholders, using metaphors like tire wear in Formula 1.