An Alternative Flow

credits: Nederlandse Spoorwegen

This week I wanted to create a user account with the NS (Nederlandse Spoorwegen, Dutch Railways) via their website. After entering the necessary details everything seemed to progress well until I received a so-called ‘confirmation mail’ in which I had to click a link to finalize the process. This click directed me back to the NS website to fill out some additional information. However, I was confused as to why they needed even more information, and I closed the web window before filling it out. When I clicked the link in the confirmation mail again, I got an error message informing me that this was an invalid link. Multiple ‘re-clicks’ did not improve the situation, and I realized I had made a mistake by closing that web window. Frustrated as I was, I decided to call the help desk, but after more than 15 minutes in the waiting queue I decided to search for an alternative flow, hoping I would find one.

As it is impossible to test every possible execution path through code, it is important to think of clever test scenarios. These scenarios need to be chosen to cover as much as possible, maximizing the chance of finding the defects that are present.

In defining these test cases, three kinds of scenarios can be considered:

  • Happy Flows
  • Alternative Flows
  • Sad Flows

Happy flows

Test scenarios that exercise the intended usage of the software are called happy flows. Normally, these test scenarios cover the majority of the software’s usage.

Alternative flows

An alternative flow is a test scenario that exercises a flow other than the intended usage of the software but still results in the completion of the scenario’s goal. In an alternative flow, a different execution path through the code is taken to achieve the same goal.

Sad flows

Sad flows are test scenarios that exercise error situations in which the intended goal of the flow is not achieved. An example would be providing invalid input to the software. A sad flow tests how the software reacts to error situations.
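
To make the three flavors concrete, here is a minimal, self-contained sketch of what they could look like as automated tests. The TicketShop class, its routes and its methods are invented purely for illustration; they are not part of any real NS system.

```python
# A tiny, fake in-memory ticket shop, invented for this example only.
class TicketShop:
    ROUTES = {("Eindhoven", "Amsterdam")}

    def __init__(self):
        self.tickets = []
        self.reservations = {}
        self._next_id = 1

    def buy(self, origin, destination):
        """Happy path: buy a ticket in one step."""
        if (origin, destination) not in self.ROUTES:
            return None
        self.tickets.append((origin, destination))
        return len(self.tickets)

    def reserve(self, origin, destination):
        """First half of an alternative path: reserve now, pay later."""
        if (origin, destination) not in self.ROUTES:
            return None
        rid = self._next_id
        self._next_id += 1
        self.reservations[rid] = (origin, destination)
        return rid

    def pay_reservation(self, rid):
        """Second half of the alternative path: turn a reservation into a ticket."""
        if rid not in self.reservations:
            return None
        self.tickets.append(self.reservations.pop(rid))
        return len(self.tickets)


def test_happy_flow():
    # Intended usage: the one-step purchase yields a ticket.
    assert TicketShop().buy("Eindhoven", "Amsterdam") is not None

def test_alternative_flow():
    # Different execution path, same goal: a ticket is still issued.
    shop = TicketShop()
    rid = shop.reserve("Eindhoven", "Amsterdam")
    assert shop.pay_reservation(rid) is not None

def test_sad_flow():
    # Error situations: invalid route, unknown reservation; no ticket issued.
    shop = TicketShop()
    assert shop.buy("Eindhoven", "") is None
    assert shop.pay_reservation(999) is None
```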

Still, when defining the different flows for testing a user story, alternative flows seem to be the most difficult to think of. Which flow of steps or actions, not specified anywhere, will still deliver the desired result to the customer? This brings me back to my, so far, failed user account request with the NS.

Apparently I was in a situation in which the happy flow for requesting a user account with the NS no longer worked, as I got the ‘invalid link’ error message. And of course logging in with my e-mail address as user name, which I had already entered in the first step of the registration, was not feasible either, simply because I had not yet set a password.

Realizing I had not yet set a password, I thought of trying the ever-present ‘forgotten password’ link; a savior whenever you cannot log in. Surprisingly, clicking this ‘forgotten password’ link and entering my e-mail address resulted in a new confirmation mail with a link to click to finalize the user account creation process. And guess what…? Yes! This link worked and showed me the webpage asking for the additional user account information. After entering this additional information, including setting the password, my account was created… I was relieved to have found an alternative flow to create my user account.

This alternative flow would look like this (a test-automation sketch of these steps follows the list):

  1. Request creation of a user account.
  2. Fill out needed information.
  3. Wait for confirmation mail.
  4. Click the link in the confirmation mail.
  5. Do not fill out the additional information; immediately close the ‘confirmation web window’.
  6. Open login page.
  7. Request to reset the password via the ‘forgotten password’ link.
  8. Enter your e-mail address.
  9. Wait for confirmation mail.
  10. Click the link in the newly received confirmation mail.
  11. Fill out additional requested information including a password.
  12. Finalize the process.
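
Written down as an automated scenario, the alternative flow above could look roughly like the sketch below. The ns_site helper and the inbox stub are hypothetical test fixtures standing in for the web front end and a test mailbox; they are not a real NS API.

```python
# Hypothetical sketch of the alternative flow as an automated scenario.
# `ns_site` and `inbox` are imagined test helpers, not a real NS API.

def test_account_creation_via_forgotten_password(ns_site, inbox):
    # Steps 1-3: request the account and wait for the confirmation mail.
    ns_site.request_account(email="user@example.com")
    ns_site.fill_registration_details(name="J. Doe")
    first_mail = inbox.wait_for(subject="Confirm your account")

    # Steps 4-5: open the confirmation link, then close the window
    # without filling out the additional information.
    window = ns_site.open(first_mail.link)
    window.close()

    # Steps 6-8: go to the login page and use 'forgotten password'.
    ns_site.open_login_page()
    ns_site.request_password_reset(email="user@example.com")

    # Steps 9-10: a new confirmation mail arrives; follow its link.
    second_mail = inbox.wait_for(subject="Reset your password")
    window = ns_site.open(second_mail.link)

    # Steps 11-12: fill out the remaining details, set a password, finalize.
    window.fill_additional_details(password="s3cret-but-hypothetical")
    assert window.finalize().account_created
```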

I wonder whether the NS considered this alternative flow in their testing…

We need to discuss technical debt……

Last week an engineer whom I did not know approached me for a short talk. He explained that he is working on a piece of embedded software containing significant levels of technical debt. He would like a certain percentage of the team effort to be spent on handling technical debt; even 10% would already be appreciated.

However, due to time pressure to deliver functionality, the product owner does not want to grant this percentage for handling technical debt. The engineer’s question was whether I had some tips and tricks that could help him. He had read some parts of my book “What is Software Quality?”.

I pointed out that we, as a software community, need to stand up for our profession. It is our responsibility to mitigate technical debt while finding the right balance between the short term and the long term. We need to engage in a dialogue on the subject with our stakeholders, such as product owners, project managers and management in general. We need to point out the consequences of technical debt and why it needs to be addressed.

First of all we need to take a look at ourselves: what can we do to mitigate technical debt? For example, developing software as it should be developed: applying good development practices with craftsmanship and producing clean code, while demonstrating the discipline needed to do so despite time pressure. And even then, technical debt is inevitable and will creep into our software. So we need to do something more.

Considering technical debt itself, we can distinguish between small TD and big TD. An example of small TD is that compiler warning which has not yet been fixed, or that function or method with too high a cyclomatic complexity. Small TD can be handled by the boy-scout rule: leave the code cleaner than you found it. Whenever you alter a piece of code, get rid of the small TD in that piece of code. We should always apply the boy-scout rule and, in my opinion, we do not need to ask for ‘permission’ to do so. It is part of our craftsmanship.
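
As a small, hypothetical illustration of the boy-scout rule applied to small TD: suppose we have to touch a discount function whose chain of branches drives up its cyclomatic complexity. While we are there anyway, we can flatten the branching into a table lookup. The function and the discount values below are invented for this example; they do not come from a real code base.

```python
# Before: every customer type adds another branch, raising cyclomatic complexity.
def discount(customer_type: str) -> float:
    if customer_type == "gold":
        return 0.20
    elif customer_type == "silver":
        return 0.10
    elif customer_type == "bronze":
        return 0.05
    elif customer_type == "new":
        return 0.02
    else:
        return 0.0

# After applying the boy-scout rule while touching this code anyway:
# one lookup table, one exit path, same behavior.
_DISCOUNTS = {"gold": 0.20, "silver": 0.10, "bronze": 0.05, "new": 0.02}

def discount_cleaned(customer_type: str) -> float:
    # Unknown customer types still get no discount, as before.
    return _DISCOUNTS.get(customer_type, 0.0)

# The cleanup changes structure, not behavior.
assert all(discount(c) == discount_cleaned(c)
           for c in ["gold", "silver", "bronze", "new", "unknown"])
```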

An example of big TD is a required redesign of a module, which will take a significant amount of effort. In this case the technical debt is so big that it cannot be solved instantly. We need to register big TD on, for example, a Technical Debt Backlog (TDB). This TDB then needs to be considered on a regular basis in the context of the features planned for the next release. Which TDB items need to be addressed for the implementation of the prioritized User Stories? Preferably, TDB items are ‘connected’ to one or more User Stories, so that whenever a User Story is prioritized, the ‘connected’ TDB items are prioritized as well.
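
As a sketch of what such a ‘connection’ could look like in practice: the fields, identifiers and example items below are made up for illustration and are not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDebtItem:
    """One entry on the Technical Debt Backlog (TDB)."""
    identifier: str
    description: str
    estimated_effort_days: int
    # User Stories whose implementation requires this TD item to be addressed.
    connected_user_stories: list[str] = field(default_factory=list)

tdb = [
    TechnicalDebtItem(
        identifier="TD-042",
        description="Redesign the persistence module; current design blocks new storage back-ends.",
        estimated_effort_days=15,
        connected_user_stories=["US-310", "US-327"],
    ),
]

def debt_for_release(tdb, prioritized_stories):
    """Return the TD items pulled in by the User Stories prioritized for the next release."""
    return [item for item in tdb
            if any(us in prioritized_stories for us in item.connected_user_stories)]

# Prioritizing US-310 automatically pulls in TD-042.
print(debt_for_release(tdb, prioritized_stories={"US-310"}))
```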

To be able to discuss and prioritize big TD with the stakeholders, it is important that they understand what technical debt is and what consequences it has. Therefore, they need to be educated by us, the software professionals. In my book I try to explain technical debt in such a way that it can also be understood by people without a software background.

Engaging with the stakeholders and explaining and discussing the consequences of technical debt is necessary: using metaphors, explaining the complexity of execution paths, visualizing the size of software, showing the vast diversity of technical debt, and pointing out its long-term consequences for development speed and efficiency. Personally, I like to talk about ‘a sustainable pace of development’ instead of ‘development speed’, just as in Formula 1 they talk about ‘race pace’ instead of ‘race speed’. There is a reason for this: a focus on speed in Formula 1 increases tire wear, just as a focus on speed in software development increases software wear. In both cases velocity declines. Let’s take our responsibility and start discussing technical debt with our stakeholders, using metaphors like tire wear in Formula 1.

Should Software Engineers be Certified?

Software is everywhere and its relevance is growing at a phenomenal pace. It would not be an exaggeration to say that software runs the world. And still, as expressed in a previous blog of mine, “Trial and Error Programming”, programming skills need to improve significantly. Besides programming skills, software engineering skills also need to improve. As an example, how is it that not many software engineers use UML for modeling requirements and design?

Low threshold to become a software engineer

Let’s have a closer look at the issues related to becoming a software engineer. If there is a shortage of professionals in a certain field, people will jump on it; there will be jobs all over the place. Combine this with the very low threshold to producing any software at all, and it becomes very easy to jump on the opportunities created by the shortage of software engineers.

How easy is it to download a compiler and write and execute your first program? How easy is it to search the internet for any piece of code and copy it into your program to get a result? How easy is it to learn Python and start programming? Unlike in other engineering disciplines, everybody can start programming easily. Tools are available for download from the internet, and ‘production’ is just a matter of compiling. With only a partial understanding of a programming language, first results are easily achieved. How different this is from building your first electronic circuit! Building electronic circuitry without understanding voltage, current, resistors, coils, capacitors, transistors or digital logic is simply not possible.

The low threshold to becoming a software engineer, coupled with the high demand for software engineers in the market, is the reason why we have high numbers of software engineers who are not adequately educated.

Software engineering is more than programming alone

If we take software development seriously, we should make sure that our software engineers are well trained in the necessary software engineering practices, in addition to “only” knowing a programming language. Engineering practices such as requirements engineering, modeling, design methodologies and design patterns, Clean Design, algorithms, Clean Code, and testing are even more important than knowing the programming language itself. Additionally, our software engineers should be educated in computer architectures, computer networks and security, databases, operating systems, communication protocols, and so on.

In order to build high-quality software, it is not sufficient to “only” know a programming language. Therefore, should we certify our software engineers to ensure they are well educated in the relevant topics associated with software engineering? This is especially important for software that plays a determining role in our society, like software in aerospace, automotive or our financial systems. Certification of our software engineers would help us ensure that only well-educated software engineers are working on our most critical software!

Process vs Skill

by Ger Cloudt, author of “What is Software Quality?”

One of the struggles of software development is the assurance of quality. Sure, if you talk about quality assurance, people will refer to testing. Or one might want to assure quality by applying proper processes and best practices. Process improvement and reference models like ASPICE and CMMI are used in the industry to increase quality or to determine the capability of an organization to develop high-quality software. However, when using reference models like ASPICE or CMMI there is a pitfall: using ASPICE or CMMI might disturb the balance between process and skill.

Process and Skill

According to Wikipedia.org, a process is defined as “a series or set of activities that interact to produce a result”.
Additionally, to execute a process one needs to have a skill. According to Wikipedia.org, a skill is “the learned ability to perform an action with determined results”.

Process and skill are complementary, so you will need both. To execute actions or activities you need a certain level of skill; to achieve a result, multiple actions need to be structured by a process. However, the importance of process compared to the importance of skill depends on the kind of activities to be performed.


The figure below tries to visualize the relevance of process versus the relevance of skill in performing a certain task. On the left side of the figure, tasks are positioned for which process is highly relevant and the needed skill level is relatively low. An example is assembly-line work. Many people are able to perform these tasks, as the needed skill level is low; however, the order of the activities to be performed is very strict, and therefore the process relevance is high. The same applies to assembling that piece of furniture bought at IKEA: a manual describing how to assemble the cupboard or chair (the process) is important, whilst the needed skill level is quite modest.

At the other end of the scale we have activities for which process relevance is low but a very high skill level is needed, a level which only a few people master. An example would be painting the Mona Lisa as Leonardo da Vinci did.

Software Engineer

The question is where to position the software engineer in this figure. If you draw a line reflecting the position at which process relevance and needed skill level are equal, would the software engineer be placed to the left or to the right of this line? In my humble opinion the software engineer would be positioned to the right of this line, meaning that I think the needed skill level is more important than the process for performing the role of software engineer. To put it differently: a process is a tool that helps you apply your skills. If the skill is not available, a process does not help you. However, if the skill is available, a process can help you apply this skill.

ASPICE or CMMI level

Process improvement models like ASPICE and CMMI define different maturity levels, which are used in some industries as a minimum requirement for software or system suppliers. This requirement, or target, of being process compliant within a certain process reference model can disturb the balance between process and skill.

By setting a target of being process compliant and achieving a certain ASPICE or CMMI maturity level, the focus is directed to the less important part of the process-relevance versus needed-skill-level balance. Achieving the target level of process compliance does not tell the full story, and therefore there is the danger of assuming that process compliance equals high-quality software, which it does not. Process compliance assures that defined and necessary activities are performed; however, it does not say anything about how well these activities are performed. Therefore, in addition to checking process compliance, one should also check the content, the result of the activities. Simply because, to perform the activities well, skills are needed. Good skills will lead to good results; bad skills will lead to bad results.

Software Estimations – why are they so hard?

by Ger Cloudt, author of “What is Software Quality?”

Image by OpenClipart-Vectors from Pixabay

Organizations like predictability in their development projects. High predictability enables, for example, Sales to sell and actually deliver. It enables the organization to negotiate contracts that can be fulfilled; if obligations are met, no sales will be lost.
However, the predictability of software development seems to be low. Agile principles, like developing in small increments with a Potentially Shippable Product at the end of each sprint, are a way to address the predictability problem in software development. Small increments enable frequent deliveries, on time, but maybe not with the wanted scope. This is less of a problem because the next release, which will be available soon, will contain the missing features.
But still, this incremental approach is not applicable to every software development effort, and thus we are back to the predictability of software development.

In the development of multi-disciplinary embedded products, predictability is still important. Since releasing and updating the software is not as frequent as in a true Agile environment, the functionality available in a release becomes more important: if a feature is not in, the customer needs to wait for the next release, which might take a considerable amount of time, or, even worse, might never come if the product is not remotely upgradable.

Cone of uncertainty

In 1995 Boehm et al. presented the estimate convergence graph, also known as the cone of uncertainty, stating that for any given feature set, estimation precision can improve only as the software itself becomes more refined.

Investing more time and effort in developing, understanding and analyzing requirements, and even in building the software itself, will result in more accurate estimates, as the cone of uncertainty shows. But even if everything seems to be clear, complete and understood, we will still have uncertainty that causes deviations from the plan. Uncertainty cannot be banned, and therefore predictability will never reach 100%.
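
To illustrate what the cone means for a single point estimate, the sketch below uses the multipliers commonly quoted for the cone of uncertainty in the estimation literature; treat the phase names and numbers as the usual textbook illustration, not as data from any specific project.

```python
# Commonly cited cone-of-uncertainty multipliers (low, high) per phase;
# illustration values from the estimation literature, not project data.
CONE = [
    ("Initial concept",             0.25, 4.0),
    ("Approved product definition", 0.50, 2.0),
    ("Requirements complete",       0.67, 1.5),
    ("Design complete",             0.80, 1.25),
    ("Software complete",           1.00, 1.0),
]

def estimate_range(point_estimate_days: float, phase: str) -> tuple[float, float]:
    """Return the (low, high) range around a point estimate for a given phase."""
    for name, low, high in CONE:
        if name == phase:
            return point_estimate_days * low, point_estimate_days * high
    raise ValueError(f"Unknown phase: {phase}")

# A 100-day estimate made at "Initial concept" may end up anywhere
# between roughly 25 and 400 days; the range narrows as the work matures.
print(estimate_range(100, "Initial concept"))
print(estimate_range(100, "Requirements complete"))
```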

Puzzle analogy

Then, how do we explain that software estimations are difficult and imprecise? For this I would like to use the analogy of solving puzzles, such as crosswords, cryptograms or Sudokus.

If I want to explain to somebody the difficulties we experience in software estimation, I would ask that person to estimate how long it would take to solve a booklet of puzzles. Most likely you will get questions in return: What is the difficulty level of the puzzles? What kind of puzzles are you referring to? How many puzzles need to be solved? How big are these puzzles? Am I allowed to use a dictionary? Well… I do not know, but please provide me an estimate anyway. You can imagine the accuracy of such an estimate.
This question of estimating the effort needed to solve an unknown bunch of puzzles can be compared to the estimates typically asked for road-mapping purposes: the development team is given some one-liners of the requirements and is asked for an estimate, so that a roadmap can be plotted.

Then we can take the next step and provide the actual booklet of puzzles. Having a look into the booklet will provide better insights, and most likely the initial estimate will be adjusted to these new insights.
The more time you spend looking at the puzzles and understanding their difficulty, the better the estimate to complete will become. In real software development this is comparable to collecting, understanding and analyzing requirements, and perhaps performing some pre-development or prototyping here and there for high-risk areas.

To achieve an even better estimate, you could not only have a look at the booklet of puzzles but actually solve some of them. Measure how long solving them takes and count the remaining, not yet solved, puzzles. This is what we call measuring velocity and applying it to the remaining work to predict when we will deliver.
You would expect a pretty accurate estimate, right?
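
A minimal sketch of this kind of velocity-based forecast, using made-up puzzle numbers rather than data from a real project:

```python
# Velocity-based forecasting, in puzzle terms: measure how many puzzles were
# solved per week so far, then extrapolate over the puzzles that remain.
solved_per_week = [6, 5, 7]      # measured "velocity" of the first weeks (made-up numbers)
remaining_puzzles = 40           # puzzles not yet solved

average_velocity = sum(solved_per_week) / len(solved_per_week)
weeks_to_finish = remaining_puzzles / average_velocity

print(f"Average velocity: {average_velocity:.1f} puzzles/week")
print(f"Forecast: about {weeks_to_finish:.1f} more weeks")
# This forecast silently assumes that future puzzles resemble past ones,
# which is exactly the assumption the rest of this post undermines.
```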

However, once several puzzles have been solved and the velocity seems to be stable, I will come in, tear out a number of puzzles and add a number of new puzzles to the work to be done. This can be compared to changing and adding requirements during the project, which will happen throughout the development.

Another problem you will encounter during your puzzle solving is that you will suddenly run into puzzles of unexpectedly high complexity. If the puzzles you have solved so far are in the range of 3 to 5 stars, you will hit some puzzles with a much higher complexity and your velocity will drop tremendously. Encountering these high-complexity puzzles can be compared to encountering difficult problems during development: hard or nearly impossible to reproduce and even harder to solve. You will also encounter puzzles that are related to previously solved puzzles, and to solve these new puzzles you have to redo the previously solved ones.

And then we have not yet talked about external influences on your puzzle solving. What if you run out of pencils, just because I come into the room and want to replace all pencils with a cheaper type? Or your dictionary suddenly gets lost? Compare it to Corporate IT performing a security update on the network: if you are lucky the update is done over the weekend and does not affect your project, but there is a risk you will have problems on Monday.

If estimating puzzle solving is already hard… how about estimating software?

As you can see, estimating puzzle solving, an activity everybody can perform, is already hard and inaccurate. Now imagine the development of a big software system, which can only be done by highly educated engineers. How can we expect accurate estimates?