Friday, November 01, 2019
Friday, October 25, 2019
Wednesday, October 16, 2019
Wednesday, October 09, 2019
Monday, October 07, 2019
A common way to describe an IT architecture is to use abstraction layers. A layer hides the implementation details of a subsystem, allowing separation of concerns1. In other words, a layer is only aware of its sub-layer (without knowing its inner workings or further sub-layers) and knows nothing about the layers above it.
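The layering principle can be sketched in a few lines of Python. The layer names below are illustrative, not taken from any particular standard: each layer holds a reference only to the layer directly beneath it and exposes a narrow interface upward.

```python
class Perception:                    # bottom layer: sensors
    def read(self):
        return 21.5                  # pretend temperature reading

class Network:                       # middle layer: transport
    def __init__(self, sub):
        self._sub = sub              # knows only the layer directly below
    def fetch(self):
        return {"temperature": self._sub.read()}

class Application:                   # top layer: business logic
    def __init__(self, sub):
        self._sub = sub              # knows Network, never Perception
    def report(self):
        return f"current: {self._sub.fetch()['temperature']} C"

app = Application(Network(Perception()))
print(app.report())                  # -> current: 21.5 C
```

Swapping out `Perception` for a different sensor implementation would require no change to `Application`, which is exactly the separation of concerns the layer model promises.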
Many attempts have been made to model an IoT architecture using layers. Depending on what specific challenge a model tries to solve, the focus can be on different viewpoints, for example, functional features versus data processing.
In the following diagram I bring two common models together.
On the left we have the three-layer functional model defined by the ETSI standards group2.
This model is mainly used in the context of machine-to-machine (M2M) communication.
In the middle model, we split data storage from the application layer.
On the right we reach the seven-layer model defined by the IoT World Forum3.
Another viewpoint is to look at where data is processed (credit to the excellent IoT fundamentals course available on O’Reilly4 which goes into more detail about models mentioned here): at cloud, fog or mist level5.
This introduces a different viewpoint focusing on data processing. At the mist level, data processing occurs right where the sensors are located. The fog level sits below the cloud, where the network infrastructure connects end devices with the central servers. The cloud is the final destination.
Following up on my last article about how to evaluate technology options I’d like to take one example and describe how to evaluate cloud mock testing frameworks.
I won’t go into the details of the two choices (I leave this for another update), but explain the first step of any evaluation process: making a list of categories that we use to compare the options. This is mostly a brainstorming exercise, with the goal of producing a list of attributes that is exhaustive and mutually exclusive.
This might be the most important attribute, because testing is often seen as useful and required, yet annoying to do. I believe the reason for that can be found in something I call developer laziness, which I mean in a positive way (and I include myself in that category)!
We developers are inherently lazy, and that’s why we have an urge to automate mundane tasks as much as possible. That is a subject for another article; what is relevant here is that if a testing library makes writing tests harder to do, there will be a negative effect on the overall quality of testing.
Ease of use can be subdivided into further attributes:
I’m unsure if that is actually a word (however, Wikipedia3 does list it as a system quality attribute), but it is easy to understand in our example: does the framework help us debug our code?
The main purpose of testing is to make sure the code we write does what it is supposed to do. But very often, writing a test has another advantage: it makes debugging simpler (sometimes local testing is even the only way to debug code).
This is certainly true for software that runs on the cloud. It can be difficult to debug code that depends on remote services. Having a local mock simplifies that a lot.
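As a minimal sketch of the idea, here is a local mock standing in for a remote storage client, using Python’s built-in `unittest.mock`. The `archive_report` function and the client’s `upload` method are hypothetical, not taken from any real cloud SDK:

```python
from unittest.mock import MagicMock

def archive_report(client, name, data):
    """Upload a report and return the key it was stored under."""
    key = f"reports/{name}.txt"
    client.upload(key, data)   # in production this would be a remote call
    return key

def test_archive_report():
    client = MagicMock()       # local stand-in for the cloud client
    key = archive_report(client, "q3", b"42")
    assert key == "reports/q3.txt"
    # the mock records every call, so we can verify what was sent remotely
    client.upload.assert_called_once_with("reports/q3.txt", b"42")

test_archive_report()
```

The test runs entirely offline, and when it fails, the mock’s recorded calls tell us exactly what our code tried to send to the remote service, which is where the debuggability benefit comes from.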
These attributes are binary in the sense that if a framework doesn’t meet those requirements we can’t use it.
Completeness. By that I mean: does the framework mock all the services we want to test? If not, we can’t use it.
Correctness. Does the mock behave the same way as the real service? If not, we obviously can’t use the mock.
The longer testing takes to execute, the more reluctant we are to run it. Long execution times can become a major efficiency problem.
As mentioned before, the answer to the increasing complexity of technology is to provide ever more frameworks, tools and other solutions. Everyone involved in development has to evaluate and choose among options constantly.
Quite often, we use what we know. I call this “developer laziness” and I mean that in a good way. If a tool is too complex to use or understand, it has to provide a substantial advantage to justify the time and effort of learning it.
However, not looking beyond what we currently know severely limits the ability to increase the quality of our software implementation. It is especially the role of an architect to find new and better ways to improve the status quo and explore alternatives.
Comparing technology choices is a three-step process:
Step 1) is a brainstorming exercise. I find it very useful to involve other team members in this step. If you do so, I recommend having everyone make a list on their own first. Having more than one person come up with categories increases the chances that our list is exhaustive. Merging all answers into a final list is best done by a single person, to keep the list mutually exclusive.
Step 2) is best done by an expert in that field. If the technology we explore is new (which it quite often is), implementing a quick proof-of-concept is an effective way to get a “gut feel” for it.
Step 3) is best done with the team and in front of the stakeholders. If the choice we have to make is substantial, the desired outcome of the process is to give the stakeholder a clear picture of the pros and cons and enable her to make an informed decision.
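One simple way to present the results to stakeholders is a weighted scoring matrix: the categories from step 1 become weighted criteria, and the findings from step 2 become ratings. The weights, framework names, and scores below are purely illustrative assumptions, not a recommendation:

```python
# Weights reflect how much each category matters (they sum to 1.0).
criteria = {"ease_of_use": 0.4, "completeness": 0.3,
            "correctness": 0.2, "speed": 0.1}

# Ratings (1-5) gathered during the proof-of-concept in step 2.
scores = {
    "framework_a": {"ease_of_use": 4, "completeness": 5, "correctness": 5, "speed": 3},
    "framework_b": {"ease_of_use": 5, "completeness": 3, "correctness": 5, "speed": 4},
}

def weighted_total(name):
    """Sum each rating multiplied by its criterion's weight."""
    return sum(weight * scores[name][c] for c, weight in criteria.items())

for name in scores:
    print(name, round(weighted_total(name), 2))
# framework_a 4.4
# framework_b 4.3
```

The totals are close here, which is itself useful information: it tells the stakeholder the decision hinges on which criteria she weights most, not on one option being clearly superior.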
One question is how much time we should spend on this process. The more important the decision, the more thorough we have to be. It is important to make the evaluation of technology itself a task that has resources and time allocated to it. The only way to find a good solution is to thoroughly understand the problem, and that requires giving the evaluation process the attention it deserves.
I will come back to this process many times with concrete use cases.
Nobody knows what to do.
Admittedly, that sounds rather clickbaity, but sometimes being a bit extreme can help make a point.
Technology has become way too complex for one individual to fully comprehend. Any fundamental change will lead to uncertainties that make it impossible to foresee every possible outcome.
And we can’t research everything before starting to implement the change we want to make. The time we can invest into reading documentation, interviewing experts and building proofs of concept is limited (mostly, but not only, for economic reasons).
We are all guessing, more or less. Some may have a lot of experience in one area, and that certainly helps, but requirements are ever changing as is the environment.
So instead of knowing exactly what to do, we are placing bets. We make (hopefully) well-informed decisions and see how they work out.
But what if our bets don’t work out?
Then we correct our assumptions and place another, hopefully improved bet.
To greatly increase the chances that this iterative process leads to a good solution, one concept is essential: it’s called ownership.
Only if someone owns the process of placing bets can we make sure that every bet brings us closer to our goal.
If nobody owns the consequences of a decision, we leave it to chance whether someone will take charge and create the improved, modified new bet. And that new person still doesn’t know either.
The following diagram summarizes my point:
I kick this newsletter off with a little exploration of the word “strategy”, in particular in the context of software development. It is such a generic word that I think it would be useful to specify what I mean by it. An understanding of something broad and generic is likely to change over time so this is my first attempt but I may come back and refine what I said.
Let me start with what is NOT a strategy:
So then what is it?
Strategy is a high level plan to achieve one or more goals under conditions of uncertainty. 1
Let’s analyze this definition in the context of software architecture by starting at the end and going backwards.
The most common source of uncertainty stems from entropy. It still amazes me every day how quickly something seemingly simple suddenly becomes complex and difficult to understand.
Computer professionals answer the challenge of entropy by creating a lot of frameworks, programming languages, design patterns etc. Going one step further, the uncertainty of software development comes from making the right (technological, design) choice. If there were, for example, only one programming language available (“FORTRAN”), there would be no uncertainty in choosing it.
The goal of software can vary a lot depending on the context where it is used (e.g. open-source versus commercial software). In this newsletter, I focus solely on developing commercial software. That means the primary goal is to meet business goals. Without a functioning business, there is no software. Does the design and architecture of our software ensure that it delivers everything necessary to succeed as a business?
Out of the business goals we can derive functional and non-functional requirements. Being an architect, I write mostly about non-functional requirements or goals.
Now the final missing piece of the definition: what do we mean with a “high-level plan” in terms of software architecture?
“Having a plan” is what most people think of when asked what makes a strategy.
In software development, a plan consists of three ingredients:
Requirements can be functional and non-functional, options are technological and architectural design choices, and actions are concrete steps to implement software that meets the requirements.
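The three ingredients can be made concrete as a small data structure. This is only a sketch with names of my own choosing, to show how the pieces relate:

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    """The three ingredients of a software-development plan."""
    requirements: list = field(default_factory=list)  # functional + non-functional
    options: list = field(default_factory=list)       # technology and design choices
    actions: list = field(default_factory=list)       # concrete implementation steps

# Illustrative example: requirements drive options, options lead to actions.
plan = Plan(
    requirements=["handle 1k requests/s", "99.9% uptime"],
    options=["managed message queue vs. self-hosted broker"],
    actions=["prototype the managed queue", "load-test the prototype"],
)
print(plan)
```

The point of writing it down this way is that a plan missing any one of the three lists is incomplete: requirements without actions is a wish list, and actions without options means no alternatives were ever considered.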
Depicting what was said, we get the following diagram:
Business goals define requirements. Entropy and uncertainty motivate the creation of options (technological, design, execution), and choosing from those options leads to actions. Strategic planning (requirements + options + actions) helps meet the business goals.
Taking everything together, I use the word strategy in the context of software development as follows: a strategy consists of strategic planning, which is a three-step process of requirements gathering, evaluation of options, and specification of actions, in order to meet business goals in an environment of increasing entropy.