2017-08-14

Beauty of #microservices - from #DevOps to #BizDevOps via #microservices first

As we all know, the use of MicroService Architecture (MSA) requires very comprehensive operational practices and infrastructure. A microservice is a unit-of-functionality (or “class” in informal IT terminology) within its own unit-of-deployment (or “component” in informal IT terminology) acting as a unit-of-execution (or “computing process” in informal IT terminology). Some applications may comprise a few hundred microservices. This is certainly a serious barrier to exploiting MSA benefits such as being easy to update and easy to scale to absorb heavy workloads.

Fortunately, as we know, various quality characteristics (e.g. easy to update, easy to scale) are not spread uniformly within applications. For example, 95% of CPU consumption is located in 5% of program code. Thus, it is not necessary to implement the whole application as microservices.

Let us ask a simple question: if a microservice is, actually, a service, then can we use microservices and services together? Yes, and some functionality from platforms or monoliths may be used (via APIs) as well.

Now, let us reformulate the problem. Let us consider that any application is built from many units-of-functionality which must be deployed and then executed. What is the optimal arrangement of units-of-functionality into units-of-deployment and then units-of-execution? In other words,
  • which units-of-functionality have to be implemented as microservices (microservices are agile and easy to update, but have some execution and management overhead);
  • which units-of-functionality have to be implemented as monoliths (monoliths are not agile and not easy to update, but have no execution and management overhead);
  • which units-of-functionality have to be implemented as services (classic services are something in between microservices and monoliths).
Thus, a few recommendations may be formulated (a sketch illustrating them follows the list).
  • Units-of-functionality which are “often” updated must be implemented as microservices (so BizDevOps will be happy).
  • Units-of-functionality which must absorb heavy workloads must be implemented as microservices (so DevOps will be happy).
  • Units-of-functionality which are “rarely” updated may be packed into a few units-of-deployment (different “packing” criteria may be used), each unit-of-deployment having its own computing process (so DevOps will be happy). Another option is dynamic loading of those units-of-functionality.
  • Units-of-functionality which are “never” updated may be packed as a monolith or platform, i.e. one unit-of-deployment and one unit-of-execution (so DevOps will be extremely happy).
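A minimal sketch of this arrangement decision, in Python; the thresholds, names and the example unit are illustrative assumptions rather than part of any standard:

# Sketch of the arrangement recommendations above; thresholds and names
# (UPDATES_PER_YEAR_OFTEN, HEAVY_LOAD_RPS, etc.) are illustrative assumptions.
from dataclasses import dataclass

UPDATES_PER_YEAR_OFTEN = 12   # "often" updated: roughly monthly or more
UPDATES_PER_YEAR_RARE = 1     # "rarely" updated: roughly yearly or less
HEAVY_LOAD_RPS = 500          # a workload that justifies independent scaling

@dataclass
class UnitOfFunctionality:
    name: str
    updates_per_year: int
    peak_load_rps: int

def choose_implementation(unit: UnitOfFunctionality) -> str:
    """Map a unit-of-functionality to a unit-of-deployment style."""
    if unit.updates_per_year >= UPDATES_PER_YEAR_OFTEN:
        return "microservice"          # easy to update (BizDevOps is happy)
    if unit.peak_load_rps >= HEAVY_LOAD_RPS:
        return "microservice"          # easy to scale (DevOps is happy)
    if unit.updates_per_year > UPDATES_PER_YEAR_RARE:
        return "service"               # packed with a few other units
    return "monolith-or-platform"      # one unit-of-deployment, one unit-of-execution

print(choose_implementation(UnitOfFunctionality("invoicing-rules", 24, 50)))  # -> microservice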
Applying these recommendations to the phases of the whole application life cycle (conception, development, deployment, production, support, retirement and destruction), the following additional recommendations may be formulated:
  • At the beginning of the application life cycle (conception, i.e. prototyping, and initial development), the majority of the units-of-functionality should be implemented as microservices, because the easy-to-update characteristic is very important (especially for the business people) and, fortunately, performance is not yet an issue.
  • Closer to the end of the development phase, it becomes clear which units-of-functionality have to be changed more often than others; the others may then be considered as services and even monoliths or platforms.
  • The load tests (during the development and deployment phases) should show which units-of-functionality will have to absorb heavy workloads and thus should be implemented as microservices.
  • Other criteria, such as risk, security, etc., may be considered as well.

Obviously, “moving” a unit-of-functionality from a microservice-like implementation to a service-like implementation and then to a platform-like implementation is much easier than “moving” a unit-of-functionality from a monolith-like implementation to a service-like implementation and then to a microservice-like implementation.

This confirms the primacy of the “microservices first” approach. This approach, actually, provides support for BizDevOps practices (see http://improving-bpm-systems.blogspot.ch/2017/05/beauty-of-microservices-ebanliing.html). Additionally, this approach enables interesting transformations, such as automatic reconfiguration of applications to absorb heavy workloads by temporarily moving some units-of-functionality from a service-like implementation to a microservice-like implementation.

Remember prof. Knuth’s warning: “Premature optimisation is the root of all evil”.

Thanks,
AS

The collection of posts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservice 

2017-07-27

Better Architecting With – systems approach

All blogposts on this topic are at the URL http://improving-bpm-systems.blogspot.ch/search/label/%23BAW 

1 The systems approach basics


The systems approach is a holistic approach to understanding a system and its elements in the context of their behaviour and their relationships to one another and to their environment. Use of the systems approach makes explicit the structure of a system and the rules governing the behaviour of the system.

The systems approach is based on the consideration that functional and structural engineering, system-wide interfaces and compositional system properties become more and more important due to the increasing complexity, convergence and interrelationship of technologies.

The goal of the systems approach is to walk people and organisations working on complex systems through various stages and steps of analysis and synthesis in order to build a comprehensive understanding of the system-of-interest and, ultimately, be able to architect and engineer that system at any desired level of detail.

The systems approach helps to produce the following digital work products:
  • artefacts (entities made by creative human work) which are used to implement the system-of-interest;
  • system-of-interest terminology to explain various system-of-interest concepts and the relationships between them;
  • nomenclatures (or classifications) of artefacts of the same type;
  • models to formally codify some relationships between some artefacts;
  • views (collections of models) to address some concerns of some stakeholders, and
  • architecture descriptions which consist of several views.

To facilitate the production of those digital work products, the systems approach provides:
  • systems approach terminology to explain various concepts of the systems approach and the relationships between them;
  • several templates to define various artefacts;
  • several nomenclatures with artefacts related to the systems approach;
  • several model kinds which formally define views;
  • several architecture viewpoints, i.e. conventions which can include languages, notations, model kinds, design rules and/or modelling methods, analysis techniques and other operations on architecture views (architecture views are system-of-interest dependent while architecture viewpoints are system-of-interest independent), and
  • several patterns with techniques for transforming (not necessarily fully automatically) some model kinds into other model kinds.

Many viewpoints and views are possible.

   


Different stakeholders see the same system differently and recognise different artefacts. 


2 Four levels of architecting


If the system-of-interest is rather complex, then it is recommended to use the following four levels of architecting:
  1. reference model is an abstract framework for understanding concepts and relationships between them in a particular problem space (actually, this is terminology)
  2. reference architecture is a template for solution architectures which realize a predefined set of requirements
    Note: A reference architecture uses its subject field reference model (as the next higher level of abstraction) and provides a common (architectural) vision, a modularization and the logic behind the architectural decisions taken 
  3. solution architecture is an architecture of the system-of-interest
    Note: A solution architecture (also known as a blueprint) can be a tailored version of a particular reference architecture (which is the next higher level of abstraction)
  4. implementation is a realisation of a system-of-interest

The dependencies between these 4 levels are shown in the illustration below.
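In addition to the illustration, here is a minimal sketch in Python of the same dependency chain, where each level references the next higher level of abstraction; the class layout and the example problem-space name are illustrative assumptions:

# Each level references the next higher level of abstraction; names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReferenceModel:
    problem_space: str                       # terminology for a particular problem space

@dataclass
class ReferenceArchitecture:
    reference_model: ReferenceModel          # the next higher level of abstraction

@dataclass
class SolutionArchitecture:
    # may be a tailored version of a particular reference architecture
    reference_architecture: Optional[ReferenceArchitecture]

@dataclass
class Implementation:
    solution_architecture: SolutionArchitecture

rm = ReferenceModel("e-invoicing")
ra = ReferenceArchitecture(rm)
sa = SolutionArchitecture(ra)
impl = Implementation(sa)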


The purpose of the reference architecture is the following:
  • Explain to any stakeholder how future implementations (which are based on the reference architecture) can address his/her requirements and change his/her personal, professional and social life for the better; for example, via an explicit link between stakeholders’ high-level requirements and the principles of the reference architecture.
  • Provide a common methodology for architecting the system-of-interest in the particular problem space, so that different people in similar situations find similar solutions or propose innovations.

If a very complex system is to be implemented in several projects which must collaborate and coordinate with each other, it is recommended to develop a reference solution architecture and, if required, a reference implementation (see the illustration below). This helps to identify smaller system elements (e.g. services, data, etc.) and the relationships between them (e.g. interfaces) so that they can be shared between projects.


The reference solution architecture and the reference implementation are often experimental prototypes which are not production quality.

3 An example of digital work products


The digital work products below are listed in an approximate order of production; note that modifications of one digital work product may necessitate modifications in some other digital work products. The patterns to transform some digital work products into other digital work products are not mentioned below.

3.1 Value viewpoint

The value viewpoint comprises several digital work products which describe the problem space and provide some ideas about the future solution and its expected value for the stakeholders. The digital work products of this viewpoint are:
  • problem space description;
  • system-of-interest terminology (as an initial version of the system-of-interest ontology);
  • business drivers;
  • problem space high-level requirements (or some kind of guiding principles);
  • dependencies between viewpoints, stakeholders and stakeholders’ roles;
  • dependencies between viewpoints, stakeholders, stakeholders’ roles, stakeholders’ concerns and categories of concerns;
  • beneficiaries, i.e. stakeholders who/which benefit from the system-of-interest;
  • beneficiaries’ high-level requirements;
  • scope of the future solution space;
  • mission statement and vision statement, and
  • goals (if the vision statement must be further detailed).

3.2 Big picture viewpoint

The big picture viewpoint comprises several digital work products which describe the future solution as a whole:
  • system-of-interest ontology as a reference model;
  • some classifications which are specific for this solution space;
  • illustrative model;
  • essential characteristics of the future solution;
  • dependency matrix: high-level requirements vs. essential characteristics;
  • architecture principles model kind, and
  • dependency matrix: essential characteristics vs. architecture principles.

3.3 Capability viewpoint

The capability viewpoint comprises several digital work products which describe the future solution as a set of capabilities:
  • level 1 capability map;
  • level 2 capability map;
  • level 3 capability map (if necessary), and
  • heat maps (if necessary).

3.4 Engineering viewpoint

The engineering viewpoint comprises several digital work products which describe the future solution as sets of some artefacts:
  • data model
  • process map
  • function map
  • service map
  • information flow map
  • document/content classification
  • etc.

3.5 Some other viewpoints

  • Organisational viewpoint
  • Operational viewpoint
  • Implementation viewpoint
  • Deployment viewpoint
  • Compliance framework
  • Regulations framework
  • Security, safety, privacy, reliability and resilience framework
  • Evolution viewpoint
  • etc.

4 Some definitions


1. reference model

abstract framework for understanding concepts and relationships between them in a particular problem space or subject field
  • Note 1 to entry: A reference model is independent of the technologies, protocols and products, and other concrete implementation details.
  • Note 2 to entry: A reference model uses a concept system for a particular problem space or subject field.
  • Note 3 to entry: A reference model is often used for the comparison of different approaches in a particular problem space or subject field.
  • Note 4 to entry: A reference model is usually a commonly agreed document, such as an International Standard or industry standard.

2. reference architecture
template for solution architectures which realize a predefined set of high-level requirements (or needs)
  • Note 1 to entry: A reference model is the next higher level of abstraction to the reference architecture.
  • Note 2 to entry: A reference architecture uses its subject field reference model and provides a common (architectural) vision, a modularization and the logic behind the architectural decisions taken. 
  • Note 3 to entry: There may be several reference architectures for a single reference model.
  • Note 4 to entry: A reference architecture is universally valid within a particular problem space (or subject field).
  • Note 5 to entry: An important driving factor for the creation of a reference architecture is to improve the effectiveness of creating products, product lines and product portfolios by
    • managing synergy,
    • providing guidance, e.g. architecture principles and good practices,
    • providing an architecture baseline and an architecture blueprint, and
    • capturing and sharing (architectural) patterns.

3. solution architecture
system architecture (or solution blueprint)
architecture of the system-of-interest
  • Note 1: A solution architecture can be a tailored version of a particular reference architecture which is the next higher level of abstraction.
  • Note 2: For experimentation and validation purposes, a reference solution architecture may be created. It helps in the creation of other solution architectures and implementations.

4. implementation
realisation of the system-of-interest in accordance with its solution architecture
  • Note 1: A reference implementation is a realisation of the system-of-interest in accordance with its reference solution architecture. It can be production quality or not.

Thanks,
AS

2017-06-20

Smart Cities from the systems point of view

Thanks,
AS

2017-06-17

Better Architecting With – big picture

This blogpost continues the blogpost “#entarch frameworks are typical monoliths which have to be disassembled for better architecting” ( see http://improving-bpm-systems.blogspot.bg/2017/06/entarch-frameworks-are-typical.html ) and uses some feedback from a LinkedIn discussion https://www.linkedin.com/feed/update/urn:li:activity:6278239461654437888/

This blogpost outlines a “big picture” including the components and operating model for Better Architecting With (BAW).

Again, the goals of BAW are:
  • to standardise a good set of #entarch common components (viewpoints, artefacts, models, etc.)
  • to enable the users to add their own components, if necessary
  • to provide formal and repeatable guidance on how to address a user’s unique needs with the available components.
BAW follows the “Platform-Enabled Agile Solutions” (PEAS) pattern (see http://improving-bpm-systems.blogspot.bg/2011/04/enterprise-patterns-peas.html). BAW comprises the following:
  1. BAW platform;
  2. a set of ready-to-use popular BAW solutions (consider them as recipes) which you may try as-is and gradually adapt for your unique needs, and
  3. BAW guidance (including some obvious documentation).
The BAW platform comprises the following components:
  • BAW ontology – a set of about 200 concepts (in my estimation) which are already defined in many sources and need to be aligned.
  • BAW artefacts – a set of about 50 (in my estimation) well-known artefacts to be aligned.
  • BAW viewpoints – a user-extendable set of about 20-30 (for now) viewpoints.
  • BAW model kinds – a user-extendable set of about 50-70 (for now) model kinds.
  • BAW patterns – a user-extendable set of techniques for transforming (not necessarily fully automatically) some model kinds into other model kinds.
The most innovative part of the BAW platform is the BAW patterns, because they capture architecting knowledge in a formal and reproducible way. BAW patterns are formalised as small processes with human and automated activities. Some examples of such patterns are at http://improving-bpm-systems.blogspot.bg/search/label/enterprise%20patterns
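A minimal sketch in Python of how a BAW pattern could be captured as such a small process; the step names, model kinds and the whole example pattern are hypothetical illustrations, not items from a published BAW catalogue:

# A BAW pattern sketched as a small process of human and automated activities
# which transforms one model kind into another; all names are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    kind: str                       # "human" or "automated"
    action: Callable[[dict], dict]

@dataclass
class Pattern:
    name: str
    input_model_kind: str
    output_model_kind: str
    steps: List[Step]

    def apply(self, model: dict) -> dict:
        for step in self.steps:
            model = step.action(model)
        return model

capability_to_service_map = Pattern(
    name="capability map -> service map",
    input_model_kind="capability map",
    output_model_kind="service map",
    steps=[
        Step("group capabilities by data ownership", "human",
             lambda m: {**m, "groups": "reviewed by an architect"}),
        Step("generate candidate services", "automated",
             lambda m: {**m, "services": ["one candidate service per group"]}),
    ],
)

service_map = capability_to_service_map.apply({"capabilities": ["billing", "invoicing"]})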

The BAW solutions comprise the following:
  • BAW scenarios – a set of popular architectural works such as designing data-entry or process-based applications, defining a business architecture, formulating an IT strategy, etc.
  • BAW skeletons – a set of existing #entarch frameworks
The BAW guidance is the most important part of BAW. In accordance with the selected scenario, the user is guided on which views and models must be developed and how to develop them. The order of development can be almost arbitrary because the user must be able to adjust his/her models in the “pinball” way.


Again, the whole of BAW must be organised in such a way that anyone can add new viewpoints, model kinds, patterns and related documentation to enrich BAW with formalised and repeatable knowledge.

Thanks,
AS

2017-06-07

#entarch frameworks are typical monoliths which have to be disassembled for better architecting

#entarch frameworks are considered a must for any serious #entarch work. There are about 1 000 #entarch frameworks on this planet. The most popular of them are typical monoliths – huge in size, full of overlaps, slow to evolve, difficult to adapt to particular needs, expensive to learn, tricky to explain, etc.

It is not surprising that some organisations have to use a mixture of #entarch frameworks, although some #entarch frameworks allow some tailoring. For example, an organisation may have to use FEA because it works with the local government, TOGAF for solutions and ZF as a foundation.

Considering that organisations are demolishing/modernizing/transforming their application monoliths, let us, enterprise architects, apply the same tendency to #entarch frameworks. Such a transformation must:
  1. preserve and externalise (from the monolithic frameworks) the knowledge which has been accumulated by those #entarch frameworks, and
  2. provide guidance on how to build and operationalize unique #entarch practices from a coherent set of repeatable (proven or innovative) #entarch techniques and methodologies.
De facto, the “erosion” of the monolithic nature of #entarch frameworks is already ongoing (for example, Tom Graves’s work).

Let us outline the target way of architecting:

The process of architecting will be as follows (a minimal sketch of the configurator step is given below):
  • use the configurator to describe the problem space and generate a set of viewpoints for the solution space
  • use techniques and methods to specify an initial set of models
  • obtain an OK from all the stakeholders
  • use techniques and methods to specify all the remaining models
Again, the key point is a set of techniques and methodologies to link models. Repeatable techniques and methodologies will lead to better #entarch tools and a high level of automation. The whole architecting process will be faster, better and cheaper.
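A minimal sketch in Python of the configurator step; the mapping from problem spaces to viewpoints is an illustrative assumption, not a standard catalogue:

# The configurator: from a problem-space description, generate the set of
# viewpoints for the solution space; the mapping below is illustrative.
from typing import List

PROBLEM_SPACE_TO_VIEWPOINTS = {
    "process-based application": ["value", "big picture", "capability", "engineering"],
    "IT strategy": ["value", "big picture", "evolution"],
}

def configure(problem_space: str) -> List[str]:
    return PROBLEM_SPACE_TO_VIEWPOINTS.get(problem_space, ["value", "big picture"])

print(configure("IT strategy"))   # -> ['value', 'big picture', 'evolution']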


Thanks,
AS







2017-06-02

#GDPR as a #BPM application

This blogpost explains how to implement the EU General Data Protection Regulation (GDPR) by design and by default via Business Process Management (BPM). This blogpost describes only a reference solution architecture, without many implementation details. It focuses, primarily, on artefacts such as capabilities, rules, roles, data structures, documents, explicit coordination and audit trails.



1 Terminology in the GDPR


Although the information security domain is well developed, the GDPR document (see article 3) uses rather exotic terminology (sources are not provided). For example, many concepts available from the standard privacy framework (ISO/IEC 29100) have different designations (terms). Another example is that the concepts “data” and “information” do not follow the DIKW “pyramid”.

Although a mapping between 16 terms of the GDPR and the existing terminology is not difficult, it would be better if such a mapping were not needed.

2 The main element


The main element of the GDPR is a data-structure object: “Personally Identifiable Information” (PII) in the ISO/IEC 29100 terminology or “personal data” in the GDPR terminology. It must be explicitly and carefully protected:
  • for its confidentiality, integrity and availability
  • at rest, in transit and in use
  • throughout its life cycle
Usually, the life cycle of PII objects is very simple and covered by 4 actions which are known as the CRUD pattern (although each update may create a new version of the PII object).

3 The core processes


Those actions, namely Create, Read, Update and Delete, must be represented as small processes (or workflows) to provide design and execution traceability. Considering that any PII is owned by the “PII principal” (the natural person to whom a set of personally identifiable information relates), he/she must approve some actions on his/her PII object.

For example, a PII principal must provide his/her consent to process his/her PII object, and such a consent must be kept as a record by the “PII processor” (the privacy stakeholder that processes personally identifiable information on behalf of and in accordance with the instructions of a “PII controller”).
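A minimal sketch in Python of the “update” action made an explicit small process; the state names, record fields and the consent check are illustrative assumptions about how such a workflow could look:

# The "update PII" action as an explicit small process: every step leaves a
# trace in the audit trail and consent is checked before the change is applied.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class AuditEvent:
    step: str
    actor: str
    at: str

@dataclass
class PiiObject:
    principal: str
    data: dict
    version: int = 1
    audit_trail: List[AuditEvent] = field(default_factory=list)

def record(pii: PiiObject, step: str, actor: str) -> None:
    pii.audit_trail.append(AuditEvent(step, actor, datetime.now(timezone.utc).isoformat()))

def update_pii(pii: PiiObject, changes: dict, consent_given: bool, processor: str) -> PiiObject:
    """Small update process: request -> consent check -> new version of the PII object."""
    record(pii, "update requested", processor)
    if not consent_given:
        record(pii, "update rejected: no consent from the PII principal", processor)
        return pii
    record(pii, "consent recorded", pii.principal)
    pii.data.update(changes)
    pii.version += 1
    record(pii, f"update applied (version {pii.version})", processor)
    return pii

pii = PiiObject("Jane Doe", {"email": "old@example.com"})
update_pii(pii, {"email": "new@example.com"}, consent_given=True, processor="HR service")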

4 Some supporting capabilities


The core (or life cycle) processes use several capabilities (services or processes) such as:
  • identity management
  • access management
  • anonymization 
  • encryption
  • etc.
Also, some error- and exception-handling processes are necessary to properly handle privacy incidents.

5 Related roles


The essential roles are the following:
  • “PII principal” in the ISO/IEC 29100 terminology or “data subject” in the GDPR terminology – the owner of the PII, 
  • “PII processor” – persons or organisations who/which execute the GDPR processes,
  • “PII controller” – authority which, alone or jointly with others, determines the purposes and means of the processing of personal data, and
  • “Data Protection Officer (DPO)” – person who is the owner of the GDPR processes.

6 Rules


The execution of the GDPR processes is guided by numerous rules which constitute, actually, the majority of the GDPR document. For example, if a PII principal is a citizen of an EU country, then the PII processors must follow the GDPR.

Unfortunately, it is unknown whether these rules comply with the MECE principle (no overlaps and no holes).
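A minimal sketch in Python of one such rule externalised so that a process step can evaluate it; the attribute names and the (truncated) country list are illustrative, not a complete rule set:

# One externalised rule guiding process execution; attributes and the country
# list are illustrative and deliberately incomplete.
EU_COUNTRIES = {"AT", "BE", "BG", "DE", "FR", "IT", "NL"}   # truncated for brevity

def gdpr_applies(principal_citizenship: str, processing_in_eu: bool) -> bool:
    """Rule from the text above: if the PII principal is a citizen of an EU country,
    the PII processors must follow the GDPR (other applicability rules omitted)."""
    return principal_citizenship in EU_COUNTRIES or processing_in_eu

assert gdpr_applies("FR", processing_in_eu=False)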

7 Complex scenarios


There are a few scenarios which involve more than one PII object: for example, split, merge, export, transportation, correlation, etc.

8 Conclusion


The use of BPM to implement the GDPR addresses all the GDPR concerns. Explicit and machine-executable processes are mandatory to achieve, by design and by default, the key points listed below.
  • Compliance – all privacy-related activities and the coordination between them can be easily analysed.
  • Accountability – generated audit trails provide factual and objective information about who did what and why.
  • Data protection officers (DPOs) – a role which owns all the GDPR processes.
  • Consent – achieved by the design of the GDPR processes and records management.
  • Enhanced rights for individuals – achieved by the design of the GDPR processes.
  • Privacy policies – all PII controllers and PII processors must analyse their privacy policies via the logic of explicit processes.
  • International transfers – also become processes.
  • Breach notification – an integral part of the privacy-incident GDPR processes.


Thanks,
AS

2017-05-31

Beauty of #microservices - making them practical

The classic definition of the microservice architectural style as “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms” creates a lot of fears and misunderstandings:
  • Application monoliths are evil, but having too many microservices sounds like creating an (unknown) evil as well.
  • Everything has to be re-developed.
  • Microservices will create a huge backlog for our agile team.
  • Microservices? They are neither an architecture nor an architectural style – just a technical stack.
As usual in IT, any new technology or methodology (which pretends to revolutionise everything) must be used together with many existing ones. Let us “intermix” MSA with some existing and proven technologies and methodologies.

MicroService Architecture (MSA) brings two major concepts:
  1. a microservice as a unit-of-functionality, unit-of-deployment and unit-of-execution with the same boundaries, and
  2. assembling a whole application from microservices of different origins: off-the-shelf (commercial and FOSS), bought, rented, built, provided from SaaS, PaaS, APaaS, etc.
Using these two concepts, let us try to find a practical balance between monolith architecture and MSA.

Firstly, it is necessary to think about any application as a set of the following artefacts:
  • Events
  • Roles (actually, access rights management)
  • Rules (or decisions)
  • Business objects – data structures
  • Business objects – documents
  • Human activities (or screens or interactive services)
  • Automation activities (or scripting fragments or automation services)
  • Coordination
  • Audit trails
  • KPIs
  • Reports
Secondly, consider that, ideally, each artefact must be handled:
  • Explicitly
  • As a set of microservices
  • Via APIs
  • With versioning 
  • By a specialized OTS tool, e.g. data structures are handled by a database, processes are handled by a BPM-suite tool
  • In a Domain Specific Language (DSL), e.g. BPMN for processes, DMN for rules
  • Over its whole life cycle
Thirdly, understand which specialised tool handles each artefact (a sketch follows this list):
  • Coordination as explicit and machine-executable processes via a BPM-suite tool
  • Roles via an access management tool
  • Documents via an ECM product
  • Automation fragments as scripts in an interpretive language and execution robots
  • Audit trail and reports via BI tools
  • etc.
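A minimal sketch in Python of keeping the artefact-to-DSL/tool mapping explicit, versioned and reachable via APIs; the registry entries, API paths and version numbers are illustrative examples, not product recommendations:

# An explicit artefact registry; entries are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class ArtefactHandling:
    artefact: str
    dsl: str          # domain-specific language used to express the artefact
    tool: str         # specialised off-the-shelf tool that handles it
    api: str          # where its functionality is exposed
    version: str

REGISTRY: List[ArtefactHandling] = [
    ArtefactHandling("coordination", "BPMN", "BPM-suite tool", "/api/processes", "1.4.0"),
    ArtefactHandling("rules", "DMN", "decision management tool", "/api/decisions", "2.0.1"),
    ArtefactHandling("documents", "n/a", "ECM product", "/api/documents", "1.0.0"),
]

def handler_for(artefact: str) -> ArtefactHandling:
    return next(a for a in REGISTRY if a.artefact == artefact)

print(handler_for("rules").tool)   # -> decision management tool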
Fourthly, prepare two common “pools” for future tools, services and microservices:
  • a technological pool for generic off-the-shelf products; their functionality is available via APIs
  • an enabling pool for services, microservices and tools which are a) specific for the particular organisation and b) potentially reusable within the organisation; their functionality is available via APIs
For each monolith application, sort its functionality into the two common pools and an individual pool (see the sketch below).
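A minimal sketch in Python of such sorting; the classification criteria and the example units are illustrative placeholders:

# Sorting units of functionality from a monolith into the two common pools
# and an individual pool; criteria and example units are illustrative.
def assign_pool(unit: dict) -> str:
    if unit.get("generic_off_the_shelf"):
        return "technological pool"
    if unit.get("reusable_within_organisation"):
        return "enabling pool"
    return "individual pool"

monolith_units = [
    {"name": "document storage", "generic_off_the_shelf": True},
    {"name": "customer risk scoring", "reusable_within_organisation": True},
    {"name": "legacy pricing screen"},
]

for unit in monolith_units:
    print(unit["name"], "->", assign_pool(unit))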


As a result, we get a corporate unified business execution platform which standardises and simplifies the core elements of the corporate-wide computing system. For any elements outside the platform, new opportunities should be explored using agile principles. These twin approaches should be mutually reinforcing:
  • The platform frees up resources to focus on new opportunities, while successful agile innovations are rapidly scaled up when incorporated into the platform.
  • An agile approach requires coordination at the system level.
  • To minimise duplication of effort in solving the same problems, there needs to be system-wide transparency of agile initiatives.
  • Existing elements of the platform also need periodic challenge. Transparency, i.e. publishing feedback and the results of experiments openly, will help to keep the pressure on the platform for continual improvement as well as short-term cost savings.
Obviously, do not forget about a good application architecture - http://improving-bpm-systems.blogspot.ch/2017/05/beauty-of-microservices-ebanliing.html and http://improving-bpm-systems.blogspot.ch/2016/08/better-application-architecture-apparch.html


Thanks,
AS

Other blogposts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservices