09. Service Validation and QoS


Version 0.1

Last edited by Jean-Luc Garnier, February 23, 2019, 17:00.


This chapter is the result of collaborative work by experts who are members of the Technical Committee “Systems of Systems, Services – Architecture and Engineering” (CT 3SAI) of the French Chapter of INCOSE (AFIS). It is dedicated to Service Validation and Quality of Service, and it is intended to be the 9th chapter of a “Service Systems Engineering – Body of Knowledge”.

Service Systems Engineering Body of Knowledge Project Manager:

  • Pierre-Olivier Robic - Thales

AFIS CT 3SAI: Leaders

  • André Ayoun – Airbus Defense & Space
  • Claude Pourcel – École d'Ingénieurs en Génie des Systèmes Industriels (EIGSI)


Introduction

The objective of this chapter is to address the elements relating to the extended concept of validation from a service perspective, and to define the criteria specific to service validation compared to traditional (technological) system validation.

In this chapter, we want to extend the concept of validation to the concept of satisfaction including non-functional features.

We also try to reconcile approaches coming from engineering with approaches coming from the human sciences, marketing and economics.

Service Validation

« Foundation Service Validation: a service architecture perspective » (EG)

Service validation context

“Dubito ergo cogito, cogito ergo sum, sum ergo Deus est” / “I doubt, therefore I think; I think, therefore I am; I am, therefore God exists.”

René Descartes, in the Discourse on the Method

A service is VALIDATED at the moment of measurement, without any implication of continuous validation… This leads to the need for continuous monitoring and, when possible, for a capability to provide analytics and forecasting.

An implicit assumption is that the concerned system to be designed (the system of service) must include (a minimal sketch follows this list):

  • A management system
  • A monitoring system
  • An “actionable” system
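
As a minimal sketch (Python, with purely hypothetical names such as Measurement and manage), the fragment below illustrates how these three sub-systems could cooperate: the monitoring system produces measurements, the management system compares the latest one with an agreed commitment, and the “actionable” system applies a correction when the service drifts.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Measurement:
    timestamp: float   # seconds since the start of observation
    value: float       # observed QoS value (e.g. response time in ms)


def manage(samples: List[Measurement], commitment: float,
           act: Callable[[str], None]) -> None:
    """Management system: evaluate the latest sample against the commitment
    and trigger the actionable system when the service drifts."""
    if not samples:
        return
    latest = samples[-1].value
    if latest > commitment:
        # The corrective action is domain specific (scale up, reroute, notify ...).
        act(f"QoS drift: observed {latest} exceeds committed {commitment}")


# The monitoring system would append Measurement objects continuously;
# a static list is used here for illustration only.
history = [Measurement(0.0, 180.0), Measurement(60.0, 240.0)]
manage(history, commitment=200.0, act=print)
</syntaxhighlight>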

In our context, we will have to define the different stakeholders (Provider, Suppliers/Sub-Partners, Acquirer & Users at a minimum) and to analyse their respective needs.

In addition, we will have to structure the gaps and overlaps regarding the needs and constraints of the different stakeholders.

Besides, in this chapter we assume that most service offers rely on several stakeholders, leading to a system of systems of services providing the service.

Elements of perception theory

Bias from a psychological perspective, including emotional stability (in the psychological sense)

Chapter9-PsyPerspective.png

Figure 1: Psychology perspective

Service (effect) Validation

  • Compliant with co-delivery / co-creation
  • Encompasses usage by the end user
  • Note: responsibility assessment
    • See the MDAL approach from LCC
  • Close to REX (return of experience) activities

Military: see the adoption / appropriation concept over the lifetime

Service Capability Verification

  • Consumed Service (could be in simulation)

Service Capacity Verification

  • Representative environment

Service Valuation

  • “Optimize” perspective
  • Needs learning
  • Corresponds to multiple cycles
  • Need to implement use cases with specific stakeholders
    • HUMS use case: predictive maintenance
    • Defense
    • Vehicles
  • Need to check capability and capacity at system and operational level in SYS-EM
    • Memo E
  • Need to introduce an intermediate “Justification File” and “VV”
  • Need to introduce the quantity of information for bias (explicit/implicit, verbal/non-verbal, known/unknown, media & understanding …)
  • Need to clarify the temporal aspects
  • Need to address upfront:
    1. Expected effect in a dedicated context
    2. Capacity
    3. Capability
    4. Solution
    5. System of Service
  • Need to provide an anchor in UX techniques and storytelling (cf. service design, cf. bias prevention)

Note – Lemma

  • The concept of SoS is not fully relevant in a service composition
  • A service needs a holistic approach to be undertaken in terms of effect
  • A reductionist approach is only applicable at the System of Service level, including in a system of (systems of service) perspective

Note – valuation (quantification of value): Lemma

  • Any valuation supposes the ability to define a scale, meaning a zero and a range (see the sketch after this note).
  • In a service perspective, including multiple stakeholders (especially in an acquisition organisation schema) and contexts, the art of service valuation will be to reach an acceptable consensus (in the ISO meaning) in terms of expectations regarding the provided value.
  • As a simplification, the provided value will be higher as the provision is more compliant with the “request”, and, in conclusion, as the provision better achieves the expected goals / effects.
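
A minimal sketch (Python, illustrative names and figures) of this lemma: valuation presupposes an agreed scale (a zero and a range), and consensus is approximated here as all normalised stakeholder valuations staying within an agreed tolerance band.

<syntaxhighlight lang="python">
from typing import Dict


def normalise(value: float, zero: float, full: float) -> float:
    """Map a raw valuation onto the agreed 0..1 scale."""
    return (value - zero) / (full - zero)


def acceptable_consensus(valuations: Dict[str, float], zero: float,
                         full: float, tolerance: float) -> bool:
    """True when all stakeholders' normalised valuations stay within
    `tolerance` of their mean, i.e. an acceptable (not unanimous) consensus."""
    scores = [normalise(v, zero, full) for v in valuations.values()]
    mean = sum(scores) / len(scores)
    return all(abs(s - mean) <= tolerance for s in scores)


# Example: provider, acquirer and user rate the provided value on a 0..10 scale.
print(acceptable_consensus({"provider": 8, "acquirer": 7, "user": 6},
                           zero=0, full=10, tolerance=0.15))   # True
</syntaxhighlight>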

Elements of narration

Introduction & Principles (Methods)

To answer the very simple question: “Which objects do you touch when having a cup of coffee (or tea, or …) in the morning?”, you probably react in the following way.

You think about the objects that are close to your cup of coffee, and you will enumerate things like: “spoon, table, sugar, coffeepot (or teapot) …”. This kind of reasoning could be called spatial reasoning. You look at the neighbourhood of the given object, enlarging as far as possible the set of objects close to the cup of coffee.

Another way to approach such a question is to use temporal reasoning, meaning thinking of what occurs before, during and possibly after you have your cup of coffee.

Such an approach forces you to think about the context in which the event takes place. It is ‘in the morning’, OK, but are you waking up or, on the contrary, going to sleep? You could be a night worker, and thus, coming back from your job, take a cup of coffee before going to bed! When you take this cup of coffee, are you at home, in a bar, in a B&B during a weekend or holidays …?

Thus, telling the story that includes the moment when you have this cup of coffee will inform you about the entire set of objects that you will touch. Moreover, there is no one unique story which includes the fact that you take a cup of coffee in the morning, so selecting the appropriate story is also part of the process.

For example, previously the spatial approach identified the coffeepot as an important piece in the neighbourhood of ‘taking a cup of coffee’. When switching to temporal reasoning, it is quite clear that at some moment the coffeepot will appear in the story. However, it will be handled by someone who will use it to pour the coffee into the mug. And the one who handles it has a major significance in the way of telling the story. If you are alone at home, doing it yourself, you will be the one who uses the coffeepot to pour the coffee into your mug. If you are spending a weekend in a nice hotel, or taking a coffee in a bar, a waiter/waitress or a barman/barmaid will take care of pouring the coffee and thus handle and use the coffeepot. These may seem to be insignificant details, perhaps, if we consider only the simple story of having a cup of coffee. But in the frame of service engineering they call for a deeper perspective and analysis: “Who owns the coffeepot?”, “Who is responsible for pouring the coffee?”, “Which protocol/procedure is to be used (pouring tea correctly could require some kind of expertise)?”, “Has the waiter/waitress pouring the coffee been properly trained?”, “Does he/she smile at you when pouring the coffee?” …

In a spatial approach, where we only identify objects and some spatial relationships (close to, connected to …), the reasoning could be a bit limited for serving the specification and the proper comprehension of what the service is. Temporal reasoning, through storytelling, overcomes these limitations.

The art of storytelling appears to be a major skill for service engineers or architects. Telling a story forces you to serialise activities and events, and to give a logic (a causal order) to the manipulated items. It is also a way to make explicit the value brought to the various involved stakeholders. A service implies that at least two parties are involved (the beneficiary and the provider), and sometimes more (e.g., a mediator could be needed). The resulting proposed scenario has to explicitly demonstrate that the major implied stakeholders find some reasonable value (or benefit) in the scenario, unless you are defining an undesired scenario.

So, is there a unique way to write a scenario? For sure, no. Let’s figure out what could be significant alternatives for telling the same story in different ways. In fact, these are the different ways of writing a narration in literature, which can be grasped by asking ourselves: “Who is the narrator?”

  • The narrator is omniscient. He knows and sees everything, so he only has to explain things. This is the classical position we take when writing scenarios in an engineering context. However, this apparently strong position can be complemented by other ones, thus allowing the scenario to be rewritten in other ways.
  • The narrator could be external, and in some ways not knowing anything. He discovers the events one after the other. He tries to be an objective observer and invites you to follow him on his discovery journey. This is the case when the story is about the investigation of a crime, and the narrator is the policeman or the sleuth. This could be an interesting point of view when what is at stake is the validation of the service (“Does it deliver the proper value to all implied stakeholders?”).
  • The narrator could be internal, thus being one of the main protagonists. If there is only one selected narrator, it could lead to completely fooling the reader (e.g., The Murder of Roger Ackroyd by Agatha Christie, or the movie The Usual Suspects directed by Bryan Singer and written by Christopher McQuarrie). We will prefer to concurrently define several alternatives of the same scenario by changing the narrator, going one by one through the implied stakeholders (e.g., the revisited versions of Little Red Riding Hood, such as the independent movie Hoodwinked!). By doing this, each story should exhibit which value the given narrator obtains from running this scenario. In other words, if you are not able to tell the story in a positive way from the point of view of a specific involved stakeholder, you could be doubtful that the service definition is well balanced and that everyone will be committed.

Are there techniques or methods for writing good scenarios? Literature, theatre and cinema have already provided many techniques and are a quite inexhaustible repository of good examples of scenarios.

In our engineering domain, we may suggest applying the INVEST mnemonic, created by Bill Wake in the frame of agile software projects as a reminder of the characteristics of a good-quality Product Backlog Item, commonly written in user story format (a minimal illustration follows the list).

  • Independent: the story must be actionable and complete on its own (it should not be inherently dependent on another story).
  • Negotiable: allowance for change is built in.
  • Valuable: it actually delivers value to a customer/user/stakeholder.
  • Estimable: you have to be able to size it.
  • Small: the story needs to be small enough to be able to estimate and plan for easily. If too big, break it down into smaller stories.
  • Testable: the story must have a test it is supposed to pass in order to be considered complete; this provides a way to assess it.
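
A minimal sketch (Python, hypothetical fields) of an INVEST check applied to a user story; each criterion is simply recorded as a boolean answered during review, which is only one possible way of operationalising the mnemonic.

<syntaxhighlight lang="python">
from dataclasses import dataclass


@dataclass
class StoryReview:
    story: str
    independent: bool   # actionable and complete on its own
    negotiable: bool    # allowance for change is built in
    valuable: bool      # delivers value to a customer/user/stakeholder
    estimable: bool     # the team is able to size it
    small: bool         # small enough to estimate and plan easily
    testable: bool      # has a test it must pass to be considered complete

    def passes_invest(self) -> bool:
        return all([self.independent, self.negotiable, self.valuable,
                    self.estimable, self.small, self.testable])


review = StoryReview(
    story="As a hotel guest, I want coffee served in my room before 8am.",
    independent=True, negotiable=True, valuable=True,
    estimable=True, small=True, testable=True)
print(review.passes_invest())  # True
</syntaxhighlight>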

Take away:

There is a major difference here when it relates to service engineering. We do not only speak in terms of functions (and functional exchanges) and components (and the links/connections between those components), as in a traditional systems engineering approach, but in terms of activity, responsibility, exchange … “Who does what, for whom and when?” are some of the stringent questions arising. Temporal reasoning puts the emphasis on these topics, and thus fosters the engineering of services.

  • Think TIME and SPACE (movie versus picture)!
  • Think scenarios!

Concepts Layers (perspective / projection)

Technical / Infrastructure <==> “Service transaction” — Action JLW

  • Infrastructure perspective: energy, information, material
  • Processing: people
  • Economic perspective: finance (money), physical (goods & services)

Similar to the Operational, Service and System views, cf. the former NAF? — Action EG

  • Propose an alignment or analogy and identify the gaps (energy, material, economic perspective)
  • Alignment with OCD

Extract from NATO AF V3: Simplified Meta-Model

Chapter9-NAFv3SMM.png

Figure 2: NAFv3 Simplified Meta-Model

The purpose of the NAF V3 System Service Provision view (NSV-12) is to illustrate:

  • Which system contributes to the provision of which services? It is a mapping of system building blocks to services.
  • Which organization owns the system which provisions the service? It is a mapping of organization to provided services.

Therefore, the service provision view identifies the system resources as well as the organizational resources required for the delivery of each service.

To ease the verification of models with respect to the specified service chains, each service provision view has been created with respect to a complete service chain.
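
As a minimal sketch (Python, with fictitious system, service and organisation names), the two NSV-12 mappings can be represented as plain dictionaries, from which the resources required for the delivery of each service are derived:

<syntaxhighlight lang="python">
from collections import defaultdict

system_provides = {           # system building block -> provided services
    "BookingPortal": ["Reservation"],
    "PaymentGateway": ["Reservation", "Billing"],
}
organisation_owns = {         # organisation -> owned systems
    "TravelCo": ["BookingPortal"],
    "BankCo": ["PaymentGateway"],
}

# Invert and join: for each service, list the (organisation, system) resources.
resources_per_service = defaultdict(list)
for org, systems in organisation_owns.items():
    for system in systems:
        for service in system_provides.get(system, []):
            resources_per_service[service].append((org, system))

print(dict(resources_per_service))
# {'Reservation': [('TravelCo', 'BookingPortal'), ('BankCo', 'PaymentGateway')],
#  'Billing': [('BankCo', 'PaymentGateway')]}
</syntaxhighlight>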

Viewpoint, perspective & narration

Bias — Action EG

  • Relevant environment of operation
    • Reference to the CONOPS for the theatre of operations in the OCDs
  • “VABF, VSR” versus “Service Valuation”
    • Reference: C4I
    • Explain that these are an a priori validation and an a posteriori validation, and therefore not continuous
    • See the link with SLAs
    • POR to ask Patrick E.
  • Relevant solution (constituting systems): ask JLG for the document on solution viewpoints (client vs supplier) developed with J2R

Operational Experimental measure

Survey: hot / cold ==> measurement context, especially regarding the timeline; testimony bias, cf. investigation, cf. experimental psychology — Action POR

Measure theory elements — see the ongoing QoS discussion with TA, YC & DR

Detailed bias analysis in surveys; however, surveys are addressed in sub-chapter 5 by DR

Note: remember to avoid surveying just after delivery; keep surveys to the strict minimum, otherwise there is an adherence problem

See rating/voting systems, e.g. Booking.com

  1. Reminder of experimental psychology

  2. Link with perception and bias (brief)

  3. Techniques

    1. Interview

    2. Shadowing

Service Validation versus System Validation (POR, DR, TA, PE)

Introduction

Before being used for service delivery, the service systems have to be designed, developed and deployed. This leads to the need for service “IVTV” (Integration, Verification, Transition, Validation, in a systems engineering perspective). However, service needs, driven by customers/consumers, the environment, technology, business trends…, cannot be considered stable during the whole service lifetime. Consequently, service systems engineering should:

  • Look for emerging needs and translate them into extended capacities (emergence and customer retention)
  • Accommodate systems for evolutions of services (growth potential)
  • Be adaptive to support the continuous evolution of the deployed service systems (operational agility)

In this context, and due to the need for early analysis and evaluation, modelling & simulation techniques have to be considered in order to obtain a priori feedback and insights representative of the expected usage. One of the key challenges will be to simulate the usage of the system. Operational data from previous similar projects can be used as data sets for simulation. Even in the run phase, the data sets can be used to refine the models for future usage. In this context, one of the key aspects will be data quality. In addition, in the perspective of a contractual or legal agreement, these data sets have to be agreed by the stakeholders as a reference.
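
A minimal sketch (Python, with synthetic figures standing in for an agreed reference data set) of such a usage simulation: hourly demand is generated from a rate observed on a previous, similar project and compared with the planned capacity of the service system.

<syntaxhighlight lang="python">
import random

random.seed(1)                      # reproducible, since the data set must be agreed
observed_rate_per_hour = 120        # assumption taken from the reference data set
planned_capacity_per_hour = 135


def hourly_demand(rate: float) -> int:
    """Count Poisson arrivals in one hour using exponential inter-arrival times."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(rate)   # mean inter-arrival time = 1/rate hours
        if t > 1.0:
            return count
        count += 1


hours = 24 * 30                         # one simulated month of operation
overloaded = sum(hourly_demand(observed_rate_per_hour) > planned_capacity_per_hour
                 for _ in range(hours))
print(f"Hours over capacity: {overloaded}/{hours} ({overloaded / hours:.1%})")
</syntaxhighlight>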

As an example, an analogy can be made with the MDAL (Master Data Assumption List) approach used for Life Cycle Cost activities. For example, from a UK perspective, the proposed process is:

Chapter9-MDAL.png

Figure 3: Master Data Assumption List

In this perspective, the elements of agreement are key.

Besides, data quality is also key. For these aspects, xxxxxx (Note: ask Juliette M. for the elements on data quality.)

Challenge: what is specific to service validation compared to solution validation?

In this chapter, our perspective is to focus on the outcome of the service, or on its effect. This leads to a key distinction between service and system validation:

Chapter9-ServicevsSystem.PNG

Figure 4: Service vs System of Services

Stakeholders perspective / Viewpoints versus Service / System

Stakeholders

Stakeholder identification and context definition are key to ensuring stakeholder satisfaction. Relations, influence and needs have to be clarified and formalised as much as possible.

Stakeholder analysis is the technique used to identify and assess the influence and importance of key people, groups of people, or organisations that may significantly impact the success of your activity or project (Friedman and Miles 2006).

Methodology:

1 - Identify external stakeholders

Too often, the word ‘customer’ replaces the word stakeholder and tends to focus on the customer as the only external stakeholder.

We should recognise that there are at least two distinct external stakeholders: the client and the user(s). Here is a (surely non-exhaustive) list of potential external stakeholders to consider:

  • Customer: either the sponsor, or a different person or organization who pays for the project
  • Users: there may be different kinds of users depending on the usage. End users, administrators, service operators, technical operators, maintenance operators, front-office and back-office operators, help-desk, etc.
  • Owners, investors, sellers
  • Business partners, contractors
  • Competitors (Don’t forget those. You are not alone …)
  • Suppliers, Vendors
  • Regulators
  • Governmental institutions
  • Special interest groups
  • The public, Society
  • etc.

2 - Identify internal stakeholders

The following is a (surely non-exhaustive) list of potential internal stakeholders to consider:

  • Project sponsor
  • Organisational and functional groups (finance, marketing, quality, etc.)
  • Subject matter experts
  • Steering committees
  • Project Management Office
  • Team members
  • etc

3 - Characterize and categorise your stakeholders

All stakeholders are not the same, and they do not play the same role in the same way with respect to the considered opportunity. They can have different:

  • Importance: refers to those stakeholders whose problems, needs and interests are a priority for the organisation. If these important stakeholders are not assessed effectively, then the project cannot be deemed a success.
  • Influence: refers to how powerful a stakeholder is, in terms of influencing the direction of the project and outcomes.

Other attributes or qualifications could be considered:

  • Support level: refers to how strong the support provided by the stakeholder may be.
  • Activity level: refers to the amount of effort that the stakeholder will spend on the considered opportunity.
  • Contribution: refers to the depth and value of the  information, counsel or expertise of those stakeholders with regard to the considered opportunity.
  • Legitimacy: refers to how legitimate the stakeholder’s claim for engagement is.
  • Willingness to engage: refers to how willing the stakeholder is to engage.
  • Necessity of involvement: Is this someone who could derail or delegitimize the process if they were not included in the engagement?
  • etc.

Some questions that may help you in assessing the importance and the influence of stakeholders are the following:

  • Who is directly responsible for decisions on issues that are important to the project?
  • Who holds a position of responsibility within interested organizations?
  • Who is influential in the project area?
  • Who will be affected by the project?
  • Who will promote/support the project, provided that they are involved?
  • Who will obstruct/hinder the project if they are not involved?
  • Who has been involved in the past?
  • Who has not been involved up to now but should have been?
Manage your stakeholders accordingly. Depending on their category, define how best to engage them. There are many frameworks to help you in your strategy for engaging stakeholders. A basic one relates to a quadrant of importance × influence, leading to four kinds of engagement tactics; see the picture below and the sketch that follows it.

Chapter9-PowerVSInterest.PNG

Figure 5: Power vs interest grid
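
A minimal sketch (Python, with illustrative thresholds and scores) of the importance × influence quadrant: each stakeholder is mapped to one of four engagement tactics.

<syntaxhighlight lang="python">
def engagement_tactic(importance: float, influence: float) -> str:
    """Classify a stakeholder scored 0..10 on each axis into a tactic."""
    high_importance = importance >= 5
    high_influence = influence >= 5
    if high_importance and high_influence:
        return "Manage closely"
    if high_influence:
        return "Keep satisfied"
    if high_importance:
        return "Keep informed"
    return "Monitor with minimum effort"


# Illustrative scores only: (importance, influence) on a 0..10 scale.
stakeholders = {"End user": (9, 3), "Regulator": (4, 8), "Supplier": (6, 6)}
for name, (imp, infl) in stakeholders.items():
    print(name, "->", engagement_tactic(imp, infl))
</syntaxhighlight>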

  • A more elaborate framework is the ladder of stakeholder management and engagement (Friedman and Miles 2006:162).

Chapter9-FriedmanandMiles2006.png

Figure 6: Ladder of stakeholder management and engagement

  • Depending on the engagement tactics, you should select appropriate techniques or formats of engagement, e.g. (from J. Morris and F. Baddache, BSR, 2012):

Chapter9-MorrisandBaddache2012.png

Figure 7: Formats of engagement

4 - Share a stakeholder map

It could be worthwhile to represent the various identified stakeholders in a schema or a picture to share this information with the team. The following are some examples.

Chapter9-InformationSharing.png

Figure 8: Information sharing within a team

Chapter9-StakeholderMap.png

Figure 9: Stakeholder map

This representation could be enhanced by categorising them and exhibiting the various relationships between identified stakeholders. Here is an example.

Chapter9-StakeholderRelationships.png

Figure 10: Relationships between stakeholders

Among the various additional tools and techniques, we have identified some references useful for framing the environment. Among them we can find:

  • OMG/BMM (Business Motivation Model)

Chapter9-OMG-BMM.png

Figure 11: Business Motivation Model (OMG)

  • SPECTRED

The SPECTRED method is used to analyse a context, for example during a country study.

SPECTRED is the acronym of the different domains of the environment to take into account in order to carry out a complete study:

  • S for Social: organisation of society, way of life, religion, dominant activity…
  • P for Political: regimes, stability, power…
  • E for Environmental: geography, infrastructure, borders, climate…
  • C for Cultural: level of education, traditions, customs, art…
  • T for Technological: computer networks, research & development, telephony…
  • R for Regulatory: laws, standards, labels, labour law, contracts…
  • E for Economic: cycles, interest rates, unemployment, disposable income, all the indicators…
  • D for Demographic: population, life expectancy, population structure…

Once the information has been gathered, it must be classified into opportunities or threats for the company, in order to decide on the feasibility of, for example, an export project to that country.

  • PESTEL

Chapter9-PESTEL.png

Figure 12: PESTEL

  • FIC : Australia – Fundamental Inputs to Capability

Chapter9-FIC.png

Figure 13: Fundamental Inputs to Capability

  • DOTMLPF: US DoD

Chapter9-DOTMLPF.png

Figure 14: US - DOTMLPF

  • DOTMLPFI: NATO

Chapter9-DOTMLPFI.png

Figure 15: NATO - DOTMLPFI

  • TEPIDOIL: UK MoD

Chapter9-TEPIDOIL.png

Figure 16: UK-TEPIDOIL

  • PRICIE: Canada

Chapter9-PRICIE.png

Figure 17: Canada - PRICIE

  • Synthesis:

Chapter9-ContextFactors.png

Figure 18: Synthesis of the context factors

Process perspectives

System Perspective

Integration

System integration is the bringing together of the parts of a system in a logical and controlled manner and the evaluation of the system design, behaviour, interactions and performance.

Integration “builds the system”.

System verification

System verification confirms, through the provision of objective evidence from inspection, analysis, demonstration or testing, that the requirements against which the system has been designed are fulfilled.

Verification ensures that “you built the system right”.

Validation

Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.

Validation ensures that “you built the right thing”.

Service Perspective

Service Integration

  • Service integration is the bringing together of the parts of a system of systems.
  • Integration “builds the service”

Service verification

  • Service verification confirms, through the provision of specific tools (inspection, analysis, demonstration, testing …), that the requirements against which the service has been designed are fulfilled.
  • Verification ensures that “you are running the service right”.
    • Compliant against the (engineering) plan

Service Validation

  • Validation needs to be, and is, performed against explicitly defined commitments, consistent with the operational needs (this could be a set of commitments associated with a context [mission]); this will lead to a segmentation of the continuum
  • It needs to be framed (legally and contractually), defining the commitments and the responsibilities; the borders / limits & exclusions need to be defined<ref>Note : consideration of the reference laws & court of competent jurisdiction</ref>
  • Service validation is a continuous and dynamic activity (related to the environment, the context and the stakeholders); a minimal sketch of such near-continuous sampling follows this list
  • Objective evidence: in the case of a service, effect and satisfaction cannot be fully objective, but they can be made as objective as reasonably possible, for example thanks to evidence agreed among the stakeholders or interested parties
    • Objective evidence: OK for the System of Service (the SoI)
    • Objective evidence: measurable
    • Sampling: close to continuous
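
A minimal sketch (Python, hypothetical values) of “sampling close to continuous”: the validation verdict is re-evaluated on a rolling window of observations against a commitment that the stakeholders have explicitly agreed as objective evidence.

<syntaxhighlight lang="python">
from collections import deque

WINDOW = 5                       # number of recent samples kept (assumption)
COMMITTED_SATISFACTION = 0.8     # agreed threshold, e.g. share of satisfied requests

window = deque(maxlen=WINDOW)


def record_and_validate(satisfied: bool) -> bool:
    """Append one observation and return the current validation verdict."""
    window.append(satisfied)
    ratio = sum(window) / len(window)
    return ratio >= COMMITTED_SATISFACTION


# The verdict changes over time as the context and observations change.
for observation in [True, True, False, True, True, True, False, False]:
    print(record_and_validate(observation))
</syntaxhighlight>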

Validation in an acceptance perspective

  • Confirmation, through the provision of objective evidence, that the requirements for a specific intended effect have been fulfilled.

Note: are there requirements depending on the Operational Capability (Capacity?)?

This current definition addresses more a “validation of the specification against operational needs”.

See the “military characterisation sheet” (fiche de caractérisation militaire): does it contain scenarios?

Change proposal

  • Validation ensures that “you are running the right service”.
    • Meaning of “right”: conform to the needs or to the request?
    • Acceptance perspective: compliant with the explicit request

Operational Analysis

Operational Concept Description

Chapter9-OCDconcepts.png

Figure 19: OCD concepts

Chapter9-OCDtoTestCases.png

Figure 20: From Use Case to Test Case

Competence

Chapter9-CSEKA.png

Figure 21: Competence

Measures and metrics

Objective metrics of the effect (a minimal illustration follows the list):

  • SLA, commitments
  • QoS
  • Performances (including, e.g., technical ones at system level, process performances, and even competencies (UK style))
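
A minimal sketch (Python, synthetic figures) of such objective effect metrics checked against an assumed SLA commitment: availability over the period and a response-time percentile, both of which are measurable, samplable quantities.

<syntaxhighlight lang="python">
import math


def availability(total_minutes: int, downtime_minutes: int) -> float:
    """Share of the period during which the service was up."""
    return (total_minutes - downtime_minutes) / total_minutes


def percentile(values, q: float) -> float:
    """Nearest-rank percentile, to avoid any external dependency."""
    ordered = sorted(values)
    rank = max(1, math.ceil(q * len(ordered)))
    return ordered[rank - 1]


# Assumed SLA commitment: 99.5 % availability, 95th percentile response time <= 300 ms.
month_minutes = 30 * 24 * 60
downtime = 90                                          # minutes of recorded outage
response_times_ms = [120, 180, 250, 400, 210, 190, 230, 260, 310, 220]

print("Availability met:", availability(month_minutes, downtime) >= 0.995)   # True
print("Latency met:", percentile(response_times_ms, 0.95) <= 300)            # False -> breach
</syntaxhighlight>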

MoE / MoP

From Technical Measurement, A Collaborative Project of PSM, INCOSE, and Industry

Technical Report Prepared by Garry J. Roedler, Lockheed Martin, Cheryl Jones, US Army

27 December 2005, Version 1.0

MOEs are "operational" measures of success that are closely related to the achievement of mission or operational objectives; i.e., they provide insight into the accomplishment of the mission needs independent of the chosen solution

MOPs characterize the physical or functional attributes relating to the system operation; i.e., they provide insight into the performance of the specific system

TPMs measure attributes of a system element within the system to determine how well the system or system element is satisfying specified requirements

KPPs are a critical subset of the performance parameters representing the most critical capabilities and characteristics

Chapter9-RelationshipMesaures.png

Figure 22: Relationship of the Measures
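
A minimal sketch (Python, fictitious measures) of the relationship pictured above: each MOE is refined by MOPs, and MOPs are monitored through TPMs; the check below simply verifies that each MOP supporting an MOE is monitored by at least one TPM.

<syntaxhighlight lang="python">
moe_to_mops = {                 # MOE -> supporting MOPs (illustrative names)
    "Mission availability": ["System operational availability", "Mean repair time"],
}
mop_to_tpms = {                 # MOP -> monitoring TPMs (illustrative names)
    "System operational availability": ["MTBF of the radar subsystem"],
    "Mean repair time": ["Fault-isolation time"],
}

for moe, mops in moe_to_mops.items():
    traced = all(mop_to_tpms.get(mop) for mop in mops)
    print(f"{moe}: traceable down to TPMs -> {traced}")
</syntaxhighlight>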

Measures of Effectiveness (MOEs)

The "operational" measures of success that are closely related to the achievement of the mission or operational objective being evaluated, in the intended operational environment under a specified set of conditions; i.e., how well the solution achieves the intended purpose. (Adapted from DoD 5000.2, DAU, and INCOSE)

MOEs, which are stated from the acquirer (customer/user) viewpoint, are the acquirer's key indicators of achieving the mission needs for performance, suitability, and affordability across the life cycle. Although they are independent of any particular solution, MOEs are the overall operational success criteria (e.g., mission performance, safety, operability, operational availability, etc.) to be used by the acquirer for the delivered system, services, and/or processes.

MOEs focus on the system's capability to achieve mission success within the total operational environment. MOEs represent the acquirer's most important evaluation and acceptance criteria against which the quality of a solution is assessed. They are specific properties that any alternative technical solution must exhibit to be acceptable to the acquirer (i.e., the Standard of Acceptance). In addition to using MOEs to compare and evaluate alternatives, they can also be used for sensitivity analysis of performance from variations of key assumptions and parameters of the potential alternatives. They are also important for test and evaluation because they determine how test results will be judged. Since test planning is directed toward obtaining these measures, it is important that they be defined early.

MOEs are used to:

  • Compare operational alternatives

  • Investigate performance sensitivities to changes in assumptions from the user's view

  • Define operational requirement values

  • Evaluate achievement of key operational performance

  • Serve as the Standard of Acceptance for the technical solution

Measures of Performance (MOPs)

The measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specified testing and/or operational environment conditions. (Adapted from DoD 5000.2, DAU, INCOSE, and EPI 280-04, LM Integrated Measurement Guidebook)

MOPs measure attributes considered as important to ensure that the system has the capability to achieve operational objectives. MOPs are used to assess whether the system meets design or performance requirements that are necessary to satisfy the Measures of Effectiveness (MOEs). MOPs should be derived from or provide insight for MOEs or other user needs. The relationship between MOEs and MOPs is illustrated in section 3.2.6. MOPs are derived from the supplier's viewpoint and look at how well the delivered system performs or is expected to perform against system level requirements. They address an aspect of the system performance or capability. MOPs often map to Key Performance Parameters (KPPs) or requirements in the system specification. They are expressed in terms of distinctly quantifiable performance features, such as speed, payload, range, or frequency. They are progressively monitored and used during project execution as input to management, including as indicators to aid managing technical risks.

MOPs are used to:

  • Compare alternatives to quantify technical or performance requirements as derived from MOEs
    • Support assessment of system design alternatives
    • Support assessment of the technical impact of proposed system change alternatives

  • Investigate performance sensitivities to changes in assumptions from the technical view

  • Refine KPP definitions

  • Assess the achievement of KPPs

This guide treats Measures of Suitability (MOS) as a type of MOP and thus has not included separate guidance. The MOS specifically measures the extent to which the technical solution will integrate into the operational environment. As such, they are often focused on the usability and interoperability aspects of the system, but may also include other quality factors. In some cases, it may be necessary to define and track MOSs separately from the MOPs.

Service Validation WRT Digital & HUMS perspective

A consistent and satisfying user experience is more important than ever. Digital transformation has changed how brands approach customers and vice versa.

Ensuring the best in customer support, as well as providing customers with a well-balanced user experience, is the key to engaging them and to offering targeted services and benefits.

The best way to achieve this is through proper utilization of Digital Key Technologies such as Big Data or IoT.

Our use of technology has created an explosion of data which can be analysed with the use of AI and computational power of modern computers to offer meaningful insights into the requirements and expectations of customers.

In addition, some classical engineering methods and principles are reinforced in the digital services perspective, such as DevOps, continuous development and continuous optimisation, Agility, Lean, and Test and Learn.

HUMS:

In terms of Service Delivery, in a Support to Equipment Services perspective, the characteristics of the product in use are key. Indeed, intrinsic characteristics such as reliability are a key driver for dimensioning the support system and, as a consequence, a key driver of operational availability.

A reliability model based on the operational profile appears as a must-have. In order to improve the accuracy of the models, and to go further, HUMS systems appear as a key enabler (see the sketch below).
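
A minimal sketch (Python, illustrative figures) of why reliability and the support system jointly drive operational availability, using the common approximation Ao = MTBF / (MTBF + MDT):

<syntaxhighlight lang="python">
def operational_availability(mtbf_hours: float, mdt_hours: float) -> float:
    """Ao = uptime / (uptime + downtime), approximated as MTBF / (MTBF + MDT)."""
    return mtbf_hours / (mtbf_hours + mdt_hours)


# Same product reliability, two support-system designs (assumed figures):
print(operational_availability(mtbf_hours=500, mdt_hours=24))   # ~0.954
print(operational_availability(mtbf_hours=500, mdt_hours=6))    # ~0.988 with faster support
</syntaxhighlight>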

Consequently, the need to collect data from the products, and beyond, appears as key in a digital perspective. The need for “HUMS-ready” products is critical.

Due to the huge development of IoT and connected objects, coupled with Big Data, the miniaturisation of sensors and better algorithms (AI), the exploitation of this data now allows efficient synergies. HUMS is one of them. Health and Usage Monitoring Systems (HUMS) is a generic term given to activities that use data collection and analysis techniques to help ensure optimised performance, availability, lifetime, maintainability, etc., and, globally, the total cost of ownership and usage of systems and products.

All platforms and systems (civil as well as defense) are concerned. HUMS is consequently a growing strategic activity, concerning all BGUs and businesses, addressing predictive and/or prescriptive maintenance, fleet management, mission preparation, product enhancement and process enhancement, including product dynamic reconfiguration. This is accelerated even further by the Digital Transformation, as it relies on the four core digital technologies that are at its heart: connectivity and IoT (Internet of Things), Big Data, Artificial Intelligence (AI), and cyber security, in particular data protection.

HUMS implementation relies on a consistent definition of the solution, especially addressing the following key topics: legal (data access and ownership, data valorisation …), cyber security, analytics and AI, and Big Data processing. Therefore, the importance of data policy and data management appears as a key pillar, as does the architecture vision.

In addition to the challenges regarding AI, dealing with maintenance in complex systems requires identifying the set of functions required for the task to be fulfilled, and then choosing, for every function, the algorithm most fit for its completion. The recent trend of clustering these processing operations behind the terms “big data” or “cognitive analytics” can hide the reality of a vast and complex algorithmic ecosystem, where a full-scale analysis system necessarily relies on a sequence of specific algorithms chosen for their functional interest, sometimes even specifically tailored to fit an ad hoc purpose.
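
A minimal sketch (Python, hypothetical processing steps) of this point: a HUMS analysis chain is a sequence of function-specific algorithms (here a trivial cleaning step followed by a trivial smoothing step), each chosen for its functional interest.

<syntaxhighlight lang="python">
from typing import Callable, List

Pipeline = List[Callable[[List[float]], List[float]]]


def clean(signal: List[float]) -> List[float]:
    """Remove obviously invalid sensor readings (placeholder rule)."""
    return [x for x in signal if 0.0 <= x <= 200.0]


def smooth(signal: List[float]) -> List[float]:
    """Three-point moving average as a trivial stand-in for signal processing."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out


def run(pipeline: Pipeline, signal: List[float]) -> List[float]:
    """Chain the function-specific algorithms in order."""
    for step in pipeline:
        signal = step(signal)
    return signal


vibration = [10.2, 10.5, 999.0, 10.9, 11.4, 11.1]   # 999.0 is a sensor glitch
print(run([clean, smooth], vibration))
</syntaxhighlight>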

Note: references

  • PHM Society
  • ICOLIM Article
  • APIA Conference

Academics

?????????????

Through Life Cycle solution concept ( JLG, SF, FP, YC, AH, FC)

Solution

System of Interest + System of Support + System of Operation

System Monitoring

System Management

Solution Lifecycle

Exploitation versus Design Reference

Engineering (EG)

Service Integration concepts (P.Esteve)

Service Architecture (JLG, FP, POR)

Service Value & QoS (YC)

QoS & Service Quality Feature (extended incl. non-functional ones) (YC, AH, POR, DR, AA, TA)

OCD

User Experience & Non Functional Features

Features & References

  • References 
  • Non-functional features

QoS & Service Design - Value Proposition + scoring techniques (YC, AH, POR, DR, AA, TA): Proposed Process & Method to consider non-functional QoS

Design Thinking perspective: Value Proposition Canvas & extension with non-functional QoS

Quantification Process

Scope approach in a SoS of Service perspective (From Basic Service to Integrated Services)

Business Landscape (DR)

Service Product Line (DR, POR, SF, AS for coordination with CT PG)

Composition of services within a service line (synchronisation, variations, …)

Services on a tangible product line

Service Validation (DR, POR)

VP, Pricing,

DEVOPS

Appendix

Thesaurus

  • Effect
  • Adequation / consistency / appropriateness
  • Prospective (MKT perspective)
  • Capacity
  • MoE/MoP
  • Commitment
  • Responsibility
  • Value Chain
  • Vulnerability
  • Value “objectivation”
  • Reserve
  • Interoperability / Openness / Flexibility Agility …
  • Resilience
  • SCHEME / MEME
  • Observation
  • Branding
  • Certainty / Uncertainty
  • Trust
  • Topology
  • Proximity
  • Beliefs
  • Set Based Thinking

Terms, acronyms & definition

Key references

  • Academic & Science Society
    • Psychology
    • Legal
    • Economic
  • Information theory
  • Communication theory
    • Psychology
    • Technical
    • Standards

<references />