Dr Mark Rice, ICT Business Relationship Manager, gives a comprehensive introduction to Test Maturity Model integration (TMMi).
The software industry does not operate in a zero‑defect environment, and, arguably, it never will. In the face of this truism, numerous techniques to reduce the number and severity of defects in software have been developed, with the ultimate, albeit unobtainable, goal of defect elimination. Such optimistic thinking has led to significant improvements in software quality over the past decade, notwithstanding increased software complexity and customer demands.1
One such approach to defect elimination is the maturity model. Broadly, maturity models are structures which state where an organisation sits on a maturity scale, where its failings lie and what should be done to improve the situation using process improvement frameworks. The archetypal maturity model is the Capability Maturity Model Integration (CMMI)2, along with its predecessor, the Capability Maturity Model (CMM).
In the beginning: the Capability Maturity Model Integration (CMMI)
CMMI is a complex and multifaceted model which focuses on organisational maturity and capability in terms of service development, service management and service acquisition3. Software development is the principal subject area of the model, which may be adopted in a continuous or staged form. The former approach places emphasis on capability over maturity and “enables organisations to incrementally improve processes corresponding to an individual process area (or group of process areas) selected by the organisation”4. The latter path “enables organisations to improve a set of related processes by incrementally addressing successive sets of process areas”5. In other words, using the continuous approach, the user selects what matters most, as well as the order in which to implement improvements, while the staged approach demands that the model itself dictates these factors. The staged model has five maturity levels: initial, managed, defined, quantitatively managed and optimising, while the continuous model has four capability levels: incomplete, performed, managed and defined. CMMI is made up of process areas, goals and practices, and the extent to which these elements are satisfied by the organisation determines its capability/maturity level.
From CMMI to TMMi
Although CMMI deals with organisational maturity in software development, it provides only limited content on software testing maturity6. This limitation spurred the development of a closely related maturity model, the Test Maturity Model (TMM)7, which has since been superseded by the Test Maturity Model integration (TMMi), created by the TMMi Foundation8. Other testing‑related maturity models exist9, but TMMi is the focus of this article.
The first thing the observer will notice is the similarity between TMMi and CMMI. This is to be expected, since TMMi is based on, and designed to be complementary to, the CMMI framework10: it functions as an adjunct to CMMI’s limited testing maturity measure, and exploits CMMI to support its own implementation11. TMMi currently does not have a continuous version of its model12; it is staged only, which means that the path of improvement is general rather than user‑specific13. TMMi has five maturity levels arranged on a ‘ladder’. Figure 1 shows the maturity levels of TMMi along with each level’s process areas.
The constituents of a process area
Each process area is made up of goals and practices. The relationships between the constituents are intricate, but are best explained by the relational diagram shown in Figure 2.
This diagram shows that each process area is made up of both specific and generic goals and practices; that is to say, goals and practices which are either particular to each process area or applicable to multiple process areas respectively. Goals signify what needs to be done to satisfy a process area, while practices break down a goal into smaller objectives. Goals are required components of a process area, while practices are expected components. Some generic goals are only triggered when the organisation attempts to move past a particular maturity level, such as from level two to level three14. Informative components support the comprehension of each process area, and include sub‑practices, examples and work products.
For example, the process area Test Policy and Strategy, from Maturity Level Two (Managed), contains three specific goals:
- SG1: Establish a Test Policy.
- SG2: Establish a Test Strategy.
- SG3: Establish Test Performance Indicators.
Each of these goals is made up of specific practices. SG1: Establish a Test Policy, includes:
- 1: Define Test Goals.
- 2: Define Test Policy.
- 3: Distribute the Test Policy to Stakeholders.
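The hierarchy described above — a process area containing goals, each broken down into practices — can be sketched as a simple data structure. This is a hypothetical illustration, not part of the TMMi specification; the names are taken from the Test Policy and Strategy example:

```python
from dataclasses import dataclass, field


@dataclass
class Goal:
    """A specific goal, broken down into specific practices."""
    name: str
    practices: list[str] = field(default_factory=list)


@dataclass
class ProcessArea:
    """A TMMi process area groups the goals that must be met to satisfy it."""
    name: str
    maturity_level: int
    goals: list[Goal] = field(default_factory=list)


# The Maturity Level Two process area used as the example in the text.
test_policy_and_strategy = ProcessArea(
    name="Test Policy and Strategy",
    maturity_level=2,
    goals=[
        Goal("SG1: Establish a Test Policy", [
            "Define Test Goals",
            "Define Test Policy",
            "Distribute the Test Policy to Stakeholders",
        ]),
        Goal("SG2: Establish a Test Strategy"),
        Goal("SG3: Establish Test Performance Indicators"),
    ],
)
```

A structure of this kind makes it easy to see why satisfying a process area means satisfying each of its goals, and each goal its practices.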
How do we know where we are on the TMMi ladder?
TMMi has a dedicated assessment method called the TMMi Assessment Method Application Requirements (TAMAR)15. Typically, TAMAR consists of four phases: planning, preparation, interview and reporting, as shown in Figure 3.
Assessments may be informal or formal. An informal assessment is quick, requires relatively fewer sources of evidence and is an inexpensive way of getting an approximate verdict on the maturity level of an organisation. A formal assessment is more in‑depth, more expensive and resource‑heavy, and requires relatively more sources of evidence, but it gives a more accurate picture of the situation and is the only assessment type that is officially recognised by the TMMi community. Thus, the formal assessment is a key marketing tool for organisations to advertise their TMMi credentials. Formal and informal assessments also differ regarding the make‑up of the TMMi assessment team. This includes the number, experience and qualifications16 of team members. Usually, a series of informal assessments is carried out just before a formal assessment, to help ensure that an organisation is ready to increase its maturity level.
The minutiae of the assessment method are complex, and somewhat subject to the specific ‘TAMAR‑compliant’ approach used by an organisation, but each maturity level being assessed is assigned a value according to the extent to which it has been achieved, which is determined by the extent to which its constituent parts (i.e. process areas, goals and practices) have also been achieved. The lowest‑rated constituent typically determines the value of its parent component; this is known as the inheritance principle17. The Little TMMi states that:
To be able to classify a specific or a generic goal, the classification of the underlying specific and generic practices needs to be determined. A process area as a whole is classified in accordance with the lowest classified goal that is met. The maturity level is determined in accordance with the lowest classified process area within that maturity level.18
There are four possible values: N (not achieved), P (partially achieved), L (largely achieved) and F (fully achieved). Maturity levels which are L or F are considered to have been achieved. Two ancillary ratings are N/A (not applicable, meaning not relevant to the organisation) and NR (not rated, when there is disagreement or insufficient evidence to decide a value). Percentage equivalents of each value rating are provided in the TMMi syllabus and are used to rate practices, though some approaches – which are not necessarily TAMAR‑compliant – just use ‘yes’ or ‘no’ when determining whether or not a practice has been met, to determine a mean goal value.
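The inheritance principle lends itself to a short sketch. The roll-up function below is a hypothetical illustration based on the description above, not an implementation of any TAMAR-compliant method; in particular, how N/A and NR ratings propagate is an assumption:

```python
# Ratings ordered from lowest to highest degree of achievement.
RATING_ORDER = ["N", "P", "L", "F"]


def aggregate(ratings):
    """Apply the inheritance principle: the lowest-rated constituent
    determines the value of its parent component.

    N/A constituents are ignored (not relevant to the organisation);
    an NR constituent leaves the parent not rated (an assumption here).
    """
    relevant = [r for r in ratings if r != "N/A"]
    if not relevant:
        return "N/A"
    if "NR" in relevant:
        return "NR"
    return min(relevant, key=RATING_ORDER.index)


# Practices roll up to a goal, goals to a process area, and
# process areas to the maturity level.
goal = aggregate(["F", "L", "F"])      # lowest practice rating wins
process_area = aggregate([goal, "F"])
level_achieved = process_area in ("L", "F")
```

In this sketch one largely-achieved practice pulls its goal, and in turn the process area, down to L, which still counts as achieved; a single N or P anywhere would sink the whole level.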
Assessments do not cover all maturity levels each time; this would be expensive and counterproductive. Usually, one or two maturity levels above the existing level are assessed each time, though the constituents of all levels below the target maturity level are assessed (or reassessed) during each assessment19. Because preceding maturity levels form the foundation for higher levels, ‘skipping a maturity level’ during an assessment is not recommended. Yet sometimes it is useful to implement a process area of a significantly higher maturity level in order to assist with the implementation of lower level process areas20.
At its core, TMMi is fundamentally a list of ‘good practices’. Following an assessment, it is up to the testing organisation itself to decide the nature of the improvements it will make to rectify hitherto failing TMMi components, in addition to specifying how these improvements will be implemented. There is no standard approach21, but TMMi does suggest a framework for implementing changes in recognised areas of weakness: the IDEAL model (Figure 4). The user could just as readily use the Six Sigma or Plan‑Do‑Check‑Act (PDCA) approaches. IDEAL is shorthand for five stages of process improvement: initiating, diagnosing, establishing, acting and learning.
Conclusion: taking TMMi further
In the context of ever‑increasing demands for better and more complex software, delivered more quickly, to a higher quality and at a cheaper price, the role of TMMi is two‑fold. On one hand it is a microscope. It details starkly the testing‑related vulnerabilities of an organisation – particularly when staff interviews are taken into account – and punctuates them with a maturity level which can then be compared with those of other organisations. In tandem with this, TMMi is a roadmap. It lists ‘good practices’ and suggests a framework within which poor ones can be replaced. TMMi does not dictate the specifics of improvements; it encourages the organisation itself to decide the best way to design and implement them. Moreover, for a maturity model practitioner, the TMMi structure is easily adaptable to testing‑related departments, such as software release management or business analysis. The associated frameworks can be made as complex (practices, goals, process areas and maturity levels) or as high‑level (just maturity levels and process areas) as the needs and commitment of each department require them to be. In sum, TMMi is a sound approach to improving the test process.
- E. v. Veenendaal and J. J. Cannegieter, The Little TMMi, (The Netherlands: UTN, 2011), p. 11.
- More information can be found at: http://www.cmmiinstitute.com/ (16.02.2016.)
- CMMI Product Team, CMMI for Development, Version 1.3, (USA, Carnegie Mellon, 2010), p. 21.
- Verification and validation, see Veenendaal and Cannegieter, loc. cit.
- See: https://science.iit.edu/computer‑science/research/testing‑maturity‑model‑tmm (16.02.2016.)
- http://www.tmmi.org/ (16.02.2016.)
- For instance, TPI Next, covered by B. Weston, ‘Ready, Set and Be Prepared’, TEST Magazine, September 2015, pp. 36‑
- In addition to CMMI and TMM, TMMi developed from other models including the Evolutionary model, after Gelperin and Hetzel.
- Veenendaal and Cannegieter, op. cit., p. 79‑
- Ibid., p. 13.
- TMMi is also a process reference model, rather than a content‑based model. For information on this distinction, as well as more on continuous and staged models, see E. v. Veenendaal, ‘TMMi and ISO/IEC 29119: Friends or Foes’, TMMi Foundation, 2016, pp. 1‑
- Veenendaal and Cannegieter, op. cit., pp. 28‑
- TAMAR is essentially a set of requirements which needs to be satisfied for an assessment to be declared acceptable, rather than strict instructions on how an organisation must assess. An organisation has freedom on how it goes about satisfying TAMAR. However, the TMMi Foundation has developed a system called the TMMi Assessment Method (TAM), which satisfies the requirements of TAMAR (http://www.erikvanveenendaal.nl/UK/tmmi,34.html, 08.04.2016).
- Only assessors accredited by the TMMi Foundation may perform formal assessments. An example of a TMMi qualification is the TMMi Professional.
- Veenendaal and Cannegieter, op. cit., p. 13.
- Ibid., p. 65.
- Ibid., p. 61.
- Ibid., p. 19.
- Ibid., p. 67.