What’s wrong with technology readiness levels (TRLs) and how to fix them

Attempts to integrate technologies into products before those technologies are mature often lead to budget and schedule overruns. Companies therefore want to avoid this, but how can they know when a technology is “ready”?
This is what led NASA engineers to develop the technology readiness level (TRL) scale in the 1970s. This scale assigns a readiness level to a technology on a 9-point scale, starting from TRL 1 (“Basic principles observed and reported”) all the way to TRL 9 (“Actual system ‘flight proven’ through successful mission operations”).
The TRL scale has since spread beyond NASA and is now also used widely in other industries and organisations. TRLs can be a useful tool to assess the maturity of technologies. However, they also have limitations, which are discussed in this week’s paper.
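To make the scale concrete, here is a minimal sketch of the nine levels as a lookup table. The wording is one common rendering of the NASA definitions; exact phrasing varies between organisations and standard editions.

```python
# One common rendering of the nine NASA TRL definitions; the exact
# wording varies between organisations and editions of the standard.
NASA_TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component validation in a laboratory environment",
    5: "Component validation in a relevant environment",
    6: "System/subsystem prototype demonstration in a relevant environment",
    7: "System prototype demonstration in an operational environment",
    8: "Actual system completed and qualified through test and demonstration",
    9: "Actual system 'flight proven' through successful mission operations",
}

def describe_trl(level: int) -> str:
    """Return the description for a TRL, validating the 1-9 range."""
    if level not in NASA_TRL:
        raise ValueError(f"TRL must be between 1 and 9, got {level}")
    return NASA_TRL[level]
```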
The challenges related to the TRL scale can be grouped into three categories: system complexity, planning and review, and assessment validity.
Many important challenges emerge from the inclusion of new technologies in highly complex systems:
- The TRL describes how components progress from isolated concepts to being part of a subsystem, but offers little insight into integration. It says nothing about how a component works with its dependencies or fits into an architecture, both of which are just as important as the maturity of the component itself.
- Another challenge arises when a proven technology is used in a new environment, or when a very small but novel component is used in a proven system. What should the resulting TRL be? Technically, the system then becomes unproven, resulting in a low TRL, but this may not accurately reflect the technical risk.
- Many practitioners would like to expand (component-level) TRL assessments into a more comprehensive readiness level that measures the maturity of the system as a whole, which may include aspects such as its integration and architecture. But to quote a NASA technologist: “nobody has come up with anything that’s useful yet”.
- Another challenge with the TRL is that it is not always clear what the appropriate level of granularity is for TRL assessments. The TRL should be used to identify which components have a high technology risk, but assessments are often too detailed. This is not only very resource-intensive, but also makes it harder to identify key risks.
- Finally, the TRL doesn’t describe what managers and developers should do with the results and what their priorities should be. This is also made harder by a lack of effective ways to visually present TRL information in such a way that the architecture, alternatives, difficulty, and confidence are clear.
Some challenges are related to the integration of TRL assessments with existing organisational processes:
- The TRL doesn’t say anything about the likelihood that a technology or component can progress to the next level, nor does it provide any information about the effort (time and resources) required to achieve the next level. This makes the TRL less useful as a planning tool.
- Many organisations use the achievement of specific TRLs as deliverables, but there is no general guidance for aligning TRLs with an organisation’s own internal (project) milestones.
- It may happen that a project reaches a stage gate with one or more components at TRLs lower than required, e.g. due to issues with suppliers, unexpected results, or new insights. It’s common practice to then waive the TRL requirement, but there is little guidance on how and when such a decision can be made.
- It’s good practice to identify backup plans for risky critical technologies, but those backup plans are not necessarily mature themselves and are often not included in TRL requirements.
- Organisations can use technology roadmaps to perform long-term planning of product lines and innovation pipelines. There is currently no well-established way to determine when a component is mature enough to be included in the roadmap.
The reliability and repeatability of the scale itself also pose challenges to users:
- The TRL simplifies technology development to nine steps. This makes the TRL very useful for communication, but can simultaneously frustrate those who need more precision for assessments. For example, how many tests does a component need to achieve a TRL? And did it pass marginally or easily? The criteria for TRLs are too broad. To be practically useful, they should not only be industry-specific, but possibly even specific to a product line or product type.
- Finally, the TRL assessment process can sometimes seem subjective. It typically requires consensus among a number of stakeholders in a project. In practice, different stakeholders may disagree about the maturity level of a component. Power and influence then become the decisive factors when it comes to deciding the TRL.
Solutions to some of these challenges have been described in academic literature and industrial guidelines. Here are two solutions that I found useful myself:
- In practice, the overall TRL of a system is often determined using a “weakest link” method, where a subsystem is assigned the minimum TRL of its components. The integration readiness level (IRL) and system readiness level (SRL) are two extensions to the TRL that aim to address this issue, but they are not entirely without controversy either.
- TRL assessments of individual components can be improved in several ways, for instance by calculating a confidence interval to determine the accuracy of assessments, or via the inclusion of failure modes, critical parameters, engineering requirements, or technical specifications.
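As a sketch of these two ideas, the snippet below computes a “weakest link” system TRL and a simple normal-approximation confidence interval over independent assessor ratings. The component names and ratings are hypothetical, and the confidence-interval formula is just one plausible choice, not a method prescribed by any TRL guideline.

```python
import math
import statistics

def weakest_link_trl(component_trls: dict) -> int:
    """System TRL under the 'weakest link' method: the minimum component TRL."""
    return min(component_trls.values())

def trl_confidence_interval(ratings: list, z: float = 1.96) -> tuple:
    """Approximate 95% confidence interval for the mean of independent
    assessor ratings, using a normal approximation (a simplification)."""
    mean = statistics.mean(ratings)
    sem = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return (mean - z * sem, mean + z * sem)

# Hypothetical subsystem: one immature component drags the whole TRL down.
subsystem = {"sensor": 8, "actuator": 6, "novel_coating": 3}
system_trl = weakest_link_trl(subsystem)  # -> 3

# Five assessors independently rate the novel coating's maturity;
# a wide interval signals disagreement worth discussing.
ratings = [3, 4, 3, 2, 4]
low, high = trl_confidence_interval(ratings)
```

A wide interval here is a useful signal in itself: it surfaces the stakeholder disagreement described above, rather than letting power and influence settle the number silently.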
- The technology readiness level (TRL) is a scale for determining the technological maturity of a component or (sub)system.
- Although widely used, it can be challenging to interpret assessments made using the TRL.

