Risk, Uncertainty, and Confidence

The missing dimension in defensible security decisions

Security risk is often expressed through numerical scores, matrices, and dashboards intended to support decision-making. These representations can be useful for comparison and prioritisation. They can also invite misplaced confidence if the judgements behind them are not made explicit.

Security risk disciplines define their core concepts in terms of threats, threat agents, and vulnerabilities linked by deliberate intent to cause harm, loss or disruption. Vulnerability is commonly described as a weakness that can be exploited by a threat agent to realise harm (Rausand, 2011). This framing alone distinguishes security risk from safety risk. It is adversarial, conditional, and shaped by human behaviour rather than random failure.

There is an adversary.
There is intent.
There is capability and adaptation.
And where vulnerability exists, there is opportunity.

In practice, many decisions are justified through risk ratings and matrices, with these representations treated as conclusions rather than as summaries of underlying judgement. This persists even though security risk is shaped by assumptions, incomplete knowledge, and changing conditions.

This is not a failure of professionalism. It is an understandable consequence of treating likelihood as something fixed that can be calculated and archived, rather than as a judgement that must remain conditional and open to revision.

Security risk does not behave that way.

Likelihood: Judgement or Measurement?

In safety disciplines, probability often refers to frequency. Engineers examine how often systems fail, components degrade, or processes break down. This approach works tolerably well for non-adversarial failure.

It breaks down when harm is intentional and adaptive.

Security outcomes do not simply occur. They are caused.

In adversarial security contexts, likelihood is not usually directly measurable as a stable property. It must be inferred from available knowledge about threat capability and intent, system exposure, control performance, and the conditions under which vulnerability creates opportunity.

In subjective decision theory, probability may be understood as a degree of belief based on available information and assumptions, rather than only as an observed frequency (de Finetti, 1974; Savage, 1954).

Expressing likelihood in a matrix is not inherently problematic. The risk arises when the judgement behind that expression is neither examined nor revisited.

Treating likelihood as static assumes that threats are passive, systems are fixed, and controls behave consistently over time.

None of these assumptions hold in security.

Risk Emerges from Interaction

A system is not vulnerable in the abstract. It is vulnerable to a specific threat, under specific conditions, at a specific point in time.

Risk emerges from interaction, not from components viewed in isolation. Threat actors adapt. Capabilities evolve. Context shifts. These forces lie largely beyond organisational control.

What remains within control is opportunity.

Opportunity is created by vulnerability and shaped by how rigorously it is understood, challenged, and reduced. It is where threat capability meets system weakness under specific conditions, and where security judgement has its greatest leverage.

Controls rarely fail only through visible breakdown. More often, opportunity re-emerges quietly as assumptions age, environments change, or adversaries adapt faster than controls were designed to constrain.

When likelihood ratings are treated as fixed while opportunity evolves, the assessment no longer reflects how risk is forming. It reflects only how it once was.

Risk Matrices as Judgement Aids

Security-specific risk literature treats matrices as neutral tools whose value depends entirely on how well likelihood, consequence, and risk statements are defined. Used well, they structure judgement. Used without sufficient discipline, they can create a misleading sense of precision and confidence.

Risk matrices remain widely used for good reasons. They are intuitive, communicable, and useful for expressing relative priority. Risk management standards recognise these strengths while also noting limitations such as oversimplification, loss of context, and false precision (ISO, 2018; Aven, 2015).

The risk is not the matrix itself. It arises when the matrix output becomes the decision rather than supporting one.

A matrix can structure discussion and assist prioritisation. It can help compare concerns and allocate attention. It performs poorly at validating assumptions, surfacing unknowns, or expressing confidence in the underlying assessment.

Treating matrix outputs as conclusions risks reducing judgement to arithmetic. What remains may look rigorous, but it is no longer fully defensible.
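To make that distinction concrete, a matrix lookup can be sketched as a summary of inputs that are themselves judgements. The levels, thresholds, and cell ratings below are illustrative assumptions for the sketch, not values drawn from any standard:

```python
# Illustrative 4x4 risk matrix: the axis levels and cell ratings are
# assumptions chosen for this sketch, not prescribed values.
LIKELIHOOD = ["rare", "possible", "likely", "almost_certain"]
CONSEQUENCE = ["minor", "moderate", "major", "severe"]

MATRIX = [
    # minor      moderate   major      severe
    ["low",      "low",     "medium",  "medium"],   # rare
    ["low",      "medium",  "medium",  "high"],     # possible
    ["medium",   "medium",  "high",    "extreme"],  # likely
    ["medium",   "high",    "extreme", "extreme"],  # almost_certain
]

def rate(likelihood: str, consequence: str) -> str:
    """Return the cell rating: a summary of two judgements, not a decision."""
    return MATRIX[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]

print(rate("likely", "major"))  # prints "high"
```

The output compresses two conditional judgements into a single word; everything that made those judgements defensible sits outside the lookup.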

Readers seeking deeper, security-specific insight into this balance may find the work of Julian Talbot particularly valuable. His writing does not reject risk matrices, but places clear boundaries around what they can and cannot reasonably be expected to do.

Evidence Must Be Allowed to Change the Answer

One indicator of weak security maturity is an assessment that never changes.

Incidents, anomalies, near-misses, and control failures are not administrative noise. They are evidence. They should test and refine assumptions about threat capability, control performance, and opportunity.

Risk literature cautions that uncertainty is often under-reported, leading to narrow risk descriptions that appear more decisive than the evidence warrants (Aven & Zio, 2014).

An assessment that does not evolve as knowledge accumulates is unlikely to remain valid.
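The idea that evidence should revise likelihood can be expressed with a simple Beta-Binomial update, treating likelihood as a degree of belief in the sense of de Finetti and Savage. The prior parameters and observation counts below are illustrative assumptions:

```python
# Beta-Binomial update: treat "likelihood" as a degree of belief that is
# revised as evidence arrives, rather than a fixed archived rating.
def update_belief(alpha: float, beta: float, exploited: int, resisted: int):
    """Return updated Beta parameters after observing attack outcomes."""
    return alpha + exploited, beta + resisted

def mean_belief(alpha: float, beta: float) -> float:
    """Expected probability of exploitation under the current belief."""
    return alpha / (alpha + beta)

# Illustrative prior: a weak belief that exploitation is unlikely.
a, b = 1.0, 9.0
print(f"prior belief: {mean_belief(a, b):.2f}")    # 0.10

# Incidents and near-misses are evidence, not noise: three successful
# exploitations and two resisted attempts shift the judgement.
a, b = update_belief(a, b, exploited=3, resisted=2)
print(f"revised belief: {mean_belief(a, b):.2f}")  # 0.27
```

The mechanics matter less than the discipline they model: the assessment has an explicit starting assumption, and each piece of evidence is allowed to move it.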

Uncertainty and Confidence

How uncertainty is treated varies widely, but it is often left implicit rather than examined directly.

Within risk scholarship, uncertainty is frequently described as an expression of confidence in assessment results, not merely a lack of data (Hansson, 2014). Some uncertainty is irreducible. Much of it arises from incomplete evidence, untested response capability, or controls whose performance has never been verified.

Yet final risk ratings are often presented with a level of confidence they do not deserve.

A risk estimate that does not make uncertainty and confidence explicit is incomplete, regardless of how precise the score appears.

Confidence Is a Decision Variable

Risk decisions are not made on estimates alone. They are made on the level of confidence placed in those estimates.

In practice, risk reporting often emphasises what the risk is, while giving less attention to how certain those assessments are, which assumptions are most critical, what would change the judgement, and where surprise is most likely to arise.
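These dimensions can travel with the estimate itself. A minimal record might look like the following sketch, in which the field names and example values are illustrative rather than drawn from any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class RiskJudgement:
    """One way to record a risk so that confidence travels with the estimate.
    Field names are illustrative, not drawn from any standard schema."""
    risk_statement: str
    rating: str                   # e.g. the matrix cell, "high"
    confidence: str               # how certain the assessment is, and why
    critical_assumptions: list = field(default_factory=list)
    revision_triggers: list = field(default_factory=list)  # what would change it

r = RiskJudgement(
    risk_statement="Unauthorised access to payment records",
    rating="high",
    confidence="low: control performance never tested under load",
    critical_assumptions=["perimeter logging is complete"],
    revision_triggers=["red-team result", "new exploit for remote-access gateway"],
)
print(r.rating, "|", r.confidence)
```

A record shaped this way makes the rating inseparable from the confidence placed in it, and tells the reader what evidence would force a revision.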

Research on judgement under uncertainty shows that confidence can increase more rapidly than understanding, particularly in environments where feedback is limited or delayed (Kahneman & Tversky, 1979).

Confidence does not require certainty. It requires disciplined reasoning, transparent assumptions, and evidence that supports how conclusions have been reached.

Security Risk as a Living Judgement

Security risk is not a fixed state. It is a continuously evolving judgement informed by evidence and constrained by uncertainty.

The most defensible security decisions are not those supported by the cleanest matrices, but those grounded in transparent assumptions, explicit acknowledgement of uncertainty, disciplined evaluation of control performance, and a willingness to revise conclusions as conditions change.

This is not a call for more complex models. It is a call for better thinking.

Security effectiveness is determined not by the measures in place, but by the quality of judgement used to interpret their performance, reliability, and limitations under uncertainty.

Conclusion

Effective security is not defined by fixed estimates, but by the ability to continuously refine judgement as understanding evolves.