Correctly understanding what a Diagnostic is

People typically don’t understand what a Diagnostic is. They then get all kinds of things wrong – from the initiation of the Diagnostic, right through to the conclusions they draw from it – and miss out significantly (even entirely) on the true insights and power that Diagnostics possess.
Online Diagnostics are the means by which Value Management is delivered.
Without them, people can’t be engaged with the speed, security and scale needed to effectively deploy Value Codes and harness the insights and expertise those people possess.
But the term “diagnostic” is rarely understood properly. And with those misunderstandings, huge opportunities for transformation get missed.
So what do people get wrong? Why do they get these things wrong? And why does this all matter?
To begin with, we need to understand what makes a “diagnostic” a “diagnostic”.
A Diagnostic is not a survey
The first thing to say about what makes a diagnostic a diagnostic is what it’s not: a diagnostic is absolutely not a survey.
People often use the words “diagnostic” and “survey” interchangeably, but this needs to stop.
Yes, the interface may look superficially similar – a set of parameters for respondents to evaluate on a scale with a range of options to choose from – and both surveys and diagnostics produce some consolidated output.
But beyond these superficial similarities, the differences are profound in every way:
- Why they’re deployed – their intended purpose.
- When they’re scheduled.
- What measures they contain – where they come from and what they’re like.
- How they work in engaging people and in terms of understanding and using the insight.
- Who gets engaged, both in terms of responding and in terms of taking responsibility for what happens next.
This table sums up these profound differences:

The first row of the table is pivotal:
- Surveys are typically about loose issues and topics, and are usually presented with a “breezy” 1-5 scale to quickly capture a perception.
- In direct contrast, Diagnostics are about delivering content-rich Value Codes, so they’re focussed and precise, aiming to catalyze reflection and insight around precisely worded descriptions.
This doesn’t mean that perceptions aren’t captured in diagnostics; far from it.
But it does mean that those perceptions are anchored in specific statements and are often explained in supporting comments (which diagnostics encourage).
Diagnostics are therefore not just about perception; they’re also about objectivity and precision, and about joining these to perceptions – to get the best of both worlds.
And from this first distinction, all the other differences follow – especially that:
- Diagnostics emerge (or ought to emerge) from the Things That Matter to participants; this isn’t someone else’s pre-defined agenda or priorities being imposed from outside.
- Where the primary purpose of surveys is generally to gather information for whoever’s running the survey, diagnostics are primarily to benefit the participants – indeed, Value Codes are ideally written by (or at least refined by) front-line people.
- What people are facing changes over time, which means that diagnostics adapt and evolve too.
- And that means that diagnostics are also intended to be iterative; not one-off or occasional. They ought to be conceived of as part of a bigger process; never just about information gathering, benchmarking or a single point of time.
What does this all look like in practice, though? How do diagnostics “work”?
How diagnostics work
How Diagnostics work has two aspects: their content (Value Codes) and the diagnostic process.
In both these aspects – content and process – the distinction between surveys and diagnostics is borne out, especially in harnessing not just perception but also the insight and clarity that come from the objectivity and precision Value Codes deliver.
So let’s first look at Value Codes and how they work – particularly in the context of a Diagnostic.
Value Codes
The first thing to say here is that – unlike simple survey questions – Value Codes harmonize and balance the two broad kinds of thinking we do:
- Objective: execution-focused, narrowing down and centered on what is already known.
- Subjective: creativity-focused, expansive and looking to new possibilities.
Both these “modes” of thought are critical.
Now, it should be said that surveys do aim to capture perceptions about specific things, so on some level they do loosely try to “combine” both these modes of thought. But they ultimately lack both the depth and the contextual framing to achieve this effectively:
- There is very little depth in most survey content: a quick description or statement with a 1-5 scale (often “strongly disagree” through to “strongly agree”).
- Framing-wise, because they come from “outside” (even if the intent may be genuinely to address things that are important to the survey participants), they have a much greater chance of failing to resonate with and engage people.
(And that’s before even considering “survey fatigue”, which we’ll come back to shortly.)
In contrast, Value Codes consciously seek to optimize and harmonize insights from both:

- They flow from the Things That Matter – around which people naturally coalesce and self-organize rather than being directed – fostering empowerment and discernment.
- Indeed, the links back to the related Things That Matter provide context – why the Value Code is there and what it contributes to – channelling thoughts and perceptions around real-life situations.
- Immediately, though, the Value Code structure and content bound and ground an aspect of the Things That Matter into something specific that “pushes” the person evaluating it for clarity around how things are and/or how they could (or ought to) be.
So, there is a conscious harnessing of both subjective thought – general perceptions, led by the Things That Matter, around how things “feel” – and objective thought, i.e. which statement most accurately and specifically captures how things “are” (or could be).
Value Codes therefore put relentless focus on the creative tension between objectivity and subjectivity – between specific and general; between current and desired states – to stimulate mindful and insightful inputs.
And that’s why comments are not only encouraged, but so readily flow – there is now a catalyst and outlet for creative and focussed thought.
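To make this concrete, here’s a minimal sketch – purely illustrative, with field names that are assumptions rather than the actual Value Code format – of the structural difference between a bare survey item and a Value Code that anchors perception in precise statements, context and comment:

```python
from dataclasses import dataclass, field

@dataclass
class SurveyItem:
    """A typical survey question: a loose statement and a bare 1-5 scale."""
    statement: str                      # e.g. "Communication here is good"
    scale: tuple = (1, 2, 3, 4, 5)      # "strongly disagree" ... "strongly agree"

@dataclass
class ValueCode:
    """Illustrative only: a Value Code carries its context (the Thing That Matters
    it flows from), precise descriptions of how things are or could be, and an
    explicit invitation to explain the evaluation in a comment."""
    thing_that_matters: str             # the context: why this Value Code exists
    label: str                          # short name shown to participants
    statements: list = field(default_factory=list)  # precise, specific descriptions to choose between
    invite_comment: bool = True         # comments are encouraged, not an afterthought
```

The point isn’t the code itself; it’s the shape: the survey item stands alone, while the Value Code carries its context, its precision and its outlet for comment with it.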
With all this in mind, let’s now turn to the Diagnostic process.
The Diagnostic process
The first thing to say here is that Diagnostics naturally support evaluation, grouping related things together in sections to further help focus attention, and actively fusing subjective and objective thought:

They also make evaluation clear and easy to do: labels and descriptions sit on the left, evaluation statements (objective) sit in the middle and are chosen between using slider bars (subjective), and there’s space on the right to enter comments (a combination of both subjective and objective).
But online Diagnostics are far more than just an intuitive UI; they also readily provide the kind of rich reporting needed to gain insights: overviews, deeper detail and – crucially – highlighted perception gaps, e.g.:

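As a purely illustrative sketch of the kind of reporting meant here (the response fields, group names and numbers are assumptions, not the actual product’s data model), a perception gap can be surfaced simply by comparing how different groups have evaluated the same Value Code:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: (value_code_label, respondent_group, evaluation_value)
responses = [
    ("Clear priorities", "leadership", 4.5),
    ("Clear priorities", "front-line", 2.0),
    ("Clear priorities", "front-line", 2.5),
    ("Shared ownership", "leadership", 3.5),
    ("Shared ownership", "front-line", 3.0),
]

# Average each group's evaluations per Value Code, then report the gap between groups
by_code = defaultdict(lambda: defaultdict(list))
for code, group, value in responses:
    by_code[code][group].append(value)

for code, groups in by_code.items():
    averages = {group: round(mean(values), 2) for group, values in groups.items()}
    gap = max(averages.values()) - min(averages.values())
    print(f"{code}: {averages} -> perception gap of {gap:.2f}")
```

Even in this toy example, the gap on “Clear priorities” is the insight – something a single overall average would never show.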
And the online Diagnostic process also provides:
- Limitless scalability to any number of participants.
- Immediacy: there are no scheduling constraints; no writing-up, processing and collating; it just happens automatically.
- Security and discretion (anonymous by default), so it’s a safe space to be completely frank: far less “confronting” (for lack of a better word) than workshops or interviews.
- Inclusivity in terms of numbers of people (beyond those that would be e.g. at workshops) and in terms of personality type (i.e. those that are more naturally introverted and/or more reflective, but who then often have the most nuanced and valuable insights).
“But”, I can almost hear you say, “those things are all true of surveys, too: they can also involve 100s of people, they can also be anonymous, and they can also produce pretty charts!”
And you’d be right.
But that’s why you need to remember that we’re talking about both process and content.
And it’s why, finally, you need to understand how this unique combination of process and content means that Diagnostic output should be used completely differently from survey output.
Using Diagnostic output
We’ve already touched on some of this when reflecting that the primary purpose of Diagnostics is to serve the participants and not whoever has instigated the process.
This firstly means that they are entirely different types of “measurement”, so should be used for entirely different purposes and in entirely different ways:

Indeed, the only thing that survey output and Diagnostic output have in common is that the worst thing you can do with both is ignore it.
Beyond that, it’s entirely different…
…and this becomes clearer if we consider how survey output is usually used in light of what we’ve seen so far with Diagnostics.
Because if you do with Diagnostic output what you usually do with survey output, you’ll at best miss a huge opportunity; at worst, you’ll end up worse off than you started – with false optimism, complacency and a rush to the lowest common denominator. Here’s what that typically looks like:
1. Look to use the feedback to validate existing decisions and biases
This is where, influenced by surveys, you cherry-pick “scores” and comments to fit an existing agenda, e.g.:
- Senior management confirming what they’ve already decided to do or what they already thought were the main priorities.
- Consultants steamrollering past the feedback to deliver their standard training or workshop.
In both cases, there may be a nod to what’s been said, but having “teased” people with a new type and focus of measures, you’ve essentially done what you were going to do anyway.
You can then kiss goodbye to them engaging again. You managed to engage them this time under the guise of “this is different” (engagement at scale and/or with a radically different kind of measure); they won’t believe you a second time. At best, they’ll just tick the box as quickly as possible to give you “feedback” in future – after all, you clearly just “ticked the box” in asking them for it this time.
2. Restrict access, involvement and outcomes
Reflecting a survey mentality, this can again take two forms:
- Limiting who gets to see the output – e.g. presenting it just at management meetings – which lacks transparency and reinforces the idea that a privileged few are responsible for understanding where things are at.
- Issuing conclusions from the exercise – e.g. as a report, a new “strategic initiative” or a decision to implement training.
Of course, this is what happens with surveys: the feedback is collected by whoever’s instigating the survey, it’s for them, and they get to decide what it means and what to do with it.
Most typically, this takes the form of training or “coaching” – either the exercise is taken to have revealed “faults” in those providing feedback that need to be corrected, or the issues will be resolved “in the round”, whether through:
- A workshop, which reinforces that only a few people really “matter”.
- Training, which is often pre-canned – so comes with the sense that specifics don’t matter (so what was the point of asking about them?) – and often pre-planned (at its worst, bundled in with the “diagnostic”), so was always going to happen.
In both cases, responsibility for insight and change is significantly reduced – limited to a “privileged” few or delegated to an internal or external consultant – and that insight and change are funnelled through traditional vehicles (workshops and training).
(Remember the old maxim: “Insanity is doing the same thing over and over again and expecting different results”.)
3. Focus on averages and benchmarks
This is probably where the legacy of surveys is most keenly felt – after all, surveys are so light on content that numbers are pretty much all you’ve got to go on – and so we see several seriously flawed approaches.
Firstly, generating overall averages – for individual responses; across all of a Diagnostic’s feedback (both for specific Value Codes and overall); at its most ridiculous, across Diagnostics, and even for entire sectors and domains:
- Of course, the more inputs you have, the more everything tends to the middle: differentiation becomes minimal.
- And what does a context-free average “mean” in any case? (Except that the focus so often then becomes “let’s improve the average…” or “hopefully our average is better than that one”… “because that means we must be doing better”…!)
- And assuming that averages above the “middle” are “good” and those below it are “bad”.
Most of all, and to expand on the points above, distribution is completely ignored. Ask yourself this question:

Averages totally mask misalignment – the number one obstacle to effective teamwork (as the worked example further below shows) – and in turn lead to several completely misguided and short-sighted approaches.
Perhaps the worst of these is benchmarking against other averages – whether against a “neutral” best-practice gauge or against other organisations – which leads to false confidence and complacency:
- It’s a rush to the lowest common denominator (“as long as we’re better than the average, we’re good”).
- A loss of all nuance and variation within the feedback.
- A sense that improving the average by e.g. 0.02 means improvement (yes, we really have seen things this idiotic, missing entirely that averages can go up whilst fragmentation increases… quite aside from how such a tiny increase is pretty much meaningless, in any case).
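A tiny worked example (hypothetical numbers only) shows just how much an average can hide – and how an “improved” average can coexist with worse fragmentation:

```python
from statistics import mean, stdev

aligned    = [3, 3, 3, 3, 3, 3]   # everyone broadly agrees
fragmented = [1, 5, 1, 5, 2, 5]   # deep disagreement, yet a slightly higher average

print(round(mean(aligned), 2), round(stdev(aligned), 2))        # 3.0  0.0  -> genuine alignment
print(round(mean(fragmented), 2), round(stdev(fragmented), 2))  # 3.17 2.04 -> the "better" average masks a split
```

Benchmark or track only the averages and the second group looks marginally “better” – while in reality it is pulling in opposite directions.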
Beyond benchmarking, the same survey legacy shows up in a whole series of other flawed habits:
- Downloading the raw data to do further statistical analysis (which makes quantitative data even more the focus).
- Ignoring the grouping/quadrants of Value Codes (which misses potential patterns).
- Focusing on where there are low scores (which assumes that low scores are always bad).
- Focusing on averages as an indication of current state, especially overall averages (totally missing the alignment piece, and making the numbers the goal, which also tends to mean comments are disregarded).
- Assuming that averages above 3 are “good” (when they may not be, including because they reflect effort in areas that don’t matter).
- Jumping to improvement actions (which pre-supposes what the priorities are and what the ways to get there are).
- Neglecting the comments and assuming the scores “speak” for them (which was somewhat forgivable in the past, due to volume, but now means missing both the nuance and what LLM analysis can readily surface).
- In a relationship context, not engaging the other party (which is reasonable to leave until you’ve sorted your own side, but stupid to neglect entirely).