
The Real Problem Isn’t Prompting

It’s How We Think About AI
22 April 2026 by Dr Bryce Antony

Professionals are not struggling with AI because they lack skill.


Even AI 'power' users face challenges because they are operating within the wrong model.


Over the past year, AI has been widely adopted as a tool for productivity, insight, and decision support. The dominant interaction model has followed a simple pattern. Ask a question. Receive an answer. Refine the prompt. Improve the output. This approach feels intuitive, and in many cases, it produces impressive results.


However, this model contains a fundamental flaw.


It assumes that each interaction with AI is complete in itself.


This assumption sits at the centre of the problem.


When AI is treated as a reactive system, where each prompt is an isolated event, performance becomes inherently unstable. Outputs may appear strong in one instance and weak in the next. Quality may fluctuate without clear explanation. Confidence in the system begins to erode, not because the underlying capability is lacking, but because the interaction model is incomplete.


This is not immediately obvious.


In fact, the model feels correct at first. Early interactions often produce high-quality responses. The system appears capable, responsive, and efficient. This reinforces the belief that prompting is the primary skill required to unlock value.


Yet over time, a pattern emerges.


Consistency does not follow.


The reason for this is not technical complexity alone. It is structural.


The prevailing model treats AI as if it behaves like a search engine or a digital assistant. Input leads to output, and the interaction ends there. Each request is framed as a standalone task, disconnected from previous interactions and future expectations.


This creates a hidden limitation.


When every interaction is treated as independent, there is no continuity. There is no accumulation of context, no reinforcement of expectations, and no mechanism for stabilising behaviour across repeated use. Each interaction, in effect, begins from zero.
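The contrast between isolated prompts and accumulated context can be sketched in a few lines. The `isolated_ask` and `Session` names below are illustrative, and `Session.ask` only builds the message list that would be sent to a model; the actual model call is left out so the sketch stands on its own.

```python
def isolated_ask(prompt: str) -> list[dict]:
    # Stateless use: every request starts from zero, with no memory
    # of prior expectations or prior outputs.
    return [{"role": "user", "content": prompt}]


class Session:
    """Stateful use: context and expectations persist across turns."""

    def __init__(self, system_instructions: str):
        # Standing instructions act as the stabilising layer: they are
        # present in every turn, not restated (or forgotten) each time.
        self.history = [{"role": "system", "content": system_instructions}]

    def ask(self, prompt: str) -> list[dict]:
        # Each new prompt is appended to everything that came before,
        # so the model sees the full pattern of engagement rather than
        # a single isolated event.
        self.history.append({"role": "user", "content": prompt})
        return self.history
```

In the stateless version, the third request looks identical to the first; in the session version, the third request carries the standing instructions and both prior turns with it.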


The result is predictable.


Variability becomes the norm.


This is where many organisations begin to misdiagnose the issue. When outputs vary, the natural conclusion is that the model itself is inconsistent. Efforts then focus on improving prompts, refining language, or selecting different tools. While these adjustments may produce incremental improvements, they do not address the underlying problem.


Because the problem is not located at the surface.


It sits beneath the interaction.


To understand this more clearly, it is necessary to examine how consistent performance is achieved in other domains. In cybersecurity and digital forensics, repeatability is not optional. Processes are defined, controlled, and monitored. Outcomes are expected to be consistent because the method of engagement is consistent.


When these principles are absent, results become unreliable, regardless of the capability of the tools being used.


AI is no different.


When it is used informally, without a consistent approach to engagement, outputs reflect that informality. When it is engaged with discipline, outcomes begin to stabilise. The system itself has not changed. The method has.


However, the current ecosystem does little to emphasise this distinction.


AI tools are designed for accessibility. They encourage rapid interaction and immediate feedback. Training materials focus on prompt techniques, quick wins, and isolated use cases. This lowers barriers to entry, which has been essential for widespread adoption.


Yet it also reinforces a narrow view of what effective use looks like.


The emphasis remains on interaction, not on how interaction fits into a broader system of use.


This creates a gap between perception and reality.


Organisations believe they are adopting AI, yet struggle to embed it into repeatable workflows. Individuals believe they are becoming proficient, yet cannot consistently reproduce high-quality outputs. The technology is present, but the capability is not fully realised.


At a strategic level, this gap introduces risk.


Not in the sense of external threat, but in terms of operational reliability. When outputs cannot be trusted to remain consistent, they cannot be relied upon in critical processes. Decision-making becomes cautious. Integration slows. The organisation remains at the level of experimentation, unable to move into sustained capability.


This is where the conversation must shift.


The issue is not whether AI can produce high-quality outputs. It clearly can. The issue is whether those outputs can be produced consistently, reliably, and at scale.


That outcome cannot be achieved through isolated interactions alone.


It requires a different way of thinking about how AI is used.


At the centre of this shift is a simple but often overlooked observation.


AI interaction is not a single step.


It is part of a system.


This does not refer to technical architecture alone. It refers to how interactions are structured, how context is established, how expectations are communicated, and how outputs are evaluated and refined over time. It introduces the idea that effective use is not defined by individual prompts, but by how those prompts sit within a broader pattern of engagement.


When this layer is missing, variability is inevitable.


When it is present, behaviour begins to stabilise.


However, this layer is rarely discussed explicitly.


Instead, the focus remains on improving individual interactions, as though consistency can emerge from isolated improvements. In reality, consistency requires something more deliberate. It requires continuity. It requires alignment between interactions. It requires a mechanism for ensuring that each engagement contributes to a broader objective.
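One way (among many) to make that continuity deliberate is to wrap every request in the same structured brief and accept an output only after it passes explicit checks. Everything below is a hypothetical sketch: `BRIEF`, `generate`, and `run_task` are invented names, and `generate` is a stub standing in for a real model call so the example runs standalone.

```python
# A reusable brief: the same role, objective, and constraints frame
# every task, so individual prompts sit inside a consistent pattern.
BRIEF = (
    "Role: forensic report editor.\n"
    "Objective: {objective}\n"
    "Constraints: UK English, no speculation, cite sources.\n"
    "Task: {task}"
)


def generate(prompt: str) -> str:
    # Stand-in for a real model call; it only echoes how much
    # structured context it received.
    return f"DRAFT ({len(prompt)} chars of context received)"


def run_task(objective: str, task: str, checks) -> str:
    # Structure in: every request is framed by the same brief.
    prompt = BRIEF.format(objective=objective, task=task)
    output = generate(prompt)
    # Evaluation out: an output is accepted only if it meets the
    # named checks; in practice a failure would trigger a revision
    # pass rather than being returned as-is.
    failed = [name for name, check in checks if not check(output)]
    return output if not failed else f"REVISE: {failed}"
```

The specific checks matter less than their existence: once expectations are written down and applied to every output, variation becomes something the workflow detects and corrects rather than something the user simply experiences.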


Without this, even the most carefully constructed prompts operate in isolation.


This is the limitation that many professionals encounter without fully recognising it. They refine their prompts, adjust their language, and experiment with different approaches. Some results improve. Others do not. The overall experience remains inconsistent.


The missing piece is not additional effort at the surface.


It is a shift in how the interaction itself is conceptualised.


This shift has implications beyond individual use.


At an organisational level, treating AI as a series of isolated interactions prevents it from being integrated into structured workflows. Processes remain fragmented. Outputs cannot be standardised. Quality cannot be reliably controlled. The technology becomes an accessory rather than an embedded capability.


For organisations seeking to derive sustained value from AI, this is a critical barrier.


It is not a limitation of the technology. It is a limitation of the model being applied to it.


The distinction is important.


If the issue is perceived as technological, the response is to wait for improvement. If the issue is recognised as conceptual, the response shifts to how the system is used.


This is where meaningful progress begins.


It also leads to a more important question.


If AI interaction is not simply a matter of asking better questions, and if consistency cannot emerge from isolated prompts, then what actually governs reliable outcomes?


What ensures that outputs remain stable across repeated use? What allows AI to move from occasional success to dependable performance?


These questions point to a layer of practice that sits beneath the surface of interaction.


A layer that is not immediately visible, but becomes essential as usage matures.


Understanding this layer is the key to moving beyond experimentation.


It is the difference between using AI occasionally and integrating it effectively.


And it is where the next stage of the conversation must begin.


The problem is not that AI is inconsistent.


It is that we are using it in a way that produces inconsistency.


We have learned how to prompt.


We have not yet learned how to work effectively with AI.

Understanding AI is one step; applying it consistently is another. That is the focus of our book:

Train Your AI: Structured Human-AI Collaboration: Professional Edition


Photo by Cash Macanaya on Unsplash, edited by CyberForensics