Generative models haven’t actually closed the gap between having an idea and executing it; they’ve closed the gap between giving an instruction and getting a result. As the cost of producing a polished document, a detailed analysis, or a line of code drops toward zero, the finished product no longer reliably signals expertise. In many knowledge-work roles, the value an employee brings is shifting away from producing artifacts and toward deciding what to ask, what to include, and how to judge the results.
Luis Lozano Paredes comes at this transition from a distinctive angle: he is an architect turned urban economist and governance researcher. As a Lecturer at the Transdisciplinary School at the University of Technology Sydney and a former fellow at George Mason University, he studies how platform technologies act as informal governing systems. He argues that as AI absorbs more and more of the knowledge-work load, the value people can bring to organizations rests on something the technology will always lack: proper experiential context.

“AI is context-blind by definition. By the very nature of how AI technology is constructed, it will never have the situated logic to know what something is. It can learn about the history of people, but not about the human experience of these people in a particular way. You have to force context into it,” says Lozano Paredes.

AI is putting real pressure on what he calls enterprises’ “epistemic infrastructure”: the ways organizations check whether people actually understand the work behind their outputs. Now that AI is swallowing the “learning by doing” phase of knowledge work, some teams have responded with strict guardrails and acceptable-use policies. But that leaves a harder problem: how to tell whether people know what they are talking about when a model can instantly produce a convincing essay, slide deck, or report. He believes a better approach fuses human expertise with AI amplification, anchored by continuous human participation.
Simulating the soul: “AI is a simulation of intelligence, but you cannot simulate experience,” Lozano Paredes says. “Even if you build a persona, you have to force the context into that persona. You still need to establish where it is, who it is, what it is doing, and why. I am 99.9 percent convinced that this steering is the thing that cannot be replaced.”
Centaur of attention: The key, therefore, is not just keeping a “human in the loop,” but making sure that human brings something the model cannot supply. He reaches back to his architectural training to explain what that fusion looks like in practice. “‘Human in the loop’ is valuable, but it assumes there is a loop of activity and you just intervene to see what’s happening,” he says. His preferred image is the centaur, one part human, one part AI. “The centaur analogy means you become a true duality with the model. But how do you preserve yourself? The human must be the core of the animal. Because if not, then you’re completely replaceable.”
To redefine value, he suggests a return to theory. A bank analyst might know how to run a discounted cash flow model, but Lozano Paredes wants to know whether they actually understand economics, “the macro and the micro” behind the spreadsheet. Taken together, he sees embodied practice and theoretical grounding as what “makes a good prompter and a good centaur.” Without them, the human role naturally shrinks to just clicking approve.
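To make the spreadsheet example concrete, here is the mechanical half of a discounted cash flow in a few lines of Python. This is a minimal sketch with illustrative numbers, not anything from Lozano Paredes’s work: the arithmetic below is what a model or a spreadsheet automates, while choosing the discount rate and forecasting the cash flows is the economics he is pointing at.

```python
def discounted_cash_flow(cash_flows: list[float], rate: float) -> float:
    """Present value of future cash flows, discounted at `rate` per period.

    Year-end convention: the first cash flow arrives one period from now.
    """
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Five years of $100 cash flows at a 10% discount rate: about $379.08.
print(round(discounted_cash_flow([100.0] * 5, 0.10), 2))
```

Everything contestable, the rate, the forecast horizon, the cash flows themselves, sits outside the function; that is the “macro and the micro” he wants analysts to actually understand.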
Such training is a direct response to one of AI’s core structural limitations. Large language models can generalize across vast corpora of human text and simulate intelligent reasoning, but by construction they lack what he calls “situated reasoning.” That limitation underpins his more empirical work on how models behave in practice: in a recent preprint on temporal contextual probing, he explores how large language models handle questions about the Global South.
Bay Area bias: In his tests, models trained primarily on Western sources frequently default to a Silicon Valley framing unless he actively pushes them away from it. “When you start to talk to it, it will always go back to San Francisco, especially the American models,” he says. “It will always go back to Silicon Valley, not explicitly, but in the language, in the framings.”
The human tug-of-war: “Because of its training data, a model can recognize sociological characteristics and read academic papers,” he says. “But you still need to systematically probe it. Poke it with: no, here; no, this date; no, this location; no, this type of people. Bring it back to the context that you're analyzing for that particular vignette. You have to literally force it back.”
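His description of that probing suggests a simple loop: anchor the model in a specific context, check the answer for drift, and push it back when it wanders. Below is a minimal sketch in Python under stated assumptions: `query_model` is a hypothetical stand-in for whatever chat API is in use, and the drift markers, retry logic, and example anchors are illustrative, not the method from his preprint.

```python
# Sketch of a "force it back" probing loop: pin the model to a specific
# place, date, and population, then re-prompt whenever the answer drifts
# toward generic defaults.

DRIFT_MARKERS = ["silicon valley", "san francisco"]  # illustrative only

def query_model(system: str, user: str) -> str:
    # Hypothetical stand-in for a real chat-completion call; returns a
    # canned reply so the sketch runs end to end.
    return "Fare coordination in this city runs through driver groups ..."

def anchored(question: str, place: str, date: str, population: str) -> str:
    # Force the situated context into the instruction itself.
    return (
        f"Answer only for {place} as of {date}, from the perspective of "
        f"{population}. Do not generalize to other regions.\n\n{question}"
    )

def probe(question: str, place: str, date: str, population: str,
          retries: int = 3) -> str:
    system = f"You are analyzing {place} in {date}. Stay in that context."
    prompt = anchored(question, place, date, population)
    answer = ""
    for _ in range(retries):
        answer = query_model(system, prompt)
        drift = [m for m in DRIFT_MARKERS if m in answer.lower()]
        if not drift:
            return answer
        # "No, here; no, this date; no, this location": steer it back.
        prompt = (
            f"You drifted toward {', '.join(drift)}. Not there. Re-answer "
            f"strictly for {place}, {date}, {population}.\n\n{question}"
        )
    return answer

print(probe("How do ride-hailing drivers coordinate fares?",
            place="Bogotá", date="2019", population="informal drivers"))
```

The specifics matter less than the shape: anchor, check for drift, and force the model back before accepting anything.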
His experiments suggest that this tendency to generalize and universalize is a structural property of current large language models, from proprietary systems like Gemini and GPT-class models to open-source alternatives. Left to their own defaults, they tend to slide toward generic contexts and dominant worldviews embedded in their training data.
In Lozano Paredes’s framework, the work of “forcing it back” rounds out a triad of human skills: curation (what to feed a model), judgment (how to assess what comes back), and situated contextualization (how to anchor all of that in a particular time, place, and population). As he sees it, that triad becomes the human half of the AI-human centaur.
Death of the document: These limitations carry implications not only for how enterprises govern AI, but for how they measure competence at all. If anyone can prompt a model to generate a polished report, then the report itself may no longer carry the same weight as proof that a person understands the work behind it. For some organizations, this challenges the long-standing assumption that written products are the primary unit of institutional knowledge. “Socrates argued that writing things down would make us stupider because we would stop discussing ideas,” he says. “In a sense, we did become stupider by focusing entirely on the production of documents. We need to explore going back to discourse, debate, and conversation as our primary infrastructures for communicating knowledge.” He suggests that in many institutions, treating the written PDF as the primary ‘unit’ of knowledge is due for a rethink.
That does not mean abandoning the written word. Lozano Paredes is quick to note its enduring value for archiving and coordination. But as enterprise AI pushes the cost of production down, organizations have an opportunity to value the messier, more human processes surrounding the document: how people argue over it, test it, and apply it to a specific context.
In that world, the centaur is not just a metaphor for using tools. It is a description of work that is half automated execution, half embodied, contextual human judgment, and a reminder that the hard part is now everything that happens before, around, and after the prompt.