The Business Engineer

The AI Orchestrator's Leverage Points

Gennaro Cuofano
Apr 27, 2026

In the system prompting guide I shared, the key point is this: given how powerful current LLMs and agentic systems are, their effectiveness hinges on how well you encode context, nuance, and directional intent. A well-constructed prompt can even implicitly shape behaviors associated with sampling parameters like temperature. In that sense, prompting is no longer just natural-language input; it is an engineered architecture.

The System Prompting Guide for The Business Engineer (Gennaro Cuofano, Apr 19)

In 1999, Donella Meadows published a short paper that would become one of the most cited texts in systems thinking: Leverage Points: Places to Intervene in a System.

The central argument was deceptively simple. Every system has places where a small change produces large effects. The problem — the structural problem that makes this insight difficult to use — is that these high-leverage points are almost always the opposite of where practitioners look.

People focus on numbers: budgets, headcounts, parameters. Numbers are visible and adjustable. Adjusting them feels like an intervention. But adjusting numbers almost never changes a system’s behavior in any fundamental way.

The real leverage lives in the rules, goals, and paradigms that determine why the system produces what it produces. By the time you are adjusting parameters, you are already at the least-leveraged end of the hierarchy.

Agentic architectures are systems in the Meadows sense. They have stocks: the probability mass in model priors, accumulated context in memory, knowledge embedded in retrieval systems.

They have flows: API calls, tool invocations, prompts moving through pipeline stages. They have feedback loops: human review cycles, automated evaluation, and scoring systems that determine which outputs get reinforced.

They have goals: objective functions, system prompts that declare optimization targets, and reward signals that shape behavior over time. And they have a paradigm: the shared mental model of what agents fundamentally are.

The orchestrator’s job is exactly what Meadows described. Find where in the system leverage actually lives. Intervene there — not at the most visible point.
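The mapping above can be made concrete with a small sketch. This is purely illustrative: the ranks follow Meadows' ordering (paradigms and goals above rules and flows, parameters at the bottom), but the category labels and examples are hypothetical, not part of any real framework or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LeveragePoint:
    rank: int     # lower rank = deeper, higher-leverage intervention
    kind: str     # Meadows-style category (labels are illustrative)
    example: str  # what this looks like in an agentic system

# Hypothetical hierarchy for an agentic architecture, ordered by leverage.
HIERARCHY = [
    LeveragePoint(1, "paradigm",   "shared mental model of what agents fundamentally are"),
    LeveragePoint(2, "goals",      "objective functions, system-prompt optimization targets"),
    LeveragePoint(3, "rules",      "review cycles and scoring that decide what gets reinforced"),
    LeveragePoint(4, "flows",      "API calls, tool invocations, prompts moving through stages"),
    LeveragePoint(5, "stocks",     "model priors, accumulated context, retrieval corpora"),
    LeveragePoint(6, "parameters", "temperature, budgets, headcounts, token limits"),
]

def leverage_rank(kind: str) -> int:
    """Return the rank of a category; raise KeyError if it is unknown."""
    for point in HIERARCHY:
        if point.kind == kind:
            return point.rank
    raise KeyError(kind)

# Meadows' structural point: parameter tweaks sit at the bottom.
assert leverage_rank("parameters") > leverage_rank("goals")
```

The assertion at the end is the essay's argument in miniature: adjusting parameters is the least-leveraged intervention, even though it is the most visible one.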

Most orchestrators are not doing this.


