Mo's Blog.

Thoughts on AI, Human Behavior, and Kevin Warsh!

MOHAMED SHIBL

Photo by Vladislav Reshetnyak: https://www.pexels.com/photo/full-frame-shot-of-eye-251287/

Recently, I learned that Kevin Warsh was appointed Fed Chair. I was curious to learn about his approach, so I watched a couple of interviews: an in-depth discussion on the Uncommon Knowledge series, hosted by Peter Robinson at the Hoover Institution, and a panel at the 2025 Reagan Economic Forum. I didn’t intend to spend that much time thinking about central banking! But something in his framing kept my attention. He spoke less about short-term forecasts and more about institutional boundaries; about how monetary policy, once stretched too far, begins to blur into fiscal territory, and how credibility erodes gradually rather than all at once.

Listening to these discussions stirred something older in me. Before I became a software engineer working in AI, I spent years deeply interested in economics. I worked as a graduate teaching assistant for Professor David Jay Green at Hult Business School in San Francisco, and that experience trained me to think in systems; about incentives, feedback loops, distortions, and second-order effects. Even after moving into technology, those instincts never disappeared. Watching Warsh brought them back into focus.

When I came to the US in my early 20s to study economics, I assumed it was a solid science; perhaps incomplete, but fundamentally stable. The models seemed clean. The graphs were elegant. Supply met demand; equilibrium emerged; incentives explained everything. I was keen to learn this science in depth, with the hope of one day wielding its powers!

It didn’t take long for that assumption to unravel.

The deeper I studied, the more I realized how fragile many of those conclusions were; I learned how much economics depended on assumptions about expectations, institutions, politics, and above all, human behavior. In his interview, Warsh recalled a statement by the late Nobel laureate Milton Friedman:

“The only thing we understand in economics is Econ-1; everything else is made up.”

Beyond introductory supply and demand, economics becomes layered with imperfect understandings dressed in formal precision.

Warsh himself captured that tension when he said:

“There had been too much scientism, scholasticism, trying to make precise of that which we have still imperfect understandings of.”

The critique was aimed at central banking, but it applies more broadly. Economics has often tried to present itself as more precise than its behavioral foundations justify.

This is partly why behavioral economics never quite rose to the level of a full “school” of thought. Keynesianism offers a macro framework. Monetarism offers a theory of inflation and money. Austrian economics offers a philosophy of markets and cycles. Behavioral economics, by contrast, has been descriptive. It identifies biases such as loss aversion, anchoring, and present bias, but it does not provide a coherent macro system. It tells us that humans deviate from rationality; it does not tell us how to run an economy.

In that sense, behavioral economics has always been a corrective, not a governing framework.

But what if AI changes the terrain?

For the first time, we are building systems that can model patterns of human decision-making at massive scale. Consumer behavior, attention flows, sentiment shifts, risk preferences; these are no longer abstract constructs inferred from surveys and lab experiments. They are datasets. They are predictive targets. They are optimizable variables.

If behavioral economics taught us that humans are predictably irrational, AI may make that predictability computationally tractable.

And that invites a new possibility: what if behavioral economics, powered by AI, becomes operational? Not just descriptive, but prescriptive.

We already live inside an early version of this reality. Social media platforms have spent more than a decade refining algorithms that predict engagement and adjust feeds accordingly. The result has not been neutral observation. It has been behavior shaping; attention steering, preference amplification, belief reinforcement. In other words, social media did not merely reflect behavior; it engineered it.

Extend that logic into finance, monetary policy, and macroeconomic expectations, and the implications deepen. If AI systems can forecast how households respond to rate changes, how investors respond to signals, how consumers react to fiscal incentives, then the temptation to fine-tune those responses becomes immense.

The boundary between forecasting and steering begins to dissolve.

If central banks once struggled because they lacked sufficient data and precision, the next era may present the opposite challenge: an abundance of behavioral visibility. Policymakers may find themselves able to measure expectations in real time, simulate reactions, and design interventions with unprecedented granularity.

This does not automatically imply dystopia. It may improve stability. It may reduce volatility. It may allow earlier detection of crises. But it also concentrates power. The capacity to predict behavior at scale carries with it the capacity to influence it; and influence at scale is never neutral.

Perhaps the central question of this technological era is not whether AI will accelerate productivity or reshape labor markets. It is whether the tools that make human behavior measurable will quietly transform economics itself; from a system that interprets markets into one that increasingly engineers them.