AI as a new operating model
Sometime this year, without a press release or a countdown clock, it became obvious to me that something had shifted. Not in the usual places. Not in vendor announcements or glossy demos. Those have been loud for a while now. The real signal came from quieter moments. From conversations that drifted off the agenda. From people lowering their voices slightly and saying things like, “This is starting to change how we decide things,” or “I’m not sure our current rules still apply.”
That’s when I stopped thinking about AI as a tool. Tools fit into existing habits. You install them, train people, adjust a process or two, and move on. What I’m seeing doesn’t fit that pattern. AI is not sliding neatly into the organization. It’s nudging it. Sometimes gently. Sometimes not. The first place where this shows up is decision-making.
AI doesn’t just automate tasks. It proposes. It recommends. It prioritizes. It subtly shifts where authority sits. Decisions that used to be the exclusive domain of managers or committees are now influenced by models that work on probabilities, patterns, and correlations. That forces uncomfortable but necessary questions. Who is accountable when a recommendation comes from a system? Who validates it? How do you explain it when the reasoning is not linear? These are no longer just IT questions. They are governance questions.
Skills are another area where the ground is moving. Everyone talks about learning to “use AI,” but that’s not what seems scarce to me right now. What’s scarce is judgment. Context. The ability to frame the right problem, to know when a suggestion makes sense and when it doesn’t. In practice, the people who add the most value are not the ones with the fanciest prompts, but the ones who understand the business well enough to challenge the output. That’s a cultural shift, more than a technical one.
Governance models are starting to feel the strain as well. Most of our existing controls assume predictability. You approve something, it behaves as designed, and deviations are treated as exceptions. AI doesn’t behave that way. Instead, you manage ranges, confidence levels, and acceptable risk. Control becomes continuous rather than binary. Approval turns into supervision. Prevention gives way to calibration. Calling that a simple tooling change feels increasingly dishonest.
What also strikes me is how porous the boundaries have become. Between IT and business. Between central teams and local initiatives. Between sanctioned use and experimentation. People are not waiting. They are trying things. Some experiments are smart. Some are messy. All of them are faster than most formal governance cycles. At that point, organizations face a choice. Tighten the screws and try to force AI back into familiar shapes. Or accept that the operating model itself needs adjustment.
I’ve seen this movie before, with cloud, with mobility, with data. Pretending a structural change is just another rollout never ends well. It creates shadow practices, frustration, and eventually mistrust. Acknowledging that something deeper is happening is uncomfortable, but it’s also the only honest starting point.
I think the big one has started.
Not because the technology suddenly got better. AI was already impressive. But because it crossed the line where it stopped being optional and started reshaping how work, decisions, and responsibility actually flow inside organizations. The companies that will navigate this well won’t be the ones chasing every new feature. They’ll be the ones willing to rethink governance, skills, and decision-making with a bit of realism, a bit of humility, and a sense of proportion.
The others will keep asking which tool to deploy next, and wonder why it feels like something is slipping through their fingers.