I saw a SharePoint MVP's post recently. Genuine excitement. Markdown support had landed in SharePoint. Not a joke — real, earned enthusiasm from someone who knows their domain inside out.
And I get it. In the SharePoint world, that's real progress. It matters for real users solving real problems.
What stopped me wasn't the post itself. It was the contrast with what I used to get excited about and what I'm working on now.
Eighteen months ago, I'd have written that exact post. But I've spent the last year backing into a different way of working. Over a long weekend, I built the infrastructure for agents to work autonomously — agent loops, error recovery, quality gates. By Sunday night, those agents had scaffolded 111 SharePoint web parts and 5 backend services. Design, build, test. All local. No human hands on the code.
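That infrastructure is less exotic than it sounds. Here's a minimal sketch of the shape of such a loop; every name in it (`Result`, `runTask`, `qualityGate`) is illustrative, not the actual implementation: run a task, recover from errors, and only accept output that clears a quality gate.

```typescript
// Minimal sketch of an agent loop with error recovery and a quality gate.
// All names here are illustrative, not the real implementation.
type Result = { ok: boolean; output: string };

function agentLoop(
  runTask: () => Result,                    // one attempt: model call, build step, etc.
  qualityGate: (output: string) => boolean, // e.g. "lint passes and tests are green"
  maxAttempts = 3,
): string {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const result = runTask();
      // Only accept output that clears the gate; otherwise try again.
      if (result.ok && qualityGate(result.output)) return result.output;
    } catch {
      // Error recovery: a crash costs one attempt, not the whole run.
    }
  }
  throw new Error(`quality gate not passed after ${maxAttempts} attempts`);
}
```

In the real thing the task is asynchronous and the gate runs linters and test suites, but the shape is the same: the loop, not the prompt, is what makes the output trustworthy.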
Three days of tooling produced months of human output. But the output wasn't the impressive part; what I learned along the way was.
Over those three days, something broke every few hours. Not metaphorically; literally. macOS permissions. Broken model configs (an empty model name, and nothing worked for two hours). SCSS written for Gulp, not Heft. A Yeoman generator silently ignoring CLI flags. C++ native modules refusing to compile on Node 22. Then the agents started looping, repeating the same three broken commands until I hardened the loop detection.
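The loop-detection hardening can be sketched simply. This is a guess at the shape, not the actual code: remember the last few commands and trip when the tail of the history is one short cycle repeated.

```typescript
// Hypothetical sketch of loop detection: flag an agent that keeps
// replaying the same short cycle of commands. Names are illustrative.
class LoopDetector {
  private history: string[] = [];

  constructor(
    private windowSize = 9, // recent commands to remember
    private repeats = 3,    // identical cycles tolerated before tripping
  ) {}

  // Record a command; returns true if the agent looks stuck.
  record(cmd: string): boolean {
    this.history.push(cmd);
    if (this.history.length > this.windowSize) this.history.shift();
    // Look for a cycle of length 1..3 repeated `repeats` times at the tail.
    for (let len = 1; len <= 3; len++) {
      const need = len * this.repeats;
      if (this.history.length < need) continue;
      const tail = this.history.slice(-need);
      const cycle = tail.slice(0, len);
      if (tail.every((c, i) => c === cycle[i % len])) return true;
    }
    return false;
  }
}
```

When `record` returns true, the harness can kill the run, inject a hint, or escalate to a human instead of burning tokens on the same three broken commands.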
None of this is in a tutorial. You can't watch a video for it. You have to live through it — hands on, late at night, no shortcut.
This is the unglamorous truth about building agent infrastructure: you're not engineering features. You're engineering resilience. Before agents can build web parts autonomously, they have to survive the environment. There is no "prompt engineering" your way out of this.
These two things — celebrating markdown support and watching agents build entire applications autonomously — are happening in the same industry, on the same platform, to people with the same job title.
The gap isn't between smart people and slow people. It's between two entirely different models of what software development is becoming. In one model, we're incrementally improving the tools we already know. In the other, the tools are learning to use themselves.
I almost missed it. I was a SharePoint developer — not a machine learning engineer, not an AI researcher. What changed was a simple question: "What if I stopped prompting AI and started architecting workflows for it?"
The gap isn't closing. It's widening. The tools are getting better faster than the mental models are updating.
Three things I think are true:
The question I've been sitting with: what room am I in right now, feeling perfectly current, that already looks like markdown support from the outside?
This post originally appeared on dev.to. The full methodology is documented at workswithagents.com/learn.