Building in the AI Era: What Changes, What Doesn't
I've been building software professionally since the late 90s. For much of the last decade, that meant running my consulting company, Crunchy Bananas: shipping products, helping teams scale, and getting called in when things were on fire.
In late 2023, after nine years of consulting, I hit pause and took a Director of Engineering role. I'm still in that seat today. In those two years, the way we build software has shifted hard enough that it feels weird not to write about it.
The fundamentals haven't changed. Good code is still good code, and good architecture still matters. But how we build? That's a different story.
GitHub’s research on the new identity of developers in the AI era captures this pretty well: we’re going through distinct phases of AI adoption, and different people are at very different stages.
This blog is my running log of what it actually looks like to build in that new reality.
The Crunchy Bananas Story
Crunchy Bananas was my consulting company for nine years. We:
- Built products from scratch
- Helped teams scale and unstick themselves
- Shipped code that mattered for real businesses
It was good work with good people. But it was also the usual consulting loop:
- Find clients
- Build their software
- Ship
- Repeat, forever
By late 2023, I wanted a different kind of challenge and a more stable platform to experiment from. So I made a deliberate choice:
Pause consulting. Take a Director of Engineering role. Work with one product, one codebase, one set of constraints, and go deep.
I’m glad I did. But the timing ended up being very interesting.
What Changed Between 2023 and 2025
When I stepped away from consulting in late 2023, the landscape looked roughly like this:
- GitHub Copilot was useful but basic.
- ChatGPT was impressive, but for most devs it was still “that cool website you paste code into.”
- “AI coding tools” basically meant autocomplete on steroids.
By 2025, that picture looks completely different:
- Copilot evolved dramatically – From “complete this line” to “refactor this whole flow across multiple files.”
- Claude, GPT-4, and friends – Capable of real architectural reasoning, not just pasting boilerplate.
- MCP (Model Context Protocol) – A way for AI tools to plug into your actual tools and data instead of guessing from one file.
- Cursor, Windsurf, etc. – Editors built around AI-first workflows instead of AI being a sidebar bolt-on.
- Agentic workflows – AI that can chain steps, call tools, and reason across systems, not just spit out snippets.
What’s wild is the range of adoption:
- Some devs are all-in and won’t touch a file without an AI assistant.
- Some are cautiously experimenting.
- Some are still in “this is just fancy autocomplete” mode.
There's a lot of uncertainty, and that's understandable. The ground is still moving.
The “Vibe Coding” Criticism
You’ve probably seen these takes:
“AI just does vibe coding.”
“It can’t reason about architecture.”
“You still need to know what you’re doing.”
“It’s just autocomplete.”
The last one is technically true and also completely misses the point.
Yes, you still have to know what you’re doing.
Yes, AI makes mistakes.
Yes, you still need to review everything.
But none of that is the interesting part.
What People Are Missing
The thing that actually matters:
The feedback loop collapsed.
Before AI tools were any good, the loop looked like this:
- Think about the problem
- Write code
- Test it
- Debug it
- Refactor it
- Repeat
Now, for a huge chunk of everyday work, it looks more like:
- Think about the problem
- Describe what you want
- Review generated code
- Iterate
The thinking hasn’t gone away.
The architecture decisions haven’t gone away.
Code review hasn’t gone away.
What’s largely gone is the mechanical translation of intent into implementation.
And once that translation step speeds up by 3–5x, a lot of other things become possible.
MCP: The Real Game-Changer
Model Context Protocol (MCP) is what makes this practical at scale.
In plain English: MCP is a standard way for AI tools to see the same things you see (tickets, designs, code, docs) instead of guessing based on one file and a vague prompt.
With MCP wired up, I can have Copilot:
- Look at a Jira card – Read requirements and acceptance criteria
- Review Figma designs – See what the UI is supposed to look like
- Examine the codebase – Understand our patterns, conventions, and existing components
- Apply our guidelines – Use tuned Copilot instructions built around our workflow
This isn’t “vibe coding.” This is context-aware development at scale.
The AI isn’t hallucinating intent from a single function. It’s looking at:
- The ticket
- The design
- The surrounding code
- The team’s conventions
…just like a human would. It’s just much faster at the mechanical part.
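To make the idea concrete, here's a toy sketch of what "pulling context from multiple sources" looks like. The names (`ContextSource`, the fake Jira and Figma sources) are invented for illustration; this is the shape of the idea, not the real MCP SDK or its API.

```typescript
// Toy sketch: each "source" exposes context the assistant can read,
// the way MCP servers expose tickets, designs, and code.
interface ContextSource {
  name: string;
  fetch(id: string): string;
}

// Stand-ins for real MCP servers (invented data, for illustration only).
const jira: ContextSource = {
  name: "jira",
  fetch: (id) => `Ticket ${id}: add a status filter to the data table`,
};

const figma: ContextSource = {
  name: "figma",
  fetch: (id) => `Design ${id}: dropdown filter above the table header`,
};

// The assistant's context is assembled from every connected source,
// instead of guessed from the one file you happen to have open.
function buildContext(sources: ContextSource[], id: string): string {
  return sources.map((s) => `[${s.name}] ${s.fetch(id)}`).join("\n");
}

const context = buildContext([jira, figma], "PROJ-123");
```

The point of the sketch: the model never has to infer what the ticket or the design says, because both are in the prompt by construction.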
What This Looks Like in Practice
Real example from a recent week:
Task: Add a new filtering option to a data table component.
Traditional approach (pre-AI / low-AI)
- Read the Jira ticket – ~5 minutes
- Find the design in Figma – ~3 minutes
- Locate the component in the codebase – ~5 minutes
- Remember (or re-learn) how our filter system works – ~5 minutes
- Write the new filter behavior – ~30 minutes
- Test it – ~10 minutes
- Fix the bugs I just introduced – ~15 minutes
Rough total: ~70 minutes
MCP-enabled approach
- Point Copilot at the Jira ticket
- Point Copilot at the Figma design
- Let Copilot inspect the existing table + filter components
- Ask it to implement the new filter following our patterns
- Review, tweak, and ship
Rough total: ~15 minutes
So where did those 55 minutes go?
The 15 minutes I still spend are on the parts that matter:
- Is the logic actually correct?
- Does it handle edge cases?
- Is it consistent with our patterns?
- Does it solve the real user problem?
All the mechanical stuff (imports, boilerplate, matching our style, wiring up the right props) gets handled by the tooling.
That’s the collapsed feedback loop in action.
Why This Matters for You
If you’re still writing every line by hand, you’re not being “pure” or proving you’re more skilled. You’re just leaving a lot of leverage on the table.
The market won’t wait. Your competitors won’t wait. The pace of change definitely isn’t slowing down.
The point isn’t “AI replaces developers.” It’s:
AI multiplies developers.
- A senior dev with the right AI setup is easily 3–5x more productive.
- A junior dev with AI and good guidance can hit mid-level output in months instead of years.
- A team that leans into this ships faster, iterates faster, and learns faster.
The folks dismissing all of this as “vibe coding” are making the same kind of mistake as people who dismissed:
- IDEs in the 1990s – "Real programmers use vi."
- Stack Overflow in 2008 – "Real programmers don't copy-paste."
- Frameworks in the 2000s – "Real programmers build everything from scratch."
- TypeScript in 2012 – "Real programmers don't need types."
Those takes did not age well.
What I’m Sharing Here
This blog is where I’m going to share what this looks like in real projects, in real time:
- Real project work – Actual products, actual constraints, not toy examples.
- AI transparency – When AI does the heavy lifting, I’ll say so. When it fails miserably, I’ll show that too.
- Practical guides – Less theory, less hype. More “here’s the exact setup and why it worked.”
- Modern stacks – Ember, TypeScript, Tailwind, Vite, MCP-enabled workflows, and whatever else earns its spot.
- Honest takes – What’s good, what’s broken, and what you can mostly ignore for now.
I’ll call out the gotchas, share the mistakes, show the time savings, and learn in public.
Why Ember?
You’ll notice we’re using Ember for this blog and for a bunch of the projects I’ll talk about. That’s deliberate.
Short version: Ember’s conventions and stability are perfect for AI-assisted development.
When a framework has strong opinions:
- AI can learn those conventions.
- Patterns are consistent across the codebase.
- There are fewer “well, it depends” decisions per file.
That’s exactly what AI tools are good at: recognizing and extending patterns.
React’s “do whatever you want” flexibility is powerful, but from the AI’s perspective it’s chaos:
- Every project is structured differently.
- Every team invents its own mini-framework.
- “Best practices” are often local to that one repo.
AI has to guess.
Ember’s “convention over configuration” is the opposite:
- Directory structure is predictable.
- Patterns are reusable and repeatable.
- Core APIs don’t churn every six months.
AI learns the conventions once and can apply them everywhere in the app.
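Here's what that predictability looks like in practice. This is a generic sketch of a classic Ember app layout, not this project's actual tree:

```
app/
  components/
    data-table.js     # component class
    data-table.hbs    # its template, paired by file name
  routes/
    dashboard.js      # handles the /dashboard route, by convention
  services/
    session.js        # injectable singleton, looked up by name
  templates/
    dashboard.hbs
```

An AI tool that has seen one Ember app has effectively seen the skeleton of all of them, which is exactly the property the bullets above describe.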
I’ll dive deep into this in the next post, but that’s the gist of why Ember is such a good fit here.
Why Share This Now?
Two years in a Director role (since late 2023) has given me a different vantage point.
I see:
- Where teams get real leverage from AI
- Where things quietly break
- Where process, tooling, and culture are out of sync
- How big the gap is between “traditional” and “AI-augmented” development
That gap is already big, and it’s widening.
The teams that figure this out will ship 3x faster and feel calmer while doing it. The teams that don’t will spend a lot of time wondering why they’re always behind, even though everyone on the team is working hard.
I’m learning this stuff in real time, at scale, with real constraints and real stakes.
Might as well share it.
What’s Next
Over the coming posts, I’ll get into things like:
- Why Ember is such a good fit for AI-assisted workflows
- How we’re setting up modern tooling (Vite, TypeScript, Tailwind)
- How we’re using MCP in practice (not just in diagrams)
- Real project builds with “before vs after AI” time comparisons
- Team workflows that actually unlock AI instead of fighting it
- What’s working, what’s not, and what’s still just hype
The goal isn’t to convince you that AI is perfect. It’s not.
The goal is to show you how to use it effectively and honestly so you can decide what makes sense for you and your team.
Traditional development isn’t dead. But it’s not optimal anymore.
And if you’re building a business, optimal matters.
AI Transparency: This post was written with assistance from Claude Sonnet 4.5 and GPT-4.1.