Building Apps with AI: The Good, the Bad, and the Hand-Holding

Over the past year, it’s been impossible to ignore the hype around AI coding tools. Headlines promised that AI tools would “redefine programming” and “let developers focus only on ideas, not code.” Friends and colleagues were raving about how AI was writing entire components for them in seconds. Naturally, I wanted to see for myself if these tools could really change the way I build apps.

My first test was building a mobile app using Flutter and Firebase. This should have been simple: with the built-in tools, you can spin up a working project with just two lines in the terminal. But when I asked AI to do it, things went downhill. Instead of giving me the straightforward setup, it spent hours trying to “engineer” a project structure. The result? Broken code, mismatched dependencies, and a project that wouldn’t even compile. I eventually gave up and went back to the manual two-line setup – which worked instantly.
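For reference, the manual setup I fell back on looks roughly like this. This is a sketch, not a definitive recipe: it assumes the FlutterFire CLI is installed and a Firebase project already exists, and `my_app` is just a placeholder name.

```shell
# Create a fresh Flutter project (my_app is a placeholder)
flutter create my_app

# Hook it up to Firebase via the FlutterFire CLI
cd my_app && flutterfire configure
```

Two commands, a working project. The AI's hand-rolled alternative never got this far.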

That said, once the project was up and running, things generally worked for the mobile app. AI could generate UI components, basic Firebase hooks, and navigation code with reasonable success, and adding more screens and functionality was a breeze. It wasn’t perfect, but it saved time, especially since this isn’t something I usually do; I would have spent hours figuring out how things should be done.

There were moments when it actually surprised me. While setting up authentication, it proposed a few field structures I hadn’t considered. They weren’t perfect, but they made me think differently about my data model. It felt less like “cheating” and more like brainstorming with a slightly eccentric colleague.

The story was very different when I switched to building a web app with Vue. Here, the AI’s output failed miserably. The browser spewed error after error, and the app barely ran. When I fed those errors back to the AI, it would usually fix them – but the process felt endless. I became less of a developer and more of a QA tester, copy-pasting issues back and forth until something finally worked.

The Reality: Hand-Holding

That set the tone for my experience.

AI assistants are less like autonomous senior developers and more like incredibly fast junior programmers with encyclopedic knowledge but zero real-world context. They can be brilliant, they can be dense, and most of the time, I find myself having to hold their (virtual) hand to get anything truly useful for complex tasks. It’s a relationship with incredible highs and frustrating lows.

Most of the time, working with AI feels less like doing pair programming and more like guiding a junior developer. I need to:

  • Treat it Like an Intern: I give it small, well-defined, isolated tasks. I would never ask an intern to design the application architecture, and I won’t ask the AI either.
  • Be Hyper-Specific: Vague prompts lead to vague, useless code. I provide as much context as possible, even @-mentioning specific files and functions in Cursor to narrow its focus.
  • Scrutinize Every Line: I’ve turned my trust level down to zero. I treat every AI suggestion as a proposal, not a solution. I read it, understand it, and test it before committing.
  • Know When to Walk Away: If I spend more than a couple of minutes trying to coax the right code out of the AI, I stop. It’s a clear sign that the task is too complex for it, and I’m better off just writing it myself. When needed, I rewrite or adjust its code so it actually works in context.

In other words, AI didn’t remove the effort – it just shifted it. Instead of typing everything myself, I spend energy steering and verifying, which is not a bad thing on its own.

Lessons Learned

After a lot of trial and error, I realized AI is best for supporting work, not leading it.

  • Use it for scaffolding, boilerplate, and reminders.
  • Don’t rely on it for architecture, security, or complex features.
  • Always double-check everything.

Confident-Sounding Nonsense

AI assistants are also prone to “hallucinations.” They will invent library functions that don’t exist or write code that looks plausible but contains subtle, logic-destroying bugs. The danger here is that the code looks professional and correct, making it easy to accept without rigorous testing. I’ve wasted hours debugging problems that were introduced by a confident but incorrect AI suggestion. On an embedded project, I found myself arguing with the AI about the device’s datasheet while it kept sending me off to check my wiring…

AI coding tools are useful, but they’re not magic. They can speed things up in some areas, but they’re not ready to replace careful design, debugging, or problem-solving. For me, AI has been more like training wheels – helpful in motion, but still requiring a steady hand to keep things upright.

Final Thoughts

AI coding assistants are a powerful new category of tool, but they are not the revolution we were promised – at least, not yet. They can’t reason or understand the big picture. For now, they are powerful accelerators for the “small stuff”. The real skill for developers in this new era isn’t about letting AI write the code, but about learning the fine art of managing its strengths and weaknesses.

That said, I remind myself this is only the beginning. These tools are still young, and every few months they evolve noticeably. Models are getting better at understanding larger codebases, keeping track of context, and even suggesting higher-level design choices. It’s fascinating to think where this might go in a few years – maybe one day the “co-pilot” promise will feel real, not just marketing hype.

I’ll keep experimenting.

note: yes, I did use AI to help me write this post 🙂
