
Garbage In, Magic Out

There's a thing nobody warns you about when you start leaning on AI tools seriously: they don't make you smarter. They make you more of whatever you already are.

I've been on both sides of this. Let me explain.

When You Know the Terrain

At work, I spend most of my time inside a large, established codebase. Payment infrastructure, banking integrations, the kind of system where every file has a reason for being where it is and every pattern has been argued over. Brownfield work. I know the conventions, the folder structure, the way data flows through the services. I've lived in it long enough that the architecture isn't something I have to think about. It's just context I carry.

This is where AI becomes something close to magic. I can take a Jira ticket, break it down, and tell the model exactly where the code should go, what patterns to follow, what the existing tests look like. I'm not asking it to think for me. I'm steering it. And because I already have the mental map, I can read what it gives back and know almost immediately whether it's right. It almost always is. Not because the AI is brilliant, but because I gave it the right constraints. My knowledge is the guardrail.

It goes beyond just writing code too. I recently designed a tech spec and architected the integration flow for a major piece of our infrastructure. That spec became the foundation the team is now building on. When you've done that kind of thinking yourself, when you've mapped the systems, made the tradeoff decisions, and defined the boundaries, AI slots in naturally. It can help you flesh out sections, pressure-test edge cases, draft implementation details. But the architecture was yours. The decisions were yours. AI just helped you move through them faster.

There are also times I'm digging into docs for something I'm not deeply familiar with. A new API surface, an unfamiliar configuration. And AI is genuinely great there too, as a companion. We go through the documentation together, I ask questions, it helps me build understanding in real time. The key is that I'm still learning. I'm still the one building the mental model. AI is just making the climb faster.

When You Don't

Now flip it. I've been in situations where I was working in a domain I didn't know well. Maybe an unfamiliar part of the infrastructure, some tooling I hadn't touched before. And I tried to use AI the same way. Same confidence. Same workflow. Completely different result.

The problem wasn't that AI gave me wrong answers. The problem was I couldn't tell they were wrong. I'd look at the output, think "yeah, that looks reasonable," and move forward. Except "looks reasonable" and "is correct" are very different things when you don't have the instinct for what correct looks like in that context.

I'd end up going back and forth, reprompting, adjusting, and only later realize I was just going in circles. Not because the AI was broken, but because I didn't have enough knowledge to ask the right questions or evaluate the answers. I was steering blind and the AI was happily driving wherever I pointed.

Every time this has happened, the fix was never a better prompt. It was getting my knowledge up. Reading the actual docs. Talking to someone who understood the system. Building the foundation that AI needs to be useful.

The Mirror

I think about it like this: AI is a mirror for your understanding. If you know your stuff, it reflects that back as speed, as leverage, as reach. You move faster than you could alone. But if your understanding is shallow, it reflects that too. As false confidence, as polished output that hides the gaps.

The dangerous part is that both reflections look the same on the surface. Clean code is clean code. Well-structured prose is well-structured prose. You can't tell from the output alone whether the person behind it knew what they were doing or was just trusting the machine. But the cracks show up eventually. In production, in a code review, in the moment someone asks you to explain what you shipped.

What I've Learned

I've started being more deliberate about this. Before I bring AI into a task, I try to honestly assess: do I actually understand this domain, or am I hoping AI will cover for me? If it's the latter, I slow down. I read first. I sketch things out by hand. I build enough understanding that when I do use AI, I can tell good output from garbage.

It's not always comfortable. There's a real temptation to just paste the problem and trust the answer, especially when you're under pressure. But I've learned that the time you "save" by skipping the understanding phase, you pay back with interest. Debugging something you never really understood, or worse, shipping something that works until it doesn't.

AI rewards what you bring to the table. The more you know going in, the more you get out. The less you know, the more you're gambling.

So the question I keep coming back to isn't "how do I use AI better?" It's "what am I doing to make sure I'm worth amplifying?"

Because it will amplify whatever you give it. That's either the best tool you've ever had, or a very efficient way to scale your blind spots.
