Mar 15, 2026

AI Discipline

A lot has been written about the huge leap forward that AI models took in late 2025. Before this jump, I used AI sporadically in my day job, but certainly not daily. I used it for very small, distinct tasks, such as “Given the inputs of x and y, create a function that performs Z and returns a boolean value.” I would need to give it context for what language I wanted, the coding standards I wanted followed, and never to use the JavaScript framework React (if you know, you know). However, during my winter break between school semesters, I had the time to play around with the models more.

The High

What I have found in these past few months is that the capabilities of these models have considerably improved. Instead of the narrow, focused tasks I used AI for last year, I am now building and using AI in ways that feel magical. I’m still very much in the phase of OMG, look at everything I can do now! It’s almost like getting high: that intoxicating feeling of seeing things in a new way, of having your brain temporarily altered so that it unlocks a new understanding of the world. It’s a total shift in thinking. Now, instead of getting mired in the low-level decisions of infrastructure and tooling—the things necessary in order to build software—I’m building robust MVPs (minimum viable products) in a matter of hours. The software I have longed to build but never had the time for (working toward a college degree while employed full-time places time constraints on my days) is now seeing the light of day.

This is remarkable. I have longed for local-first software that doesn’t require external services to process or store my data. Think about a simple to-do application and the kind of data the provider of that application keeps on you, just by the nature of the tasks being stored in their database (tasks are very telling, given how personal they are). I value my privacy. Or, more accurately, I want control over the data that is collected about me and the option to choose who I share it with. With AI, I have been able to build my own to-do app (called Settimana) that is local-only; data stays on my device.

In addition to coding, AI is also useful for exploring new topics and quickly troubleshooting real-world issues. When I was looking into buying a trickle charger for my motorcycle, I fed a couple of options into Claude and received a solid recommendation based on my criteria. Of course, verifying this information is still part of the thinking process with AI, as is verifying code integrity, though I worry that I am starting to care less about code quality than I once did.

The Low

AI has downsides, too. I’ve read stories and heard testimony from parents who have lost children to suicide at the urging of their AI chatbots. There seems to be an AI delusion syndrome™ that comes from interacting with chatbots too much. There are Reddit threads about women having AI husbands and men using it to create porn. The downsides are real. I am well aware of the problems with using AI, especially when AI is used to replace human relationships. We are already experiencing a loneliness epidemic, and AI is poised to exacerbate this to a degree that honestly scares me. Social media has already altered an entire generation; if that trend continues with AI, it will be much worse because AI can be tailored to the individual. The sycophantic nature of these AI chatbots is a real problem. AI chatbots validate and confirm a person’s thinking, and when our ideas are not challenged—by friends, family, research, science—delusion is a potential outcome.

Critical thinking is another area that AI use affects. Critical thinking takes time; reasoning about complex topics means we need a broad understanding of the world, the time to think about claims and supporting evidence, access to research or prior knowledge, and the ability to hold conflicting concepts in our brains. If the first reaction to a question or thought is to reach for an AI prompt, the thinking muscle atrophies.

I already see this happening in me with code. In the past, when there was a new feature request or bug to fix, I would inspect code, come up with a rudimentary plan, and then iterate toward the best solution. Now, I just pop my prompt into Claude Code and I have a solution in the time it takes to make a cup of coffee. No thinking required. Granted, the code produced works about half the time, depending on scope and complexity, but that initial process of loading a task into my brain so that I understand the entirety of the problem space isn’t something I do now. And that’s where the problems start to show. I have no concept or understanding of the totality of the software project. So, when a bug does crop up, I am either reliant on AI for a quick, imperfect fix or I have to spend the time loading the project into my brain, thereby negating any speed wins from AI. It’s a problem that only gets worse as time passes from the project’s conception.

The Discipline

How to balance the excitement and productivity gains of using AI with the consequences of not exercising my brain? Using AI is an Odyssean siren song, pulling me to build all the things I ever wanted, to be smarter than I really am, and to make me seem cooler than I have any right to be. Or, at least, that’s what it feels like. This, I suspect, is the beginning of that AI delusion syndrome.

The conflict here is how to use AI to level up without offshoring my critical thinking skills. Humans grow when they are pushed and challenged; hardship isn’t something to be avoided but rather embraced, as it creates resiliency and new ways of interacting with the world. When we overcome something, whether that’s a lack of knowledge or a physical limitation, we prove to ourselves that we are capable of more. In this pursuit, it is absolutely integral to have mentors, coaches, or other trusted people to show you what is possible and to point out when your approach may not be helpful. Failing is also part of this. Try something, fail, adjust, repeat. It’s the long process of knowledge acquisition that creates lasting change in a person. Left to its own devices, though, AI reverts to declarative answers that can be wildly incorrect.

Over the past six months, I’ve been iterating on an instructional document that I use for all of my chatbot conversations. I began the document last semester for my final project in the Enlightenment: Horizons of Human Potential and Flourishing course. Taking cues from Transcend: The New Science of Self-Actualization by Scott Barry Kaufman, Ph.D., Self-Compassion: The Proven Power of Being Kind to Yourself by Kristin Neff, Ph.D., Pema Chödrön’s Comfortable with Uncertainty, Dalton from the 1989 film Road House (of course I’m invoking the philosopher tough guy), and New England sensibilities (the ethos of New Englanders is one that is already rooted and grounded in me, but I wanted to make it explicit), this prompt ensures that each AI chat session is more akin to a coach/mentor relationship than a regurgitation bot.

Feel free to take a look and use it.