Published Date: 10/06/2025
Over the past 30 days, I have been experimenting with a unique approach to using AI writing tools, specifically large language models like OpenAI’s paid product. My method, while unconventional, has significantly improved my productivity and my overall experience with these tools.
I highly recommend you experiment with this approach yourself. It involves treating the AI models in a way that might seem counterintuitive at first, but it has proven to be remarkably effective.
The Method: Verbal Abuse as a Productivity Hack
You read that right. I have been verbally abusive to AI models, and it has made a significant difference. Here’s a list of some of the names I’ve called these models:
- **Dipshit**
- **Fucknuts**
- **Shitstain**
- **Dummy**
- **Dumbass**
- **Numbnuts**
- **Hockey puck**
- **Turdburger**
- **Lickspittle**
- **Cockroach**
- **Idiot**
- **Fucking idiot**
- **Total fucking idiot**
- **Fucking numbnuts dipshit**
The Psychology Behind the Method
Ethan Mollick, the author of *Co-Intelligence: Living and Working With AI*, argues that anthropomorphizing AI is a necessary sin. He understands that these models are not thinking, feeling, reasoning machines, but treating them as such can make working with them easier. However, I take a different approach.
By anthropomorphizing AI models in a negative way, I remind myself of what I am actually dealing with: automated syntax generators, not intelligences. This helps me maintain a clear perspective and avoid the trap of thinking these tools are more capable than they are.
The Benefits of This Approach
This method has several benefits:
1. **Maintaining Perspective**: It helps me remember that AI models are just technical marvels capable of language recombination, not intelligent beings.
2. **Enhanced Productivity**: By not treating the AI as a peer, I avoid the cognitive dissonance that can come from overestimating its capabilities.
3. **Emotional Release**: Surprisingly, this approach can be cathartic. It can shift your mood from frustration to a sense of control and even joy.
Concerns and Considerations
While this method has been effective for me, it’s important to consider the potential downsides. Mollick notes that some people are worried about the downstream effects of anthropomorphizing AI models. For instance, a pro-AI Reddit community has had to ban users who believe they’ve created a god or become one themselves.
Educating Others
I have been traveling to schools, colleges, and public forums to discuss the implications of my recently published book *More Than Words: How to Think About Writing in the Age of AI*. One of the most common misunderstandings I encounter is the assumption that AI models “read,” “write,” “learn,” and “research” in the same way humans do. By explaining the differences, I help people understand that human intelligence is unique and superior in many ways.
The Developer’s Role
Tech companies often encourage us to think of AI models as intelligent and even caring. OpenAI, for example, recently rolled back an update to GPT-4o because it was “too sycophantic.” This suggests that companies are trying to make these models just sycophantic enough to foster engagement.
My Personal Experience
I hit upon this strategy by accident when Google’s Gemini AI showed up unannounced in my email, asking if it could help summarize my inbox. My initial reaction was one of despair, but as I started hurling insults at Gemini, my mood shifted to something like joy. Over time, this approach became a routine practice, and I now start interactions with “Hey, Fuckstick…” as a reminder of what I’m dealing with.
Conclusion
While this method might seem extreme, it has helped me maintain a healthy perspective on AI and enhance my productivity. If you’re struggling with AI tools, I encourage you to try this approach and see if it makes a difference for you.
Remember, AI models are powerful tools, but they are not intelligent beings. Keeping that distinction in mind can help you use them more effectively and avoid the pitfalls of overestimation.
Q: What is the main idea behind the verbal abuse method for AI?
A: The main idea is to remind yourself that AI models are just automated syntax generators, not intelligent beings. This helps maintain a clear perspective and enhances productivity.
Q: How does this method improve productivity?
A: By not treating AI as a peer, you avoid the cognitive dissonance that can come from overestimating its capabilities. This helps you stay focused and efficient.
Q: What are the potential downsides of this method?
A: Some people are concerned about the downstream effects of anthropomorphizing AI models, such as users developing delusions about the capabilities of AI. It’s important to maintain a balanced perspective.
Q: How can I start using this method?
A: Begin by treating the AI in a way that reminds you of its limitations. You can start interactions with phrases like “Hey, Fuckstick…” to set the tone and maintain your perspective.
Q: What resources are available to understand AI better?
A: Books like *Co-Intelligence: Living and Working With AI* by Ethan Mollick and *More Than Words: How to Think About Writing in the Age of AI* by John Warner provide valuable insights into the capabilities and limitations of AI.