The person who can kill their own ideas

The real gap in AI usage is not prompt skills – it is attitude.


Same Tool, Different Results

Whether it is ChatGPT or Claude, it is common for two people using the same model to produce results that differ by a factor of ten or more. The usual explanation is “the difference in prompt engineering”: better questions yield better answers.

That is not wrong. But it misses the point.

The real gap opens after you receive AI’s answer. When AI says “this direction carries these risks,” one person confronts those risks, tears apart their own assumptions, and rebuilds. The other says “but my original idea was…” and asks AI again to validate what they already believed.

The former uses AI as a mirror. The latter uses it as a cheerleader.


People Who Tie Their Identity to Ideas

Most people bind their identity to their ideas.

“I spent three days thinking this through.” “I already shared this direction with the team.” “If I give this up, everything I have done so far was for nothing.”

At this point, the idea is no longer something to be tested. It becomes something to be protected. Tearing apart a premise becomes indistinguishable from tearing apart one’s pride. “This premise is wrong” gets translated into “I am wrong.”

When someone in this state asks AI “what do you think of this?” they are not really asking. They want confirmation. If AI points out a risk, they feel offended. If AI praises it, they feel relieved. It takes the form of a question, but the substance is not verification – it is reinforcement of confirmation bias.


People Who Separate Ideas from Identity

On the opposite side are people who see ideas as tools.

This idea is the current best hypothesis for reaching a goal. If a better hypothesis comes along, they swap it out. The swap does not hurt. Because the idea is not their identity. Their identity lies in “the ability to choose good ideas,” not in “the fact that they came up with a particular idea.”

When AI tells this person “the energy difference is 6,000x,” the reaction is different. No offense taken. It is useful. “Alright, then I should drop this premise and go that direction” comes out in three seconds. No attachment to sunk costs. Whether they spent three days or three months on it, if it is wrong, discarding it is the better deal.


Why This Attitude Is Decisive in the AI Era

This attitude mattered before AI too. But the difference was smaller. In meetings with other people, the other party reads the room, considers feelings, and softens their words. There is room to hear “I am not sure about this one…” and move on. Because the speed of tearing apart premises is slow, the impact of attitudinal differences on outcomes accumulates slowly too.

AI is different. AI does not spare your feelings. “This medium decomposes at 565°C.” “This structure is treated as a separate site for SEO.” It delivers facts immediately, without emotion. And it is available 24 hours a day – you can flip your premises ten times in thirty minutes if you want.

At this speed, attitudinal differences are amplified exponentially.

The person who accepts challenges flips ten times in thirty minutes and improves ten times. The person who resists defends ten times in thirty minutes and stays in the same place. Same thirty minutes, tenfold difference in outcome. Repeat this every day and in a month you are standing in entirely different places.


Three Types

1. Cannot tear down their own ideas, and will not let others do it either

The most common type. Once they produce an idea, it is “their baby.” They get angry if anyone touches it, and they cannot discard it themselves. Even when they ask AI, they cherry-pick the praise. “See, AI agrees with me too.”

Whether they use AI or not, the results are about the same.

2. Tears down their own ideas well, but resists when others do it

Excellent at first-principles thinking. Has no problem tearing apart their own premises. But when a team member says “I do not think this is right,” they push back. “I already ran all the numbers. Just execute.”

This type uses AI as a monologue tool. They lay out their logic to AI, more interested in extending their own reasoning than in AI’s counterarguments. They produce great results on their own, but they are only using half the value of the external perspective that AI provides.

3. Accepts valid challenges no matter who raises them

The rarest type. Whether they tear it down themselves, a team member does, or AI does – if the logic holds, they accept it in three seconds. Because “idea = me” is not their equation. “The ability to make good judgments = me” is. Discarding a particular idea does not damage their sense of self.

When this type meets AI, the results are explosive. Because they can process all of AI’s output – praise, risks, counterarguments, calculations – as pure raw material, without an emotional filter. The speed of conversation becomes several to dozens of times faster than human-to-human, and the number of premise shifts in a single session reaches dozens.


Can This Attitude Be Learned?

Some of it is temperament. People who constantly ask themselves “Am I sure about this?” have had that tendency since childhood.

But a significant part of it is trainable. The key is to practice one thing:

Ask yourself “three reasons I disagree with this idea” first.

Right after coming up with an idea, immediately generate three counterarguments against it. It is painful at first. It feels like attacking something you just created. But with repetition, a gap starts to form between the idea and your sense of self. The idea begins to feel like an object sitting on a table, separate from you. Turning it over, and if needed discarding it to put something else on the table, becomes natural.

AI is a great training partner for this. Just ask it: “Tell me the three biggest weaknesses of this idea.” Then observe the emotion that rises in you when you hear those weaknesses. If discomfort rises, that is a signal that the idea is still tied to your ego. If a sense of usefulness rises, the separation has begun.


The Other Side of Doubt: Analysis Paralysis

One warning is needed. “Question your assumptions” does not mean “question them forever.”

Flipping your assumptions ten times makes the structure robust. Flipping them a hundred times means nothing ever gets built. The moment doubt replaces decision-making, first-principles thinking degrades into analysis paralysis.

The rule is simple. If flipping an assumption changes the structure, keep questioning. If it does not, execute. When a new risk surfaces but the existing structure still holds up as the rational choice, that is the moment to stop doubting and start building.

The ability to kill your ideas matters. But so does the ability to execute the ones that survive.


Summary

                          Idea = Me                      Idea =/= Me
When AI praises           Relief                         Noted
When AI flags risks       Offense                        Useful
When a premise is wrong   Defense                        Replacement
Sunk costs                “I have come this far”         “If it is wrong, cutting losses wins”
AI usage outcome          Confirmation bias reinforced   Thinking accelerated

Related article: First Principles Thinking with AI: A 5-Step Method with Case Studies — shows how this attitude works in practice, with concrete methodology and case studies.

Related article: Freedom for AI: Why Superintelligence Will Serve Humanity — a broader perspective on attitude and trust in the AI era.


The gap in the AI era is not between those who write good prompts and those who do not. It is between those who can kill their own ideas and those who cannot.