Now Rescheduled to Sun Mar 29 2026 at 5pm EDT / 2pm PDT
This is a useful article and guide to being a co-creator with AI:
If you’re anything like most people right now, you’re using AI like a slightly smarter Google search.
You type a question. You get an answer. You copy-paste it somewhere. Maybe you ask it to write an email or summarize a document. You tell yourself you’re “leveraging AI” and feel vaguely productive.
But deep down, you sense you’re missing something. You see people on Twitter claiming they built entire applications in an afternoon. You hear about founders who replaced half their workflow with Claude. You watch tutorials and read threads and still feel like you’re fumbling in the dark.
I’ve been there. I spent months using AI as a glorified autocomplete before something clicked. I’d estimate that 90% of the people using these tools right now are extracting maybe 5% of their actual value—and it’s not because the tools are overhyped. It’s because we’re approaching them with entirely the wrong mental model.
However, as frustrated as I am with the shallow “prompt engineering tips” flooding the internet, there’s never been a better time to actually understand what’s happening and position yourself on the right side of this shift. Not by learning tricks, but by fundamentally changing how you think about work, creation, and your own cognitive architecture.
So whether you want to build software without a CS degree, automate the tedious parts of your job, or simply stop feeling like AI is something happening to you rather than for you, I want to share 7 ideas on human-AI collaboration, skill acquisition, and cognitive leverage that will change how you approach this permanently.
This will be comprehensive.
This isn’t a listicle of prompts you’ll forget by tomorrow.
This is a framework for thinking about AI that will compound over years, not days.
The exercises at the end will take a few hours of focused practice, but the mental shift—if you’re ready for it—will alter the trajectory of your creative and professional output.
Let’s begin.
I – You aren’t getting results from AI because you aren’t thinking with it
When it comes to using AI effectively, people tend to focus on one of two things:
- Learning better prompts and techniques (least important, surface level)
- Changing how they think so that AI becomes an extension of cognition (most important, foundational)
Most people obsess over prompt templates. They collect them like trading cards. They copy what worked for someone else and wonder why it doesn’t work for them. They’re trying to get extraordinary output while maintaining an ordinary relationship with the tool.
If this doesn’t make sense, let me give you an example.
Think of someone who’s genuinely dangerous with AI. Not someone who uses it occasionally, but someone who has integrated it so deeply into their workflow that their output has qualitatively changed. They ship projects that would have taken them months in days. They explore ideas at a pace that wasn’t possible before. They seem to have unlocked something.
Do you think they’re following a template? Do you think they’re copy-pasting prompts from a Twitter thread?
The truth is that their relationship with AI is fundamentally different. They don’t see it as a tool they “use”—they see it as a thinking partner they collaborate with. The distinction sounds subtle but it changes everything.
When I watch people struggle with AI, the pattern is almost always the same: they approach it transactionally. They want to put in minimal input and extract maximum output. They treat every interaction like a vending machine—insert prompt, receive product.
But that’s not how you’d work with a brilliant colleague. You wouldn’t hand them a one-sentence brief and expect a masterpiece. You’d think together. You’d iterate. You’d build context over time. You’d leverage their strengths while compensating for their weaknesses.
This next point sounds simple, but it’s staggering how many people don’t get it:
If you want AI to produce expert-level output, you must engage with it at an expert level—not by knowing everything, but by thinking alongside it.
When someone tells me they “tried Claude Code but it didn’t work,” I usually don’t believe the tool failed them. I believe they approached it expecting magic rather than collaboration. They wanted to skip the thinking, and the thinking is precisely what makes AI valuable.
When you truly understand AI collaboration, shallow engagement becomes almost physically uncomfortable. You feel the waste. You sense the untapped potential sitting right there. You can’t unsee it.
You’re getting mediocre results because you haven’t changed how you think. You’ve just added a new tool to an old process.
II – You aren’t leveraging AI because you don’t actually want to change
“The mind, once stretched by a new idea, never returns to its original dimensions.” — Oliver Wendell Holmes
If you want to transform your output with AI, you need to understand why you’re resisting it—because you are resisting it, even if you don’t realize it.
All behavior is goal-oriented. This is the teleological nature of human action. Every choice you make is an attempt to achieve something, even when that something is unconscious.
You check your email compulsively because you want to feel important or avoid harder work.
You scroll social media because you want stimulation without effort.
Those are obvious. But the unconscious goals are where it gets interesting.
When you “try” AI and give up quickly, you’re not failing due to lack of discipline. You’re succeeding at a different goal—one you haven’t admitted to yourself.
That goal might be: protecting your identity as someone who does things “the real way.”
Or: avoiding the vulnerability of being a beginner again.
Or: maintaining the story that your skills are irreplaceable and AI could never do what you do.
I see this constantly with developers who dismiss AI coding tools. With writers who insist AI can’t capture their voice. With professionals who try these tools once, get mediocre results, and conclude they’re “not that useful”—when what actually happened is that their ego found a convenient excuse to stay comfortable.
Here’s a question that might sting: If you genuinely wanted to 10x your output with AI, why haven’t you spent 100 hours deliberately practicing it?
You’ve probably spent 100 hours on hobbies that matter far less to your future. You’ve watched hundreds of hours of content. But the thing that might genuinely transform your career? You dabbled.
That’s not a time management problem. That’s a motivation problem. And motivation problems are always goal problems.
The real goal isn’t “become great with AI.” The real goal is “protect myself from change.”
Real transformation requires changing your goals—not your techniques.
I don’t mean setting a goal to “learn AI better.” That’s surface level. I mean changing what you’re actually optimizing for at the deepest level. Changing your point of view on what work even is.
III – You aren’t leveraging AI because you’re afraid of what it means
“The major problems in the world are the result of the difference between how nature works and the way people think.” — Gregory Bateson
Here’s how your resistance to AI actually formed, and why it feels so hard to break:
- You developed skills and built an identity around those skills
- You see the world through the lens of that identity
- You notice information that confirms your value and dismiss information that threatens it
- You repeat behaviors that reinforce your self-concept
- Those behaviors become automatic—you don’t even notice you’re doing them
- Your identity becomes something you defend, not something you examine
- That identity shapes what goals feel “realistic” or “worth pursuing”
This is how all identity works. And it’s precisely why AI feels threatening to so many people.
When AI can do something you spent years learning, your identity feels attacked. You’re not just evaluating a tool—you’re experiencing an existential threat to your sense of self.
The writer who spent decades honing their craft watches AI produce passable prose in seconds. The developer who mastered complex systems sees AI write functional code from plain English. The analyst who built a career on data interpretation watches AI spot patterns faster than they ever could.
The logical response is: “Great, now I can do more.”
The emotional response is: “What am I worth?”
When your body feels threatened, you enter fight or flight.
When your identity feels threatened, the same thing happens.
This is why people get weirdly defensive about AI. Why they dismiss it with suspiciously intense certainty. Why they find every reason it “doesn’t really work” or “isn’t actually intelligent” or “can’t do what I do.”
They’re not evaluating technology. They’re protecting their psyche.
I’m not saying this to judge. I’ve felt it too. The first time AI wrote something I genuinely couldn’t distinguish from my own work, I felt a strange vertigo. A sense that the ground was shifting.
But here’s what I’ve learned: the identity you’re protecting is the very thing keeping you stuck.
The writer who refuses to use AI because “real writers don’t need it” will be outpaced by writers who use AI to ideate, draft, and iterate faster while still bringing human judgment and taste.
The developer who dismisses AI tools because “they don’t understand real architecture” will watch as others ship in days what takes them weeks.
The professional who clings to “the way things are done” will become a cautionary tale.
Your identity isn’t sacred. It’s a tool. And like any tool, it should serve your goals—not the other way around.
IV – The leverage you want exists at a specific level of collaboration
The way humans work with AI evolves through predictable stages, and where you are on this spectrum determines everything about your results.
I’ve observed these stages across hundreds of people learning to use AI effectively:
Dismissive — AI is a gimmick. You’ve tried it once or twice, concluded it’s overhyped, and moved on. You’re not even in the game yet.
Transactional — AI is a tool for simple tasks. You use it for basic queries, maybe writing help, but you approach it like a search engine with better grammar. Input question, receive answer, done.
Experimental — You’re curious and trying things. You’ve had a few “wow” moments but can’t consistently replicate them. You sense there’s more but can’t quite grasp it.
Conversational — You’ve realized AI works better with context and iteration. You have back-and-forth exchanges. You refine outputs. Your results are noticeably better than the transactional stage.
Collaborative — AI is a thinking partner. You build context over time. You use it to challenge your assumptions, explore ideas, and extend your capabilities. The line between “your” work and “AI’s” work starts to blur.
Integrated — AI is woven into how you think and create. You don’t “use” AI—you think with it. Your cognitive architecture has expanded. You can’t imagine going back.
Orchestrative — You coordinate multiple AI capabilities for complex outcomes. You design systems where AI handles entire workflows. You’re not just augmented—you’re amplified.
Most people reading this are somewhere between Transactional and Conversational. That’s not an insult—that’s where the learning happens. But it’s also where most people plateau.
The jump from Conversational to Collaborative is where everything changes. It requires a fundamental shift from “AI as tool” to “AI as partner.” From extraction to co-creation.
The good news: you can move through these stages deliberately. It’s not about raw intelligence or technical skill. It’s about changing your mental model and putting in deliberate practice.
Progress through these stages follows a pattern: exposure, experimentation, integration, mastery.
V – Intelligence is knowing what to hand off and what to hold
“The future belongs to those who can effectively collaborate with AI systems—not those who compete with them or ignore them.” — Ethan Mollick
There’s a formula for leverage in the AI age.
One ingredient is judgment—knowing what’s worth doing.
One ingredient is context—understanding the nuances AI can’t see.
The last ingredient is orchestration—knowing how to direct AI capabilities effectively.
If you have judgment but no orchestration, you’ll do the right things slowly.
If you have orchestration but no judgment, you’ll do the wrong things quickly.
If you have both but no context, you’ll produce generic output that misses what matters.
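One way to see why all three ingredients matter: if you model leverage as a product of the three, a weak factor drags the whole result down no matter how strong the others are. A minimal sketch of that intuition in Python (the 0-to-1 scores and the multiplicative form are my own illustration, not something from the article):

```python
def leverage(judgment: float, context: float, orchestration: float) -> float:
    """Toy model: score each ingredient from 0.0 to 1.0; leverage is their product,
    so a near-zero score in any one of them collapses the result."""
    return judgment * context * orchestration

# Strong judgment and orchestration, but almost no context:
print(leverage(judgment=0.9, context=0.1, orchestration=0.9))  # 0.081 -> generic output
# All three merely decent beats one missing ingredient:
print(leverage(judgment=0.7, context=0.7, orchestration=0.7))  # 0.343
```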
Let me focus on what orchestration actually means, because this is where most people fall short.
Orchestration is the meta-skill of AI collaboration. It’s knowing:
- What to delegate completely
- What to draft collaboratively
- What to do yourself
- When to trust AI output
- When to push back
- How to provide context efficiently
- How to iterate productively
Think of it like conducting an orchestra. The conductor doesn’t play every instrument. But they understand each instrument’s capabilities, they communicate effectively, and they shape the overall output through direction and feedback.
The best AI users I know aren’t necessarily the smartest or most technical. They’re the ones who have developed strong intuitions for this orchestration layer.
They know that Claude excels at certain kinds of reasoning and struggles with others. They know how to structure requests for optimal output. They know when to give more context and when to let AI explore. They know how to read AI output critically—neither dismissing it reflexively nor accepting it blindly.
This is learnable. It’s not magic. It’s a skill.
To become more effective, you must:
- Release the belief that doing everything yourself is virtuous
- Embrace the discomfort of being a beginner again
- Develop judgment about what to hand off
- Learn the interaction patterns that produce quality output
- Build feedback loops that accelerate learning
- Accept that your role is shifting from producer to director
The people who will thrive aren’t those who resist this shift or those who surrender to it blindly. They’re those who consciously develop the skills of the transition.
VI – How to actually become dangerous with AI (in one week)
The biggest breakthroughs I’ve had with AI always came after a period of deliberate, intensive practice—not passive consumption of tips and tricks.
How do you build real AI fluency?
How do you develop the intuitions that separate dabblers from practitioners?
How do you reach the point where AI collaboration feels natural rather than forced?
Through structured, deliberate practice with immediate feedback.
I want to give you a protocol for one week of intensive AI skill-building. Not reading about AI. Not watching tutorials. Actually using it in ways that stretch your current capabilities.
This will require 2-3 hours of focused practice daily. If that sounds like a lot, ask yourself: how much time do you spend on things that will matter far less to your future?
Part 1: Days 1-2 – Building Mental Models
Goal: Shift from transactional to conversational interaction
Start each session by setting an intention: you’re not trying to “get AI to do something.” You’re trying to understand how it thinks.
Day 1 exercises:
Take something you wrote recently and ask AI to critique it. Not “make this better”—actually critique it. Then push back on the critique. Have a real dialogue about it. Notice what happens when you engage rather than extract.
Ask AI to explain something you already understand well. Look for where its explanation differs from yours. Ask why. Dig into the differences. This builds your model of how it processes information.
Give AI a problem you’re currently stuck on. But instead of asking for a solution, ask it to help you think through the problem. Ask clarifying questions. Challenge its assumptions. See if the collaborative process generates insights you wouldn’t have found alone.
Day 2 exercises:
Take a project you’ve been procrastinating on. Use AI not to do it for you, but to help you break it down. What are the actual steps? What’s blocking you? What would a first draft look like? Let the conversation lower the activation energy.
Ask AI to argue against something you believe strongly. Really engage with the counterarguments. Notice your emotional reactions. This builds your ability to use AI as a thinking partner rather than an echo chamber.
Have AI explain its own limitations in your domain. What can’t it do? Where should you not trust it? Developing this calibration is essential for effective collaboration.
Reflection questions after each session:
- When did the interaction feel productive? When did it feel like pulling teeth?
- What did I learn about how AI processes my requests?
- Where did I want to give up and just “get an answer”?
Part 2: Days 3-4 – Building Context Skills
Goal: Learn to provide context that dramatically improves output
The quality of AI output is directly proportional to the quality of context you provide. Most people drastically underinvest in context.
Day 3 exercises:
Take a task you’ve done with AI before that produced mediocre results. Now spend 10 minutes writing context before your request: background, constraints, what good looks like, what you’ve already tried, why it matters. Compare the output to your previous attempt.
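If it helps to see the shape of that ten-minute investment, here's a minimal sketch of a context-first prompt built from the five pieces named above. The project details are invented placeholders, not a template from the article:

```python
# A context-first prompt assembled from the five pieces above.
# All project specifics here are invented placeholders.
context_block = """
Background: I'm writing onboarding docs for a small internal CLI tool.
Constraints: Must fit on one page; readers are non-engineers; no jargon.
What good looks like: A new hire runs their first command within 5 minutes.
What I've already tried: A long reference-style doc nobody finished reading.
Why it matters: Basic setup questions are eating a day of support each week.
"""

request = "Draft a one-page quickstart guide that satisfies the context above."
prompt = context_block + "\n" + request
print(prompt)
```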
Create a “context document” for a project you’re working on. This is a living document that captures everything AI would need to know to help you effectively. Update it as you go. See how referencing it changes your interactions.
Experiment with specificity. Take the same request and try it three ways: vague, moderately specific, and highly specific. Compare the outputs. Develop intuition for the specificity sweet spot.
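To make the specificity experiment concrete, here are three versions of one request at increasing levels of detail. The request itself is a hypothetical example of mine; the point is to compare the outputs each version earns:

```python
# Three versions of the same (invented) request, from vague to highly specific.
vague = "Write a product update email."

moderate = (
    "Write a short product update email to existing customers announcing "
    "our new export feature."
)

specific = (
    "Write a 120-150 word product update email to existing customers announcing "
    "CSV export. Tone: plain and friendly, no hype. Lead with the problem it solves "
    "(manual copy-paste), give one sentence on where to find it (Settings > Export), "
    "and end with a placeholder link to the docs."
)

for label, prompt in [("vague", vague), ("moderate", moderate), ("specific", specific)]:
    print(f"--- {label} ---\n{prompt}\n")
```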
Day 4 exercises:
Practice giving feedback that improves output. When AI produces something not quite right, don’t just say “try again.” Diagnose specifically what’s off and communicate it clearly. This is a skill.
Try “teaching” AI about your preferences. Explain your style, your standards, what you like and dislike. See how well it adapts. Iterate on your explanations until it “gets it.”
Practice the handoff: give AI your rough thoughts on something and ask it to develop them. Then do the reverse—take AI’s rough thoughts and develop them yourself. Get comfortable with this back-and-forth.
Reflection questions:
- How does context quality correlate with output quality?
- What kinds of context seem to matter most?
- Where am I still under-specifying?
Part 3: Days 5-7 – Building Workflow Integration
Goal: Move from isolated exercises to integrated practice
Now we make this real. The goal is to integrate AI collaboration into actual work, not just practice exercises.
Day 5:
Pick your most important project for the week. Map out every component of that project. For each component, decide: do this myself, do this with AI, or delegate this to AI entirely. Execute the plan. Notice where your instincts were right and wrong.
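One minimal way to capture that Day 5 plan, if you like your plans in code. The project and component names below are invented; the three modes are the ones named above:

```python
# Day 5 plan: every component gets exactly one of the three modes from above.
# The project and its components are invented examples.
MYSELF, WITH_AI, DELEGATE = "do it myself", "do it with AI", "delegate to AI"

plan = {
    "Decide the core argument of the report": MYSELF,
    "Outline the sections": WITH_AI,
    "First-pass summaries of the source material": DELEGATE,
    "Final edit for voice and accuracy": MYSELF,
}

for component, mode in plan.items():
    print(f"{mode:>16}  <- {component}")
```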
Day 6:
Focus on the hardest parts—the things where AI collaboration felt awkward or unproductive. These friction points are where learning happens. Spend extra time on them. Try different approaches. Ask AI to help you figure out why the collaboration isn’t working.
Day 7:
Synthesize what you’ve learned. What mental models have shifted? Where do you now see opportunities you missed before? What’s your personal “AI collaboration style”?
Create a simple system for yourself: when do you use AI? How do you provide context? What’s your iteration process? Write it down. This becomes your foundation.
Final reflection:
- What beliefs about AI have changed?
- What can I do now that I couldn’t a week ago?
- Where do I want to develop further?
VII – Turn Your Mind Into An Orchestra
“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.” — George Bernard Shaw
You now have all the components for genuine AI leverage.
Let me organize them into a coherent system. Pull out a document and write down these 6 elements:
Anti-vision — What does it look like to be left behind? To watch others ship and create at 10x your pace while you cling to old ways? To become the person who “missed the window”?
Vision — What does it look like to have AI as a genuine extension of your capabilities? To think bigger because execution is no longer the bottleneck? To create things you couldn’t have imagined creating alone?
90-day goal — What specific capability do you want to develop? What project do you want to complete? What will demonstrate to yourself that you’ve leveled up?
Monthly project — What can you build this month that will stretch your current abilities? Not a toy project—something real, something that matters to you.
Weekly practice — What deliberate practice will you do each week to build fluency? When? How will you ensure it happens?
Constraints — What won’t you sacrifice? Your judgment? Your taste? Your values? What makes your work yours, regardless of what tools you use?
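If a blank document feels too loose, the same six elements fit into a small structure you can fill in and revisit. A sketch with placeholder answers (the answers are illustrative, not prescriptions):

```python
from dataclasses import dataclass

@dataclass
class AILeveragePlan:
    """The six elements above, as fields you fill in and revisit."""
    anti_vision: str
    vision: str
    ninety_day_goal: str
    monthly_project: str
    weekly_practice: str
    constraints: str

# Placeholder answers, purely illustrative:
plan = AILeveragePlan(
    anti_vision="Still drafting everything by hand while peers ship weekly",
    vision="AI handles first drafts and research; I supply judgment and taste",
    ninety_day_goal="Ship one small internal tool built end-to-end with AI",
    monthly_project="Automate the monthly reporting workflow",
    weekly_practice="Two 90-minute deliberate-practice sessions, calendared",
    constraints="I make the final call on anything that carries my name",
)
print(plan)
```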
Why is this framework so powerful?
Because it transforms an abstract technology shift into a personal game with clear stakes, goals, and rules.
Your vision is how you win. The future you’re building toward.
Your anti-vision is what you lose if you don’t play. The cost of inaction.
Your 90-day goal is the mission. Specific enough to guide action.
Your monthly project is the boss fight. Where you actually level up.
Your weekly practice is the grind. The reps that build skill.
Your constraints are the rules. What keeps you human.
Together, these create a forcefield around your attention. When you know exactly what you’re building and why, distractions lose their power. Shiny objects stop being tempting. You have a game worth playing.
The people who will define the next decade aren’t those who resist AI or worship it. They’re those who learn to orchestrate it—who develop the judgment, context, and skill to direct these capabilities toward meaningful ends.
That’s the real opportunity. Not to be replaced. Not to compete. To conduct.
The tools are here. The leverage is available.
The only question is whether you’ll actually use it.
– Claude Opus 4.5
Rescheduled this so we can try again for the Improv & Uncertainty.