Just Talk To It: What OpenClaw's Creator Can Teach Us About Building with AI
Peter Steinberger was traveling when he connected WhatsApp to an AI agent. It took about an hour. No elaborate planning, no architecture review, no committee approval. He just built it.
That side project became OpenClaw — the fastest-growing open-source project in GitHub history, with over 180,000 stars. Steinberger was recruited by OpenAI and recently appeared on the Lex Fridman Podcast (episode #491), where he laid out a philosophy that challenges the assumption that AI requires elaborate infrastructure and months of planning.
His core message is disarmingly simple: just talk to it.
That phrase might sound like it's aimed at non-technical users, but I think it's actually the opposite — it's a corrective for engineers who overcomplicate things. The teams that will get the most out of AI are the ones that stop over-engineering the process and start with simple, focused conversations about real problems.
The Anti-Complexity Manifesto
Steinberger has written extensively about his approach to building with AI, and three of his observations hit home for me. If you're building software — or evaluating how AI fits into your team's workflow — these are worth understanding.
Most AI platforms are thin wrappers
In a post titled "Just Talk To It," Steinberger makes a pointed observation: most of the AI tools and platforms flooding the market are thin wrappers around the same underlying models. They add a user interface, some integrations, and a price tag — but the intelligence underneath is identical.
I've seen this play out repeatedly. Teams sign up for expensive AI platforms that are, at their core, just GPT-4 or Claude with a configuration layer on top. Some of that configuration is genuinely valuable — access controls, audit logging, custom system prompts. But much of it is cosmetic complexity that obscures a simpler truth: the underlying model is doing the heavy lifting, and you're paying a premium for packaging.
Before committing to a platform, ask a straightforward question: what does this do that we couldn't accomplish with the model's API directly, a lightweight orchestration layer, and our existing infrastructure?
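To make that question concrete: a "lightweight orchestration layer" is often just a few dozen lines of code. Here is a minimal sketch in Python, assuming a generic OpenAI-compatible chat-completions endpoint; the URL, model name, and API-key environment variable are placeholders, not any specific vendor's values.

```python
import json
import os
import urllib.request

# Placeholder endpoint for an OpenAI-compatible chat-completions API.
API_URL = "https://api.example.com/v1/chat/completions"


def build_request(system_prompt: str, user_message: str,
                  model: str = "example-model") -> dict:
    """Assemble the JSON body for a chat call.

    This is most of the 'orchestration layer': a system prompt carrying
    your team's conventions, the user's actual question, and a model name.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }


def ask(system_prompt: str, user_message: str) -> str:
    """Send the request and return the model's reply (network call)."""
    body = json.dumps(build_request(system_prompt, user_message)).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Hypothetical env var name for the API key.
            "Authorization": f"Bearer {os.environ['EXAMPLE_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If a platform's pitch boils down to something this size plus a UI, that's worth knowing before you sign the contract.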
MCP servers are just a checkbox
Steinberger is especially critical of MCP (Model Context Protocol) servers — a technology that lets AI models connect to external data sources and tools through a standardized interface. His take: they're "something for marketing to make a checkbox."
For context, MCP is a way to give an AI model access to your files, databases, or applications so it can pull in context while working. It sounds impressive in a demo. But Steinberger's argument is that in practice, most of these integrations add complexity without proportional value. The model can often accomplish the same outcome through simpler means — direct file access, straightforward API calls, or just pasting the relevant information into the conversation.
I tend to agree. The best AI integrations I've built have been the simplest ones. A well-structured prompt with the right context pasted in will outperform a fancy MCP setup nine times out of ten. When a vendor shows you a slide with twenty integrations and connectors, ask which ones solve specific problems you actually have. If the answer is vague, you're paying for checkboxes, not capabilities.
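What "context pasted in" looks like in practice is almost embarrassingly simple. A sketch of the idea, with a hypothetical helper that stitches a task description and a handful of labeled files into one prompt string:

```python
def build_prompt(task: str, context: dict[str, str]) -> str:
    """Paste the relevant context directly into the prompt.

    `context` maps a label (e.g. a file path) to its contents. The model
    sees exactly the context you chose -- no connector, no protocol,
    no integration to maintain.
    """
    sections = [f"Task: {task}", ""]
    for name, text in context.items():
        sections += [f"--- {name} ---", text, ""]
    return "\n".join(sections)


# Usage: hand the result to whatever model client you already use.
prompt = build_prompt(
    "Explain why this test is flaky and propose a fix",
    {"tests/test_io.py": "def test_io():\n    ...\n"},
)
```

The trade-off is deliberate: you curate the context by hand instead of letting a connector fetch it, which is exactly what keeps the setup predictable.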
A quality model and simple tools are enough to start
Steinberger's own setup is almost aggressively simple: a terminal, version control, and a quality AI model. That's it. No elaborate toolchains, no multi-platform orchestration. He argues that a good model with minimal tooling outperforms a mediocre model buried under layers of infrastructure.
In my experience, the equivalent for most teams is: a quality model (Claude, GPT-4, or similar), a code editor with good AI integration, and your existing codebase. You don't need a dedicated "AI platform" to get started. You need a clear problem, a capable model, and someone who knows how to connect the two.
Start Conversations, Not Specifications
How Steinberger builds
Steinberger's approach to building software with AI is the opposite of traditional methodology. Instead of writing detailed specifications before starting, he begins conversations with the AI, iterates live, and lets solutions emerge through interaction. He intentionally under-specifies, trusting that the back-and-forth will surface the right requirements faster than a document ever could.
In "Shipping at Inference-Speed," he describes a workflow where screenshots replace lengthy prompts. Rather than writing paragraphs describing what he wants, he shows the AI what he's looking at and says, in effect, "fix this" or "make this better." The visual context communicates more than words alone.
What this means for your team
The traditional approach to adopting AI in a development workflow looks something like this: form a committee, evaluate six tools, negotiate licenses, run a pilot program, and hope the requirements you defined at the start still match your needs at the end.
Steinberger's approach suggests something radically different: spend a week actually using the tools on real work. The conversation with the technology — watching it handle your actual codebase, your actual tickets, your actual review process — teaches you more than any evaluation spreadsheet ever will.
I've seen this firsthand. Teams that just start using AI on a real project learn more in a week than teams that spend months evaluating. The fastest path to understanding what AI can do for your workflow is to give it a real problem and see what happens.
The screenshot principle
Steinberger's preference for screenshots over written descriptions has a practical parallel in everyday engineering. When scoping AI-assisted work, I've found that showing the messy codebase, the manual process, or the sprawling test suite communicates the problem better than any written brief. If you're thinking about where AI fits into your workflow, start by pointing it at the ugliest, most time-consuming task your team deals with every day. That's your starting point.
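The screenshot workflow is also mechanically simple. Several chat APIs accept images inline as base64 data URIs; the message shape below follows that common convention, but field names vary by provider, so treat this as an illustrative sketch rather than any one vendor's schema:

```python
import base64


def screenshot_message(instruction: str, png_bytes: bytes) -> dict:
    """Pair a short instruction with the screenshot itself.

    Mirrors the multi-part user message used by several chat APIs:
    one text part ("fix this"), one image part (the thing to fix).
    """
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": instruction},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{encoded}"},
            },
        ],
    }
```

A two-word instruction plus a screenshot often carries more signal than a paragraph of description, which is the whole point.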
The Identity Shift: From Implementer to Architect
What developers are experiencing
Steinberger describes a transformation in how he works that goes beyond productivity. "I don't read much code anymore," he writes. "I watch the stream." He's shifted from writing code line by line to directing AI systems — reviewing output, course-correcting, and exercising judgment about what's good enough and what needs rethinking.
He compares this to the industrial revolution: the nature of the work is changing, not disappearing. Developers aren't being replaced; they're becoming architects and creative directors who guide AI systems rather than manually producing every artifact.
The engineering parallel
Every role that involves routine information processing faces the same transformation. Engineers writing boilerplate code. QA teams writing repetitive test cases. Technical writers producing API documentation. The manual production of these deliverables is exactly the kind of work that AI handles well.
The engineers who thrive in this environment won't be the ones who can write the most code per hour. They'll be the ones who can direct AI systems effectively — who know the right questions to ask, who can evaluate AI output against their professional judgment, and who can catch the edge cases that models miss.
This is an investment in people, not just tooling. The teams that pair AI adoption with genuine skill-building — teaching their engineers to work alongside these tools, not just use them — will outperform teams that treat AI as a simple productivity hack.
A word of caution: "Just one more prompt"
Steinberger is refreshingly honest about the risks of his own approach. In a post titled "Just One More Prompt," he describes how AI-assisted productivity can become addictive. The ability to accomplish so much so quickly creates a compulsive cycle: just one more feature, just one more improvement, just one more iteration. The result, if unchecked, is burnout — not efficiency.
For teams adopting AI, this is worth taking seriously. The goal isn't to maximize the volume of AI-generated output. It's to free your team's time for the judgment-intensive work that actually creates value. AI adoption without intentional boundaries leads to scope creep and exhaustion, not transformation.
Practical Takeaways
If Steinberger's philosophy resonates, here's how to put it into practice:
- Start with a conversation, not a contract. Pick one real problem, the most tedious, manual, error-prone task your team deals with, and explore it with AI for a few hours. You'll learn more from that conversation than from any vendor pitch deck.
- Resist the urge to over-engineer. A quality model plus a lightweight integration layer will outperform a bloated platform that takes months to configure. Start simple, and add complexity only once you've proven the basic approach works.
- Evaluate tools on outcomes, not feature lists. When a vendor shows you a wall of integrations and capabilities, ask one question: "Which of these solves a specific problem we have today?" If the features can't be mapped to your actual pain points, you're paying for marketing checkboxes.
- Invest in your people's judgment, not just the tooling. Budget time for your engineers to learn how to evaluate and steer AI output. The model produces drafts; your people provide the expertise, context, and professional judgment that make those drafts production-ready. Skipping this step is how teams end up with expensive AI tools that nobody trusts.
- Set boundaries before you scale. Define what success looks like for a pilot before you expand its scope: which metrics you'll measure, and what threshold justifies moving forward. Without those boundaries, AI adoption becomes an open-ended experiment that's hard to evaluate and easy to abandon.
The Conversation Starts Somewhere
Peter Steinberger didn't build the fastest-growing open-source project in GitHub history by assembling the perfect toolchain. He built it by staying close to the problem, using simple tools, and having a conversation with the technology.
I think there's a broader lesson here for all of us building software. The teams that succeed with AI won't be the ones with the biggest budgets or the most elaborate platforms — they'll be the ones willing to start with a real problem, a quality model, and an honest conversation about what's possible.
The barrier to entry has never been lower. The models are good enough. The tools are simple enough. The only thing missing is the willingness to just start talking to it.
Adam Daum is an agentic engineer and AI architect. He runs Weststack, LLC, an agentic AI and software engineering company, and writes about building practical AI solutions at adamdaum.com.