Deep Dive
Plan Mode Beats Panic Mode
Boris starts 80% of his Claude Code sessions in plan mode, which is the opposite of how most people work. They jump in, type a vague request, and let the AI run wild. The problem: the problem Claude thinks it's solving and the problem you actually need solved are often completely different. Austin shares a client story where Claude was supposed to fix a display bug but instead changed database values, which broke five other things. By planning first and asking clarifying questions, like what's the core problem, who is it for, and what should this not do, you put Claude in a position to succeed. The Navy SEAL wisdom applies here: move slow to move fast.
Keep CLAUDE.md Lean or Blow It Up
CLAUDE.md is a cheat sheet of instructions Claude Code reads at the start of every session. Boris's counterintuitive advice: if it gets too long, delete the whole thing and start fresh. People load it with every rule imaginable, thinking more instructions equals better results, but the opposite happens: Claude gets confused. Since models improve constantly, rules from six months ago are often baked in now. Boris recommends doing the minimal possible thing, then adding rules back only when you see the same problem twice. Austin takes a middle-ground approach, running a cleanup command instead of a full deletion, but acknowledges that Boris's strategy of periodic resets is probably more effective.
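As a concrete sketch of "minimal possible thing," a lean CLAUDE.md sits at the repo root and stays to a handful of lines; the commands and rules below are placeholders for whatever your project actually needs:

```markdown
# CLAUDE.md

## Commands
- `npm test`: run the test suite before declaring work done
- `npm run lint`: lint and format checks

## Rules
<!-- Only add a rule here after the same mistake happens twice -->
- Never edit files under `generated/`; they are build output
```

If a rule stops being violated after a few model upgrades, that's a candidate for deletion on the next reset.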
Verification Is Your Quality Multiplier
Boris gives Claude a tool to see its own output and tells Claude about that tool. Claude figures out the rest. In practice: if you're building a website, open a browser and test it. If you're writing, ask Claude to review against brand guidelines. If you're automating, verify the output matches expectations. One pro tip Austin loves: before any work starts, ask Claude what its verification plan is. If there's no clear way to verify something, maybe there's a better approach entirely. After going back and forth multiple times, you can also ask Claude to verify all its work so far against best practices.
Parallel Sessions Are a Force Multiplier
Running multiple Claude Code sessions simultaneously on different tasks is where efficiency explodes. The key: each task must be partitioned with zero overlap. If two sessions work on the same thing at once, you get wasted effort and file conflicts. Boris's insight is sharp: two context windows that don't know about each other get better results. A fresh session with no baggage sees obvious solutions the first session missed because it's too deep in the problem. It's like restarting and having it just work.
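One common way to partition parallel sessions with zero overlap is git worktrees: each session gets its own checkout and branch of the same repository, so two sessions never touch the same files. A minimal shell sketch, where the repo, directory, and branch names are all illustrative:

```shell
set -e

# Stand-in for your real project: a throwaway repo with one commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# One worktree and branch per independent, non-overlapping task.
git worktree add -q -b fix-auth "$repo-fix-auth"
git worktree add -q -b refactor-api "$repo-refactor-api"

# Each directory is now a separate checkout of the same repo.
git worktree list
```

From there you run one Claude Code session inside each worktree directory and merge the branches when both tasks are done; neither session ever sees the other's in-progress files.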
Claude Skills Turn Repetition Into Automation
Inner loops are tasks you do repeatedly throughout the day. Claude skills let you document the process once and call it every time after. Austin uses a skill that generates Local Law 97 reports in identical format and style; the only variable is the data. Boris gets the same effect with slash commands, but Claude skills are more applicable for most users. Think of it like a sports play: a prompt is telling a player to dribble, but a Claude skill is the exact play to run every single time. These are genuinely powerful if you identify your actual inner loops first.
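For reference, a skill is a folder under `.claude/skills/` containing a `SKILL.md` with YAML frontmatter that tells Claude when to use it. The name, description, steps, and template path below are a hypothetical sketch of a report-generating skill like Austin's, not his actual file:

```markdown
---
name: ll97-report
description: Generate a Local Law 97 compliance report in the standard
  house format. Use when the user asks for an LL97 report.
---

# LL97 Report

1. Read the building data file the user points to.
2. Compute emissions against the applicable LL97 limits.
3. Render the report in the exact section order and style of
   `templates/ll97-report.md`, changing only the data.
```

The frontmatter description is what lets Claude pick the skill automatically; the numbered steps are the "exact play" the section describes.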
Never Bet Against the Model Getting Better
Boris keeps a framed copy of Rich Sutton's Bitter Lesson on the wall. The core idea: the more general model will always beat the more specific one. Translation: every micro-tweak and scaffolding you build today will probably be unnecessary in six months because the model improves daily. This doesn't mean you stop building, but it changes where you focus energy. Prompt optimization is pointless—Austin calls it stupid—because next month's model won't need your clever wording. Spend time instead on information architecture and the system that feeds the model. AI will never be as bad as it is today, so build assuming it gets better.