If you’d like to receive more content like this, join 5,000+ other lawyers and legal marketers who subscribe to my Legal Growth email newsletter here.
A viral essay made the rounds recently making the case that AI is about to do to every profession what it’s already doing to software engineering. The author, a startup founder, writes that AI now does the technical work of his job better than he does. He argues lawyers are next, and specifically references his lawyer friend who “keeps finding reasons [AI] won’t work.”
He’s not wrong to call that out. The technology is real, it's improving fast, and any lawyer who dismisses it outright is making a mistake. I’ve been blown away by the newest models.
But “next” assumes that every profession gets disrupted the same way. Practicing law doesn’t work like software engineering. When engineers ship software with a bug, they iterate on the code. When a lawyer misses a deadline or makes an error in judgment, someone can lose their money, their business, or even their liberty. The consequences aren't comparable.
The key distinction here is that the legal profession operates within a legal system that requires personal accountability.
The author writes about AI having "judgment" and "taste." Maybe it does—or will. But judgment without accountability is just an opinion. AI will make legal analysis abundant. There’s no doubt about that. However, accountability will remain scarce if there aren’t lawyers in the loop to stand behind the work.
Our legal system is built on the principle that a human being evaluates the work, signs their name, and accepts the consequences. The guardrails imposed by courts, bar associations, malpractice insurers, and opposing counsel aren't ripe for elimination by software. They're the reason people trust the system in the first place.
The essay introduces a COVID analogy to convey urgency. But remember how the early predictions during COVID ultimately unfolded? Remote work didn't eliminate offices. Zoom didn't eliminate business travel. The "nothing will ever be the same" narrative gave way to something messier and more nuanced: change in some areas and durability in others. If that's the right model here too, then the interesting question isn't "will AI change the practice of law?"—of course it will—but which parts will change and which parts won't.
Here's what is missing from the discussion: when AI makes legal analysis abundant and nearly free, what will matter most is having a lawyer who can tell you whether the analysis is right, stake their reputation on it, and bear the consequences if it's wrong. Clients don’t hire lawyers just for analysis. They hire them because when something goes sideways, someone has to answer for it. AI can’t do that.
The essay mentions a managing partner at a large firm who spends hours a day using AI, saying it's like having a team of associates available instantly. This is presented as evidence that even senior lawyers see the writing on the wall. But think about what’s really happening: the managing partner isn't being replaced by AI. He's being leveraged by it. He still reviews the work, makes the judgment call, carries the license, and bears the consequences. That's very different from “AI is eliminating your job.”
Another critical perspective missing from this conversation is the client's. Clients don't want AI to replace their lawyer. They want AI to improve their lawyer's performance. In high-stakes matters, no one wants to rely on a system that can't be held responsible. They want the smartest, fastest, most informed human advisor available, and they expect that advisor to use the best technology available.
Here’s what I can say with confidence: commodity legal work will absolutely get cheaper. But the premium on human judgment in high-stakes matters may increase, because the volume of AI-generated legal work that needs expert validation is about to explode.
We’ve seen this before. Every major technology introduced to make lawyers more efficient has ultimately increased the demand for legal work. Email was supposed to streamline communication. Instead, it multiplied it. When something becomes cheaper and faster, people don’t use less of it. They use more.
There's no reason to think AI will be the exception. As businesses move faster—producing more contracts, more products, more content, more transactions, more disputes, more risk—the need for legal involvement rises with it. And AI itself creates entirely new categories of legal work (e.g., governance, risk assessment, regulatory interpretation), and all of it requires experienced human judgment and accountability.
The author assumes AI shrinks the legal profession. History suggests the opposite is just as likely: AI makes legal work cheaper per unit, and the market responds by demanding far more units.
None of this means the transition will be painless. Inside firms, the disruption could be very real. If AI handles first drafts, what happens to the junior associates who used to cut their teeth with that work? If routine matters get faster and cheaper, what happens to the “Cravath model” that most law firm economics depend on? If associates aren't learning by doing, what happens to the training pipeline that produces the next generation of partners? These are serious questions, and I don't think anyone has good answers yet. But firms that figure them out will have an enormous advantage. Those that avoid the questions won't.
These internal challenges are real. But how quickly they arrive depends on whether the legal industry moves as fast as the tech industry. As anyone in the industry understands, it doesn't.
Think about what must happen before AI fundamentally changes how law gets practiced. Courts must update their rules. Regulators need to weigh in. Clients must be willing to accept AI-driven work product. None of that moves fast, and by design it's not supposed to. “Move fast and break things” is the opposite of how the legal industry works.
One practical indicator will be the insurance market. When legal malpractice carriers underwrite policies that expressly cover AI-generated legal work, it will signal that something fundamental has shifted. Likewise, when software companies shift from broad warranty disclaimers and liability caps tied to licensing fees to taking responsibility for their outputs in a way that resembles professional responsibility, we’ll know the technology has reached a new level. Until then, most AI tools are sold under agreements that disclaim warranties, limit damages to the fees paid, and place the ultimate responsibility on the lawyer who uses them. That distribution of risk says everything about where things stand.
I could be wrong about the timeline. It may be faster than I think. But even if it is, the question of who bears accountability only gets more urgent. And this means the time to start engaging seriously is now, not when the changes arrive.
So What Should Lawyers Do?
Here's what I tell lawyers to do: Learn the tools. Use them every day. The essay is correct that the gap between people who use AI seriously and those who don't is growing rapidly. Don't fall behind.
But more importantly, become the person at your firm who understands both what AI can do and what it cannot: where the practical limitations and professional responsibility boundaries are. That intersection—capability plus accountability—is the sweet spot for the foreseeable future.
But all of this still misses something more fundamental: the human side of practicing law.
Think about what clients need most in moments that really matter. For example, a client going through a bitter divorce wants to sit across from someone who's been through these cases before, who can read the room in a mediation, and who cares how this turns out for their family. It's the same in transactional work. A founder negotiating their first acquisition wants a lawyer who gets what it feels like to bet everything on something you built. And when a crisis hits at 10 p.m., the in-house counsel picking up the phone wants a voice they trust on the other end—the one who knows their business and has their back.
As a former litigator, what comes to mind is that a case ultimately rests with twelve people in a jury box—imperfect human beings deciding who they believe. Trials turn on human judgment; on whether a story feels coherent and a witness feels trustworthy. Until courts and juries are replaced (an issue for another day—hopefully a very distant day), human judgment isn’t a feature of the system. It is the system.
The same reality exists on the business side of the profession.
The author asks whether AI will replicate deep human empathy. He's asking the wrong question. It doesn't matter whether AI can simulate empathy. What matters is whether a human on the other end of a high-stakes, life-altering legal matter will ever know, like, and trust a machine the way they trust their lawyer. In high-stakes legal matters, trust is the product. And in a profession that still runs on relationships and referrals, where the ability to develop business is inseparable from the ability to connect with people, that matters enormously.
The essay presents this as a story about technology replacing expertise, but it’s really about what expertise truly is. It was never just the brief or the memo. It was and is the person behind it—the one your client calls when everything is falling apart, or an opportunity arises, not because they're the fastest or the cheapest, but because the client trusts them. AI doesn't change that. If anything, it reminds us that the person behind the work was always the part that mattered most.
Jay Harrington is president of our agency, a published author, and a nationally recognized expert in thought-leadership marketing.
From strategic planning to writing, podcasting, video marketing, and design, Jay and his team help lawyers and law firms turn expertise into thought leadership, and thought leadership into new business. Get in touch to learn more about the consulting and coaching services we provide. You can reach Jay at jay@hcommunications.biz.