All About Estates

AI-Generated Wills: Can Courts Fix a Robot’s Mistake?

This blog post was written by: Dave Madan, Senior Manager, Scotiatrust


AI has started writing Wills. That might sound futuristic, but it’s already happening. Generative tools can churn out documents that look polished, formatted, and ready to sign. But what happens when those words don’t match what the person actually intended? Can a court fix an AI-generated Will the way it might fix a solicitor’s drafting error?


Think of AI-drafted Wills like self-driving cars. On a clear highway, they may stay in their lane. But would you hand over the wheel in a snowstorm with your family in the back seat? Probably not. That’s the risk here – these systems can appear competent until things get complicated.


Canadian courts do have a tool for correcting mistakes in Wills. It’s called rectification, and it allows a judge to step in when the text on paper doesn’t reflect what the testator actually instructed. The Ontario Court of Appeal made it clear in Rondel v. Robinson Estate, 2011 ONCA 493 that rectification is a narrow remedy: it applies where the drafter made a recording error or omitted a provision by mistake, but not where the testator’s instructions were unclear or never properly captured (CanLII link).


That approach was reaffirmed recently in Ihnatowych Estate v. Ihnatowych, 2024 ONCA 142, which upheld a trial decision allowing rectification where there was strong evidence that the drafting solicitor had failed to carry out the testator’s instructions. The Court confirmed that rectification is about correcting mistakes in recording, not rewriting a Will for fairness or convenience (CanLII link).


The problem with AI is the lack of proof. When a solicitor drafts a Will, there’s usually a paper trail: notes from meetings, email correspondence, multiple drafts, billing records. All of this becomes valuable evidence if a dispute arises. With AI, the “process” may be nothing more than a single vague prompt typed into a chatbot at midnight. Unless the client saved screenshots or kept detailed notes, there’s nothing for a court to examine. And without reliable evidence, rectification simply isn’t available.


This evidentiary gap is what has commentators so concerned. In The Probater (June 2025), estate lawyers flagged the challenge of applying rectification to AI-produced Wills. If the only evidence is an algorithm’s output, with no context, no surrounding conversation, and no human check on accuracy, courts are unlikely to stretch the doctrine that far (Hull & Hull LLP commentary).


And the public isn’t exactly rushing to embrace the idea. A 2024 Willful survey found that more than half of Canadians (54.7%) said they would not trust AI to draft legal documents. Their main concerns were accuracy, privacy, and the lack of personalization. For something as sensitive and enduring as a Will, most people still want human judgment and oversight (Canadian Lawyer Magazine report).


Even if an AI system produces something that looks valid, the risks are obvious. A broad phrase like “make sure my children are taken care of” could exclude stepchildren or other dependants. Tax planning might be ignored. Fraud and manipulation are real possibilities – imagine a “deepfake” Will appearing after someone’s death. And in the end, families could be forced into costly disputes, draining the estate the testator meant to preserve.


For estate planners, the message is not to panic but to contextualize. If a client brings in an AI-generated draft, treat it as rough notes. Ask questions, clarify intentions, and create a legally enforceable document that reflects what the client actually wants. Keep records, because if litigation arises, those records may be the only evidence that matters. And keep educating clients: just because a document looks official on a screen doesn’t mean it will stand up in court.


Canadian judges may one day be asked to decide whether an AI-generated Will can be saved. Courts have already validated unconventional documents: unsigned drafts, handwritten notes, even unsent text messages. But those decisions all depended on strong supporting evidence of intent. Without that context, an AI draft is more like a sketch than a binding testament.


AI promises speed and efficiency. But estate planning isn’t about efficiency – it’s about getting it right. Intent is everything, and AI doesn’t understand intent. It predicts patterns. Until that changes, handing over your legacy to a machine is like handing over your car keys in a blizzard. You might get where you want to go, but you might not.


1 Comment

  1. Corey Wall

    September 4, 2025 - 6:02 pm

    I have to wonder what impact this will have on undue influence and lack of capacity claims. Maybe the AI needs access to the webcam to take photos of who is really answering the prompts? Can you sue AI for solicitor’s negligence?
