72% of Companies Have Zero AI Policy

Why Minnesota law firms — from Minneapolis to Rochester — can't afford to wait on AI governance.

By Ryan Wixen · Minneapolis, MN

A PwC report making the rounds this week dropped a number that should grab the attention of every attorney in Minnesota: 72% of companies have no formal AI governance policy. Not a weak one. Not an outdated one. Nothing on paper at all.

For the solo practitioners and small firms that serve Minnesota's legal market — from the Hennepin County courthouse to Olmsted County, from corporate work in the IDS Tower to family law in the suburbs — this gap cuts two ways. Your own firm likely lacks formal AI governance. And your clients almost certainly do too.


Minnesota's Business Landscape Makes This Especially Urgent

Minnesota is home to a disproportionate number of Fortune 500 companies for a state its size — UnitedHealth Group, Target, 3M, General Mills, Best Buy, U.S. Bancorp, and more. The Twin Cities metro area is a major hub for healthcare, medical devices, agribusiness, financial services, and retail. Every single one of those sectors is deploying AI at scale right now.

Picture a medical device company in the Medtronic corridor using AI to analyze clinical trial documentation, with no policy governing how that AI handles proprietary research data. That's a ticking liability clock.

Or a regional bank in downtown Minneapolis deploying an AI-powered loan assessment tool without a governance framework addressing algorithmic fairness requirements. That's exposure on multiple regulatory fronts at once.

These companies will eventually need legal counsel to clean up the governance gap. The firms that understand the intersection of AI technology and regulatory compliance will capture that work. The firms that don't will watch it go to competitors.


The Ethical Obligations Are Already On the Books

Minnesota attorneys don't need to wait for new rules to understand their obligations around AI. The Minnesota Rules of Professional Conduct already establish the framework — it just needs to be applied to current technology.

Rule 1.1 — Competence

Requires that attorneys maintain awareness of the risks and benefits of relevant technology. If you're using AI tools in your practice and don't understand how they process client data, you're not meeting the competence standard.

Rule 1.6 — Confidentiality

Imposes confidentiality obligations that extend to the technology tools attorneys use. When client-privileged information passes through an AI system, the attorney bears responsibility for ensuring that system maintains adequate confidentiality protections.

The Minnesota Office of Lawyers Professional Responsibility hasn't issued a formal opinion specifically addressing AI in legal practice, but the national trend is clear. Jurisdictions across the country are moving toward explicit guidance, and Minnesota will follow. Firms that get ahead of this by implementing governance now will be better positioned than those scrambling to comply after the fact.


What Small Firm AI Governance Actually Looks Like

A four-attorney firm in Edina handling business litigation doesn't need the same governance apparatus as Target's legal department. But it does need documented answers to a specific set of questions.

Which AI tools has the firm approved, and for what purposes?

This isn't about banning technology. It's about creating clarity. A firm might approve a legal-specific AI research tool for case law analysis and a document review platform for discovery — while prohibiting consumer chatbots for any work involving client data. The key is writing it down.

What are the data handling boundaries?

Minnesota firms routinely handle protected health information (the state's healthcare concentration makes this nearly universal), financial data subject to federal regulations, and trade secrets from corporate clients. Your AI policy should classify data by sensitivity level and specify which tools can process which categories. PHI should never touch a consumer AI platform — full stop.

What verification processes are required?

Every AI-assisted work product needs a documented human review step before it reaches a client or a court. The depth of that review should correspond to the stakes. A research memo might need spot-checking of citations. A brief filed in Hennepin County District Court needs line-by-line verification.

How will the firm handle AI-related errors or breaches?

When an AI tool hallucinates a citation, misclassifies a document's privilege status, or processes data in a way that violates your policy, you need a response protocol. Who is responsible? What's the escalation path? How are clients notified? Documenting this before an incident occurs is dramatically easier than improvising during one.


The Discovery Landscape Is Changing Under Your Feet

The 72% governance gap is going to generate litigation across every sector of Minnesota's economy. Employment discrimination claims based on AI hiring tools. Healthcare malpractice involving AI-assisted diagnosis. Consumer protection actions targeting algorithmic pricing. Regulatory enforcement over AI-driven financial decisions.

The discovery challenges in AI governance litigation are fundamentally different from traditional document review. You're not just searching for responsive emails. You're trying to understand how an AI system made decisions, what data it was trained on, whether appropriate governance existed, and who was responsible for oversight.

This is exactly what drove the design of CaseIntel. Our platform was built agent-native from day one — meaning AI isn't a feature we bolted onto an existing search tool. Our six-agent pipeline reads documents contextually, identifies privilege issues automatically, detects contradictions across testimony, extracts chronological events, and generates case playbooks tailored to specific dispute types.


Minnesota's Healthcare Sector Is Ground Zero

Nowhere is the AI governance gap more dangerous than in healthcare — and Minnesota is one of the country's most concentrated healthcare markets. Mayo Clinic, UnitedHealth Group, Medtronic, Abbott, and the extensive network of regional health systems all create a legal ecosystem where AI governance failures carry life-or-death stakes.

  • When a healthcare organization deploys an AI tool that influences clinical decisions without formal governance around validation, bias testing, and human oversight, the liability exposure extends beyond regulatory fines into malpractice territory.
  • When a medical device company uses AI in the design process without documenting the governance framework, product liability questions follow into every downstream case.
  • When AI governance gaps surface in healthcare litigation — medical malpractice, product liability, regulatory compliance — they will reshape the practice areas of the Minnesota attorneys who handle it.

The Data Confidentiality Problem Inside Law Firms

Beyond the client advisory opportunity, there's a problem that most firms aren't discussing openly: their own data handling practices with AI tools are a liability.

The reality in most small Minnesota firms is that attorneys are using AI tools — chatbots, summarizers, research assistants — without formal approval or oversight. Client names, case details, privileged communications, and sensitive personal information are being processed through systems with unclear data retention policies and unknown training data practices.

The Architecture Difference That Matters

Purpose-built legal AI platforms like CaseIntel implement data isolation by design. Every client matter exists in its own environment. Data never trains shared models. Audit trails capture every interaction. Access controls enforce separation between matters and between users. The difference between using a consumer chatbot and a legal-specific platform for client work is the difference between discussing case strategy in a crowded restaurant and discussing it in your office.


Five Steps Minnesota Firms Should Take Before Summer

1. Run a firm-wide AI usage audit.

Ask everyone — attorneys, paralegals, administrative staff — what AI tools they're using and what data those tools are processing. Anonymous responses encourage honesty. Most managing partners are surprised by the results.

2. Draft and publish a version-one AI policy.

Two to four pages covering approved tools, data classification rules, human review requirements, client disclosure obligations, and incident response procedures. It won't be perfect. Ship it anyway. You can iterate in Q3.

3. Add AI disclosure language to engagement letters.

Minnesota clients should know whether AI tools are being used in their matters. Specify the types of tools, the safeguards in place, and the client's ability to request human-only handling. Transparency here is a differentiator, not a risk.

4. Migrate to purpose-built legal AI tools.

The window for using consumer AI tools in legal practice is closing. Platforms like CaseIntel offer the analytical capabilities attorneys need while maintaining the data architecture that legal ethics require. The investment pays for itself in reduced risk alone — before you count the efficiency gains.

5. Develop AI governance as a practice area.

Minnesota's corporate community — from the Fortune 500 companies downtown to the mid-market businesses across the metro — needs AI governance counsel. Policy drafting, vendor contract review, compliance assessments, incident response planning. Firms that build this expertise now will own the practice area in the Twin Cities market for years.

The window is open — but not for long.

Minnesota firms that move on AI governance in 2026 will define this practice area for the next decade. CaseIntel gives you the tools to get there.


Frequently Asked Questions

What Minnesota professional conduct rules apply to AI use by attorneys?

Minnesota's Rules of Professional Conduct already establish the framework for AI governance: Rule 1.1 (competence includes awareness of technology risks) and Rule 1.6 (confidentiality obligations extend to AI systems). The Minnesota Office of Lawyers Professional Responsibility hasn't issued a formal AI opinion yet, but national trends point toward explicit guidance — and Minnesota will follow. Firms implementing governance now will be better positioned.

Why does Minnesota's Fortune 500 concentration create unique AI governance risk?

Minnesota is home to a disproportionate number of Fortune 500 companies — UnitedHealth Group, Target, 3M, General Mills, Best Buy, U.S. Bancorp. Every one of these companies is deploying AI at scale, often without adequate governance. When governance failures occur, these companies need outside counsel who understand both the technology and the applicable regulatory frameworks. Minnesota firms that develop this expertise will capture a rapidly growing practice area.

Why is Minnesota's healthcare sector especially vulnerable to AI governance problems?

Minnesota is one of the country's most concentrated healthcare markets — Mayo Clinic, UnitedHealth Group, Medtronic, Abbott. AI governance failures in healthcare carry life-or-death stakes. When a healthcare organization deploys an AI tool influencing clinical decisions without formal governance, liability exposure extends beyond regulatory fines into malpractice territory. Minnesota attorneys handling healthcare litigation should be paying close attention.

How does CaseIntel help Minnesota law firms handle AI governance cases?

CaseIntel's platform was built agent-native from day one. For a solo practitioner taking on an AI governance case against a corporate defendant, CaseIntel's six-agent pipeline reads documents contextually, identifies privilege issues automatically, detects contradictions across testimony, extracts chronological events, and generates case playbooks tailored to specific dispute types. It provides the analytical firepower that used to require a team of associates and a six-figure discovery budget.

This article is for informational purposes only and does not constitute legal advice. For guidance specific to your Minnesota practice, consult the Minnesota Office of Lawyers Professional Responsibility.

Ryan Wixen is the founder of CaseIntel, an AI-powered legal discovery platform built for solo practitioners and small law firms. CaseIntel helps firms handle complex discovery workflows with AI-native tools designed for confidentiality, compliance, and efficiency.
