Checking the Pulse: Bird's Eye View on Gen AI - Unpopular Opinions in the Legal Sector

By: Jeff Johnson, Chief Innovation Officer

The loudest voices in the legal tech industry are focused primarily on the progress and innovation potential surrounding generative AI (Gen AI). Thankfully, there’s a growing chorus of voices offering counterpoints when discussing when and how to best embrace Gen AI within the legal industry.

This installment of our “Checking the Pulse” series delves into some of these unpopular opinions, exploring the nuances of why jumping on the Gen AI bandwagon might not be suitable for every need in a legal practice … yet. We examine concerns over costs, creativity, and the ethical quandaries that accompany the deployment of Gen AI in legal contexts.

The Case for Caution

While the excitement around Gen AI’s capabilities grows, there’s good reason for caution. The potential is real, but the current hype does not necessarily mean it’s the right time for every legal practice to adopt this technology. Factors contributing to this viewpoint include:

  • Readiness and Relevance: Not all legal practices are at a stage where integrating Gen AI would be beneficial or even feasible. The readiness of a firm’s existing technology infrastructure, the specific nature of its legal work, and the firm’s strategic goals should all inform the timing of any investment in Gen AI.
  • Cost vs. Benefit: For most, Gen AI-enabled tech solutions represent a significant cost. For many use cases — especially in smaller firms with limited resources — the potential benefits may not yet justify the initial investment in Gen AI technology and the ongoing costs of training and integration.

Creativity at Risk

A notable critique of Gen AI in legal work is the potential stifling of creativity and critical thinking. As legal professionals increasingly rely on AI for tasks ranging from research to drafting documents, there’s concern that:

  • Autopilot Over Free Thinking: The convenience of AI-generated solutions might discourage the development of lawyers’ ability to engage in the deep, nuanced thinking that complex legal issues often require.
  • Homogenization of Legal Strategies: Over-reliance on Gen AI could lead to a narrowing of legal strategies and approaches, as AI tends to suggest options based on past data, potentially overlooking innovative or untested solutions.

Ethical Implications: The Case for Caution in a World of Wonder

Beyond practical considerations, the ethical implications of Gen AI in legal work demand careful examination:

  • Bias and Fairness: The potential for AI to perpetuate or even amplify biases present in training data is a significant concern, especially in legal decisions that impact people’s lives and liberties.
  • Accountability and Transparency: Questions about who is responsible when AI-driven legal advice is faulty or when an AI system fails to comply with regulatory requirements highlight the need for clear accountability frameworks.
  • Privacy: It’s not always abundantly clear who has access to the data, or whether learning tools store sensitive client data and regurgitate it in responses to other inquiries. Data privacy is always of utmost concern, and the lack of clarity does not absolve us of responsibility. A healthy dose of skepticism and caution is certainly warranted.

Sum it Up

As the legal industry contemplates the integration of Gen AI, it’s crucial to balance the enthusiasm for technological advancement with a thoughtful consideration of the potential downsides. Unpopular opinions and cautionary perspectives offer valuable insights, reminding us that we should temper our aggressive pursuit of innovation with thoughtfulness, ethics, and an eye towards the broader implications for creativity, equity, and the practice of law itself. By engaging with these diverse viewpoints, legal professionals can navigate the complexities of adopting Gen AI in a way that enhances rather than detracts from the quality and integrity of legal work.

You can reach out to a Purpose Legal thought leader here: