Gpt-oss and legal AI: the impact of open-source models on client confidentiality

Legal AI adoption has hit a turning point. For a while, law firms scrambled to implement something, anything, for better or worse. As AI becomes a staple in law firms (and in business more generally), we are starting to see more genuinely useful applications that help firms and their clients.

However, a notable gap still exists between expectations and reality. More than 70 percent of corporate clients expect generative AI to cut costs and speed up work. Yet only 6 percent see any real benefits from their law firms’ AI systems. Eeek!

OpenAI’s release of gpt-oss, its first open-weight models in over five years, will reshape legal AI. The models are released under the Apache 2.0 licence (i.e. free to use, modify, and deploy commercially) and bring remarkable capabilities, including a 128k-token context window that comfortably handles lengthy legal documents.

In this piece, we get into what the release of gpt-oss means for UK law firms. We look at both sides – the advantages of open-source, on-premise AI models and their potential risks. This matters especially given the client confidentiality and data protection rules unique to the UK legal sector.

Evaluating gpt-oss for legal use in UK law firms

Gpt-oss brings a fundamental change in how UK law firms approach AI implementation. An estimated 97% of applications now make use of open-source code, from your Instagrams to your banking apps, so legal professionals need to understand the impact of open-weight models.

Benefits of open-weight models: control, cost, and customisation

Open-weight models give law firms clear advantages when it comes to data privacy. Unlike hosted AI services, gpt-oss can run entirely within a firm’s own infrastructure, as the sketch below illustrates. However, if the model is hosted via a third-party platform, data may still leave the firm’s direct control, and confidentiality risks remain unless strict governance and data-handling policies are in place.
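
To make that concrete, here is a minimal sketch of on-premise inference, assuming the Hugging Face transformers library and the smaller gpt-oss checkpoint; the model ID and hardware settings are assumptions to adapt to your own stack:

```python
# A minimal sketch of on-premise inference with the smaller gpt-oss model.
# Model ID and hardware settings are assumptions; adapt to your own setup.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # weights downloaded once, then run locally
    device_map="auto",           # spread the model across available GPUs
)

messages = [
    {"role": "user", "content": "Summarise the key obligations in this clause: ..."},
]
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"])
```

Once the weights are downloaded, prompts and outputs never need to leave the firm’s own hardware.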

The cost structure changes too, from per-token charges to fixed infrastructure investments. This makes expenses easier to predict as usage grows.
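
To see roughly where the economics flip, here is a back-of-the-envelope break-even calculation. Every figure in it is an illustrative assumption, not a quote:

```python
# Back-of-the-envelope break-even sketch: hosted per-token pricing vs a
# fixed on-premise server. All figures are illustrative assumptions.
API_COST_PER_1K_TOKENS = 0.01     # £ per 1,000 tokens (assumed)
SERVER_COST_PER_MONTH = 2_000.0   # £ amortised hardware + power + support (assumed)

def monthly_api_cost(tokens_per_month: int) -> float:
    """Hosted-API cost at the assumed per-token rate."""
    return tokens_per_month / 1_000 * API_COST_PER_1K_TOKENS

break_even_tokens = SERVER_COST_PER_MONTH / API_COST_PER_1K_TOKENS * 1_000
print(f"Break-even at ~{break_even_tokens:,.0f} tokens per month")
print(f"API cost at that volume: £{monthly_api_cost(int(break_even_tokens)):,.0f}")
# Above that volume the fixed on-premise cost wins; below it,
# pay-as-you-go API pricing stays cheaper.
```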

The ability to customise is another big plus. You can fine-tune gpt-oss to understand specific legal terms, precedents, and your firm’s knowledge. This helps create expert models that work specifically with UK legal frameworks. On top of that, it lets firms stay independent from vendor decisions about model updates or service changes.
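
For the technically curious, fine-tuning in practice usually means a parameter-efficient method such as LoRA rather than full retraining. A minimal sketch, assuming the Hugging Face transformers and peft libraries; the checkpoint name and target module names are assumptions:

```python
# A minimal sketch of parameter-efficient fine-tuning (LoRA) on gpt-oss.
# The model ID and target_modules are assumptions; adapt to your setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "openai/gpt-oss-20b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

lora_config = LoraConfig(
    r=8,                                   # low-rank adapter size
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapters train

# Train on the firm's own precedents and knowledge base with a standard
# transformers Trainer loop; the base weights never leave your servers.
```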

Limitations of gpt-oss: transparency and safety concerns

Gpt-oss may be “open” but it comes with some limits. You can access model weights but not the original training data. So while you can fine-tune the model, you can’t completely retrain it or change its core structure.

Safety is another big concern. Open-weight models might be misused if safety guardrails are removed. OpenAI took precautions during development by filtering harmful content from training data. However, each firm must take responsibility for keeping safety measures in place.

The legal side gets complicated too. Open-source licensing can create unexpected obligations. Gpt-oss’s Apache 2.0 licence is permissive, but some other open-source licences carry “copyleft” rules that might force you to share derivative works freely. This could affect your intellectual property rights.

Comparing gpt-oss with legal-specific AI tools like Harvey AI

Gpt-oss offers flexibility, but purpose-built legal AI tools like Harvey AI and Legora have their own advantages:

  • They integrate out of the box with legal databases and case management systems
  • Legal tools are ready to use with pre-built prompts for legal tasks
  • You need technical expertise to set up gpt-oss, while specialised legal tools are easier to start using

Your choice between open-weight and specialised legal models depends on what you want in terms of control, customisation, and integration, and on how you plan to apply it. You may use a leading legal AI vendor for your main AI tool while implementing smaller, open-source models for specific use cases.

GDPR, AI Act, Law Society and SRA guidance on AI use

The SRA clearly states that solicitors must use AI in ways that protect sensitive information. The core requirements include:

  • Ensuring data protection rights are upheld when client information is used to train AI
  • Protecting confidentiality and legal privilege
  • Taking responsibility for outputs from AI systems
  • Watching for biased or inaccurate outcomes

The Law Society has also published comprehensive guidance on generative AI, emphasising that solicitors remain subject to the same professional conduct rules regardless of whether AI assists in their work. Their guidance highlights that even if outputs are derived from generative AI tools, this does not absolve solicitors of legal responsibility or liability if the results are incorrect.

The GDPR requires organisations to carry out Data Protection Impact Assessments (DPIAs) for high-risk AI processing. The EU AI Act, meanwhile, requires organisations to ensure their AI systems, including open-source models, meet its obligations, with the strictest rules applying to high-risk systems.

Solicitors should avoid putting confidential information into generative AI tools, especially tools where they lack direct control over development and deployment. Failing to maintain proper safeguards could breach the SRA’s confidentiality requirements. Where external tools are unavoidable, stripping client identifiers from prompts is a sensible baseline, as sketched below.
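
Here is a minimal illustrative sketch of that kind of redaction, using simple regex patterns. The patterns are assumptions and nowhere near a complete PII solution:

```python
# Minimal illustrative sketch: strip obvious client identifiers from a
# prompt before it reaches any tool outside the firm's control.
# The patterns are assumptions, not a complete PII solution.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I), "[EMAIL]"),
    (re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b"), "[PHONE]"),
    (re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"), "[POSTCODE]"),  # UK postcode
]

def redact(prompt: str) -> str:
    """Replace matched identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Email jane.doe@client.co.uk at SW1A 1AA about the claim."))
# -> "Email [EMAIL] at [POSTCODE] about the claim."
```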

Client confidentiality is at the heart of solicitor-client relationships. This makes AI data handling practices a crucial concern for UK firms that want to implement open-source AI models.

Building a responsible legal AI framework

Law firms need a well-structured approach to implement AI responsibly, focusing primarily on governance, transparency, and accountability. Firms should create detailed frameworks that protect client information while getting the most from AI’s benefits.

Creating a firm-wide AI policy

A robust policy forms the foundation of good AI governance. It should address ethical concerns, safety protocols, and compliance needs. SRA-regulated firms must align their AI usage with the SRA Standards and Regulations; client interests, service standards, and confidentiality remain crucial. A detailed policy should cover data governance, bias reduction, crisis response, and regular audits. The COLP (Compliance Officer for Legal Practice) or another senior person should oversee AI implementation to create clear accountability.

Defining approved use cases for legal AI

Teams should identify where AI can work safely. Pilot access helps teams see which applications benefit clients while reducing risks. Legal teams should create workflows that combine AI’s efficiency with human judgment at key points. Quality checks must happen before and after rolling out AI-assisted work.

Establishing audit trails and human oversight

Audit trails let firms trace back through decision-making processes. These logs should capture inputs, outputs, timestamps, and user actions; a simple append-only record, as sketched below, goes a long way. The “human-in-the-loop” model stays vital, especially for high-risk AI applications. Checkpoints where solicitors confirm AI outputs help maintain professional standards and protect client interests.
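
Here is a minimal sketch of such a record. The field names and JSON-lines format are assumptions, not a standard; note that it stores hashes rather than raw text, so the log itself holds no confidential content:

```python
# A minimal sketch of an append-only audit record for AI-assisted work.
# Field names and the JSON-lines format are assumptions, not a standard.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    user: str
    model: str
    prompt_sha256: str       # hash rather than raw text, so the log
    output_sha256: str       # itself holds no confidential content
    reviewed_by: str | None  # solicitor who signed off the output
    timestamp: str

def log_interaction(user: str, model: str, prompt: str, output: str,
                    reviewed_by: str | None = None,
                    path: str = "ai_audit.jsonl") -> None:
    record = AuditRecord(
        user=user,
        model=model,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        reviewed_by=reviewed_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_interaction("jsmith", "gpt-oss-20b",
                "Summarise clause 4...", "Clause 4 obliges...",
                reviewed_by="apatel")
```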

Training and governance for safe AI deployment

Training programmes build the foundation needed to implement AI responsibly within law firms. Statistics show 55% of baby boomer legal professionals have tried AI compared to 67% of Gen Z. These numbers highlight why addressing skill gaps between generations is vital to wider adoption.

Educating solicitors on confidentiality in AI workflows

Legal professionals must now understand AI’s capabilities and limitations alongside traditional legal knowledge. They need to recognise the risks to confidentiality that AI systems present, especially when tools like ChatGPT may save user data and prompts to train their models. Law firms should stress the need to review AI terms and conditions and reject vendors who claim ownership rights over client data.

Ongoing training and AI literacy programmes

Technical competency in AI is becoming a legislative requirement: the EU AI Act, for example, obliges organisations to ensure their staff have adequate AI literacy, covering both technical aspects and regulatory requirements. Law firms have responded by creating new roles like “AI Lead” or “Director of Innovation” to build strong training systems. More firms now teach summer associates about AI-based research and chatbot tools, knowing that newer professionals adapt to these technologies quickly.

Encouraging internal feedback and risk reporting

A “continuous improvement” mindset is vital to make AI governance work. Organisations can use the Plan-Do-Check-Act method to handle AI risks:

  • Plan: Identify risks and develop management strategies
  • Do: Implement mitigation strategies
  • Check: Monitor effectiveness
  • Act: Make adjustments to improve processes

A strong governance framework should promote active supervision, clear communication, and defined accountability. This helps maintain professional standards while making the best use of AI’s capabilities.

Conclusion

Gpt-oss’s release marks a defining moment for legal AI adoption among law firms. The change to open-weight models gives unprecedented control over confidential data, predictable costs, and extensive customisation options. All the same, these advantages bring more responsibilities for data governance, regulatory compliance, and professional standards.

Law firms must consider several factors before implementation, with client confidentiality at the forefront. Open-weight models tackle this through on-premise deployment, though firms remain responsible for implementing proper safeguards. On top of that, clear governance structures become vital so that humans oversee AI-generated outputs.

The difference between general-purpose models like gpt-oss and specialised legal tools needs careful evaluation. Purpose-built solutions fit right into existing workflows. Open-weight models offer more flexibility and control but demand higher technical expertise. They also come with a much more reasonable price tag!

Whatever model you choose, a detailed risk assessment and reliable data protection measures should come before any AI implementation.

Successful AI adoption relies on well-thought-out policies, specific use cases, and thorough training programmes. Law firms that create clear AI governance frameworks and combine them with ongoing education will achieve better results in operations and client service.

Without a doubt, AI will continue to reshape legal practice deeply. Law firms now face the challenge of balancing innovation with their core professional duties. Those who tackle these challenges with clear policies, proper safeguards, and continuous improvement will gain the most from AI advances. They’ll also keep the trust that forms the foundation of client relationships.

AI implementation in law isn’t just about technology – it’s a cultural change that honours the profession’s values while moving toward its future.