Sunday, 1 September 2024

Do lawyers have a future? Legal practice in the age of AI

Anne Trimmer AO, former President of the Law Council of Australia, delivered the 37th Annual Sir Richard Blackburn Lecture during Law Week on 21 May 2024.

The 2024 Annual Blackburn Lecture examines the growth and spread of generative artificial intelligence technologies across society in general, and their increasing influence on both the legal profession and the courts.

Introduction
I am delighted to be with you today on the occasion of the 2024 Blackburn Lecture. Permitted an open-ended topic in delivering this lecture, I have chosen to examine the potential impact on legal practice of the ever-expanding world of artificial intelligence and ask the question, do lawyers have a future in a world founded on artificial intelligence?  

Justice Allsop summarised well the current state of play: ‘To a degree, the future must remain unknown. Artificial intelligence and its effect on courts, the profession and the law will change the landscape of life in ways we cannot predict.’

I begin with two disclosures. First, this lecture has not been generated by ChatGPT (although I am sure ChatGPT could make a reasonable fist of it, given relevant parameters). Secondly, I am not an expert in artificial intelligence.

However, I do have a long-held interest in how law is practised and in the dynamics of legal practice as a profession. As we approach the end of the first quarter of the 21st century, artificial intelligence and its application to the practice of law stand to fundamentally change the nature of legal practice, for better or for worse.

In this 2024 Blackburn lecture, I examine the impact that AI is already having on the legal profession, and question how, as a profession, lawyers can preserve the integrity of legal practice, including the ethical boundaries of the profession. I also examine what opportunities might exist to enable greater democratisation of legal practice through the use of artificial intelligence. 

When Sir Richard Blackburn delivered the first Blackburn Lecture in 1986, we could not envision the changes that technology would introduce into legal practice. In the 1980s, the first word processors arrived in legal offices, with word processing pools replacing typing pools. Internal communications were delivered in envelopes, and long-distance transmission of documents was by telex until the introduction of the fax machine. And if this sounds like another era, it was my experience in my first year in practice.

The Economist, in a recent opinion piece, commented that previous technological breakthroughs revolutionised what people did in offices. It quotes an observer of the spread of the typewriter in 1888: ‘With the aid of this little machine an operator can accommodate more correspondence in a day than half a dozen clerks can with the pen and do better work.’ The introduction of the computer a century later eliminated some low-level administrative tasks while making skilled workers more productive. It was not until the 1990s that networked computers became more widely available and with them, the introduction of emails and external electronic communication. 

In late 1998, representatives of the Sections of the Law Council of Australia recommended that the Law Council undertake a long-term strategic planning exercise for the legal profession. 

As a result of this suggestion, the Law Council established a Taskforce under my chairmanship as the then President-elect. The Taskforce was asked to examine some of the big issues likely to impact on the legal profession in the first decade of the 21st century.  

The subsequent Discussion Paper was released in September 2001 when I was President. I urged that the Discussion Paper be used as a tool to generate debate within the profession and the community about the role of lawyers and the implications arising from some of the issues identified. In rereading the paper in preparation for this lecture, I was struck by how much remains pertinent, a quarter of a century on.  

The Discussion Paper comments that one of the issues in looking forward to the future of the legal profession, is the ‘paradox of change’. The introduction makes the point that ‘[w]hile the profession needs to address and come to terms with all the issues that arise from the forces of deregulation, competition, globalisation and technology, there is equally a need on the other side of the policy equation to emphasise the core values of the legal profession’.  

To my mind this paradox remains as we engage in a technology age aided by artificial intelligence. The paradox is the benefit of efficiency brought about by the capacity of AI to trawl through large amounts of data and to provide analysis and images in a short time. On the other side, however, lies the danger of reliance on an interface whose outputs nonetheless require human review to ensure they are accurate and consistent with the law. What is the role of the lawyer within this paradox?

What is artificial intelligence? 

What do we mean when we talk about artificial intelligence in the context of legal practice?  

At its broadest, artificial intelligence, or AI, is defined as technologies and systems comprising software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action.  

The subset of AI that has attracted interest, and debate, is generative AI (GenAI), which uses deep learning algorithms to generate new outputs based on large quantities of existing or artificially created input data. These outputs can span multiple modes such as text, images, audio or video. GenAI systems have been trained on massive amounts of data, and work by predicting the next word or pixel to produce a creation.

As the Law Society (of England and Wales) succinctly puts it, traditional AI recognises, while generative AI creates. Generative AI has the capacity to create new content based on the data that has been fed into it. However, it does not have the capacity to validate or check its outputs. 

Generative pre-trained transformers (GPTs) are a family of AI language models ‘pre-trained’ on large data sets to generate human-like text responses.

A large language model (LLM) is an AI system trained on an exceptionally large amount of data. It uses machine learning to compute a probability distribution over words, predicting the most likely next word in a sentence based on the preceding text. Language models learn from text and can be used for producing original text, predicting the next word in a text, speech recognition, optical character recognition and handwriting recognition.
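To make the idea of a ‘probability distribution over words’ concrete, the following minimal sketch in Python is purely my own illustration, not drawn from any real LLM. It builds a toy bigram model: it counts which word follows which in a tiny, invented corpus and converts those counts into probabilities. Real LLMs learn these probabilities with billions of parameters rather than simple counts, but the prediction step is conceptually the same.

  # Toy bigram 'language model': count each word's successors in a tiny
  # corpus, then turn the counts into a probability distribution over
  # the next word. Illustrative only.
  from collections import Counter, defaultdict

  corpus = (
      "the court held that the contract was void "
      "the court found that the claim was statute barred"
  ).split()

  # Count how often each word follows each other word.
  successors = defaultdict(Counter)
  for current_word, following_word in zip(corpus, corpus[1:]):
      successors[current_word][following_word] += 1

  def next_word_distribution(word):
      """Return {candidate next word: probability} given the current word."""
      counts = successors[word]
      total = sum(counts.values())
      return {w: n / total for w, n in counts.items()}

  print(next_word_distribution("the"))
  # {'court': 0.5, 'contract': 0.25, 'claim': 0.25}

  # The model predicts 'court' as the most likely word after 'the':
  distribution = next_word_distribution("the")
  print(max(distribution, key=distribution.get))  # 'court'

An LLM does the same thing at vastly greater scale, conditioning on the whole preceding text rather than a single word, which is what allows it to produce fluent, human-like continuations.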

Familiar large language models include ChatGPT, developed by OpenAI, and Gemini (formerly known as Bard), developed by Google. There are now many other LLMs with names as diverse as BERT, Claude, and Ernie. 

Using AI in legal practice 

In 2016 the Law Society of New South Wales established its Future Committee and, in turn, the Future of Law and Innovation in the Profession Commission of Inquiry to provide the legal profession with recommendations which might enable lawyers to better accommodate new concepts and ideas and to adapt to changes.

In its report, published in 2017, the Inquiry found that:  

  • clients are seeking greater value for legal services and increased competition amongst lawyers is fuelling change, as is the increasing use of technology  

  • change has also brought with it new ethical and regulatory issues  

  • there is an increased awareness that future law graduates need to be equipped with new skills to meet the current and future demands of the profession, and  

  • the wellbeing and mental health of lawyers needs to be safeguarded by appropriately supporting them through the process of change.

The Inquiry also found that artificial intelligence raises regulatory and ethical issues that require investigation and guidance for solicitors.

In 2021 the Law Society (of England and Wales) published a rather bleak analysis. Its report, ‘Images of the Future Worlds Facing the Legal Profession 2020-2030’, outlined a legal profession largely replaced by artificial intelligence and self-service legal advice by the end of the 2020s. It forecast a ‘savage reduction’ in full-time jobs by 2050. The report suggested that those human lawyers who remain will work alongside technology – and be required to take ‘performance-enhancing medication in order to optimise their own productivity and effectiveness’.

Those leading the discussion about the impact of generative AI on legal practice have identified key areas for transformation, including decreasing lawyer effort while increasing high-value services. In contrast with the very dark forecast of the Law Society (of England and Wales) that generative AI will result in fewer lawyers, others argue that GenAI is merely a tool and will not change the way law is practised: lawyers will have more time to spend with clients, with work traditionally performed by more junior lawyers undertaken by GenAI. Where it will have an impact is in challenging the traditional law firm billing model based on hourly rates.

The likely impact of GenAI on the legal profession is now the subject of considerable analysis. 

The legal publisher Wolters Kluwer, in its Future Ready Lawyer Report 2023, provides a snapshot of how rapidly attitudes among lawyers towards the use of AI are changing. In 2023, 73% of lawyers surveyed anticipated integrating generative AI into their legal work within the next 12 months. This compared with the 2019 survey, in which only 58% of lawyers surveyed predicted that AI would have an impact on their work over the next three years.

In a report released this year, the Thomson Reuters Institute found that 88% of corporate legal departments believe AI can be applied to their work, primarily to increase efficiency and productivity. While only 12% of legal industry respondents to its survey say they use legal-specific GenAI today, an additional 43% say they plan to do so within the next three years.

Goldman Sachs has estimated that 44% of legal tasks could be performed by AI, more than in any other occupation surveyed, other than clerical and administrative support. Lawyers are able to use AI for a variety of tasks that are otherwise time consuming such as due diligence, research, and data analytics. These tasks are all “extractive” AI, that is, using applications that extract information from text.  

Generative AI is much more powerful. Commercial providers such as LexisNexis and Microsoft are already introducing AI platforms created specifically for lawyers. There are also firms that have taken platforms like ChatGPT and adapted them for use by lawyers, such as the legal software system RobinAI, which assists in speeding up the drafting and querying of contracts.

Two American lawyers who have written extensively on AI in legal practice, Natalie Pierce and Stephanie Goutas, argue that ‘much like previous technological advances, [AI] may be poised to redefine the role of legal professionals rather than displace them.’ AI can automate time-consuming and routine work which then allows legal professionals to undertake the more complex and higher value work. 

Already some global law firms have signed on to the integration of AI into their work practices. Allen & Overy announced in February 2023 that it was integrating Harvey into its practice. Harvey is an artificial intelligence platform built on a version of OpenAI’s GPT language models. As well as the general internet data that underlies GPT, Harvey is trained in legal data including case law. Harvey uses natural language processing, machine learning and data analytics to automate and enhance areas of legal work such as contract analysis, due diligence, litigation and regulatory compliance. It can help to generate insights, recommendations and predictions based on large volumes of data. The system alerts lawyers to fact-check the content it creates. 

In its analysis of the possible impact of AI on legal practice, The Economist outlines three ways in which it views AI as having the potential to transform the legal profession. First, in large, complex lawsuits, detailed documents can be uploaded into a litigation preparation AI and one lawyer can undertake the interrogation, resulting in a leaner, specialised firm.

Second, AI could change how firms bill for their time. If AI can do the work of multiples of young lawyers, firms will need to change their billing practices. The tyranny of the hourly rate may disappear, with flat fees charged for the work. Or, as The Economist suggests, clients might be charged a “technology fee” that reflects the cost of the firm’s acquisition and/or development of appropriate AI.

Third, AI might change the number of young lawyers needed to undertake the ‘grunt’ work, with a consequent change in hiring practices and in the ratio of partners to young lawyers within law firms. This has implications for young law graduates and indeed for law schools, which continue to enrol and graduate large numbers of young lawyers.

You may have read an article last week in the Australian Financial Review, detailing the AI tool that Minter Ellison has built based on a GPT-4 platform. According to the firm’s CEO, Virginia Briggs, the tool can prepare a basic piece of legal advice in 15 minutes, a task that would take a graduate lawyer up to eight hours. Ms Briggs acknowledges the likely impact of AI on graduate level lawyers but argues that it will not take away work but rather change the way work is undertaken. 

Richard Susskind, a veteran British commentator on technology changes in legal practice, has said that ‘[p]eople who go to lawyers don’t want lawyers: they want resolutions to their problems or the avoidance of problems altogether’. If AI can provide the solutions, then clients will be satisfied to use AI-generated assistance.

In a commentary published by the Brookings Institution in 2023, John Villasenor wrote that ‘AI will make it much more efficient for attorneys to draft documents requiring a high degree of customization—a process that traditionally has consumed a significant amount of attorney time. Examples include contracts, [court filings], responses to interrogatories, summaries for clients of recent developments in an ongoing legal matter, visual aids for use in trial, and pitches aimed at landing new clients. AI could also be used during a trial to analyse a trial transcript in real time and provide input to attorneys that can help them choose which questions to ask witnesses.’

A recent trial of the use of AI by the global firm Ashurst, involving 411 staff across 23 offices in 15 countries, found time savings of 80 per cent in reviewing articles of association, 60 per cent on company research reports, and 45 per cent on client briefings. The Ashurst trial used a blind study to judge the quality of AI-assisted case studies and found that in all but one case, the summaries produced by humans were judged to be of higher quality. The head of Ashurst Advance is quoted as saying that ‘the trial’s findings had implications for the training of young lawyers and that AI could bring an end to the billable hour by forcing law firms to charge for the value of work completed rather than the time taken to do it.’

In an interesting discussion of the potential use of AI to assist in litigation, Don Farrands KC identified four key beneficial areas: 

  • removing repetitive and relatively low-skilled work, such as reviewing vast volumes of discovery 

  • providing more powerful search engines and analysis regarding legal principles and arguments 

  • providing predictions on court proceeding outcomes, and 

  • providing opportunities to mine vast volumes of data to determine whether relevant expert material can be used, or criticised, in proceedings. 

AI will also become the focus of law firm competition. As I have outlined, some of the large law firms are already investing in acquiring or developing AI suitable for their business. An argument has been made that law firms that fail to utilise AI will be at a competitive disadvantage. John Villasenor argues that ‘[l]aw firms that effectively leverage emerging AI technologies will be able to offer services at lower cost, higher efficiency, and with higher odds of favourable outcomes in litigation. Law firms that fail to capitalize on the power of AI will be unable to remain cost-competitive, losing clients and undermining their ability to attract and retain talent.’ 

What could possibly go wrong? 

In considering a framework for risk management of AI, the United States Department of Commerce suggests that ‘AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed. These risks make AI a uniquely challenging technology to deploy and utilize both for organizations and within society. Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities. With proper controls, AI systems can mitigate and manage inequitable outcomes’.

One issue that has received some attention is the accuracy, or more correctly, the inaccuracy, of GenAI. Generative AI is known to ‘hallucinate’, that is, to offer up made-up or incorrect information. The confident and chatty style of ChatGPT, for example, can mask the fact that the information provided is completely wrong.

In the United States there have already been several cases where lawyers have been suspended from practice because they relied on citations to fabricated cases that AI had ‘hallucinated’.

The case of Mata v. Avianca, Inc., in the United States District Court for the Southern District of New York, set a precedent for what can go wrong and for the action a court might take in response.

Steven Schwartz was a personal injury lawyer at the New York firm Levidow, Levidow & Oberman who used ChatGPT to draft a court filing. Unfortunately, ChatGPT created a motion that cited non-existent judicial opinions with fake quotes and citations. Mr Schwartz continued to stand by the fake opinions even after being questioned by the court. The court found that Mr Schwartz and his firm had breached the New York Rules of Professional Conduct, which state that ‘[a] lawyer shall not knowingly make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer.’

Since the Mata case, cases involving lawyers and hallucinated or erroneous material are becoming a regular occurrence in the United States. In a case in the Massachusetts Superior Court, counsel for the plaintiff filed four memoranda in response to four separate motions to dismiss. In reviewing the memoranda, the judge could not find three of the cases cited in two of the memoranda. When the lawyer was quizzed as to why he had included the cases, he responded that he did not know. The judge then asked for a written explanation. In filing a response, the lawyer acknowledged that he had ‘inadvertently’ included citations to multiple cases that did not ‘exist in reality’ which he attributed to an AI system used in his office. 

While the lawyer had checked the filings for style, grammar and flow, he told the court he had not checked the accuracy of the citations. The outcome was the imposition of a fine on the lawyer and a firm statement from the judge on the “broader lesson” for lawyers. He said, ‘[t]he blind acceptance of AI-generated content by attorneys undoubtedly will lead to other sanctions hearings in the future, but a defense based on ignorance will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known.’

It is not only the data coming out of the AI system that needs careful monitoring, but also the information going into it. Data submitted to an AI tool can become part of the model. For lawyers it is critical that this feed does not violate confidentiality obligations to clients, compromise their privacy or confidential information, or breach other laws such as anti-discrimination laws.

An area for potential bias in the application of AI is its use in human resources where it is used to trawl large numbers of applications. Without human oversight there is the risk of bias built into the AI process. In the United States some jurisdictions have enacted their own AI employment laws such as New York City’s requirement that employers subject AI hiring tools to an independent audit for bias no more than one year before their implementation. The New York law prohibits employers from using automated tools to screen candidates unless the software has been independently reviewed for bias against protected groups. Furthermore, all job candidates who live within New York City must be notified if the AI software is used during the hiring process.  

Using AI to democratise access to the law 

The Law Council’s Discussion Paper on challenges for the future legal profession identified that one of the outcomes of the utilisation of new technology in legal practice was the empowerment of the user. Writing at that time about the internet, the Discussion Paper noted that online access can make clients better informed and that, as clients become more empowered, they will begin to expect more of their lawyers.

Richard Susskind, whom I have referred to as a leader in the field of legal practice and technology, devised the term ‘latent legal market’. Susskind noted, in 1998, that there were many areas of commercial activity where non-lawyers would benefit from legal advice but did not seek it due to issues such as cost and accessibility. According to Susskind, ‘a vast latent legal market will emerge on the so called information superhighway, giving everyone (and not just lawyers) ready and inexpensive access to legal products and information services’.

One of the arguments in favour of the legal profession embracing AI is the potential to expand delivery of legal services, enabling more providers to offer affordable services.  

In 2014 the American Bar Association established its Commission on the Future of Legal Services with the aim of improving the delivery of, and access to, legal services. Its report was published in 2016. Among its findings were that most people living in poverty, and the majority of middle-income individuals, do not receive the legal help that they need. Further, pro bono work alone cannot provide the poor with adequate legal services to address their unmet legal needs, and the traditional law practice business model constrains innovations that would provide greater access to, and enhance the delivery of, legal services.

Writing in the foreword to the report, former ABA President, William C Hubbard, who commissioned the report, stated that ‘[w]e must open our minds to innovative approaches and to leveraging technology in order to identify new models to deliver legal services. Those who seek legal assistance expect us to deliver legal services differently. It is our duty to serve the public, and it is our duty to deliver justice, not just to some, but to all.’ 

The report asserts that ‘[t]he justice system is overdue for fresh thinking about formidable challenges. The legal profession’s efforts to address those challenges have been hindered by resistance to technological changes and other innovations. Now is the time to rethink how the courts and the profession serve the public.’ This indeed is the challenge to the legal profession – to use technology not only to improve efficiency but to reshape the way in which legal services are provided to those who otherwise have no access. 

There are now several instances of non-lawyers taking up the challenge to use technology to radically expand access to legal services.

A recent example is the software developed by a company called Grapple which provides advice to members of the public on a range of workplace issues from bullying and harassment to redundancy. It is able to generate legal letters and provide summaries of cases. 

In early 2023 a company called DoNotPay, using a ‘robot lawyer’ and chatbot, attempted to appear in a court in the United States, representing a client charged with traffic offences. DoNotPay was established by a student at Stanford in 2015, using the technology initially to dispute parking tickets. The client pays a bimonthly subscription fee, with the service available in the United States and the United Kingdom. The company describes its services as ‘your AI consumer champion’. The list of applications for its AI tools is extensive, ranging from refunds for flight tickets to cancelling subscriptions to filing complaints with administrative agencies. 

The legal advice provided by DoNotPay used machine learning to match text and voice recognition with a dataset comprising legislation and legal precedent. In the case referred to, DoNotPay was forced to cease the action on the basis that it was unlicensed to practise law.

The need for guardrails  

Despite suggestions to the contrary, this is not the Wild West and guardrails are needed to ensure that firms, in their enthusiasm to embrace AI technologies, do not fail to adequately manage risks.  

In Australia there is currently no AI-specific legislation, although there is increased examination of whether current laws are sufficient. AI is governed by existing legislation. As the Australian Government’s interim response to the consultation on safe and responsible AI noted, ‘businesses and individuals who develop and use AI are already subject to various Australian laws. These include laws such as those relating to privacy, online safety, corporations, intellectual property and anti-discrimination, which apply to all sectors of the economy.’

Joe Longo, chair of the Australian Securities and Investments Commission, in a keynote speech earlier this year, made the point that current regulation may not be sufficient.

The challenge of regulating use by the legal profession of artificial intelligence was considered in a speech in 2018 by a former president of the Law Council, Morry Bailes, who said:

From a regulatory perspective, we recognise that regulation of the legal profession and the provision of legal services has, generally speaking, evolved in response to problems after they have emerged. One of our key challenges as a profession is to work toward shaping a regulatory and ethical framework that is not simply reactive, but which fosters and accommodates innovation, so that the benefits of developing and deploying new technology-based tools, as well of new ways for lawyers to work, organise and provide legal services, are encouraged and realised. In looking at regulatory responses to the growth of technology and new ways of working in the legal services industry we must ensure that we do not ‘regulate-away’ the benefits for consumers, courts and the profession, nor should we stifle innovation and competition. If we are too conservative, we run a risk of devising overly protective and controlling regulatory measures. On the other hand, regulation of the legal profession and the provision of legal services serves the public interest and protection of consumers by ensuring quality, of both the knowledge and skills of legal practitioners, and the services they provide.

Some legal professional bodies have begun to construct guidance about the safe and/or ethical use of generative AI.  

The Law Society (of England and Wales) in its guidance to members has identified the following risks:  

  • intellectual property risks: potential infringements of copyright, trade marks, patents and related rights, and misuse or disclosure of confidential information 

  • data protection and privacy risks: concerns related to the unauthorised access, sharing or misuses of personal and sensitive data 

  • cybersecurity risks: vulnerabilities to hacking, data breaches, corruption of data sources and other malicious cyber activities 

  • training data concerns: the use or misuse of data to train generative AI models, which could result in biases or inappropriate outputs 

  • output integrity: the potential for generative AI to produce misleading, inaccurate or false outputs that can be misconstrued or misapplied 

  • ethical and bias concerns: the possibility of AI models reflecting or amplifying societal biases present in their training data, leading to unfair or discriminatory results 

  • human resources and reputation risks: if the use of generative AI results in negative consequences for clients, there may be reputational and brand damage.

The NSW Law Society has recently published a guide for solicitors on the responsible use of artificial intelligence. The guide identifies the range of potential issues for legal practice in the use of generative AI, including accuracy and bias.  

The NSW Law Society guide highlights the relevant conduct rules that govern the use of AI in practice. These include: 

  • Rule 4 – competence, integrity and honesty, which requires full disclosure to a client when a generative AI program is used 

  • Rule 9 – confidentiality. Generative AI uses the information that has been fed into the system, so care is needed not to share information that is not publicly available, leading to a breach of confidentiality and loss of client privilege 

  • Rule 17 – independence and the avoidance of personal bias, using best judgement and not merely relying on information generated by AI  

  • Rule 19 – duty to the court by not misleading or deceiving the court, even inadvertently 

  • Rule 37 – supervision of legal services. Where AI is used in a practice, critical evaluation of the accuracy and completeness of the output is required. 

The NSW Bar Association issued guidelines on the use of GenAI in July 2023 outlining the practice issues that might arise from using such models. The guidelines note that under the Uniform Law provisions, barristers are bound by professional conduct rules and ethical obligations, which include providing competent and diligent representation, maintaining independence and integrity, and maintaining the confidentiality of client information.  

The guidelines continue: ‘[w]hen considering whether to use ChatGPT or any other tool which co-creates content, barristers should ensure that they are complying with those rules and obligations.’ They point also to elements of the Barristers Rules which reflect expectations of barristers as specialist advocates, requiring them to apply their own skill and to exercise their own judgement.

The NSW Bar guidelines provide some useful suggestions on how to approach practice using GenAI. They recommend keeping a record of the prompts that have been used (in other words, the search history), the choices that have been made, and the results generated by the AI tool. They also recommend that barristers be transparent with clients about their use of AI tools in assisting legal representation.

The need for digital safeguards and guardrails is obvious. The benefits of using the technology can be demonstrated, but unless we have policies and regulations that govern its use in the legal context, the risks will overshadow the benefits. Some organisations are already ‘leveraging a combination of frameworks and existing rulebooks for privacy and anti-discrimination laws to craft AI governance programs’.   

At its midyear meeting in 2023, the American Bar Association turned its attention to organisations that design, develop, deploy and use AI systems and capabilities, including lawyers and firms, and urged them to follow specific guidelines. The ABA asked developers of AI systems to ensure that their products and services are subject to human authority, oversight, and control. The resolution that was passed noted that individuals and organisations should be accountable for their use of AI products and services, including any legally cognisable injury or harm caused by their actions or use of the AI systems, unless they have taken reasonable measures to mitigate against that harm or injury. Furthermore, developers should ensure the transparency and traceability of their AI products and services by documenting key decisions.

In the Australian context, the federal government’s voluntary framework of AI Ethics Principles includes accountability, specifically that ‘[p]eople responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled’. 

The Australian AI Principles also include transparency and explainability stating that ‘[t]here should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them’. 

The principle goes on to state that when AI is used, responsible disclosures should be given in a timely manner, and provide reasonable justifications for AI systems outcomes, including information that helps people understand outcomes.  

Conclusion 

Pierce and Goutos (referred to earlier) argue that ‘the evolution ahead calls for a thoughtful and strategic approach, centered on embracing new technologies, modernizing legal education, and providing the necessary training. This strategy is designed to equip legal professionals with the requisite skills needed to collaborate effectively with sophisticated AI systems, underscoring the importance of adaptability and continuous learning. Our goal should be not to ban or eliminate GenAI from the legal industry, but instead to skillfully train legal professionals so they understand how to leverage the technology responsibly to enhance their unique skills and, in turn, their practice of law’. 

To this I would add that our legal professional bodies, charged with regulating the profession, should consider whether the current conduct rules are sufficiently flexible to apply to legal work undertaken in an AI context. As I have pointed out, some thoughtful preliminary guidance is already available from different professional bodies.

To the question, do lawyers have a future, my answer is definitively ‘yes’. While AI may spare lawyers time in trawling large volumes of material, the lawyer brings judgement, empathy, reasoning, and strategy.

There are challenges, however, in the way we educate and train our early career lawyers. Their efforts need to be valued, not devalued.  

And finally, the use of AI to assist and support those citizens who otherwise do not have access to legal services has to be a key reason to support the further evolution of AI in legal practice.