30 April 2018

Digital disruptions that are transforming our world – and that of our clients

For law firms, one of the most fundamental impacts of the 4th Industrial Revolution and its emerging disruptive digital innovations will be the transition from technology as a set of tools in a people-leveraged business model – helping those people to become more efficient and to do more sophisticated work – to technology as a core generator of actual legal work product.

This transition from core work being done by people assisted by machines to work being done by machines controlled by people will have a transformational impact on business models.

As shown in figure 15, a continuum exists from work that can only be done by humans (because the degree of intellectual ambiguity and complexity requires a level of judgement or empathy of which machines are incapable) to work that can only be done by machines (because of the sheer volume of data involved, the complexity of the analysis to be performed, or because the machine can do the work as well as or better than humans, far less expensively).

We can expect a steady shift in activities from left to right across the spectrum, with machines taking over more and more existing work and humans, aided by those machines, being able to take on more sophisticated and entirely new kinds of work.

A point will no doubt be reached where charging for the work on the basis only of the human component of the effort and investment by the firm becomes nonsensical. At that point, the ‘billable hour’ will likely cease to have relevance except in rare circumstances.

In addition to the opportunities and challenges that digital transformation will bring for law firms, there are those it will create for clients, and the impacts it will have on the law itself. The three need to be considered together.

Figure 15

 

Artificial Intelligence (AI): opportunities wrapped in a threat
The promise of artificial intelligence is a digital assistant that can almost instantaneously research every legal, financial, technical and other aspect of a matter, in any language, and provide a lawyer with a good overview of a client’s legal position, within its strategic business context. It promises to allow lawyers to elevate their core advisory role from compliance, risk and dispute resolution to advising on more complex issues, such as how clients can use the law to improve their strategy and competitive advantage. It makes it possible to advise clients in areas where the amount and complexity of the data involved make it simply impossible to do so with humans alone. It promises to take over much of the soul-destroying work that has been the bane of associates’ lives for decades. While today AI is limited to (increasingly sophisticated) machine learning applied to quite narrow tasks, deep learning is enhancing its capabilities and, in turn, the ‘super powers’ of lawyers using the tools.

Perhaps fictional robotics and Hollywood are to blame, but when people think about AI, the meaning of ‘intelligence’ conjured up is the one that applies to human or animal intelligence. It would be more accurate to think of it in the sense in which the term is used by MI5 and the CIA. The tools that exist today, and that are likely to continue to exist unless and until quantum computing becomes a reality (which would likely change everything), are essentially expert systems and systems for analyzing vast amounts of data. Beyond that, deep learning tools seek to replicate narrow bands of human behaviour, creating, in carefully defined contexts, outputs similar to those humans would have created under similar circumstances.

It has been said that AI systems and computing generally are very good at performing tasks that humans cannot do well (like searching for patterns in vast amounts of data and performing complex calculations) and very bad at doing what humans do well (managing ambiguities, applying judgement, thinking ‘out of the box.’) Applying this logic suggests that even in the longer term, the work that AI displaces will not be the cutting edge, deeply judgement-driven work that excellent lawyers love to do. If anything, automation of lower order work will create opportunities for lawyers to deliver far more sophisticated legal services to their clients.

AI can do work that is simply beyond the capabilities of humans, because of the level of complexity involved or the sheer volume of data. In other cases, AI can do work that humans could do, to an acceptable level of quality, but less expensively and far more quickly. So, it enhances efficiency and reduces cost.

Potentially, AI and other LegalTech developments could pave the way to redefining the roles of both in-house lawyers and external legal advisors, returning law firm partners to the trusted business advisory roles that used to characterize the profession with business clients before the advent of procurement departments and RFPs. The most successful LegalTech systems are not so much about adding intelligence, as about removing mindless, soul-destroying tasks from lawyers’ practices and vastly reducing risk of mistakes.

For now, most of what is termed AI is more accurately referred to as machine learning. The world appears to be nowhere near machines that are able to replicate the human mind, but it would be dangerous to assume this to be certain. Researchers often draw the distinction between ‘hard’ and ‘soft’ AI. While research into hard AI seeks to replicate the flexible and adaptive reasoning processes of the human brain, soft AI simply seeks to replicate human outputs in a narrowly defined task.

Currently, all working forms of AI are soft. The debate rages about when, if ever, there will be a breakthrough in hard or fully cognitive AI. Estimates from those working in the field vary widely, from ‘several years’ to ‘never’ (see figure 16, which presents a table of myths and facts drawn from Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence (Tegmark, 2017)).

This section is not intended to provide a comprehensive treatise on AI and legal services. To do so would require a substantial book. Instead, it provides a general overview of a number of areas that are most likely to have a transformational impact, commenting especially on the implications for business models. Figure 17 presents the logos of a range of AI platforms that are focused on legal services, grouped into four categories:

  • Legal research and analysis
  • Prediction technology
  • Intellectual property
  • Electronic billing and practice management

Unlike older LegalTech, legal research and analysis tools now routinely use natural language processing: technology able to understand language in its natural form, for instance a legal document or even the spoken voice. Instead of relying on man-made rules, the computers use machine learning to develop recognition patterns and create predictive algorithms. These can not only understand data presented in one language but also automatically translate it into any other language for which such algorithms exist in their databases. The machine does not (nor does it need to) understand language in the same way that a human does.

Machine learning is proving immensely useful in a wide range of applications related to legal research, including contract review, due diligence and eDiscovery. The tools require training and successes achieved by different law firms vary according to how much effort they have invested in this and the quality of the data used. One frequently hears one law firm say that they tried one or other AI product but were disappointed by the result, while another firm says that they are achieving stellar results with exactly the same platform.

While eDiscovery tools have been available for at least a decade, the AI-enabled tools of today are a far cry from the word-search algorithms of before. Today, predictive coding can also determine the context of a particular search term, greatly improving its ability to pick out relevant documents and data from databases which, if printed out, would constitute many billions of pages.

The value of this is obvious for litigation, antitrust filings, regulatory compliance investigations and any other situation where unmanageably large and complex sources of data are involved.
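As a minimal illustration of how predictive coding works in outline (a sketch of the general approach, not any particular vendor’s product), lawyers code a small seed set of documents as relevant or not, a model learns from those examples, and the remaining documents are ranked by predicted relevance. The document snippets and labels below are invented placeholders.

    # A toy predictive-coding sketch using scikit-learn: learn 'relevance'
    # from a human-coded seed set, then rank unreviewed documents.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Seed set: documents a lawyer has already coded (1 = relevant, 0 = not).
    seed_docs = [
        "termination for convenience upon thirty days written notice",
        "minutes of the quarterly health and safety committee meeting",
        "indemnity for losses arising from breach of the licence terms",
        "car park allocation and visitor badge procedures",
    ]
    seed_labels = [1, 0, 1, 0]

    # Unreviewed documents to be ranked by predicted relevance.
    unreviewed = [
        "supplier indemnity for losses arising from breach of contract",
        "catering options for the summer staff picnic",
    ]

    vectorizer = TfidfVectorizer()
    X_seed = vectorizer.fit_transform(seed_docs)            # learn term weights from the seed set
    model = LogisticRegression().fit(X_seed, seed_labels)   # learn what 'relevant' looks like

    scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
    for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {doc}")

Real eDiscovery platforms add iterative review rounds, sampling and quality controls on top of this basic loop, but the principle of learning from coded examples rather than hand-written rules is the same.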

Contract review is another application for machine learning and, unlike eDiscovery, the tools that are used for this acquire knowledge between matters. In other words, they learn from exercise to exercise, over time becoming more competent at analyzing different kinds of document. The tools are typically supplied with a certain level of knowledge pre-loaded but the true value comes from a law firm training the tool with knowledge from its own knowledge management systems and its own lawyers’ skills. Unlike humans, AI review tools never actually forget what they learn. In a small percentage of cases (typically under 10% for the best tools) the recollection may be imperfect, but this reduces through repeated experience. They are affected by human biases only to the degree that humans communicate these to them – usually inadvertently.

Figure 16

Due diligence is another area where machine learning is proving efficient and highly cost effective. The amount of data that businesses generate, and that is relevant to mergers and acquisitions and other circumstances in which due diligence is required, is increasing exponentially. For large, complex businesses, the point has long since been passed at which humans can perform complex due diligence effectively, and complaints routinely emerge in the M&A literature about instances where due diligence failure has contributed to M&A failure (for instance Kirk, 2017).

As the tools mature, one might expect that less human intervention will be necessary and the transaction costs to the law firm of executing a matter will reduce dramatically. Humans will only need to check the most important documents. Where the risks involved are acceptable, machine results will likely be enough. Using natural language processing, it is also possible to create legal research tools that can be queried in spoken language or text and which can then search for answers and present these back, also in natural language. Such tools, as they develop, will further drive digital-leveraged versus people-leveraged business models and enhance the accuracy of other AI machine learning tools.

In the next 3 – 5 years, we can expect to see more sophisticated natural language processing tools that can analyze the context of provisions that they are trained to identify, further enhancing their ability to identify data that is responsive to the matter in hand. One might also expect these tools to be integrated with other kinds of technology, for instance blockchains and the IoT to routinely monitor compliance and update systems. Lawyers might become involved in designing and building such systems. Should quantum computing become a reality the field will undergo a step change, the magnitude of which is probably impossible to comprehend.

Even without quantum computing, AI running on classical computers is advancing from simple machine learning to deeper capabilities, for instance deep learning, a form of AI designed to behave in a way loosely analogous to the human brain. At this level, the AI becomes meaningfully self-teaching. Astounding results are being achieved in areas ranging from image recognition, to discovering disparate relationships and patterns hidden deep in vast data sets, to more sophisticated and accurate real-time language translation.

However, for many it is of some concern that legal work product is being delivered by technology that is behaving, at least to a degree, autonomously. In a world where such severe limitations exist on who (or in this case what) can practice law, ethical considerations arise. But these concerns will not halt the inexorable growth in the capabilities of AI.

The drivers of this growth are firmly on the client side, in mainstream government, commerce and industry. Investments in legal AI tools pale into insignificance when compared with those in other sectors, especially by technology giants like Amazon, Apple, Cisco, Facebook, GE, Google, IBM and Intel. Limiting the legal profession’s ability to use these tools to the full will likely do no more than limit the ability of lawyers to advise their clients adequately, on these matters and more generally.

Some firms have been quick to seize the opportunity offered by AI and are now well advanced in passing the advantage on to clients. At the extreme end of the innovation curve are firms that are working with AI and other technology developers to create entirely new ways of doing legal work.

At the other end of the spectrum are firms who are grudgingly taking up these new digital tools once the pressure from clients becomes inexorable but who use them only narrowly, to perform specific tasks in what remain essentially human-leveraged work processes. The latter would seem a dangerous strategy to follow in a world where things are moving so fast and so profoundly.

Figure 17

Sources and references

Kirk, J., 2017. Why M&A due diligence flounders in the face of Big Data and how smart technology can help. Financier Worldwide, accessed at https://www.financierworldwide.com/why-ma-due-diligence-flounders-in-the-face-of-big-data-and-how-smart-technology-can-help/#.WuCqVi-ZN24

Tegmark, M., 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. Allen Lane.

Blockchains: a world without intermediaries?
October 23, 2008

Stock markets across the globe were in freefall, the New York Stock Exchange having plummeted 30% in five weeks. Lehman Brothers was bankrupt. Credit markets had ground to a halt, banks refusing to advance each other funds because they did not know who else amongst them was insolvent. Houses in some U.S. cities had lost 40% of their value. Unemployment was skyrocketing. Like many other businesses, over the following months law firms would conduct the most severe and painful layoffs in their history.

Confidence in government was at rock bottom. A U.S. presidential election was just two weeks away. The U.S. Congress, normally in recess so close to an election, was considering a raft of bail-out legislation to break the logjam. The House Oversight Committee had called the heads of the three global ratings agencies, Standard & Poor’s, Fitch, and Moody’s, to testify before it. Essentially, what it wanted to know was: how had they got things so wrong?1

October 31, 2008

An anonymous writer calling himself Satoshi Nakamoto published a paper titled ‘Bitcoin: A Peer-to-Peer Electronic Cash System.’ The paper posited a world where intermediaries were unnecessary. Where citizens could contract and transact with each other directly. Intermediaries, and in particular financial institutions as we understand them today, would be obsolete.2

The timing of the Bitcoin white paper is noteworthy. The world that the technology promises is an attractive one to those who feel that intermediaries are inefficient and extract rents that are not justified.

Much of the hype and media attention around blockchains/distributed ledger technologies (DLTs) has focused on cryptocurrencies – especially since late 2017 when active and wide-scale speculation in them began. These are a relatively small part of the picture though – albeit an important one. Just how important will become evident if public preference for cryptocurrencies over fiat currencies becomes widespread, for instance through peer-to-peer systems in international trade that reduce demand for the dollar and other major currencies. Public authorities and global financial institutions are, unsurprisingly, heavily focused on preventing this. Nakamoto’s libertarian utopia, in which citizens can transact and contract with each other without the need for an intermediary, holds great appeal for those who understand its promise. For legal and other professional advisors, it will be essential to understand where the service being provided really is advisory, and where the role being played is simply that of an intermediary.

Today’s DLTs are cumbersome and inefficient (see the parable in figure 9). The computationally intensive cryptographic puzzle-solving (‘mining’) required to operate them consumes a prodigious amount of energy. Significant and legitimate concerns exist about the opportunities for them to be used for illegal activities, for instance tax evasion, money laundering and other crimes.
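As a rough illustration of why mining is so energy-hungry, the sketch below shows the general proof-of-work idea (a simplification, not Bitcoin’s actual protocol): a miner must hash candidate blocks over and over until a hash happens to meet a difficulty target, and brute-force trial and error is the only way to find one.

    # A toy proof-of-work loop: keep trying nonces until the block's SHA-256
    # hash starts with the required number of zero hex digits.
    import hashlib
    from itertools import count

    def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
        """Return the first nonce whose hash has `difficulty` leading zeros."""
        target = "0" * difficulty
        for nonce in count():
            digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest

    # Each extra zero of difficulty multiplies the expected work by 16; real
    # networks tune the target so that enormous amounts of hashing are needed.
    nonce, digest = mine("previous-block-hash|list-of-transactions")
    print(nonce, digest)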

Besides driving down transaction costs in banking and other intermediary-dependent systems, research is also focused on how DLTs might enable more rapid and reliable adoption of the ‘Internet of Things’ (IoT). They may become an essential part of rendering advanced artificial intelligence safe. In a world where public leaks of private information are becoming commonplace, blockchains promise to restore a degree of privacy and control that many feel has been lost.

Many are beginning to regard blockchains/distributed ledger technology as representing an entirely new generation of the Internet or, more accurately, the World Wide Web (WWW), calling it ‘Web 3.0’.

By this logic, the Internet and WWW have gone through four generations, the most recent of which is still in its infancy.

Figure 18

Figure 19

Pre-Web Internet

This refers to the period before the WWW was invented and released to the public, in other words preceding 1991. (The WWW was invented by British scientist Sir Timothy Berners-Lee in 1989. He wrote the first web browser computer program in 1990 while employed at CERN in Switzerland. The web browser was released outside CERN in 1991, first to other research institutions and then to the general public on the Internet in August 1991).

Web 1.0

The early WWW consisted only of static websites, which people used to provide or access information and other data.

Web 2.0

This term has been in wide use for at least a decade and describes a more social, collaborative, interactive and responsive WWW that humans use as a platform to interact with each other. Social media platforms like Facebook, WhatsApp, Instagram and a vast array of social news sites, blogs and wikis stand at the core of Web 2.0.

Web 3.0

A clear definition of Web 3.0 has yet to crystallise, but two common themes are emerging, namely a web that is used by:

  • smart devices, connected through the IoT, to communicate with each other and to instruct and execute actions without human intervention;
  • humans, not only to socialise and share data, but also to contract and transact directly with each other rather than through intermediaries.

The above is a very superficial snapshot of a complex and continually evolving concept.

A more comprehensive overview of Web 3.0 would require exploration of concepts such as the semantic web, which is a web where all information is categorized and stored in such a way that a computer can understand it as well as a human. It would also require exploring the implications for the WWW of advanced AI (for instance to produce results similar to social bookmarking and social news sites without human subjectivity) and even emerging trends in virtual reality.

Figure 20

Figure 21

The purpose of this paper though is simply to provoke discussion about the likely impact of DLTs, acting together with other disruptive digital technologies and other kinds of geo-economic, socio-political and environmental trends, on the law, client legal needs and hence the business models that large law firms need to adopt in order to meet these effectively.

For that purpose, it is probably adequate to note that:

  • banks and governments especially are almost certainly right to fear the implications of the peer-to-peer, intermediary-free world described by Satoshi Nakamoto;
  • useful DLT applications are emerging in almost every industry sector and many other aspects of society;
  • for large law firms, the most important impacts of DLT are those that affect the legal needs of the firm’s most important clients;
  • many of these issues are common across client industry sectors so a sector approach to exploring these makes sense, at least at the outset;
  • smart contracts deserve particular attention because of the scope for liability to arise in novel ways, where links to particular human agents may be difficult or even impossible to establish. Coupled with that, smart contracts are as yet not that smart, as witnessed by the frequency with which ‘initial coin offerings’ (ICOs) are hacked;
  • blockchains / DLT also promise a whole host of applications that will prove useful for law firm management, ranging from document control and other aspects of matter management to enhancing partner involvement in governance processes;
  • legal and governance systems will need to evolve, and possibly fundamentally transform, in order to accommodate DLTs and the unprecedented realities that they create.

Sources and references

1 Silver, N., 2012. The Signal and the Noise: The Art and Science of Prediction. Allen Lane. p19.
2 Nakamoto, S., undated (but believed to be 31 Oct 2008). Bitcoin: A Peer-to-Peer Electronic Cash System. www.bitcoin.org. Accessed at https://bitcoin.org/bitcoin.pdf.

Big data: far beyond hard drives
When one speaks of big data generally, the amount of data in question is ‘big’ but not monumentally so. Terabytes or petabytes perhaps, or at a stretch zettabytes of data. There is also no agreed formal definition of ‘big data’ although the Oxford English Dictionary defines it as: “data of a very large size, typically to the extent that its manipulation and management present significant logistical challenges.”

But how big is ‘big data,’ really?

A recent article in Financier Worldwide posed the question: “Is due diligence actually working in today’s M&A transactions? To what extent does it help companies achieve the strategic business goals of the deals in question and pave the way for successful integration?”

An important question, given the many articles in business literature that show how frequently mergers and acquisitions fail.

In a recent Axiom survey quoted in the same article, only twenty one percent of respondents indicated they felt the outputs of due diligence in M&A transactions were “very effective” in helping a deal reach its expected synergy targets.

Respondents cited a lack of standardised processes, inadequate data alignment and output, and underuse of technology. As deals increase in size and complexity – and as the volumes of data that due diligence teams have to sift through and analyse continue to grow exponentially in this era of ‘big data’ – it seems unavoidable that these issues will get worse. Does that mean that the success rates for M&A might deteriorate yet further?

How does one perform due diligence on a database which, were it to be printed out, would constitute many billions or perhaps even trillions of sheets of paper? What if some of the crucial data is unstructured, or otherwise in a format that is difficult to access? How does one include data from devices connected through the IoT?

Who is responsible when crucial data is missed because the sheer volume of data involved makes it unrealistic to sample it thoroughly?

The same, of course, applies in equal measure to eDiscovery. The recent issues with Facebook and the use of account holder data by Cambridge Analytica highlight the privacy concerns involved. Most business articles on ‘big data’ focus either on such data protection and privacy issues or on business intelligence and the application of big data in M&A.

The legal system itself is a prodigious producer of big data. Roughly 350,000 court cases a year in the United States alone generate a vast amount of data. Within judgements, witness statements and pleadings lies a vast reservoir of data containing nuggets of value that could help win cases, if only they could be found amidst the sea of irrelevant information.

LexisNexis and Westlaw, the two giants of legal research, currently offer only quite limited analytical capabilities, beyond search functions (Marr, 2016) although that could change quite quickly should market demand warrant it.

Harvard Law School’s Caselaw Access Project1 is making 42,000 volumes containing roughly 40 million pages freely accessible online. But is even 40 million pages truly enough to qualify as ‘big data’? By the Oxford English Dictionary definition it is, in that its “manipulation and management present significant logistical challenges.” By comparison with other datasets that are emerging, however, it is quite modest.

Figure 22 shows the enormity of the task of analysing the vast data sets that are beginning to be created, as the digital age gains momentum.

Figure 22

 

As at the end of 2017, the amount of electronic data in the world (both structured and unstructured) was estimated to be roughly five percent of one yottabyte. That is a pool of data of a magnitude that is impossible to contemplate with a mindset that thinks of data in terms of memory sticks, hard drives and even cloud-based databases. Figure 22 seeks to bring some perspective to the issue by relating the various data volume metrics (gigabyte, terabyte, yottabyte, etc) to the height that a pile of DVDs would need to be, that contained that amount of data.

A standard DVD is 1.2mm thick and contains 4.47GB of data. A pile of DVDs containing a terabyte of data would therefore be roughly the height of a wine bottle. A petabyte of data would require a stack 273 metres / 896 feet high.
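The arithmetic is simple enough to sketch. The snippet below uses the figures quoted above (1.2 mm and 4.47 GB per disc); the exact heights depend on the disc capacity and rounding assumed.

    # Stack heights for a pile of DVDs holding a given amount of data,
    # using the figures in the text: 1.2 mm thick, 4.47 GB per disc.
    DISC_THICKNESS_M = 1.2e-3
    DISC_CAPACITY_BYTES = 4.47e9

    def stack_height_m(data_bytes: float) -> float:
        """Height in metres of a DVD stack holding `data_bytes` of data."""
        return (data_bytes / DISC_CAPACITY_BYTES) * DISC_THICKNESS_M

    units = [("terabyte", 1e12), ("petabyte", 1e15), ("exabyte", 1e18),
             ("zettabyte", 1e21), ("yottabyte", 1e24)]
    for name, size in units:
        print(f"1 {name:<9} -> {stack_height_m(size):,.1f} metres")
    # A terabyte comes out at roughly 0.27 m (the wine bottle) and a petabyte
    # at a few hundred metres, broadly consistent with the figures in the text.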

A yottabyte of data would require a stack of DVDs reaching from the earth to our sun, back again, and then a further thirty percent of the way back to the sun. If that does not persuade us that we need to start thinking about data in completely different ways, then consider that by 2020, the amount of data in the world is expected to be doubling at a rate faster than once a day.

Consider also the illustration showing the heights of the DVD stack for the largest terms that data scientists use, and then contemplate the sheer impossibility of analysing anything like those amounts of data in the old-fashioned way, with human eyeballs. Even the best machine learning tools that we have today would be hopeless at analysing such oceans of data, but they are improving all the time.

Of course, not all data is of equal importance and properly prioritising the data to be managed is crucial. Deploying data analytics to structure and filter large, complex data sets before they make their way into the data room will very quickly become essential.

The IoT is poised to dramatically ramp up big data. Combined with 5G and further generations of wireless networks, which will allow devices to communicate wirelessly instead of by cable, autonomous vehicles will become a common reality. These vehicles could generate up to a gigabyte of data per second (around two petabytes per annum) as they communicate through wireless networks, tracking the position of other vehicles, analysing their surroundings and processing a steady stream of traffic, environmental and other data.
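For a sense of how those two figures fit together, the back-of-envelope calculation below assumes roughly an hour and a half of driving per day; that assumption is ours for illustration, not a figure from the cited source (Van Rijmenam, 2017).

    # Reconciling ~1 GB per second of driving with ~2 PB per year, assuming
    # about 1.5 hours of driving per day (an illustrative assumption only).
    GB_PER_SECOND = 1.0
    HOURS_DRIVEN_PER_DAY = 1.5

    seconds_per_year = HOURS_DRIVEN_PER_DAY * 3600 * 365
    petabytes_per_year = GB_PER_SECOND * seconds_per_year / 1e6  # 1 PB = 1e6 GB
    print(f"~{petabytes_per_year:.1f} PB per vehicle per year")   # ~2.0 PB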

‘Big data’ is clearly beyond the scale at which it can be analysed by humans alone. Sophisticated AI tools are essential even to begin to deal with the volume. As pointed out in the section on AI, this raises the question of how the costs of such analysis can be recovered from clients in business models dominated by billable hours. The amounts being spent by most firms are still relatively modest, but this will likely change in due course. Clients are understandably reluctant for the costs to be passed on to them unless this is in the context of a lower overall price, but firms that are fundamentally built around the notion of the billable hour find it difficult to conceive of any other way to charge clients.

It has arguably been easier for architectural and engineering firms, with billing models that include fees based on the value of the project, to make this transition. CAD output is also easily transferable across national boundaries: a skyscraper is a skyscraper in New York as in Shanghai, but the law is not.

As the investments in technology increase and a greater proportion of the actual legal work is done by machines (under the control of lawyers, naturally) so the notion of regarding that technology as an overhead and charging fees only on the basis of the time-input of the lawyers involved becomes less valid.

It is likely that a break point will be reached. For firms who seize AI and other disruptive technologies as opportunities and develop deep competence in their use, this could be the point at which they create clear blue water between themselves and competitors who have not.

For law firms who are late, reluctant adopters of these technologies, the challenges involved in catching up once that break point is reached, may prove insurmountable.

 

Sources and references

1 Harvard Law School’s Caselaw Access Project is a collaboration between HLS and Ravel Law – see https://lil.law.harvard.edu/projects/caselaw-access-project/

Kirk, J., 2017. Why M&A due diligence flounders in the face of Big Data and how smart technology can help. Financier Worldwide Magazine, August 2017 edition, accessed at https://www.financierworldwide.com/why-ma-due-diligence-flounders-in-the-face-of-big-data-and-how-smart-technology-can-help/#.Wtze7C-ZN24

Van Rijmenam, M., 2017. Self-driving Cars Will Create 2 Petabytes Of Data, What Are The Big Data Opportunities For The Car Industry? Datafloq, accessed at https://datafloq.com/read/self-driving-cars-create-2-petabytes-data-annually/172

 

The ‘Internet of Things’
The Internet of Things (IoT) is another concept that has been with us for years but is only now poised to transform society. Introduced first in 1999, the IoT was for the first two decades of its existence a relatively low-key set of solutions, mostly connecting industrial equipment machine-to-machine (M2M) through non-wireless internet connections. 5G and further generations of wireless networks yet to be developed are changing that.

The idea of having literally billions of items (including many everyday things) connected to each other via the internet, exchanging data and even triggering and executing actions without human intervention, will create uncharted territory for legal services. Who is responsible when, under circumstances that could not be conceived by the owner of the technology, the developer of the software or any other human agency involved, a ‘thing’ executes an action that is illegal or in breach of a contract? How is blame apportioned? How are damages dealt with? What kind of contractual conditions need to be considered in any agreement involving devices forming part of the IoT?

Cybersecurity is an obvious concern. Security experts have already demonstrated the ease with which home networks can be hacked via WiFi-enabled devices, ranging from central heating systems to smart light bulbs. Without proper safeguards, the IoT could well end up inadvertently providing trojan horses for hackers to penetrate secure facilities where sensitive information is stored.

By its very nature, the IoT generates a vast amount of data (see the section on ‘big data’). This poses challenges for developers, who will need to keep systems efficient by generating and collecting only data that is relevant and necessary.

Many IoT applications will typically operate in concert, communicating with each other without the subjects of the data being collected even being aware that this is happening. The subjects would also likely be unaware of how the data about them is being processed and used. Their ability to give proper consent where this is required by law, and to exercise their rights in respect of the data collected, could therefore be impaired.

Given the sheer volume of the data and the number of devices that will be involved, risks also exist that private or otherwise sensitive data may be re-purposed in ways that were not originally anticipated and that may not be legal. Combining the IoT with AI raises the possibility of the technology even changing, unilaterally, the parameters within which it operates and manages data, without the human actors involved being aware at all. With the level of sanctions allowed by the EU General Data Protection Regulation for violations of data privacy obligations (fines of up to 4% of annual worldwide turnover or EUR20m, whichever is greater), the issues involved are very serious indeed.

Autonomous vehicles including drones raise an entirely new set of possibilities, some of which have already been the subject of lively public debate, especially where accidents have been involved.

Regulators are a long way from understanding how the IoT should best be controlled and how compliance should be measured. The European Commission has published a report on the result of its public consultation on the IoT. The obvious issues such as loss of privacy and data protection have been identified as areas that need to be addressed with new legislation, but this barely scratches the surface of the issues involved.

Given the absence of case law related to the IoT and the diverse failures that might occur, it may be some time before usable precedents emerge to guide courts or legal advisors. Clients, too, today have little more than a rudimentary understanding of what their legal needs will be with regard to the IoT as it unfolds.

For law firms wanting to advise clients in this area, it is difficult to see how a practice can be developed with legal talent alone. The issues involved are intensely technical and complex, involving a wide range of disciplines.

Figure 23

5G: ubiquitous, global, broadband connectivity
Different sources describe the future generations of wireless communication technology in different ways. Like other forms of emergent digital technology, the field is also not short on hype. This section provides a brief overview of wireless technology as we have experienced it so far, what 5G will deliver and what further advances are likely in the medium term.

The table opposite, showing past and possible future generations of wireless technology, is based on a paper presented at the 2017 International MultiConference of Engineers and Computer Scientists in Hong Kong1, supplemented by other sources. It is entirely feasible that 6G, 7G and later generations could be overtaken by new forms of communication technology, as yet unforeseen, that disrupt wireless technology in its current form. Emergence of viable quantum computing would make this more likely.

The promise of advanced wireless technologies is to provide, certainly within the next decade or two and perhaps more quickly:

Ubiquitous, high definition mobile multimedia communication and ultra-high-speed data streaming, anytime and anywhere.

In essence, this could completely de-tether work from the need for a specific location. The drawbacks to effective teaming caused by poor bandwidth (such as unreliable video conferencing and extensive areas without adequate coverage at all) will be resolved, and augmented reality could make virtual meetings and other collaboration at least as effective as face-to-face. Access to the firm’s and the client’s databases will be easy and quick. The global coverage provided by a network of geostationary communications satellites promises the ability to work as easily on a tropical island as in the centre of a city.

Following its debut at the 2018 Winter Olympics at Pyeongchang, South Korea2, 5G technology is now well proven and poised to be commercially deployed by about 2020. It will deliver:

  • download speeds of 10Gbps (~100x faster than in 2017; ~1,000x faster than in 2010)
  • connectivity of seven trillion wireless devices serving seven billion people
  • zero perceived downtime
  • 90% reduced energy needs through new efficient micro base station technology
  • cost per unit of data transmitted falling at the same rate that data volume increases.

As with the other technologies, the real impact and the most severe unanticipated consequences will likely result from advances in wireless communications acting in conjunction with other technologies. For instance, the potential of autonomous vehicles will be unlocked by the convergent and cumulative impact of the IoT, 5G and later wireless generations, and advances in automobile technology. High-quality broadband access in remote rural communities, coupled with localised electricity generation and good online education platforms, could transform not only education but also healthcare and access to other data-rich resources in frontier communities. Knowledge workers especially could live and work even in remote areas, should they so choose.

On the other hand, history has taught us that data flows tend to grow in step with the bandwidth available. So, it is entirely possible that in years to come we will still complain about buffering and connection failures caused by overload from a proliferation of IoT devices and far more data-intense applications, such as new forms of media.

 

Figure 24

 

Sources and references

1 Yadav, R., 2017. Challenges and Evolution of Next generation Wireless Communication. Proceedings of the International MultiConference of Engineers and Computer Scientists 2017 Vol II, IMECS 2017, March 15 – 17, 2017, Hong Kong.

2 Kim, S. and Kim, S., 2018. 5G is making its global debut at Olympics, and it’s wicked fast. Bloomberg Technology, 12 February 2018.

Barak, S., 2018. Onward to 5G, 6G and beyond. DesignNews, 2 January 2018. Accessed at https://www.designnews.com/electronics-test/onward-5g-6g-and-beyond/211102982658032

Messier, D., 2017. SpaceX Wants to Launch 12,000 Satellites. Parabolic Arc. Accessed at http://www.parabolicarc.com/2017/03/03/spacex-launch-12000-satellites/

Quantum computing: the next major frontier in computing power
What is quantum computing?

Writing even a simple introduction to quantum computing is difficult. The principles of quantum mechanics are very different to those that are familiar even to most scientifically literate people. The notation is unfamiliar and the mathematics complex. One might imagine it similar to somebody in the 1920s or 1930s, trying to write comprehensibly about today’s computers.

The concept itself is not new, though. Richard Feynman, an American scientist who received the Nobel Prize in Physics in 1965, realized in the early 1980s that one might be able to build computer processors that occupy a blend of classical states simultaneously, in the same way that matter behaves at subatomic levels.

How do quantum computers work?

Quantum computers operate in a fundamentally different way to the computers we know. Our modern computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1).

Quantum computers use qubits, which operate according to two key principles of quantum physics, namely superposition and entanglement (see figure 25). Superposition means that each qubit can simultaneously represent a 1, a 0, or a combination of the two. Entanglement means that qubits in a superposition can be correlated with each other: whether one qubit reads as a 1 or a 0 can depend on the state of another.
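To make those two ideas concrete, the toy example below (a sketch using plain NumPy, not a real quantum device) builds the simplest entangled state of two qubits and samples measurement outcomes: each qubit on its own is a 50/50 coin flip, yet the two results always agree, which is what ‘correlated’ means here.

    # Two-qubit Bell state (|00> + |11>) / sqrt(2), held as a 4-element vector
    # of amplitudes; outcome probabilities come from squaring the amplitudes.
    import numpy as np

    zero = np.array([1.0, 0.0])
    one = np.array([0.0, 1.0])

    bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

    outcomes = ["00", "01", "10", "11"]
    probabilities = np.abs(bell) ** 2        # [0.5, 0.0, 0.0, 0.5]
    samples = np.random.choice(outcomes, size=10, p=probabilities)
    print(samples)                           # only '00' and '11' ever appear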

A qubit can be thought of as a point on an imaginary sphere. Whereas a classical bit can only sit at one of the two poles of that sphere, a qubit can be at any point on the sphere.

The permutations are virtually limitless. This allows far more information to be stored and far more powerful processing, using less energy than a classical computer. With each additional qubit, the computational power of the processor effectively doubles.

So, a system of 500 qubits represents a quantum superposition of up to 2^500 states, each state being the equivalent of 500 bits. That amount of data is well beyond the ability of classical computing to simulate; indeed, it is many multiples of the total amount of electronic data (structured and unstructured) that exists on earth at present (see the section on big data). As a metaphorical comparison, the total mass of ordinary matter in the observable universe has been estimated at 1.5 × 10^53 kilograms.
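The doubling is easy to see in a rough sketch: describing an n-qubit state exactly on a classical machine requires 2^n complex amplitudes, so every added qubit doubles the storage required. The 16 bytes per amplitude below is a conventional assumption (two 64-bit floats), not a figure from the text.

    # Classical cost of representing an n-qubit state vector: 2**n amplitudes,
    # doubling with every qubit added. Not a simulator, just the arithmetic.
    def amplitudes(n_qubits: int) -> int:
        return 2 ** n_qubits

    BYTES_PER_AMPLITUDE = 16  # two 64-bit floats per complex amplitude (assumed)

    for n in (1, 2, 10, 50, 500):
        count = amplitudes(n)
        print(f"{n:>3} qubits -> 2^{n} = {count:.3e} amplitudes "
              f"(~{count * BYTES_PER_AMPLITUDE:.3e} bytes to store classically)")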

When might we expect quantum computers to be a reality?

Quantum computers already exist but are in their infancy. IBM, Google, Lockheed Martin and a number of others are racing to build the first market-ready machine. Professor Winfried Hensinger, a scientist at the University of Sussex in England, claimed in early 2017 to have developed:

“the first practical blueprint for a quantum computer capable of solving problems that could take billions of years for a classical computer to compute.”

As at early 2018, the UK Government had invested £270 million in the UK national quantum technologies programme (UKNQT), an initiative to promote the UK as a centre for the development and commercialisation of quantum technologies and as a leader in the global supply chain that will develop to service them. This amount is dwarfed by the investments being made in the USA and China, though.

Figure 25

 

In Hefei, Anhui Province, China is building a $10 billion, 4 million square foot research center for quantum applications. According to Pan Jianwei, a leading Chinese quantum scientist, the first general-purpose Chinese quantum computer could have a million times the computing power of all other computers presently in the world.

In May 2016, IBM launched a five-qubit quantum processor and matching simulator. By the end of 2017, IBM had produced a 50-qubit machine, but it was able to function only for tiny fractions of a second at a time (Knight, 2017).

In an article in the March 2017 edition of Nature, researchers at Google set out plans to commercialise quantum technology within five years. It is therefore quite conceivable that quantum computers could start to significantly affect businesses and societies within the five-year time horizon that many law firms use to think about strategy.

What will be the likely implications of quantum computing for client legal needs?

  • the sheer processing power of quantum computers will render existing cryptographic protocols obsolete. On the other hand, new cryptographic protocols will likely emerge that are probably impossible to imagine with today’s knowledge;
  • the strategic implications of having access to quantum computers, or not, are probably similar to those of having access to one of today’s supercomputers versus no computer at all. The implications are obvious, from a personal level all the way up to global security;
  • coupled with AI and big data, one might easily imagine intelligent systems that can trawl through unstructured data on a monumental scale, uncovering trends and intelligence that are simply not discoverable with today’s most advanced AI systems;
  • quantum computing will likely give rise to a whole new generation of client legal needs. This is likely to be far more important from a strategic perspective than its direct impact on law firms and other kinds of legal service providers;
  • quantum computing will likely unlock the full potential of tools such as real-time natural language translation, quick and inexpensive identification of tax and other legal issues across multiple jurisdictions and also across multiple disciplines. While it is easy to bemoan the fact that this will displace work currently done by lawyers and other professional advisors, the more important issue is what new, more highly value-added services advisors will be able to deliver to clients using tools such as those enabled by quantum computing.

Competitive advantage will depend on the ability to understand and capitalise on the digital tools that quantum computing delivers, especially as that relates to client businesses. It is difficult to conceive of computers that could have ‘a million times the computing power of all other computers presently in the world’ or that are ‘capable of solving problems that could take billions of years for a classical computer to compute.’

Clearly, a world in which such machines exist and are widely used will be very different to the one we have known over the past decades. The step change could be as dramatic as the move from candles and fires to electricity, or from horses and other draught animals to automobiles and aircraft.

If history is a reliable guide then we can expect an exponential increase in the complexity and volume of legal issues with which clients will need to contend. Excellent, digitally astute legal advisors will be even more necessary to help them manage those needs.

Figure 26

 

 

Sources and references

Cookson, C., 2017. Blueprint published for first ultra-powerful quantum computer. Financial Times, 1 February 2017. Accessed at https://www.ft.com/content/344e548c-e87b-11e6-967b-c88452263daf

Hurd, W., 2017. Quantum computing is the next big security risk. Wired Magazine, 7 December 2017. Accessed at https://www.wired.com/story/quantum-computing-is-the-next-big-security-risk/ (Congressman Will Hurd (R-Texas) chairs the Information Technology Subcommittee of the Committee on Oversight and Government Reform and serves on the Committee on Homeland Security and the Permanent Select Committee on Intelligence.)

Ignatius, D., 2017. The Quantum Spy. W.W. Norton.

Knight, W., 2017. IBM Raises the Bar with a 50-Qubit Quantum Computer. MIT Technology Review, 10 November 2017.

Mohseni, M., Read, P., Neven, H., Boixo, S., Denchev, V., Babbush, R., Fowler, A., Smelyanskiy, V. and Martinis, J., 2017. Commercialize early quantum technologies. Nature, Vol 543, Issue 7644 (3 March 2017). Accessed at https://www.nature.com/news/commercialize-quantum-technologies-in-five-years-1.21583

Rieffel E and Polak W, 2011. Quantum computing: a gentle introduction. Massachusetts Institute of Technology.

Singer, P. and Lin, G., 2017. China is opening a new quantum research supercentre. Popular Science, 10 October 2017. Accessed at https://www.popsci.com/chinas-launches-new-quantum-research-supercenter

West J., 2003. The quantum computer. Xootic magazine, July 2003 edition.