The ‘tournament of lawyers’ business model and its nineteenth century origins
With this paper, we aim to convince you that the pyramid-shaped law firm business model has run its course, and that it will soon be as inadequate to meet complex client legal needs as an architect with drawing board and ink pens would be in today’s age of computer-aided design (CAD). We aim to challenge you to rethink the way in which your firm operates as a business enterprise. Not all law firms will be up to the challenge of transforming to the digitally-leveraged, agile ‘computer aided legal service’ model that we explore in Chapters 5 and 6. We expect to see more defensive mergers in years to come as law firms try to merge their way out of systemic decline. Two dinosaurs merging do not create a gazelle, though. Merging firms need to use the opportunity created by their merger to migrate their business models, too. Otherwise, perhaps after a short period of boosted performance, they are likely to return to the same slow, inexorable slide, eventually to competitive oblivion.
Julius Cohen was an American lawyer who, in 1916, wrote a book titled ‘The Law: Business or Profession?’ At that time, the U.S. legal profession was being rapidly and severely disrupted by the impact of the technological innovations of the late nineteenth century. Close partnerships of lawyers with generalist practices of the Victorian era were giving way to new, far more business-like law firms. A proliferation of new and more complex law made it difficult, even impossible, to be well versed in all its aspects, and lawyers had to specialise. Shortages of legal talent drove firms to use their trainee lawyers (today’s junior associates) more substantively on client work, even charging for them – something that would have been unheard of before. Lawyers became more business-like in their relationships with their clients. To many, these developments were anathema. The practice of law, they argued, was a vocation that should be elevated above mere business considerations.
The debate that Cohen described rages on in some quarters today, a century later. Dire predictions are routinely made about the collapse of ethical norms in society and of the rule of law if change is imposed on law firms, or if the regulatory safeguards that protect the profession are diluted to align with modern client needs.
If the trends outlined in this paper are accurately portrayed, then these arguments will likely be overtaken during the next decade by more fundamental concerns about both the practice and business of law in a truly digital era. But we are getting ahead of ourselves. Let us return to Cohen and his era.
Through the first three decades of the 20th century, the world’s financial institutions, major corporations and government clients grew massively in scale and complexity. So too the markets and regulatory environments in which they operated. The process of globalisation began.
In the wake of the Great Depression starting in 1929 and up to the early years of World War 2, these trends accelerated. An entirely new kind of law firm emerged.
At the core of this new kind of law firm was a business model consisting of a pyramid with partners overseeing more junior associates and trainees. Other kinds of professionals were also involved, first in administering the business of the firm and later, as firms became more complex, in other business functions and eventually even directly supporting the delivery of legal services to clients.
Frequently labelled with the somewhat pejorative moniker ‘non-lawyers,’ these professionals were precluded from owning equity in the firm by prohibitions on the sharing of fees between lawyers and others. A ‘lawyer versus non-lawyer’ divide became entrenched through much of the 20th century, lawyers in some ways resembling the knights of old, supported by their ‘non-lawyer’ squires and serfs (figure 10). In time, this divide became embedded in the fabric and culture of the profession. In most global markets, it has now either ended or is in the process of being dismantled.
Galanter and Palay traced in detail the rise of the one hundred largest American law firms, showing that much of their success stemmed from their ability to blend the talents of experienced partners with those of energetic junior lawyers driven by a powerful incentive—the race to win “the promotion-to-partner tournament.” At the same time, they showed how the very drivers of growth would likely lead to the model’s undoing. As more associates won the tournament and became partners, so more junior associates needed to be hired in order to maintain the leverage.
This meant that as firms reduced their intakes in recent years, associates had to come to terms with vastly reduced chances of becoming an equity partner. A modification to the model, introducing ‘non-equity partners’ who then competed for spaces in the equity, slowed the trend – but only temporarily.
Today, a radical new business model transformation seems poised to sound the death knell for this people-leveraged tournament. Being displaced by machines is a general concern across modern society, but for law firms, with their people-leveraged business model, the prospect is especially troubling.
The increasing need to collaborate substantively with other professionals, and the continued blurring of boundaries between the pure practice of law and other kinds of business advice, could place law firms that practise only law at a disadvantage in the future. Finally, with a significant share of work delivered by technology, the cases where it makes sense to bill by the hour will become the exception rather than the rule.
To understand how the ‘pyramid of lawyers’ law firm business model will change, it is useful first to understand how it came into being. A different model existed before it, namely small general partnerships. We can still see this model in law firms in frontier markets with simple economies and legislative systems. For these latter-day general partnerships, digital transformation as their economies grow in scale and complexity might involve leapfrogging straight to digitally-leveraged models. This would also be an elegant solution to shortages of skilled legal talent in some of those countries.
When was the age of greatest technological disruption?
Who saw more technological disruption in their lifetimes, Julius Cohen, or a person born in 1950 looking back from today? Cohen was born in 1873 and died at age seventy-seven, in 1950. Asking a room this question almost always gets a show of hands in favour of the latter. The evidence, though, seems heavily in favour of the former (see figure 7).
When Julius Cohen was born in 1873, people used candles to light their homes and fires to warm themselves. Thomas Edison patented the first commercially viable light bulb when Julius was five years old.
By 1950, with the advent of distributed electricity, light bulbs were in common use across most developed economies. Today, they remain the most common form of lighting. Distributed electricity also made refrigerators possible. In 1873, there was no way for ordinary people to preserve food without drying or pickling it. Even by the outbreak of World War 1, some forty years later, less than half of American industry was using electricity for more than lighting.
In 1873, wealthy people travelled by horse and cart and the less affluent by other forms of animal-drawn vehicle, or on horseback, or walked. The internal combustion engine changed that with the invention of the Benz Patent-Motorwagen in 1885 (perhaps the ‘bitcoin of the automobile industry’ – see figure 9). The automobile became ubiquitous with the advent of the modern assembly line, made possible by distributed electricity, which brought the Model T Ford within reach of ordinary people. The 2018 model sedan car is faster, safer and more comfortable than its equivalent in 1950, but the technology remains essentially the same. Electric and then driverless cars seem set to create a fundamental step-change in motor transport over the coming decade, though.
It took several decades for the automobile to displace horses. As with many disruptive innovations, early generations of automobile were notoriously unreliable. Disruptive innovations are frequently ignored or discounted as unreliable at the outset, especially by those with vested interests in the incumbent technologies. Figure 9 shows the relevance of this today, in the form of a parable.
The internal combustion engine and assembly line also gave rise to modern aviation and revolutionised the railways and the shipping industry. In 1873, air travel was by hot air or hydrogen balloon. Julius Cohen was already thirty years old by the time the Wright brothers made their first twelve-second flight at Kitty Hawk on 17 December 1903.
By the time he died in 1950, the de Havilland Comet, which first flew in 1949, was ushering in the era of the commercial jet liner.
Today, the most common mode of international travel remains the commercial jet liner. Today’s Airbus A380 and Boeing 787 Dreamliner are a far cry from the small and notoriously unsafe Comet, but the technology is essentially the same.
In 1873, a number of sophisticated mechanical calculating machines had been developed – but of course no computers. The first electro-mechanical binary programmable computer (generally regarded as the first really functional modern computer) was built by the German Konrad Zuse in his parents’ living room between 1936 and 1938. By the time Julius Cohen died in 1950, the state of the art was probably Remington-Rand’s ERA 1101. One of the first commercially produced computers, it counted the U.S. Navy as its first customer. The machine stored 1 million bits of data on its magnetic drum.
Better known is the ENIAC computing system, built by John Mauchly and J. Presper Eckert at the Moore School of Electrical Engineering of the University of Pennsylvania from 1943. ENIAC was more than 1,000 times faster than any previous computer. It occupied more than 1,000 square feet, used 18,000 vacuum tubes and weighed 30 tons. It was believed that ENIAC had done more calculation over the ten years it was in operation than all of humanity had until then.
ENIAC, the 1101 and its contemporaries were used mostly for performing complex calculations, storing vast amounts of data and communicating data between them. Since 1950, computer processors have progressed from vacuum tubes to transistors, to integrated circuits, to microprocessors. Their performance has improved exponentially.
But we still use them for roughly similar purposes – complex calculations, storing vast amounts of data and communicating data across the world. The advent of artificial intelligence, big data and quantum computing seems set to trigger a step-change, though, that will make ENIAC seem tiny by comparison.
Even space travel saw its seminal technology developed before Julius Cohen’s death. Robert Goddard flew the first liquid-fuelled rocket in 1926, the technology was developed into long-range weapons during World War 2, and the first rocket launch from Cape Canaveral took place in the year of Cohen’s death.
Could it be that the seventy years preceding 1950 produced far more fundamentally disruptive technological innovations, than 1950 to the present day?
The combined impact of distributed electricity, the internal combustion engine and the telephone, amongst other 19th century inventions, utterly transformed Western society.
At the same time, the amount of law also began to climb the vertical of an exponential curve, the tipping point of which came sometime in the late nineteenth century (figure 12). Perhaps it is no coincidence that the offset lithographic printing press was invented at about that time, in 1904, by the American Ira Washington Rubel. This made printing law and other material far cheaper and easier.
The preceding business model
As previously noted, at this time law firms were generally very small, ordinary partnerships. Charging clients for the services of their junior apprentices would have been unthinkable and the notion of time-charging was as yet unheard of.
Because the law was relatively limited in scope and volume, and the legal needs of clients relatively straightforward, it was feasible for a lawyer to have a quite general practice.
Billing methods in common use by the early twentieth century included fixed fees, annual retainers and discretionary ‘eyeball’ methods. In 1908, the American Bar Association (ABA) also approved contingency fees.
As client needs escalated in response to more voluminous legislation, more stringent regulations and more complex business needs generally, so law firms had to increase their capacity and capabilities. A disconnect emerged between lawyers who were fulfilling the same role in society as fifty years before (their practices being mostly advocatory and notarial in nature) and law firms who were advising businesses and financial institutions on matters of compliance, growth and corporate strategy. This disconnect led to strident debate in the ABA at the time and also to Julius Cohen’s book, in which he wrote:
“We are administering our discipline and our ethics committees upon the philosophy that the Bar is a profession, and we are conducting the practice of the law in large measure as though it were a business.”
Today, in some quarters, this debate continues to rage. Perhaps it is time this was recognised, once and for all, as the false dichotomy that it is.
The ubiquitous S-curve
Most trends in life go through a cycle of emergence, followed by a ‘tipping point’ where the trend (if it does not die out) undergoes rapid growth, then levels off, followed frequently by a decline as a new trend or fad replaces it. This can be observed in trends as diverse as teenage fashions, disease epidemics, the popularity of Broadway musicals, health fads – and business models.
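The shape described above is the classic logistic S-curve. As a generic mathematical sketch (the symbols here are purely illustrative, not drawn from the text):

```latex
% Generic logistic S-curve (illustrative):
%   L   = saturation level at which growth flattens off
%   k   = steepness of the rapid-growth phase
%   t_0 = the tipping point (the inflection, where growth is fastest)
f(t) = \frac{L}{1 + e^{-k\,(t - t_{0})}}
```

Growth is slow while t is well below t_0, fastest at the tipping point t_0, then levels off towards the saturation level L; the final decline, when a new trend displaces the old, falls outside this simple form.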
For the law firms advising large business clients, the ‘Tournament of Lawyers’ model reached just such a tipping point after World War 2, ushering in a golden era the like of which the profession had not seen before and which was to last until the early 1990s.
Lawyers at that time enjoyed a strong position of ‘knowledge asymmetry’ with their clients. In other words, they knew a lot more about the law than did their clients. Knowledge asymmetry is not unique to the legal profession. It is one of the key attributes of a profession, and is just as relevant, for instance, to doctors, architects and engineers. Until clients started developing their own in-house legal capabilities from roughly the 1990s onwards, law firms had a virtual monopoly on advising clients on their legal issues. As Western economies, and the businesses that drive them, grew in scale and complexity, so demand for high-quality legal advice increased. Law schools flourished and most major universities established them, leading to a steady increase in the number of newly-minted lawyers entering the workforce each year.
By the 1930s and 1940s, state bar associations started publishing “suggested” minimum fees. These persisted until 1975, when the Supreme Court ruled that minimum fee schedules violated antitrust law, triggering almost universal adoption of the ‘billable hour.’
The origins of the billable hour
Reginald Heber Smith was managing partner of the firm Hale and Dorr (now a part of WilmerHale) from 1919 to 1956. Widely regarded as one of the most significant pioneers of the modern law firm, he is credited with inventing the billable hour for legal services.
Quite ironically, given the iniquities that have been laid at the door of the billable hour in recent years, the idea grew from Smith’s experiences in the world of legal aid. It was a system that Smith intended to promote fairness, efficiency, client satisfaction, professional ethics, and the advancement of the public good. Working with the Harvard Business School, Smith devised a system of accounting and record keeping for the Boston Legal Aid Society, including a method for tracking statistical information on cases, that increased the number of cases that the Society was able to handle by 65%, and reduced the average net cost of each case from $3.93 in 1913 to $1.63 in 1915.
Smith joined Hale and Dorr after World War 1 and quickly embarked on an inquiry to discover whether the same principles could be applied in a commercial law firm. He divided the hour into tenths, choosing that unit simply for ease of arithmetic. So was born the time sheet, divided into six-minute increments. At the time, Smith wrote:
“This simple plan had but one weakness which is that lawyers are individualists. They hate any system and to keep a detailed record of time seemed to them about as bad as a slave system.”
Again, not very different to today!
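Smith’s six-minute unit survives in most time-based billing systems today. A minimal sketch in Python, assuming the now-common convention of rounding up to the next increment (the function names are illustrative, not drawn from any firm’s actual system):

```python
# Illustrative sketch of Reginald Heber Smith's tenth-of-an-hour
# timekeeping convention. The round-up rule and function names are
# assumptions for illustration, not a documented billing standard.
import math

INCREMENT_MINUTES = 6  # one tenth of an hour


def billable_hours(minutes_worked: int) -> float:
    """Record raw minutes as tenths of an hour, rounding up
    to the next six-minute increment."""
    increments = math.ceil(minutes_worked / INCREMENT_MINUTES)
    return increments / 10


def fee(minutes_worked: int, hourly_rate: float) -> float:
    """Charge for the recorded time at the given hourly rate."""
    return billable_hours(minutes_worked) * hourly_rate
```

Under this convention a thirteen-minute call, for example, spans three six-minute increments and is recorded as 0.3 hours.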
Initially an internal metric used to track the cost of delivering services to clients, the notion of tracking time translated into billing on a time-charge basis only around 1940. As Smith observed:
“This method is especially pleasing to businessmen, all of whom have cost systems of their own. You can show him your cost and you can give him your supporting evidence. This at once dispels the notion that you are charging ‘all the traffic will bear.’”
Saturation and decline
By the 1990s, demand for services from conventional U.S. law firms was growing more slowly than GDP (adjusted for inflation). While records in other markets are less comprehensive than the U.S. dataset, evidence suggests that the same applied to British and other European firms, in their own markets.
For UK-headquartered firms especially, this caused a drive for international revenue growth, with a rapid succession of office openings by the Magic Circle and other large UK firms.
A view emerged in the market that the mid-market was a dangerous place to be and that the only two routes to profitability were:
- small specialist boutiques focusing on highly profitable work, or
- large firms straddling multiple practice areas, with the critical mass to invest in areas that mid-level firms could not afford.
Quality of lawyers being equal, the theory went, one had to either be small and focused, or big and diverse. This ignored the fact that some mid-level firms were very profitable.
Other problems with this argument are:
- From a client’s perspective, quality almost always trumps scale.
- The “U” curve graphics that were produced to illustrate the argument showed only firms above a certain size and/or profitability level (see diagram “A” in figure 14). If one included the small, unprofitable firms that fell below that base, the “U” curve often became a straighter line, implying a strong correlation between size and profitability (see diagram “B” in figure 14, which is identical to diagram “A” save for the addition of the small, unprofitable firms).
- The highly profitable boutiques then become outliers to the trend. This is not to detract from them as successful firms; it simply questions whether the “dangerous middle ground” is always as dangerous as has been suggested.
- The wide range in profitability across the mid-sized firms frequently created a wide enough deviation to undermine confidence in the statistical argument.
One reason for the popularity of the “dangerous middle ground” myth is the use of league table rankings as a primary measure of strategic success. This has diverted attention from what should be the primary strategic driver (the needs of clients) towards scale and ‘profits per equity partner’ (PEP).
League tables furthermore mostly exclude multi-disciplinary advisory firms (most notably the ‘Big 4’ – Deloitte, EY, KPMG and PricewaterhouseCoopers) and other kinds of advisory businesses that deliver legal services to clients. One wonders for how long they will be able to sustain their role as definitive authorities on market positioning of legal service providers, unless they develop more sophisticated ways of segmenting legal services.
Sources and references
1 Galanter, M. and Palay, T., 1991. Tournament of Lawyers: The Transformation of the Big Law Firm. University of Chicago Press.
2 Cohen, J.H., 1916. The Law: Business or Profession? The Banks Law Publishing Company, New York
3 This phenomenon is referenced in Moore’s Law, which originated around 1970, the popular version of which states that the overall processing power of conventional computers (or, more accurately, the transistor count of CPUs) doubles roughly every one to two years, without an increase in price.
4 Gladwell, M., 2000. The Tipping Point: How Little Things Can Make a Big Difference. Little, Brown. Note: Malcolm Gladwell defines a tipping point as “the moment of critical mass, the threshold, the boiling point” when a trend takes off. Adrian Bejan and Sylvie Lorente describe the phenomenon in terms of what is now called the Constructal Law, first published in the Journal of Applied Physics in 2006. Says Bejan: “This phenomenon is so common that it has generated entire fields of research that seem unrelated – the spread of biological populations, chemical reactions, contaminants, languages, information and economic activity. We have shown that this pattern can be predicted entirely as a natural flow design.” (Bejan, A. and Lorente, S., 2006. Constructal theory of generation of configuration in nature and engineering. Journal of Applied Physics 100, 041301. Accessed at https://doi.org/10.1063/1.2221896.)
5 WilmerHale website, 2010. Slice of History: Reginald Heber Smith and the Birth of the Billable Hour. Accessed at: https://www.wilmerhale.com/pages/publicationsandnewsdetail.aspx?NewsPubId=95929