
The Next Frontier Of Work

Trustlessness

Suppose I lend you money, and we agree that you will pay it back in a week. In such an arrangement, the trust between us is the basis of the transaction. But suppose that a week later, you are nowhere to be found. In a country governed by the rule of law, I would find recourse through the courts. But in a country without the rule of law, I would have to find and force you to pay back the money by other means. Either way, the problem with trust is it can be broken. And the cost of enforcing a contract when trust is broken is often high – sometimes higher than the value of the contract.

Blockchain technology promises to solve this problem with smart contracts. Instead of writing the proverbial loan agreement on a piece of paper and trusting that the other party will do their part, we can now write the terms into a computer program and install it on a public blockchain such as Ethereum. Then, on the day the loan falls due, the smart contract would automatically settle the outstanding debt from your account without human intervention. In addition, the smart contract would monitor your account to prevent breach. Because the contract would be installed on a blockchain, which is hosted on many computers (i.e. decentralised), nobody would be able to stop it from running, and the transaction would be transparent and visible to everybody. Hence, the promise of a trustless transactional environment.
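To make the idea concrete, here is a toy sketch in Python, not real blockchain code, of the loan agreement described above. All names (`LoanContract`, `Account`, `tick`) are hypothetical; the point is only that the terms are fixed in code at creation, and settlement fires automatically once the due date arrives, with no human in the loop.

```python
class Account:
    """A hypothetical on-chain account holding a balance."""
    def __init__(self, balance):
        self.balance = balance

class LoanContract:
    """Terms are fixed at creation; nobody can alter them afterwards."""
    def __init__(self, lender, borrower, amount, due_day):
        self.lender = lender
        self.borrower = borrower
        self.amount = amount
        self.due_day = due_day
        self.settled = False

    def tick(self, today):
        # Imagine this running automatically with every new block:
        # once the due date arrives, the debt settles itself.
        if not self.settled and today >= self.due_day:
            self.borrower.balance -= self.amount
            self.lender.balance += self.amount
            self.settled = True

lender, borrower = Account(0), Account(100)
loan = LoanContract(lender, borrower, amount=50, due_day=7)

for day in range(1, 10):   # days pass; nobody intervenes
    loan.tick(day)

print(lender.balance)    # 50
print(borrower.balance)  # 50
```

A real contract on Ethereum would be written in a language such as Solidity and executed by every node on the network, which is what makes it unstoppable and transparent; the mechanics above are only a simplified illustration of that behaviour.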

This trustless idea has far-reaching implications for the role of trust-bearing or coordination institutions such as the state, banks or deeds offices, to name but a few. With blockchain technology, processing a home loan and transferring property could take minutes instead of months, because a smart contract would enforce the rules and the blockchain would keep a "permanent" record of the transaction.

Although this is an over-simplification of a complex set of systems, I estimate that trust merchants, including insurance companies, are on the cusp of a revolution, with the possibility of micro players – even individuals – being able to write smart contracts to facilitate agreements that would ordinarily be handled by large firms or government institutions.

This poses a question – is bigger better?


The Doctrine of Bigger Is Better

Before the Industrial Revolution, most people worked on smallholding farms or were artisans, producing a few handmade items for their local communities. One challenge with artisanry was the limited capacity to produce large quantities of products. Another was that quality often suffered because of weather changes, inconsistent access to raw materials and human error. In addition, distributing products over a large geographical area was difficult without safe ways of transferring money and delivering goods. All this made it hard for businesses of the time to expand their sales beyond their local communities.

However, the adoption of various technologies and new philosophical ideas made it possible to overcome some of these limitations. For instance, the ability to mechanise repetitive tasks made it possible to produce large quantities of products consistently and cost-effectively. Consequently, mass production became one of the marvels of the early 20th century, putting within reach of ordinary people products such as the motor vehicle, previously limited to a few super-rich individuals.

Politically, the idea of nationalism also emerged and played an important role in creating larger marketplaces. Before nationalism, it was common for people to identify themselves along tribal or monarchic lines, which often limited their travels and marital reach to the confines of those domains. However, a sense of national identity, enshrined in a constitution rather than a king, transcended these confines and created homogeneity across larger territorial expanses. In addition, national institutions, such as schools, propagated a national language and common values, making it possible for people to identify and cooperate with one another.

The economic conditions emerging from the combined technological and ideological changes made it more attractive for people to live in cities rather than the countryside. Mass production at a consistent quality and fairly low cost became necessary and possible, leading to the rise of mega-corporations. The great wars of the 20th century also made mass manufacturing essential; countries without these capabilities simply found themselves in peril.

Today, the idea that bigger is better is so ingrained that one could be castigated for suggesting that a company remain small. Take, for instance, the crash of Facebook's stock price in February 2022, when the company forecast slower-than-expected growth – not a decline, just slower growth. Its share price, which had peaked at $384 in September 2021, tumbled by more than 26% in a single day after the announcement. In reality, monthly active users still stood at a peak of 2.9 billion; only the growth rate had declined. Yet sentiment around the company plummeted as though it were going out of business.


The Productivity Problem

As Milton Friedman famously pointed out, "There's not a single person in the world who could make [a] pencil." He was referring to the principle of the division of labour. The wood for the pencil's shaft is cut on some farm, the graphite is mined somewhere else, and the eraser is made from rubber extracted from a tree in yet another part of the world. Hence the claim that no one person can make a pencil.

Likewise, it would not be worthwhile to harness all these competencies to produce just one or a few pencils. This is yet another reason bigger became better – or scale became necessary.

I contend that scale became necessary to deliver complex goods, not because scale is what the market needed. The complexity (which implied scale) was due to the technological limitations of the time. This is not to say the market did not enjoy the fruits of scale – mainly job creation and a cosmopolitan lifestyle. However, these were consequences of the market arrangements and must not be misconstrued as necessities.

Today, 3D printing and artificial intelligence make it possible for individuals or small groups to be as productive as – if not more productive than – hundreds of specialists who have spent years mastering their craft. SpaceX, which has reduced the cost of rocket launches by more than 97% compared with the 1960s, is a stellar example.

Nevertheless, the fundamental point that Friedman made remains true. No one person can create a pencil. But instead of relying on other people, we can now rely on machines to perform the various specialisations. In addition, machines are becoming increasingly cheaper and more accessible, making it ever more possible for amateurs like myself to create complex products consistently and cheaply. In a way, technology is leading us back to the world of the artisan and challenging the notion that bigger is better. But as we journey back to the past, we might find ourselves in yet another problem.

Since the dawn of civilisation, we have drawn meaning from what we do, especially when our labour impacts other people’s lives or yields the desired results. Presumably, artists would find more meaning if their work resonated with an audience. An entrepreneur would find more satisfaction in a business that yielded good financial returns, happy customers and their desired lifestyle. Likewise, a plumber who unblocks sewage pipes, let’s say, would find meaning in having done the dirty work that brings peace of mind to another person. In this regard, the more we do things that yield the desired outcomes, the more meaning we derive from our work. In other words, one of the byproducts of work is meaning.

Of course, we cannot ignore that people often find their work far less engaging and less meaningful in large companies. In those environments, the division of labour is so severe that employees can no longer trace how their work impacts others. In the process, they lose their sense of purpose – the why of what they do – except that it pays the bills. In response, companies spend tremendous effort and resources boosting morale and selling their vision to employees in the hope that they will see the bigger picture, the why, as Simon Sinek put it.

Nevertheless, it seems that people are increasingly withdrawing from dedicating their lives to large companies and are choosing to work in smaller numbers on more "meaningful" projects where they have direct relationships with customers or end-users. The Great Resignation, as some scholars call it, is gaining traction; Google Trends indicates that searches for "when to quit your job" are near record highs. These resignation sentiments coincide with the Gig Economy, where knowledge workers have taken to marketplaces such as Fiverr and Upwork to contract with one another on a project or semi-permanent basis. Furthermore, infrastructure such as the Internet and A.I. tools is making it ever easier for people to take the plunge and work independently as freelancers while remaining extraordinarily productive.

Now that we are turning towards working independently, however, what will happen to the notion of meaning as derived from work?

When I was still a graphic designer several years ago, for instance, I agonised over the creative process and beamed with pride when handing over a logo or brochure to a customer. Today, I can click a few buttons and allow a program like Logomaster.ai to use artificial intelligence to create a good enough logo within 30 seconds. In fact, many graphic designers use programs like this for their ideation process and only spend a few minutes tweaking or refining the final product.

Arguably, in this new quasi-creative process, A.I. has removed the spiritual intercourse between client and designer: the part where they sit together and dream up a new reality; where the client trusts the designer to come up with something meaningful, and the designer pours themselves into the work to consummate the relationship. All of that is now gone, because a few clicks of a button are likely to produce something useful but not necessarily meaningful.

And so the question remains, will technology rob us of the privilege of doing meaningful work?

In the past, maximising productivity was the main problem at work. But with technology, this problem is transitioning to the next frontier. Instead of worrying about our capacity to deliver, we are increasingly concerned about whether we should do the work in the first place. We are moving from CAN this be done to SHOULD this be done.


Moral Transfer and the Rise of the Ethistician

Forty years after the atomic bomb was dropped on Hiroshima, killing an estimated 140,000 people, Paul Tibbets, the pilot who commanded the mission, sat for an interview with Tom Ryan. Ryan wasted no time in probing Tibbets with the question that had been on everyone's mind ever since: "General, let me ask you, are you proud of what you did?"

"Yes, I am," replied Tibbets, "…because, look, a military man starts his career with the idea of serving his country and preserving the integrity of that country, and I feel that I did just that very thing." He responded briskly, albeit with some words muffled by his 74-year-old voice. Then he unflinchingly dropped another bombshell: "I have to say, we can't look at the so-called grimmer aspects of it because there is no morality in warfare. So, I don't dwell on the moral issue."

In the military, it is standard practice to absolve soldiers of the moral burden. After all, the army would become dysfunctional if soldiers second-guessed their commanders or dilly-dallied over instructions. In essence, the soldier becomes the commander's property, mechanised and sent to their death if need be in the name of a mission. Or, in Tibbets' case, sent to do the deed without batting an eyelid. I call this phenomenon Moral Transfer (a subject I explored further in a podcast).

Given that companies adopted many of their practices from the military, it was similarly normal to expect employees to uphold corporate hierarchy and, like soldiers, implement commands without friction. Such employees enjoyed many rewards and promotions for their loyalty and dutifulness. But once again, the contemporary employee is more resistant to moral transfer and apt to question the intentions, purpose and purity of their boss's decisions. Thus, loyalty and hierarchy are no longer the highest virtues at work; instead, more fashionable virtues such as purpose and connectedness have taken centre stage.

While the military is the most apparent case of moral transfer, the phenomenon is more ubiquitous than it appears. For example, I come from a family with somewhat strict cultural norms and traditions, many of which have remained unchallenged for generations. In our culture, one should not look elders in the eye; a handshake must be as light as a flag (especially when greeting elders); and, as a married man, I do not make conversation with my mother-in-law or father-in-law, while my wife, similarly, does not interact with my dad (my mother is exempt).

These rules preserve solutions to moral problems that our ancestors dealt with. As a result, a large segment of society has never had to think critically about moral dilemmas, which is not a bad thing but not necessarily good either. Moral questions had always been ceded to leaders, chiefs and institutions. At any rate, among the few who have been in leadership positions, history is witness to the difficulties and intricacies of upholding wholesome moral standards. Even among us mere mortals, our cupboards are full of skeletons, many of which remain under lock and key and often come out in full parade when we die.

As the artisan and the smallholding farmer re-emerge – but with superpowers – the next frontier of work will demand increasing sophistication in ethics. Given that we might not do the work anymore, since A.I. will do it for us, the last vestige of meaningful work will rest upon deciding what work ought to be done. We will become demigods, as it were, truly speaking things into existence – with our machines, of course, bringing those things into existence. Like the Greek gods, we will have extraordinary powers, but will we have the wisdom to discharge them properly?

Without an external (or objective) moral standard, and with the prevalence of social media, where algorithms "help" us access information, we run the risk of quietly adopting artificially generated assumptions out of which we "speak things into existence". This could lead to a self-reinforcing reflexive cycle in which people become increasingly divorced from reality, trapped in an echo chamber created by algorithms. In turn, work may become less about serving the other – since the other will be able to serve themselves with machines – and more about actualising one's innermost fantasies.

Life will be "but a walking shadow, a poor player that struts and frets his hour upon the stage and then is heard no more: … a tale told by an idiot, full of sound and fury, signifying nothing", as Shakespeare put it – unless we choose a different path. But the question is, which path is that?
