Built to Be Replaced (And What To Do About It)


Imagine a guy named John. He is a simple, nice person who graduated 15 years ago in a field he felt would help him make a good living. Maybe it wasn't something he adored, but it seemed a reasonable trade - an interesting career, a decent salary, enough saved to buy a house and raise a family there someday.
And it worked! He studied hard, read everything he could on the relevant subject, met the right people, and got the role he wanted. He stayed long enough to get promoted, became an expert, and understood the organisational politics inside out.
He learned how to be very professional, too. Never late to work, never early to leave. Consistent output, measured and reported. Visible enough to showcase strong performance, but not so visible that colleagues felt threatened. He was never out of place and was great at taming the parts of himself that didn't fit the company's dynamics. Everybody loved John. He was easy, reliable, and exactly what the organisation needed him to be.
In fact, he became so good at fitting the mould that he became the machine - perfectly designed to do the particular things the company was willing to pay him for. And he couldn't be happier about the exchange. To him, his job was a big part of his identity, and it felt as if no one else could do it quite the way he did.
I think about John a lot lately.
I wonder how he feels now that his company has started asking him to run everything through AI. To let it learn his workflow, absorb his process, and handle the output while he approves the results.
I imagine his manager saying something like this cheerfully:
"John, try to do all your tasks with AI. It's such a great tool - it learns very quickly. Soon, you'll only need to check the results and approve them. Imagine how much time you'll save. You won't even need to do the actual work anymore!"
"Yeah, sure, that sounds great…" John says.
He doesn't actually believe it, though.
Saving time on reports, taking minutes, and automating tasks he never liked doing sounded good, but what if John actually likes his job? What if he enjoys solving real problems, learning from mistakes, making decisions based on experience, and thinking through competing scenarios? I doubt he wants to outsource that.
And yet, somewhere underneath his polite compliance, John knows that they will eventually try to take it all from him.
The Irony
For years, many of us worked very hard to become machine-like. Consistent, measurable, and predictable. And the closer we got to those three things, the better our performance reviews were.
I remember writing my CVs 10 - 15 years ago, carefully listing hard and soft skills in separate sections, almost certain nobody was reading the soft skills part. Empathy, communication, creativity - all seemed like decorative pieces to fill the space, because nearly every human had them as standard. Hard skills were what got you the role.
Now I read research on the future of work and find those same soft skills described as the capabilities least likely to be automated. Empathy, creativity, judgment, the ability to sit with uncertainty and find a way through it. The hard skills, meanwhile, are being reclassified as the easy part - something that can be easily taught and outsourced.
Turns out the thing that made us hireable is now the thing that makes us replaceable.
This matters most not for the large organisations that have been using AI to track and manage workers for years - they've been living in this reality long enough to have built defences, or a working relationship with the numbness. It matters most for the small, purpose-driven companies and solopreneurs who stand to gain the most from AI in the near term. Resource-light and ambitious, they will move faster, produce more, and reach further without additional headcount. The efficiency gains are real and worth having.
But without intention, companies that launched because they genuinely cared about something can fall into the same trap as every efficiency-first organisation before them. They will build something that delivers clean dashboards, hits its metrics, and earns strong performance reviews - all while quietly hollowing out the people doing the work. The very people who may have inspired the venture in the first place. The work will get done faster, but the people doing it will start to feel less like people.
The EY 2025 Work Reimagined report identifies the real risk of AI not as job loss but as purpose loss. When AI absorbs execution and humans are left only to approve the outputs rather than generate them, judgment and critical thinking get less exercise. People who once solved real problems start to feel like marionettes - perfectly functional, but no longer pulling their own strings.
That feeling has a name. Émile Durkheim called it anomie - a state of normlessness in which established guides for behaviour lose their hold, leaving people disoriented and unsure of how to act. It is worth understanding before companies build AI into their workflows, so they can prevent it rather than trying to fix it later.
John is not a cautionary tale about technology. He is a cautionary tale about what can happen when we design work around output rather than around people and then hand the output to a machine.
A More Human-Centred Approach to Work Design
The question, then, is not whether to use AI. It is whether you are being intentional about what happens to the humans around it.
The MIT Sloan EPOCH paper mapped human capabilities across every occupation in the US labour force and found that Empathy, Presence, Opinion, Creativity, and Hope are the five categories most complementary to AI and most resistant to automation. The research highlights that companies should be investing in these capabilities and developing them in their people instead of treating them as peripheral to the real work.
These are, of course, the same capabilities that spent decades buried in the soft skills section of CVs that nobody read.
The irony writes itself.
But investing in EPOCH capabilities is not enough on its own. Retraining people does not automatically make their work more human. That requires a deliberate work design decision - one that many organisations may not be making. The WEF's AI at Work community paper (2026) found that organisations typically go the other way: AI systems absorb the execution layer, traditional career ladders break, and leaders find themselves rethinking how career progression should work and what mentorship employees need in this new structure.
The ILO's most detailed global assessment of AI's impact found that one in four jobs worldwide is potentially exposed to generative AI. Not necessarily to replacement, but to transformation. And whether that transformation makes work better or worse depends entirely on design choices and policy decisions, not on the technology itself.
AI does not decide what humans are for. The people running organisations do. At least for now :)
Work design - the content and organisation of tasks, activities, relationships, and responsibilities - is how those decisions get made in practice. Human-centred work design correlates with better mental and physical health, which in turn drives organisational productivity. Companies built around human flourishing are not idealistic. They are intelligent. They are building something that cannot be bought: culture, trust, and conditions in which people genuinely want to stay and do their best work. Those people become, eventually, the ambassadors that no marketing budget can create.
Parker & Knight's SMART Model of Work Design (2024) offers a practical framework for what this could look like. Good work design ensures work is stimulating, creates opportunities for mastery and autonomy, satisfies relational needs, and keeps job demands tolerable. Companies implementing AI should not only ask whether new systems are productive and efficient, but also stress-test their decisions against all five dimensions and ask whether the work is still genuinely good for the people doing it.
A useful diagnostic along the way is a simple question: “Who exactly is championing and funding this technology decision, and whose interests does it serve?” If the answer is consistently IT leadership, vendors, or cost-cutting mandates, the social and human side of the system is likely to be underdesigned.
How AI Affects Solopreneurs and Micro-teams
For large, resource-rich companies to be intentional about work design is one thing, but for solopreneurs and micro-teams to design theirs may be another story entirely.
When you build something of your own, it is too easy to focus on what AI lets you do that you couldn't before. How fast you can move, learn, and execute. How much you can produce alone. A half-formed idea is now enough to get a full strategy mapped, a content plan drafted, and a launch sequence built - all before you've had time to decide whether it's actually what you want to do.
But when your work can be done with a few clicks, the question becomes: what is work for? Not what it produces, but what it actually means to you.
Solopreneurship usually starts as a way to express yourself more fully. To do work that actually fits who you are. But if you are not careful, you will recreate John's situation - outsourcing your thinking, your ideation, and your strategy until you are approving outputs you did not really generate from a business you are no longer really running.
At which point, what exactly are you the founder of?
The same five dimensions apply here, too. Use AI if you need to, but test each implementation against the SMART characteristics. Use it to make your work more stimulating and more autonomous, not less. Use it as a thinking partner that asks what you think you should do and then offers evidence for and against - not as a colleague that tells you what to do and why.
The distinction might appear small, but the difference in who you become over five years is not.
You Have a Choice to Make
Human-centred organisational design has been slowly growing since the mid-20th century, but with the rise of AI it has become something more urgent than a management philosophy.
It is no longer just about health, well-being, safety or engagement scores. It is about whether the humans inside organisations retain the conditions that make work meaningful. Whether they remain agents in their own working lives, or become mere approvers of someone else's outputs.
John did everything right. He learned the rules, played the game, built the expertise, and made himself indispensable - by becoming exactly what a machine would eventually do better, faster, and more efficiently.
Organisations now have a choice to make - and it is not a philosophical one. It gets made in every technology decision, every workflow redesign, every conversation about what AI should handle and what should stay human. They need to ask whether they should do more with fewer people, faster, and call it progress, or use this moment to finally build the conditions in which people do the work that actually needs a human - and have the space, the autonomy, and the meaning to do it well.
Not everyone will choose the latter. Some will chase the short-term efficiency and profits and wonder later why nobody wants to work there. But I remain hopeful that the ones who started their companies because they actually believed in something bigger than themselves will choose it, and those are the ones I want to focus on.
Thank you for reading,
Lina
_____________________________
SMART Work Design Model:
Stimulation:
Does the role require a range of different tasks?
Does the role require the use of a variety of skills?
Does the role require novel ideas and solutions as well as cognitive processing?
Does the role require attending to or processing data and information?
Mastery:
Does the role provide clarity that gives the worker confidence in their responsibilities, including what to do and how to do it?
Does the role enable the worker to receive regular feedback about the effectiveness of their task performance?
Autonomy / Agency:
Does the role allow freedom and the chance for independent decision-making?
Does the role allow the worker latitude to choose the order in which they complete tasks?
Does the role allow the worker to use their personal initiative?
Relation:
Does the role make clear the impact it has on others’ lives?
Does the role allow social support from other team members?
Does the role give the worker enough connection to the people they're helping that they can actually feel how their work affects them?
Tolerance:
Role overload: Does the role protect the worker from excess responsibilities given their time and resource constraints?
Role conflict: Does the role ensure the worker faces compatible and consistent expectations in their work role?
Work-home conflict: Does the role protect the worker from their job interfering with their family-related responsibilities?


© Copyright LINA MILESKAITE 2026
