07.11.2025, Vienna
We are living through one of the most disruptive technological transformations in history. AI, robots, and their interaction are reshaping workplaces at a pace unmatched by previous technological disruptions. Bold predictions about how AI may reshape work are likely not exaggerated. Yet we don’t hear much from the policy front. As an economist with an avid interest in public policy, I have struggled to understand the role of policy in steering the relationship between technology and labor. Should policy intervene at all, and if so, at what level? Should it create incentives that steer firms’ innovation away from labor substitution and towards labor-augmenting technologies? Should it reimagine education so that investments in human capital can compete with the investments firms make in technology? Or should we tax technology the way we tax labor?
Economics teaches that if a technology can perform a job more cost-effectively than a human, firms will eventually adopt it. Firms have little choice: if they do not adopt but others do, the cost of their goods or services will not fall relative to that of adopters, and they will be outcompeted. In the long run, only the adopters survive.
History offers plenty of evidence. Agriculture employed up to 80 percent of total labor in many countries at the turn of the 19th century; mechanization and logistics have brought this down to 3 percent or less across the Global North today. Manufacturing employment peaked above 30 percent of the total; mechanization and robotization have reduced it to 15 percent in some economies. Clerical work peaked at around 20 percent before computerization cut that share in half.
These dynamics play out globally as well - countries that do not adopt lose out to adopters and risk losing global competitiveness. This is why governments typically choose to subsidize, rather than tax, technological innovation and adoption, through schemes like accelerated depreciation or investment allowances. Faster adoption means greater competitiveness, nationally and globally. There are more perks to adoption - technology raises the wages of workers whose work it complements. And then there is the downside - adopters hire fewer of the workers whose labor the technology substitutes.
If we cannot tax technology, what else can policy do? Historically, technology has simultaneously destroyed and created jobs. The policy challenge has therefore not been solving mass unemployment, but relocating workers from obsolete jobs to new ones. Mobility subsidies can help workers move from declining regions to flourishing ones, reducing geographic mismatches in job supply. Systems of education offer opportunities for training and retraining, reducing skill mismatch.
Can the same policy playbook still work today? Today’s technological transformation is different. First, the pace of change is greater - compared to other general-purpose technologies like electricity, computers or the internet, AI is diffusing faster. Second, the technology is more powerful - the breadth of human tasks that AI, robots, and their combination can perform is greater than that of past technologies, and AI can handle more complex cognitive tasks than any technology before it. Third, the jobs it creates may be distant from the jobs it destroys, both in time and in skills. If the pace of change is fast enough, one could imagine a world in which many jobs disappear before new ones emerge. The currently emerging jobs — AI prompt engineer, data engineer, ethicist — may not be accessible to everyone.
If we cannot tax technology, can we tax labor less? The tax wedge, that is, the share of tax in total labor cost, averaged nearly 35 percent across OECD countries in 2024. This gives firms a strong incentive to invest in technology rather than labor when labor-saving technology is available. Some of this burden, I would argue, could be put in service of workers. One option would be to channel part of it into financing individual training accounts - dedicated training funds that individuals accumulate over time and can use to pay for retraining and living expenses during periods of training. In France, for instance, a payroll-based training contribution by employers funds the French Individual Learning Accounts (Compte personnel de formation). In Austria, some of the tax on labor is redistributed back into adult education through paid education leave (Bildungskarenz), under which employees can take up to 12 months of paid leave to spend on training. Such schemes can help bridge the skill gap that technology causes.
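To make the tax wedge concrete, here is a minimal sketch of the OECD-style calculation: taxes and social contributions as a share of total labor cost (gross wage plus employer contributions). The wage and tax figures below are invented for illustration only, not drawn from any country’s actual schedule.

```python
def tax_wedge(gross_wage, income_tax, employee_ssc, employer_ssc):
    """OECD-style tax wedge: total taxes and social security
    contributions as a share of total labor cost."""
    total_labor_cost = gross_wage + employer_ssc
    total_tax = income_tax + employee_ssc + employer_ssc
    return total_tax / total_labor_cost

# Hypothetical worker: 50,000 gross wage, 8,000 income tax,
# 9,000 employee and 10,000 employer social contributions.
wedge = tax_wedge(50_000, 8_000, 9_000, 10_000)
print(f"{wedge:.1%}")  # 27,000 / 60,000 = 45.0%
```

Even in this rough sketch, the gap is visible: of the 60,000 the firm spends, only 33,000 reaches the worker, while a machine that replaces the worker carries no such wedge.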
How about shaping the direction of technological innovation through policy? Technology substitutes for some job tasks and augments others. Both raise productivity - the first only by reducing the need for labor, the second by making labor more productive. So why don’t we subsidize the innovation and adoption of labor-augmenting technology? The idea is worth thinking through, even if the devil is in the details: augmentation may be difficult to measure, and firms may find it all too easy to label their technology as labor-augmenting.
But what if the transformation is disruptive enough to create a temporal and skill gap that individuals find difficult to bridge? This is a scenario that policy rarely tackles, but the time might be ripe to consider the options. Imagine a world where, at least temporarily, most work in an economy is done by technology, and the work that is done by humans requires extreme levels of skill. The current fiscal systems that rely heavily on labor taxation will collapse, and there will be little to redistribute in order to support the many jobless and ‘unskilled’ people. To maintain a fiscal system that can keep investing in humans, the source of fiscal income will need to change from labor to technology.
One idea is to take a public stake in AI. Building AI requires public infrastructure and vast amounts of citizens’ data, and governments could charge data-use fees, for instance. AI is expected to generate rents - large profits due to network effects, scale, and control of data. In this sense AI resembles oil, and a public stake in AI rents could be modeled on the sovereign wealth funds of oil-rich economies, e.g., Norway’s Government Pension Fund Global. Unlike traditional sovereign wealth funds, an AI fund could finance socially valuable work that the market under-values - childcare, elderly care, community engagement, and environmental protection. In this way, policy would not only help protect livelihoods but also help preserve the critical value that work provides.
Creative destruction has always been a feature of technological progress. This time, the transformation is faster and more sweeping than ever before. We see the Red Queen effect unfolding before our eyes: to keep pace with technology, humans must run faster than ever. Our most powerful lever is public policy and we should not shy away from using it.
13.02.2026, Vienna
How should we decide how much AI and what kind of AI to encourage in university classrooms? A useful starting point might be distinguishing between fundamental and higher-order skills, and between learning and performing.
At work, I’m seeing the pace of AI advancement exhaust some of the best lecturers I know. To do right by their students, they keep learning, keep adjusting, keep rewriting syllabi and assessments to match a moving target. At home, my seven-year-old is learning math and reading the way I did decades ago: no calculators, no computers, no AI. Just pen and paper.
Are these two approaches in conflict or are they responding to different goals? One could argue that universities are meant to teach higher-order skills that prepare students for the labor market, while primary schools focus on foundational skills that make later learning possible. In practice, though, universities teach both.
They teach foundational competencies such as calculus, programming, basic statistics, writing, and communication alongside higher-order capacities like critical thinking, research design, leadership, and policy analysis. These different types of skills may call for different uses of AI in the classroom. To see why, it helps to distinguish between performing and learning.
Technology’s core value often comes from embodying human knowledge and ability. It does things we used to do ourselves, saving time and freeing us up to do other things. In other words, technology drives performance and productivity: typewriters embody writing, calculators embody arithmetic, computers embody a wide range of human capabilities, and AI, an even wider one. When AI does a task for us, it can boost performance, but not necessarily learning.
Learning, by contrast, is slow and effortful. It demands concentration, self-control, and patience. Importantly, once we internalize knowledge, we can apply it across domains - a powerful foundation for problem-solving and creativity.
This is where the distinction between foundational and higher-order skills becomes useful. If we don’t restrict AI when teaching foundational skills, we risk producing graduates who can complete tasks with AI support, but struggle to apply concepts creatively when the context changes. These skills may therefore be best taught in environments that limit AI use and help students internalize knowledge and know-how. Well-designed AI can certainly serve as a tutor, but the goal must remain learning and not performance alone. If the foundational skill in question is calculus, for example, teachers can use AI tutors to support students as they learn the mechanics of integration. But the practice of solving integrals must still be done by the students.
If foundational skills are about building internal understanding, higher-order skills are often about navigating complexity, and here collaboration with tools has always been part of the job. Historically, technology has automated lower-order tasks while complementing higher-order ones. When computers automated the estimation of complex statistical models, for instance, they allowed researchers to focus on model assessment and interpretation rather than on solving numerical optimization problems by hand. This shift made it easier to teach research design and empirical analysis.
To the extent that AI complements higher-order skills, educators may need to encourage broader use of it, treating AI as an acknowledged teammate and exploring what students can achieve through augmented collaboration. Still, even in higher-order courses, not every part of the learning process should be outsourced. In a course on economic development policy, for example, AI can help students understand economic models faster by offering explanations tailored to how they learn. The head start that AI gives in learning economic models allows for richer discussion of which theories apply in real situations and what policy options follow. This “application” phase should remain student-led: here, students practice transferring what they have learned into new domains, and that ability to adapt and apply knowledge is itself a critical higher-order skill.
A practical takeaway is that educators can make their objectives explicit at the outset, in both the syllabus and the course’s AI policy: which parts of the course are meant to build foundational mastery, and which are meant to develop higher-order thinking. In foundational parts, AI may be best positioned as a tutor, supporting practice without replacing it. In higher-order parts, AI can be treated more openly as a teammate, as long as students are still required to demonstrate independent reasoning and transfer. The goal is not a blanket rule about AI, but a coherent design: aligning how we use AI with what we are actually trying to teach.