Employees at Meta launched protests across several US offices after the company introduced mouse-tracking software designed to support AI training efforts.
Staff distributed flyers encouraging colleagues to oppose the monitoring system. Meta says the technology helps train AI agents to understand everyday computer interactions more effectively.
The backlash arrives during a period of workforce uncertainty, with the company also planning layoffs affecting roughly 10% of employees.
The dispute reflects a broader tension developing across the technology sector: companies want greater efficiency through automation and AI, while employees increasingly question whether monitoring tools could eventually contribute to replacing parts of their own roles.
Many professionals recognise the underlying concern. Workers tend to accept productivity software when it improves collaboration or reduces repetitive tasks; resistance grows when tracking feels intrusive or is tied directly to performance measurement.
Large employers have faced similar reactions before.
Warehouse operators introduced tracking systems to optimise fulfilment speeds. Customer service teams adopted analytics tools to monitor response times. In several cases, staff argued the technology prioritised metrics over trust.
Meta’s approach raises deeper questions because the software reportedly feeds AI development itself. If companies use employee behaviour to train future AI agents, workers may begin asking where operational support ends and replacement risk begins.
The implications stretch beyond one company:
- Employers may face stronger demands for transparency around monitoring tools
- Workplace surveillance policies could attract greater regulatory attention
- AI training methods may become a growing labour issue inside large firms
What happens if employees start viewing productivity software as a direct threat to job security? Resistance to AI adoption inside workplaces could intensify far faster than many executives expect.
Author: Pishon Yip
