Volume 38, Issue 4 (2025)

A System Under Artificially Intelligent Strain: Can Hatch Act Enforcement Handle AI Surveillance?

by Alex Comfort

Did anyone truly know what a mouse jiggler was before the COVID-19 pandemic? In the years since, as telework has become increasingly prevalent, people have turned to mouse jigglers and similar devices to outsmart surveillance software known as “tattleware” or “bossware.” These devices mimic mouse movement to keep a computer from entering “sleep mode,” thereby defeating software that monitors employees’ screen time. Surveillance software can track a wide range of employee activity, from detailed records of the websites, apps, and files accessed to the emails and messages employees send—all in real time. Deployed to its full capacity, surveillance software can use the cameras and microphones on employees’ computers to listen to and even watch staff at work. Add artificial intelligence (AI) to the mix and things only get scarier. AI models are well suited to the sort of surveillance employers crave—“[t]hey are efficient at counting and identifying the words typed in and websites visited; the number of emails sent; the number of steps taken in a warehouse; the number of bathroom breaks; [and] their length.” Employed to its maximum extent, the software is akin to someone looking over an employee’s shoulder throughout the entire workday.

Naturally, this raises troubling privacy concerns for all employees. For federal employees specifically, the question of compliance with laws such as the Hatch Act races to the fore. The Hatch Act, designed to insulate the federal civil service from partisan political influence, prohibits most federal employees from engaging in political activity while “on duty” or actively working. Even as the government pursues a government-wide return-to-office policy, ad hoc telework raises the question of what constitutes being “on duty,” especially when employees intersperse their workday with everyday tasks like running errands.

This note argues that the rise of AI-powered surveillance software, if employed by the government to monitor federal employees’ compliance with the law, will put extreme stress on the Hatch Act’s current enforcement system. AI software offers a comprehensiveness of surveillance over employee actions that is impossible to achieve by traditional means. Yet despite this increased effectiveness, the use of AI raises serious concerns about reliability.

Part I will explore the current state of the Hatch Act by explaining its history, the persons and actions covered by the Act, and the enforcement system as it stands today. Additionally, Part I will discuss the connection between the Hatch Act and the Model Rules of Professional Conduct, specifically Rules 8.4(c) and 8.4(e), which emphasize lawyer integrity and the importance of public trust in government.

Part II will explore the use of AI in the workplace and in government. Specifically, it will discuss keystroke-monitoring and facial recognition software as examples of the types of AI programs that would stress the Hatch Act enforcement system. Accuracy concerns inherent in keystroke and facial monitoring software in both the private and public sectors paint a concerning picture of how this software may artificially inflate reported Hatch Act violations.

Part III will discuss the impact of AI surveillance on the Hatch Act enforcement system. A case study of the IRS’ tax return system—which faces stress from its use of automated data-driven software—will highlight the impact that a greater number of Hatch Act claims may have on the Office of Special Counsel (OSC), including how operational capacity considerations affect which claims are pursued.
