DOT in Hot Water over Use of AI
Background
Recent reporting has raised alarms about the U.S. Department of Transportation’s (DOT) decision to use Google’s generative AI model Gemini to draft transportation safety regulations. Internal presentations obtained by ProPublica show that the program aims to save time by having the model produce the first draft of new rules, touching areas such as aviation, automotive, rail and pipeline safety. Supporters inside the DOT say the technology could shorten a process that normally takes months or years, but critics argue that delegating critical work to a non‑expert system could compromise public safety.

Efficiency Claims
The agency’s top lawyer promoted the initiative as a way to “flood the zone” with regulations and make updates faster, explicitly stating that perfect wording is not the aim. Presenters claimed that Gemini could produce draft rules in minutes, handling 80–90 percent of the writing, with human staff serving primarily as proofreaders. In demonstrations, employees watched as the model was prompted for an example Notice of Proposed Rulemaking and within moments returned a document packed with boilerplate preamble language.
Demonstrations and Missing Text
Those demonstrations were far from reassuring. One draft looked official at first glance but lacked the actual regulatory text required for the Code of Federal Regulations. Staffers who spoke on condition of anonymity said the model’s output sometimes contained hallucinated references and missing provisions. They stressed that rulemaking is “intricate work” requiring decades of expertise in statutes, regulations and case law. DOT rules, they noted, keep airplanes in the sky, prevent gas pipelines from exploding and stop freight trains carrying hazardous chemicals from derailing. A misworded or incomplete clause could expose the agency to lawsuits or, worse, lead to accidents. As one employee put it, asking an AI to draft safety rules feels “wildly irresponsible”.

Expert Concerns
Outside observers share these concerns. Experts interviewed by ProPublica and other outlets agree that language models could help summarize research or suggest wording, but only with strict oversight. They warn that generative systems are prone to confidently inventing facts and citations. Ben Winters of the Consumer Federation of America said that leaving the drafting of detailed safety regulations to a tool known for hallucinations could result in vague or erroneous standards. Others argue that federal law requires regulations to be grounded in reasoned, non‑arbitrary judgment—something a model trained on internet text cannot provide. Even advocates of the technology concede that every AI‑generated sentence must be checked by domain experts, potentially offsetting much of the promised time savings.
Importance of Safety Rules
Transportation rules touch aviation, pipelines, railroads and highways. A small lapse in a regulation can cascade through manufacturing, operations and enforcement. Because these rules directly affect public safety, regulators typically weigh technical data, legal precedent and stakeholder input before writing and revising them. The idea of outsourcing such deliberation to a generative model—no matter how advanced—has many veteran rule writers bristling, especially when the tool occasionally omits key text or invents details.

AI’s Role and Human Oversight
Proponents counter that the existing system struggles to keep pace with innovation, arguing that AI could help modernize outdated regulations and respond more quickly to emerging technologies. However, the gap between assistance and authorship remains critical. Supervisors within the DOT say that, at least for now, AI drafts would be thoroughly reviewed and edited by human experts before going to public notice. Even so, employees worry that heavy reliance on AI could erode institutional knowledge and change regulators’ roles from active drafters to passive editors.
The Debate Over AI Use Continues
The debate over the DOT’s use of AI underscores a larger conversation about the role of artificial intelligence in public governance. In fields where lives are on the line, critics argue that “good enough” simply is not good enough. The temptation to automate complex tasks must be balanced against the responsibility to protect the public. For now, the DOT’s experiment with AI has landed the agency in hot water, raising the question of whether faster rulemaking is worth the risk of letting a generative model set the terms of transportation safety.