
Southern California – Regulators at the South Coast Air Quality Management District faced an unprecedented wave of opposition last year when they proposed a rule to promote heat pumps over gas heaters. The agency received more than 20,000 comments, far exceeding typical volumes, prompting immediate suspicions about their origins. Investigations revealed patterns that cast doubt on whether real citizens had authored many of the submissions, highlighting vulnerabilities in digital public input systems.
A Deluge of Doubtful Feedback
The sheer number of comments triggered alarms at the agency. Staff contacted 172 individuals from a sample list to verify submissions. Responses were scarce, and among those who replied, several denied any knowledge of the emails sent in their names.
Further checks by a Sierra Club campaigner yielded similar results, with contacted individuals expressing surprise at the opposition attributed to them. The agency’s executive director even received a message thanking him for opposing his own proposal. These incidents led to deeper scrutiny, including plans for more extensive verification efforts.
Enter AI-Powered Campaign Platforms
A Los Angeles Times report identified CiviClick, a firm offering AI-driven advocacy services, as central to the effort. The company’s client maintained ties to the gas sector. CiviClick’s platform uses artificial intelligence to customize messages based on user inputs, such as budget impacts or local concerns.
Founder Chazz Clevinger emphasized the tool’s role in amplifying genuine voices. “A homeowner in Riverside County who had recently installed a gas furnace wrote a different message than a renter in Los Angeles who was concerned about landlord compliance costs,” he explained. The company rejects claims of non-consensual submissions or fabricated content, attributing variations to authentic personalization.
Echoes of Earlier Scandals
Artificial intelligence represents an evolution of comment manipulation, not its invention. During the 2017 net neutrality debate, the FCC processed 22 million submissions, roughly 18 million of which were later deemed fraudulent. Sources included a single college student who generated millions of comments, and half a million submissions traced to Russian addresses.
New York Attorney General Letitia James imposed fines on firms that impersonated real people en masse. Similar issues surfaced elsewhere:
- In the Bay Area Air District, seven contacted commenters disavowed knowledge of pro-gas submissions via the Speak4 platform.
- North Carolina county officials discovered constituents denying emails supporting a pipeline, eroding project backing.
These cases underscore persistent challenges, now amplified by AI’s capacity for unique, persuasive text.
Debating Impact and Authenticity
Critics like Sierra Club’s Dylan Plummer argue that AI blurs the line between engagement and illusion. Regulators favor detailed, personalized input over form letters, yet AI-generated versions mimic that effort without true investment. Plummer described one woman who was puzzled to learn of her supposed opposition to clean air rules.
Experts offer nuanced views. Political scientist Steven Balla stresses content over commenter identity, noting agencies prioritize substantive arguments. Still, he acknowledged the “icky” feel of deception. NYU’s Jonathan Brennan warned of eroding trust, potentially sidelining legitimate input from those unable to attend meetings.
Seeking Safeguards for Democratic Input
The South Coast board narrowly rejected the heat pump rule, sending it back for review, though CiviClick claimed influence via its campaign. Responses include Sierra Club calls for fraud probes and a new bill by Senator Christopher Cabaldon, “People Not Bots,” barring AI from mimicking citizens.
Agencies are exploring secure portals for human verification. Spokesperson Rainbow Yeung affirmed the district's commitment to process integrity. Modern tools detect duplicate comments efficiently, a far cry from sorting paper piles by hand, but unique AI-generated outputs complicate detection.
- AI tools personalize comments but risk impersonation without strong consent checks.
- Historical fakes relied on volume; AI enables quality deception at scale.
- Trust erosion could diminish public input’s role in policy.
As technology advances, balancing accessible participation with authenticity grows urgent. Agencies must adapt to preserve genuine citizen voices in rulemaking. What measures would you support to protect public comments? Share your thoughts in the comments.






