CyberSecure

FOR LOCAL UNIONS

AI on the Shop Floor: Security Risks You Can't Ignore

Artificial intelligence is making decisions about your members' shifts, benefits, and paychecks right now — often without anyone knowing what data it's accessing or how it's using it. Automatically generated work schedules, chatbots answering benefits questions, predictive tools flagging absenteeism or equipment failures — these systems have quietly become part of everyday union operations.

For plan administrators, local officers, and trustees, AI's convenience and cost savings are real. But so is the legal exposure. Under ERISA, fiduciaries have a duty to protect plan assets and participant data. When AI systems process member information without proper controls, trustees can face personal liability for breaching their fiduciary obligations — even if the AI system was implemented by a vendor. The Department of Labor has made it clear that cybersecurity responsibilities extend to any technology touching plan data, and that includes AI.

Most AI risks aren't exotic. Many relate to basic security hygiene: who can access what, and when. AI systems often get rolled out quickly to solve immediate problems like rebalancing shifts, answering member questions, or predicting equipment failures. Then they quietly accumulate more privileges: pulling HR data, aggregating attendance records, accessing camera feeds. Because these systems work and deliver value, no one revisits access rights or data flows. "Temporary" permissions become permanent. Small, useful automations can become single points of catastrophic failure — one misconfiguration and a system can access or expose far more than intended.

When Automation Goes Wrong

Public examples show how routine systems create outsized risks. In 2022, more than 100,000 British Council student records were exposed through misconfigured cloud storage — names, IDs, and emails sitting on the open internet. This wasn't a sophisticated hack; it was a configuration error in software designed to share information efficiently. The lesson applies directly to AI: if a system can publish data at scale, a mistake can leak data at scale.

Algorithmic scheduling and productivity tracking tools, used widely in warehouses and logistics, demonstrate how AI can cause harm even without a data breach. Court filings describe how automated monitoring systems issued more than 13,000 disciplinary notices at a single U.S. warehouse in one year, penalizing workers who fell short of machine-set targets with minimal human review. Similar automated performance systems have been documented terminating employees with almost no supervisor involvement.

The point is straightforward: when decisions affecting pay, performance, or employment rest on opaque algorithms, the risks become operational, reputational, legal, and human, not just technical. And when those decisions involve plan member data, trustees have a fiduciary duty to ensure proper oversight exists.

More recently, in November 2024, a ransomware attack on Blue Yonder — an Arizona-based software provider handling employee scheduling and payroll systems — forced Starbucks and other major retailers to shift to manual processes for weeks. Baristas' schedules had to be managed by hand, and payroll continuity became a daily scramble. The incident showed how dependent organizations have become on AI-powered logistics tools, and what happens when those tools go dark. For union benefit plans relying on similar third-party systems for member data processing, the implications are sobering: one vendor breach can paralyze operations and expose trustees to claims they failed to properly vet or monitor service providers.

Basic AI Security Hygiene

AI governance frameworks like ISO 42001 and the NIST AI Risk Management Framework provide comprehensive approaches for managing AI risks. But you don't need to master those standards overnight. A strong AI security policy can be implemented with minimal cost and builds on controls you likely already have in place. The foundation rests on three principles that can be applied using existing audit procedures with only small modifications: least privilege, continuous logging, and regular human supervision.

  • Least Privilege means the AI gets only the narrow access required for its task — no shared administrator accounts, no inherited rights "just in case." 
  • Continuous Logging means every automated action is recorded where humans can review it: who accessed which data, what changed, what rule was applied. 
  • Regular Human Supervision means a named person monitors the AI, understands its outputs, and can revoke access without waiting for a vendor ticket. 

These aren't abstract ideas; they're the digital equivalent of a sign-in sheet for warehouse keys and an inspection log for equipment.
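For readers who want to see the mechanics, here is a minimal sketch of the first two principles in Python. Every name in it (the SCOPES allowlist, the request_data function, the log file) is illustrative, not drawn from any particular product:

    import datetime
    import json

    # Least privilege: each AI system gets an explicit, narrow allowlist.
    SCOPES = {
        "shift-scheduler": {"attendance_records", "shift_calendar"},
        "benefits-chatbot": {"plan_faq_documents"},
    }

    AUDIT_LOG = "ai_audit.log"

    def request_data(system, dataset, requested_by):
        """Grant access only if the dataset is on the system's allowlist,
        and record every attempt, allowed or denied, for human review."""
        allowed = dataset in SCOPES.get(system, set())
        entry = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "system": system,
            "dataset": dataset,
            "requested_by": requested_by,
            "allowed": allowed,
        }
        with open(AUDIT_LOG, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return allowed

    # A denied request is not just an error; it is a signal for the
    # named human supervisor to review.
    if not request_data("shift-scheduler", "medical_records", "vendor-update"):
        print("Access denied and logged for review.")

Notice that the deny path matters as much as the allow path: every attempt lands in a log a human can read, which is exactly the sign-in-sheet discipline described above.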

The Broader Cyber Context

AI systems increasingly operate alongside or within operational technology. Many facilities now run predictive analytics and optimization layers that share credentials with control systems. If those AI tools aren't properly segmented, any breach becomes a potential shutdown scenario. Segmentation, separate credentials, and clear "read-only" boundaries for AI tools aren't luxuries; they're essential business continuity measures that can often be implemented with in-house resources alone.

The question isn't whether AI is good or bad; it's whether access, behavior, and accountability are defined as clearly for AI as for any other role in your organization. An AI chatbot answering plan questions should be treated like a service account with documented, approved data access, not a black box. An AI scheduler touching overtime and task qualifications should sit behind the same controls as your payroll system. An equipment predictor should read sensor data, not write to control systems. When AI roles are precise and logs exist, errors become incidents you can investigate. Without them, they become grievances you can't resolve.
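As one illustration of that "read sensor data, not write to control systems" boundary, the sketch below opens a database in read-only mode so the predictor cannot modify anything even if it misbehaves. The file and table names (sensors.db, sensor_readings) are stand-ins for whatever your facility actually uses:

    import sqlite3

    # mode=ro opens the database read-only; any write attempt raises an error.
    conn = sqlite3.connect("file:sensors.db?mode=ro", uri=True)

    # Reading sensor history for prediction works normally.
    rows = conn.execute(
        "SELECT timestamp, vibration FROM sensor_readings "
        "ORDER BY timestamp DESC LIMIT 100"
    ).fetchall()

    # Writing fails at the database layer, not just by policy.
    try:
        conn.execute("UPDATE sensor_readings SET vibration = 0")
    except sqlite3.OperationalError as err:
        print(f"Write blocked as expected: {err}")

The design point is that the boundary is enforced by the system itself, not by trusting the AI tool to behave.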

What Workers and Representatives Can Do

You don’t need to become an AI specialist to manage these risks. What you need is transparency: understanding where AI is used, what data it accesses, and how to escalate concerns. For many organizations, the gold standard involves continuous monitoring of AI behavior with human-in-the-loop oversight to catch anomalies early, though implementing such systems requires thoughtful planning and often benefits from specialized guidance.

But transparency can start simply by identifying where AI systems are and what they’re doing. It continues with practical training on recognizing unusual AI behavior — unexpected schedule changes, data requests outside normal scope — and knowing how to read a basic audit trail or pause a system until someone can review it. Most cyber incidents begin as routine anomalies that no one quite owns. Training gives people permission to say, “Stop — this isn’t normal.”
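Reading a basic audit trail can be that simple. Assuming a JSON-lines log like the one in the earlier access-control sketch, a short script can surface the entries worth a second look:

    import json

    # Datasets this AI is normally expected to touch (illustrative names).
    NORMAL_SCOPE = {"attendance_records", "shift_calendar", "plan_faq_documents"}

    with open("ai_audit.log") as log:
        for line in log:
            entry = json.loads(line)
            outside_scope = entry["dataset"] not in NORMAL_SCOPE
            if not entry["allowed"] or outside_scope:
                # Flag for human review; pausing the system is the
                # named owner's call, not the vendor's.
                print(f"REVIEW: {entry['time']} {entry['system']} "
                      f"requested {entry['dataset']} (allowed={entry['allowed']})")

Even a review this crude turns "unusual AI behavior" from a vague worry into a concrete list someone can act on.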

The Vendor Challenge

Many AI tools arrive as cloud services that update automatically. When software behavior changes unexpectedly — when a scheduling system suddenly requests access to medical records, or a chatbot starts asking questions it never asked before — someone needs the ability to investigate. If a system influences decisions affecting pay, safety, benefits, or working conditions, you need the right to understand what changed and why. Sometimes this is a statutory right, though rarely in the U.S. More often it comes from vendor contracts negotiated to provide visibility into shared AI systems, to require notification of updates that change access or processing logic, and to mandate breach notification within specified timeframes.

Bringing Familiar Disciplines to New Technology

For union leaders and trustees, addressing AI means applying familiar disciplines to unfamiliar technology. The principles mirror long-standing safety and fiduciary practices: define responsibilities, separate duties, keep records showing someone was paying attention. Transparency doesn’t mean publishing secrets; it means being able to explain what a system did and why.

When logs are as complete as meeting minutes, and permissions are as thoughtful as job descriptions, AI becomes a tool you control rather than a risk you’re managing blind. And when issues arise that exceed in-house capabilities — particularly in complex regulatory environments, healthcare settings, or multi-jurisdictional operations — engaging experienced AI governance professionals can provide the specialized expertise needed to ensure robust, defensible practices.

10 STEPS FOR SAFER AI

  1. Name a human owner. Every AI system needs a named custodian responsible for access, monitoring, and decommissioning.
  2. Grant minimum access only. No shared credentials, no standing admin rights. AI gets only what it needs for its specific job.
  3. Log everything and review regularly. Automated actions must be recorded in tamper-evident logs (see the sketch after this list). Set a review schedule and investigate anomalies immediately.
  4. Keep AI separated. AI tools and service accounts must be isolated from operational controls and critical systems through network segmentation.
  5. Document data sources. Use only approved repositories. Track where training data and inputs come from and verify data licensing.
  6. Validate AI outputs. Random-sample results. Require human sign-off before AI decisions affect pay, safety, or benefits.
  7. Control your vendors. Contracts must require breach notification within defined timeframes. If possible, negotiate visibility into AI updates as well. Require vendor cooperation during incidents and audits.
  8. Minimize personal data. Strip identifiers unless legally required and operationally necessary. Delete data when retention periods expire.
  9. Train your people. Provide role-specific training on AI limitations, how to read logs, and clear escalation paths when something seems wrong.
  10. Plan for AI failure. Maintain manual overrides, test recovery procedures regularly, and ensure your incident response plan covers AI-specific scenarios.
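To make step 3 concrete, here is one way to sketch a tamper-evident log in Python: each entry carries a hash of the entry before it, so editing any record breaks the chain. This illustrates the idea; a production system would use a vetted logging product:

    import hashlib
    import json

    def append_entry(chain, action):
        """Link each new entry to the hash of the one before it."""
        prev_hash = chain[-1]["hash"] if chain else "genesis"
        record = {"action": action, "prev_hash": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        chain.append(record)

    def verify_chain(chain):
        """Recompute every hash; any edited entry makes verification fail."""
        prev_hash = "genesis"
        for record in chain:
            expected = hashlib.sha256(json.dumps(
                {"action": record["action"], "prev_hash": record["prev_hash"]},
                sort_keys=True,
            ).encode()).hexdigest()
            if record["prev_hash"] != prev_hash or record["hash"] != expected:
                return False
            prev_hash = record["hash"]
        return True

    log = []
    append_entry(log, {"system": "shift-scheduler", "change": "swapped shift 14A"})
    append_entry(log, {"system": "shift-scheduler", "change": "approved overtime"})
    print(verify_chain(log))            # True
    log[0]["action"]["change"] = "X"    # tamper with the first record
    print(verify_chain(log))            # False

The chain does for digital records what a bound, numbered logbook does on paper: it doesn't prevent tampering, but it makes tampering visible.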

Let’s build something better - together.

The Department of Labor emphasizes the importance of cybersecurity for those responsible for plan-related IT systems and data.

Don't leave your cybersecurity to chance. Ensure best practices with a comprehensive solution tailored for unions.

Onsite Logic