As Employers Push AI Adoption, EHS Faces New Pressure to Use Technology Safely and Responsibly
Companies are rapidly making AI use an expectation for employees, but the shift brings new challenges for EHS professionals who must balance productivity gains with protecting sensitive personal, operational, and proprietary information.
Across many industries, a clear shift is underway: companies increasingly expect employees to use AI in their daily work. The message from senior leaders is simple and repeated in many forms: if you are not using AI, your competitors are, and they are working faster and more efficiently because of it. AI is no longer a side experiment or an optional gadget; it is rapidly becoming part of normal, expected work, much as email and spreadsheets once did.
One clear example comes from Meta, the parent company of Facebook, Instagram, and WhatsApp.
An internal memo, reported by Business Insider, says that beginning in 2026 Meta will add a new metric to performance reviews called “AI-driven impact”.
Employees will be rated on how well they use AI to improve their work.
Already in 2025, those who demonstrate “exceptional AI-driven impact” will receive extra recognition, and Meta has built an internal “AI Performance Assistant” to help employees write their self-reviews. https://www.businessinsider.com/meta-ai-employee-performance-review-overhaul-2025-11
Other major employers, including Google and Microsoft, have openly told their teams that AI adoption is becoming a baseline expectation. https://www.businessinsider.com/google-employees-use-ai-or-get-left-behind-gemini-2025-8 As a result, the pressure to incorporate AI into everyday work is steadily spreading into non-technical functions, including environment, health, and safety (EHS) roles. At the same time, however, these same organizations warn employees to be extremely careful about what information they upload into AI systems, whether public or paid.
Even robust, paid AI platforms are still cloud services. They retain the history of prompts and responses, tie that activity to identifiable accounts, and can, in principle, be affected by large-scale breaches, technical failures, misconfigurations, or legal requests. Free public models may introduce additional risk if their terms allow uploaded data to be reused for training or analytics. In practice, many users simply click “Agree” on the terms of service without reading the details, and may not realize they have granted the provider the right to analyze or reuse their uploaded content. No provider can offer an absolute security guarantee. This means the safest protection for truly sensitive information is never to upload it at all.