AI Systems and Human Operators
Preamble
This article draws inspiration from the publicly shared insights in the video "The Journey from Sustainability to Autonomy" by Kelvin.ai. The discussions presented here explore critical safety and human factors challenges in the deployment of AI systems in safety-critical domains.
The content was developed using OpenAI ChatGPT, with Andriy Kostyuk leading the process through relevant queries and iterative prompts over an extended discussion. While AI was instrumental in organising and refining the ideas, this article is the result of a collaboration where the primary ideation, synthesis, and framing were guided by the human author.
By blending human expertise and AI assistance, the article aims to provide a comprehensive and thoughtful perspective on the risks and complexities of integrating AI into safety-critical operations.
🚨 Premature AI Deployment in Safety-Critical Systems: Risks and Human Factors Challenges 🚨
As safety engineers, we often hear about the promises of AI revolutionising operations in safety-critical domains like transportation, energy, and industrial systems. While AI holds tremendous potential, we must pause and ask: Are we rushing into replacing human operators without fully engineering the foundation these systems rely on?
The Risks of Premature AI Integration
Too often, AI is introduced as a solution to fill gaps in under-engineered systems. This can lead to several risks:
- Inadequate Core Engineering: Foundational issues like handling uncertainties and edge cases may be overlooked, leaving AI to compensate for gaps it cannot fully address.
- Over-Optimistic Autonomy: AI is expected to perform flawlessly across all scenarios, even when data or system boundaries limit its situational awareness.
- Disengaged Operators: Human operators, relegated to a passive "monitoring" role, may lose situational awareness, reducing their ability to intervene effectively in emergencies.
- Opaque Decision-Making: Unlike deterministic logic systems, AI often operates as a black box, making it difficult to understand, trust, or audit its decisions.
Lessons from Proven Systems
Take ETCS (European Train Control System) Level 3 as an example. It relies on robust, deterministic logic in which all known scenarios are explicitly engineered, leaving room for human operators to manage unforeseen situations. There's no AI, just well-defined logic that ensures safety, transparency, and collaboration. This balance of automation and human oversight highlights the value of sound engineering over novelty.
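The deterministic philosophy described above can be sketched in a few lines. This is a toy illustration, not actual ETCS logic: the function name, parameters, and the "known scenario" flag are all invented for the example. The point is that every known scenario maps to an explicit, auditable rule, and anything unmodelled is handed to the human operator.

```python
from enum import Enum

class Decision(Enum):
    PROCEED = "proceed"
    BRAKE = "brake"
    DEFER_TO_OPERATOR = "defer_to_operator"

def movement_decision(speed_kmh: float, authority_end_m: float,
                      braking_distance_m: float, scenario_known: bool) -> Decision:
    """Toy movement-authority check in the spirit of deterministic train
    control. Every branch is an explicit, reviewable rule; there is no
    probabilistic inference anywhere in the decision path."""
    if not scenario_known:
        # Unmodelled situations are explicitly routed to the human operator.
        return Decision.DEFER_TO_OPERATOR
    if braking_distance_m >= authority_end_m:
        # The train cannot stop within its movement authority: brake.
        return Decision.BRAKE
    return Decision.PROCEED
```

Because the logic is enumerable, each branch can be traced to a requirement and tested exhaustively, which is exactly what a black-box model cannot offer.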
Human Factors Considerations
Replacing operators too quickly introduces serious human factors challenges:
- Operators may feel alienated or worry about being blamed if they override AI recommendations.
- In emergencies, fear of liability may push operators to follow AI advice, even when their intuition suggests otherwise.
- AI's bounded situational awareness cannot match human flexibility, especially when operators can coordinate with peers or access adjacent systems beyond AI's scope.
Building the Right Balance
To ensure safety and effectiveness, we must:
- Engineer First, Automate Later: Fully address uncertainties and edge cases with deterministic logic before introducing AI.
- Define AI's Role Clearly: Use AI to augment decision-making, not replace core safety functions.
- Empower Human Operators: Design systems where humans remain engaged and have the authority to intervene without fear of liability.
- Ensure Explainability: Avoid black-box AI; prioritise transparency and auditability.
- Test Under Realistic Conditions: Validate combined AI-human systems extensively before deployment.
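Several of these principles, AI as an augmenting advisor, empowered operators, and symmetric auditability, can be combined in one small sketch. Everything here (the class name, the envelope check, the log format, the fallback action) is a hypothetical illustration, not a real framework or product API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AdvisoryGate:
    """Hypothetical wrapper in which AI only advises: a deterministic
    safety envelope retains final authority, and both the AI suggestion
    and the human decision are logged symmetrically."""
    is_within_safety_envelope: Callable[[str], bool]  # deterministic check
    audit_log: List[str] = field(default_factory=list)

    def review(self, ai_suggestion: str, operator_choice: str) -> str:
        # Record AI advice AND the human decision, so post-incident review
        # is not biased toward whichever party happened to be logged.
        self.audit_log.append(f"AI suggested: {ai_suggestion}")
        self.audit_log.append(f"Operator chose: {operator_choice}")
        # The operator's decision stands whenever it passes the
        # deterministic check; AI advice never overrides either one.
        if self.is_within_safety_envelope(operator_choice):
            return operator_choice
        return "fallback_safe_state"
```

Note the ordering of authority: the deterministic envelope constrains everyone, the operator decides within it, and the AI merely suggests.

```python
gate = AdvisoryGate(is_within_safety_envelope=lambda a: a in {"hold", "reduce_speed"})
gate.review("reduce_speed", "hold")  # the operator's in-envelope choice wins
```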
The Bottom Line
AI is not a shortcut to safety—it’s a tool to enhance robust, well-engineered systems. Let’s prioritise core engineering and human-centric design to ensure that the integration of AI doesn’t come at the cost of reliability, safety, and trust.
________________________________________________
Appendix: Summary of Hazards from Interfacing AI Systems with Safety-Critical Systems and Organisations
1. Incomplete Coverage of Operational Scenarios:
The AI system is deployed without fully addressing all known uncertainties, edge cases, and failure scenarios, resulting in unpredictable behaviour during unmodeled events.
2. Loss of Operator Situational Awareness:
Human operators become disengaged or overly reliant on AI, reducing their ability to effectively intervene in emergencies.
3. Excessive Trust in AI Recommendations:
Operators may blindly follow AI suggestions due to the system's perceived authority, even when the recommendations are flawed or lack sufficient context (automation bias).
4. Legal and Accountability Pressures on Operators:
Fear of liability or legal consequences pushes operators to comply with AI advice, even when their intuition or expertise suggests an alternative action.
5. Opaque AI Decision-Making:
AI operates as a black box, preventing operators and engineers from understanding or validating the reasoning behind its recommendations.
6. Mismatched Scope of AI System Awareness:
The AI system has limited situational awareness, particularly at the boundaries of its defined domain, leading to errors when adjacent systems or underlying and external conditions affect safety.
7. Overloading Operators with Low-Value Alerts:
AI systems generate nuisance workload by recommending trivial or redundant actions, leading to frustration and potential system disengagement by operators.
8. Operator Alienation:
Premature introduction of AI reduces operator trust and engagement, as they feel their role is being diminished or undervalued.
9. Conflicts During Emergency Scenarios:
In high-pressure situations, AI recommendations may conflict with operator intuition, causing delays or errors in decision-making due to hesitation or fear of overriding the system.
10. Imbalanced Post-Incident Accountability:
AI recommendations are automatically recorded, while human decisions are not, creating an asymmetry in post-incident reviews that unfairly places blame on operators.
11. Under-Engineered System Foundations:
Deployment of AI before addressing fundamental engineering challenges leads to a fragile system dependent on probabilistic logic rather than deterministic safety measures.
12. Inadequate Interoperability Across Systems:
AI systems are siloed and unable to communicate with adjacent systems or integrate data effectively, leading to blind spots in safety-critical operations.
13. Premature Exclusion of Human Operators:
Attempts to eliminate human operators entirely for the sake of novelty or cost reduction result in the loss of essential human flexibility and expertise in unforeseen scenarios.
14. Misalignment of AI with Human Decision Processes:
AI systems fail to adapt to operator needs, producing irrelevant or non-contextual recommendations that conflict with human workflows, e.g., due to biases inherited from synthetic or real data (statistical, sampling, group-attribution, confirmation, implicit, or other human cognitive biases).
15. Degradation of Team Communication and Collaboration:
Operators lose access to informal or peer-based information-sharing mechanisms as AI systems cannot replicate human-to-human communication in complex scenarios.
16. Over-Optimistic Deployment of AI Autonomy:
AI systems are expected to function autonomously without sufficient validation or operational testing under real-world conditions, increasing the risk of failures.
17. Bias Toward Novelty Over Safety:
Organisational pressure to adopt AI for innovation undermines traditional safety engineering practices, compromising system reliability.
18. Failure to Recognise AI System Limitations:
AI systems do not adequately flag or defer to operators when faced with uncertainties or gaps in data, leading to unsafe actions or delays.
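One mitigation for this hazard is to make deferral an explicit, engineered outcome rather than an afterthought. The sketch below is purely illustrative; the confidence floor, validated range, function name, and recommendation strings are all assumptions invented for the example.

```python
from typing import Optional, Tuple

CONFIDENCE_FLOOR = 0.9          # assumed acceptance threshold
VALIDATED_RANGE = (0.0, 120.0)  # e.g. the speed range the model was validated on

def recommend(speed_kmh: float, model_confidence: float) -> Tuple[Optional[str], str]:
    """Return (recommendation, status). A None recommendation is a
    deliberate signal that the decision belongs to the human operator."""
    lo, hi = VALIDATED_RANGE
    if not (lo <= speed_kmh <= hi):
        # Input lies outside the validated domain: flag and defer.
        return None, "out_of_domain: defer to operator"
    if model_confidence < CONFIDENCE_FLOOR:
        # The model itself is unsure: flag and defer rather than guess.
        return None, "low_confidence: defer to operator"
    return "maintain_speed", "ok"
```

Treating "I don't know" as a first-class output keeps the AI inside its validated envelope and preserves the operator as the decision-maker of last resort.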
19. Imbalance in Resource Allocation:
Over-investment in AI development diverts resources from improving deterministic safety logic or operator training, creating a suboptimal hybrid system.
20. Loss of Redundancy in Decision-Making:
Removing the operator as an active decision-maker eliminates a critical layer of redundancy, increasing the likelihood of single-point failures in AI-driven systems.