AI Systems and Human Operators

 ð˜—ð˜ģð˜Ķð˜Ēð˜Ūð˜Ģ𝘭ð˜Ķ

𝘛ð˜Đ𝘊ð˜ī ð˜Ēð˜ģð˜ĩ𝘊ð˜Ī𝘭ð˜Ķ ð˜Ĩð˜ģð˜Ēð˜ļð˜ī 𝘊ð˜Ŋð˜īð˜ąð˜Šð˜ģð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ 𝘧ð˜ģ𝘰ð˜Ū ð˜ĩð˜Đð˜Ķ ð˜ąð˜ķð˜Ģ𝘭𝘊ð˜Ī𝘭𝘚 ð˜īð˜Đð˜Ēð˜ģð˜Ķð˜Ĩ 𝘊ð˜Ŋð˜ī𝘊ð˜Ļð˜Đð˜ĩð˜ī 𝘊ð˜Ŋ ð˜ĩð˜Đð˜Ķ 𝘷𝘊ð˜Ĩð˜Ķ𝘰 "𝘛ð˜Đð˜Ķ 𝘑𝘰ð˜ķð˜ģð˜Ŋð˜Ķ𝘚 𝘧ð˜ģ𝘰ð˜Ū 𝘚ð˜ķð˜īð˜ĩð˜Ē𝘊ð˜Ŋð˜Ēð˜Ģ𝘊𝘭𝘊ð˜ĩ𝘚 ð˜ĩ𝘰 𝘈ð˜ķð˜ĩ𝘰ð˜Ŋ𝘰ð˜Ū𝘚" ð˜Ģ𝘚 𝘒ð˜Ķ𝘭𝘷𝘊ð˜Ŋ.ð˜Ē𝘊. 𝘛ð˜Đð˜Ķ ð˜Ĩ𝘊ð˜īð˜Īð˜ķð˜īð˜ī𝘊𝘰ð˜Ŋð˜ī ð˜ąð˜ģð˜Ķð˜īð˜Ķð˜Ŋð˜ĩð˜Ķð˜Ĩ ð˜Đð˜Ķð˜ģð˜Ķ ð˜Ķð˜đð˜ąð˜­ð˜°ð˜ģð˜Ķ ð˜Īð˜ģ𝘊ð˜ĩ𝘊ð˜Īð˜Ē𝘭 ð˜īð˜Ē𝘧ð˜Ķð˜ĩ𝘚 ð˜Ēð˜Ŋð˜Ĩ ð˜Đð˜ķð˜Ūð˜Ēð˜Ŋ 𝘧ð˜Ēð˜Īð˜ĩ𝘰ð˜ģð˜ī ð˜Īð˜Đð˜Ē𝘭𝘭ð˜Ķð˜Ŋð˜Ļð˜Ķð˜ī 𝘊ð˜Ŋ ð˜ĩð˜Đð˜Ķ ð˜Ĩð˜Ķð˜ąð˜­ð˜°ð˜šð˜Ūð˜Ķð˜Ŋð˜ĩ 𝘰𝘧 𝘈𝘐 ð˜ī𝘚ð˜īð˜ĩð˜Ķð˜Ūð˜ī 𝘊ð˜Ŋ ð˜īð˜Ē𝘧ð˜Ķð˜ĩ𝘚-ð˜Īð˜ģ𝘊ð˜ĩ𝘊ð˜Īð˜Ē𝘭 ð˜Ĩ𝘰ð˜Ūð˜Ē𝘊ð˜Ŋð˜ī.

𝘛ð˜Đð˜Ķ ð˜Ī𝘰ð˜Ŋð˜ĩð˜Ķð˜Ŋð˜ĩ ð˜ļð˜Ēð˜ī ð˜Ĩð˜Ķ𝘷ð˜Ķð˜­ð˜°ð˜ąð˜Ķð˜Ĩ ð˜ķð˜ī𝘊ð˜Ŋð˜Ļ ð˜–ð˜ąð˜Ķð˜Ŋ𝘈𝘐 𝘊ð˜Đð˜Ēð˜ĩ𝘎𝘗𝘛, ð˜ļ𝘊ð˜ĩð˜Đ 𝘈ð˜Ŋð˜Ĩð˜ģ𝘊𝘚 𝘒𝘰ð˜īð˜ĩ𝘚ð˜ķ𝘎 𝘭ð˜Ķð˜Ēð˜Ĩ𝘊ð˜Ŋð˜Ļ ð˜ĩð˜Đð˜Ķ ð˜ąð˜ģ𝘰ð˜Īð˜Ķð˜īð˜ī ð˜ĩð˜Đð˜ģ𝘰ð˜ķð˜Ļð˜Đ ð˜ģð˜Ķ𝘭ð˜Ķ𝘷ð˜Ēð˜Ŋð˜ĩ ð˜ēð˜ķð˜Ķð˜ģ𝘊ð˜Ķð˜ī ð˜Ēð˜Ŋð˜Ĩ 𝘊ð˜ĩð˜Ķð˜ģð˜Ēð˜ĩ𝘊𝘷ð˜Ķ ð˜ąð˜ģ𝘰ð˜Ūð˜ąð˜ĩð˜ī 𝘰𝘷ð˜Ķð˜ģ ð˜Ēð˜Ŋ ð˜Ķð˜đð˜ĩð˜Ķð˜Ŋð˜Ĩð˜Ķð˜Ĩ ð˜Ĩ𝘊ð˜īð˜Īð˜ķð˜īð˜ī𝘊𝘰ð˜Ŋ. 𝘞ð˜Đ𝘊𝘭ð˜Ķ 𝘈𝘐 ð˜ļð˜Ēð˜ī 𝘊ð˜Ŋð˜īð˜ĩð˜ģð˜ķð˜Ūð˜Ķð˜Ŋð˜ĩð˜Ē𝘭 𝘊ð˜Ŋ 𝘰ð˜ģð˜Ļð˜Ēð˜Ŋ𝘊ð˜ī𝘊ð˜Ŋð˜Ļ ð˜Ēð˜Ŋð˜Ĩ ð˜ģð˜Ķ𝘧𝘊ð˜Ŋ𝘊ð˜Ŋð˜Ļ ð˜ĩð˜Đð˜Ķ 𝘊ð˜Ĩð˜Ķð˜Ēð˜ī, ð˜ĩð˜Đ𝘊ð˜ī ð˜Ēð˜ģð˜ĩ𝘊ð˜Ī𝘭ð˜Ķ 𝘊ð˜ī ð˜ĩð˜Đð˜Ķ ð˜ģð˜Ķð˜īð˜ķ𝘭ð˜ĩ 𝘰𝘧 ð˜Ē ð˜Ī𝘰𝘭𝘭ð˜Ēð˜Ģ𝘰ð˜ģð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ ð˜ļð˜Đð˜Ķð˜ģð˜Ķ ð˜ĩð˜Đð˜Ķ ð˜ąð˜ģ𝘊ð˜Ūð˜Ēð˜ģ𝘚 𝘊ð˜Ĩð˜Ķð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ, ð˜ī𝘚ð˜Ŋð˜ĩð˜Đð˜Ķð˜ī𝘊ð˜ī, ð˜Ēð˜Ŋð˜Ĩ 𝘧ð˜ģð˜Ēð˜Ū𝘊ð˜Ŋð˜Ļ ð˜ļð˜Ķð˜ģð˜Ķ ð˜Ļð˜ķ𝘊ð˜Ĩð˜Ķð˜Ĩ ð˜Ģ𝘚 ð˜ĩð˜Đð˜Ķ ð˜Đð˜ķð˜Ūð˜Ēð˜Ŋ ð˜Ēð˜ķð˜ĩð˜Đ𝘰ð˜ģ.

By blending human expertise and AI assistance, the article aims to provide a comprehensive and thoughtful perspective on the risks and complexities of integrating AI into safety-critical operations.


ðŸšĻ ð—Ģð—ŋð—ē𝗚ð—Ū𝘁𝘂ð—ŋð—ē 𝗔𝗜 𝗗ð—ēð—―ð—đ𝗞𝘆𝗚ð—ēð—ŧ𝘁 ð—ķð—ŧ ð—Ķð—Ūð—ģð—ē𝘁𝘆-𝗖ð—ŋð—ķ𝘁ð—ķ𝗰ð—Ūð—đ ð—Ķ𝘆𝘀𝘁ð—ē𝗚𝘀: ð—Ĩð—ķ𝘀ð—ļ𝘀 ð—Ūð—ŧð—ą 𝗛𝘂𝗚ð—Ūð—ŧ 𝗙ð—Ū𝗰𝘁𝗞ð—ŋ𝘀 𝗖ð—ĩð—Ūð—đð—đð—ēð—ŧð—īð—ē𝘀 ðŸšĻ


As safety engineers, we often hear about the promise of AI revolutionising operations in safety-critical domains like transportation, energy, and industrial systems. While AI holds tremendous potential, we must pause and ask: are we rushing to replace human operators without fully engineering the foundations these systems rely on?


𝗧ð—ĩð—ē ð—Ĩð—ķ𝘀ð—ļ𝘀 𝗞ð—ģ ð—Ģð—ŋð—ē𝗚ð—Ū𝘁𝘂ð—ŋð—ē 𝗔𝗜 𝗜ð—ŧ𝘁ð—ēð—īð—ŋð—Ū𝘁ð—ķ𝗞ð—ŧ

Too often, AI is introduced as a solution to fill gaps in under-engineered systems. This can lead to several risks:

  • 𝘐ð˜Ŋð˜Ēð˜Ĩð˜Ķð˜ēð˜ķð˜Ēð˜ĩð˜Ķ 𝘊𝘰ð˜ģð˜Ķ 𝘌ð˜Ŋð˜Ļ𝘊ð˜Ŋð˜Ķð˜Ķð˜ģ𝘊ð˜Ŋð˜Ļ: Foundational issues like handling uncertainties and edge cases may be overlooked, leaving AI to compensate for gaps it cannot fully address.
  • 𝘖𝘷ð˜Ķð˜ģ-ð˜–ð˜ąð˜ĩ𝘊ð˜Ū𝘊ð˜īð˜ĩ𝘊ð˜Ī 𝘈ð˜ķð˜ĩ𝘰ð˜Ŋ𝘰ð˜Ū𝘚: AI is expected to perform flawlessly across all scenarios, even when data or system boundaries limit its situational awareness.
  • 𝘋𝘊ð˜īð˜Ķð˜Ŋð˜Ļð˜Ēð˜Ļð˜Ķð˜Ĩ ð˜–ð˜ąð˜Ķð˜ģð˜Ēð˜ĩ𝘰ð˜ģð˜ī: Human operators, relegated to a passive "monitoring" role, may lose situational awareness, reducing their ability to intervene effectively in emergencies.
  • ð˜–ð˜ąð˜Ēð˜ēð˜ķð˜Ķ 𝘋ð˜Ķð˜Ī𝘊ð˜ī𝘊𝘰ð˜Ŋ-𝘔ð˜Ē𝘎𝘊ð˜Ŋð˜Ļ: Unlike deterministic logic systems, AI often operates as a black box, making it difficult to understand, trust, or audit its decisions.

𝗟ð—ē𝘀𝘀𝗞ð—ŧ𝘀 ð—ģð—ŋ𝗞𝗚 ð—Ģð—ŋ𝗞𝘃ð—ēð—ŧ ð—Ķ𝘆𝘀𝘁ð—ē𝗚𝘀

Take the European Train Control System (ETCS) Level 3 as an example. It relies on robust, deterministic logic in which all known scenarios are explicitly engineered, leaving room for human operators to manage unforeseen situations. There is no AI, just well-defined logic that ensures safety, transparency, and collaboration. This balance of automation and human oversight highlights the value of sound engineering over novelty.
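To make the contrast concrete, here is a minimal illustrative sketch of deterministic supervision (hypothetical names and values, not actual ETCS logic): movement is permitted only when a worst-case braking calculation shows the train can stop within its current movement authority, with no learned component anywhere in the decision.

```python
from dataclasses import dataclass

@dataclass
class TrainState:
    position_m: float          # current train position (metres)
    speed_ms: float            # current speed (m/s)
    authority_end_m: float     # end of granted movement authority (metres)
    braking_rate: float = 0.7  # assumed guaranteed braking rate (m/s^2)

def movement_permitted(state: TrainState) -> bool:
    """Deterministic check: movement is allowed only if the train can stop
    before the end of its movement authority under guaranteed braking."""
    braking_distance = state.speed_ms ** 2 / (2 * state.braking_rate)
    return state.position_m + braking_distance <= state.authority_end_m
```

Every input, rule, and threshold here is explicit and auditable; when the check fails, the response is engineered in advance rather than inferred at run time.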

𝗛𝘂𝗚ð—Ūð—ŧ 𝗙ð—Ū𝗰𝘁𝗞ð—ŋ𝘀 𝗖𝗞ð—ŧ𝘀ð—ķð—ąð—ēð—ŋð—Ū𝘁ð—ķ𝗞ð—ŧ𝘀

Replacing operators too quickly introduces serious human factors challenges:

  • Operators may feel alienated or worry about being blamed if they override AI recommendations.
  • In emergencies, fear of liability may push operators to follow AI advice, even when their intuition suggests otherwise.
  • AI's bounded situational awareness cannot match human flexibility, especially when operators can coordinate with peers or access adjacent systems beyond AI's scope.

𝗕𝘂ð—ķð—đð—ąð—ķð—ŧð—ī 𝘁ð—ĩð—ē ð—Ĩð—ķð—īð—ĩ𝘁 𝗕ð—Ūð—đð—Ūð—ŧ𝗰ð—ē

To ensure safety and effectiveness, we must:

  • 𝘌ð˜Ŋð˜Ļ𝘊ð˜Ŋð˜Ķð˜Ķð˜ģ 𝘍𝘊ð˜ģð˜īð˜ĩ, 𝘈ð˜ķð˜ĩ𝘰ð˜Ūð˜Ēð˜ĩð˜Ķ 𝘓ð˜Ēð˜ĩð˜Ķð˜ģ: Fully address uncertainties and edge cases with deterministic logic before introducing AI.
  • 𝘋ð˜Ķ𝘧𝘊ð˜Ŋð˜Ķ 𝘈𝘐’ð˜ī 𝘙𝘰𝘭ð˜Ķ 𝘊𝘭ð˜Ķð˜Ēð˜ģ𝘭𝘚: Use AI to augment decision-making, not replace core safety functions.
  • 𝘌ð˜Ūð˜ąð˜°ð˜ļð˜Ķð˜ģ 𝘏ð˜ķð˜Ūð˜Ēð˜Ŋ ð˜–ð˜ąð˜Ķð˜ģð˜Ēð˜ĩ𝘰ð˜ģð˜ī: Design systems where humans remain engaged and have the authority to intervene without fear of liability.
  • 𝘌ð˜Ŋð˜īð˜ķð˜ģð˜Ķ 𝘌ð˜đð˜ąð˜­ð˜Ē𝘊ð˜Ŋð˜Ēð˜Ģ𝘊𝘭𝘊ð˜ĩ𝘚: Avoid black-box AI; prioritise transparency and auditability.
  • 𝘛ð˜Ķð˜īð˜ĩ 𝘜ð˜Ŋð˜Ĩð˜Ķð˜ģ 𝘙ð˜Ķð˜Ē𝘭𝘊ð˜īð˜ĩ𝘊ð˜Ī 𝘊𝘰ð˜Ŋð˜Ĩ𝘊ð˜ĩ𝘊𝘰ð˜Ŋð˜ī: Validate combined AI-human systems extensively before deployment.

𝗧ð—ĩð—ē 𝗕𝗞𝘁𝘁𝗞𝗚 𝗟ð—ķð—ŧð—ē

AI is not a shortcut to safety; it is a tool to enhance robust, well-engineered systems. Let's prioritise core engineering and human-centric design so that the integration of AI doesn't come at the cost of reliability, safety, and trust.

________________________________________________

ð—”ð—―ð—―ð—ēð—ŧð—ąð—ķ𝘅: ð—Ķ𝘂𝗚𝗚ð—Ūð—ŋ𝘆 𝗞ð—ģ 𝗛ð—Ū𝘇ð—Ūð—ŋð—ąð˜€ ð—ģð—ŋ𝗞𝗚 ð—ķð—ŧ𝘁ð—ēð—ŋð—ģð—Ū𝗰ð—ķð—ŧð—ī 𝗔𝗜 𝘀𝘆𝘀𝘁ð—ē𝗚𝘀 𝘄ð—ķ𝘁ð—ĩ 𝘀ð—Ūð—ģð—ē𝘁𝘆 𝗰ð—ŋð—ķ𝘁ð—ķ𝗰ð—Ūð—đ 𝘀𝘆𝘀𝘁ð—ē𝗚𝘀 ð—Ūð—ŧð—ą 𝗞ð—ŋð—īð—Ūð—ŧð—ķ𝘀ð—Ū𝘁ð—ķ𝗞ð—ŧ𝘀

1. 𝘐ð˜Ŋð˜Ī𝘰ð˜Ūð˜ąð˜­ð˜Ķð˜ĩð˜Ķ 𝘊𝘰𝘷ð˜Ķð˜ģð˜Ēð˜Ļð˜Ķ 𝘰𝘧 ð˜–ð˜ąð˜Ķð˜ģð˜Ēð˜ĩ𝘊𝘰ð˜Ŋð˜Ē𝘭 𝘚ð˜Īð˜Ķð˜Ŋð˜Ēð˜ģ𝘊𝘰ð˜ī:

The AI system is deployed without fully addressing all known uncertainties, edge cases, and failure scenarios, resulting in unpredictable behaviour during unmodeled events.

2. 𝘓𝘰ð˜īð˜ī 𝘰𝘧 ð˜–ð˜ąð˜Ķð˜ģð˜Ēð˜ĩ𝘰ð˜ģ 𝘚𝘊ð˜ĩð˜ķð˜Ēð˜ĩ𝘊𝘰ð˜Ŋð˜Ē𝘭 𝘈ð˜ļð˜Ēð˜ģð˜Ķð˜Ŋð˜Ķð˜īð˜ī:

Human operators become disengaged or overly reliant on AI, reducing their ability to effectively intervene in emergencies.

3. 𝘌ð˜đð˜Īð˜Ķð˜īð˜ī𝘊𝘷ð˜Ķ 𝘛ð˜ģð˜ķð˜īð˜ĩ 𝘊ð˜Ŋ 𝘈𝘐 𝘙ð˜Ķð˜Ī𝘰ð˜Ūð˜Ūð˜Ķð˜Ŋð˜Ĩð˜Ēð˜ĩ𝘊𝘰ð˜Ŋð˜ī:

Operators may blindly follow AI suggestions due to the system's perceived authority, even when the recommendations are flawed or lack sufficient context (automation bias).

4. 𝘓ð˜Ķð˜Ļð˜Ē𝘭 ð˜Ēð˜Ŋð˜Ĩ 𝘈ð˜Īð˜Ī𝘰ð˜ķð˜Ŋð˜ĩð˜Ēð˜Ģ𝘊𝘭𝘊ð˜ĩ𝘚 𝘗ð˜ģð˜Ķð˜īð˜īð˜ķð˜ģð˜Ķð˜ī 𝘰ð˜Ŋ ð˜–ð˜ąð˜Ķð˜ģð˜Ēð˜ĩ𝘰ð˜ģð˜ī:

Fear of liability or legal consequences pushes operators to comply with AI advice, even when their intuition or expertise suggests an alternative action.

5. ð˜–ð˜ąð˜Ēð˜ēð˜ķð˜Ķ 𝘈𝘐 𝘋ð˜Ķð˜Ī𝘊ð˜ī𝘊𝘰ð˜Ŋ-𝘔ð˜Ē𝘎𝘊ð˜Ŋð˜Ļ:

AI operates as a black box, preventing operators and engineers from understanding or validating the reasoning behind its recommendations.

6. 𝘔𝘊ð˜īð˜Ūð˜Ēð˜ĩð˜Īð˜Đð˜Ķð˜Ĩ 𝘚ð˜Īð˜°ð˜ąð˜Ķ 𝘰𝘧 𝘈𝘐 𝘚𝘚ð˜īð˜ĩð˜Ķð˜Ū 𝘈ð˜ļð˜Ēð˜ģð˜Ķð˜Ŋð˜Ķð˜īð˜ī:

The AI system has limited situational awareness, particularly at the boundaries of its defined domain, leading to errors when adjacent systems, underlying conditions, or external factors affect safety.

7. Overloading Operators with Low-Value Alerts:

AI systems generate nuisance workload by recommending trivial or redundant actions, leading to frustration and potential system disengagement by operators.

8. ð˜–ð˜ąð˜Ķð˜ģð˜Ēð˜ĩ𝘰ð˜ģ 𝘈𝘭𝘊ð˜Ķð˜Ŋð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ:

Premature introduction of AI reduces operator trust and engagement, as they feel their role is being diminished or undervalued.

9. 𝘊𝘰ð˜Ŋ𝘧𝘭𝘊ð˜Īð˜ĩð˜ī 𝘋ð˜ķð˜ģ𝘊ð˜Ŋð˜Ļ 𝘌ð˜Ūð˜Ķð˜ģð˜Ļð˜Ķð˜Ŋð˜Ī𝘚 𝘚ð˜Īð˜Ķð˜Ŋð˜Ēð˜ģ𝘊𝘰ð˜ī:

In high-pressure situations, AI recommendations may conflict with operator intuition, causing delays or errors in decision-making due to hesitation or fear of overriding the system.

10. 𝘐ð˜Ūð˜Ģð˜Ē𝘭ð˜Ēð˜Ŋð˜Īð˜Ķð˜Ĩ 𝘗𝘰ð˜īð˜ĩ-𝘐ð˜Ŋð˜Ī𝘊ð˜Ĩð˜Ķð˜Ŋð˜ĩ 𝘈ð˜Īð˜Ī𝘰ð˜ķð˜Ŋð˜ĩð˜Ēð˜Ģ𝘊𝘭𝘊ð˜ĩ𝘚:

AI recommendations are automatically recorded, while human decisions are not, creating an asymmetry in post-incident reviews that unfairly places blame on operators.
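A simple mitigation for this asymmetry is to log both channels symmetrically. The sketch below (hypothetical field names, a minimal illustration rather than a production audit system) records the AI recommendation and the operator's decision side by side, with an explicit override flag:

```python
import json
import time

def audit_record(ai_recommendation: str, operator_decision: str,
                 operator_rationale: str = "") -> str:
    """Build one symmetric audit-log entry (a JSON line): the AI's
    recommendation and the operator's decision are stored side by side,
    including whether the operator overrode the AI, so post-incident
    review sees both decision channels rather than only the machine's."""
    record = {
        "timestamp": time.time(),
        "ai_recommendation": ai_recommendation,
        "operator_decision": operator_decision,
        "operator_rationale": operator_rationale,
        "operator_override": ai_recommendation != operator_decision,
    }
    return json.dumps(record)
```

Capturing the operator's rationale at the moment of decision, not only after an incident, is what removes the evidential imbalance described above.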

 11. 𝘜ð˜Ŋð˜Ĩð˜Ķð˜ģ-𝘌ð˜Ŋð˜Ļ𝘊ð˜Ŋð˜Ķð˜Ķð˜ģð˜Ķð˜Ĩ 𝘚𝘚ð˜īð˜ĩð˜Ķð˜Ū 𝘍𝘰ð˜ķð˜Ŋð˜Ĩð˜Ēð˜ĩ𝘊𝘰ð˜Ŋð˜ī:

Deployment of AI before addressing fundamental engineering challenges leads to a fragile system dependent on probabilistic logic rather than deterministic safety measures.

12. 𝘐ð˜Ŋð˜Ēð˜Ĩð˜Ķð˜ēð˜ķð˜Ēð˜ĩð˜Ķ 𝘐ð˜Ŋð˜ĩð˜Ķð˜ģð˜°ð˜ąð˜Ķð˜ģð˜Ēð˜Ģ𝘊𝘭𝘊ð˜ĩ𝘚 𝘈ð˜Īð˜ģ𝘰ð˜īð˜ī 𝘚𝘚ð˜īð˜ĩð˜Ķð˜Ūð˜ī:

AI systems are siloed and unable to communicate with adjacent systems or integrate data effectively, leading to blind spots in safety-critical operations.

13. 𝘗ð˜ģð˜Ķð˜Ūð˜Ēð˜ĩð˜ķð˜ģð˜Ķ 𝘌ð˜đð˜Ī𝘭ð˜ķð˜ī𝘊𝘰ð˜Ŋ 𝘰𝘧 𝘏ð˜ķð˜Ūð˜Ēð˜Ŋ ð˜–ð˜ąð˜Ķð˜ģð˜Ēð˜ĩ𝘰ð˜ģð˜ī:

Attempts to eliminate human operators entirely for the sake of novelty or cost reduction result in the loss of essential human flexibility and expertise in unforeseen scenarios.

14. 𝘔𝘊ð˜īð˜Ē𝘭𝘊ð˜Ļð˜Ŋð˜Ūð˜Ķð˜Ŋð˜ĩ 𝘰𝘧 𝘈𝘐 ð˜ļ𝘊ð˜ĩð˜Đ 𝘏ð˜ķð˜Ūð˜Ēð˜Ŋ 𝘋ð˜Ķð˜Ī𝘊ð˜ī𝘊𝘰ð˜Ŋ 𝘗ð˜ģ𝘰ð˜Īð˜Ķð˜īð˜īð˜Ķð˜ī:

AI systems fail to adapt to operator needs, producing irrelevant or non-contextual recommendations that conflict with human workflows, for example due to biases inherited from synthetic or real training data (statistical, sampling, group-attribution, confirmation, or implicit human cognitive biases).

15. 𝘋ð˜Ķð˜Ļð˜ģð˜Ēð˜Ĩð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ 𝘰𝘧 𝘛ð˜Ķð˜Ēð˜Ū 𝘊𝘰ð˜Ūð˜Ūð˜ķð˜Ŋ𝘊ð˜Īð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ ð˜Ēð˜Ŋð˜Ĩ 𝘊𝘰𝘭𝘭ð˜Ēð˜Ģ𝘰ð˜ģð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ:

Operators lose access to informal or peer-based information-sharing mechanisms as AI systems cannot replicate human-to-human communication in complex scenarios.

16. Over-Optimistic Deployment of AI Autonomy:

AI systems are expected to function autonomously without sufficient validation or operational testing under real-world conditions, increasing the risk of failures.

17. 𝘉𝘊ð˜Ēð˜ī 𝘛𝘰ð˜ļð˜Ēð˜ģð˜Ĩ 𝘕𝘰𝘷ð˜Ķ𝘭ð˜ĩ𝘚 𝘖𝘷ð˜Ķð˜ģ 𝘚ð˜Ē𝘧ð˜Ķð˜ĩ𝘚:

Organisational pressure to adopt AI for innovation undermines traditional safety engineering practices, compromising system reliability.

18. 𝘍ð˜Ē𝘊𝘭ð˜ķð˜ģð˜Ķ ð˜ĩ𝘰 𝘙ð˜Ķð˜Ī𝘰ð˜Ļð˜Ŋ𝘊sð˜Ķ 𝘈𝘐 𝘚𝘚ð˜īð˜ĩð˜Ķð˜Ū 𝘓𝘊ð˜Ū𝘊ð˜ĩð˜Ēð˜ĩ𝘊𝘰ð˜Ŋð˜ī:

AI systems do not adequately flag or defer to operators when faced with uncertainties or gaps in data, leading to unsafe actions or delays.
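One hedged mitigation here is an explicit abstention rule: the system acts on its own output only above a confidence threshold and otherwise defers to the operator. A minimal sketch (the threshold, action names, and confidence values are all hypothetical):

```python
def act_or_defer(probabilities: dict, threshold: float = 0.9):
    """Pick the highest-confidence action, but act autonomously only when
    that confidence clears the threshold; otherwise flag the uncertainty
    and hand the decision to the human operator instead of guessing."""
    action, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return ("ACT", action)
    return ("DEFER_TO_OPERATOR", action)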

19. 𝘐ð˜Ūð˜Ģð˜Ē𝘭ð˜Ēð˜Ŋð˜Īð˜Ķ 𝘊ð˜Ŋ 𝘙ð˜Ķð˜ī𝘰ð˜ķð˜ģð˜Īð˜Ķ 𝘈𝘭𝘭𝘰ð˜Īð˜Ēð˜ĩ𝘊𝘰ð˜Ŋ:

Over-investment in AI development diverts resources from improving deterministic safety logic or operator training, creating a suboptimal hybrid system.

 20. 𝘓𝘰ð˜īð˜ī 𝘰𝘧 𝘙ð˜Ķð˜Ĩð˜ķð˜Ŋð˜Ĩð˜Ēð˜Ŋð˜Ī𝘚 𝘊ð˜Ŋ 𝘋ð˜Ķð˜Ī𝘊ð˜ī𝘊𝘰ð˜Ŋ-𝘔ð˜Ē𝘎𝘊ð˜Ŋð˜Ļ:

Removing the operator as an active decision-maker eliminates a critical layer of redundancy, increasing the likelihood of single-point failures in AI-driven systems.
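One way to preserve that redundancy is a two-out-of-two style vote between the AI channel and an independent deterministic channel, escalating any disagreement to the operator. A minimal sketch (hypothetical command names, an illustration of the voting idea rather than a certified architecture):

```python
def issue_command(ai_cmd: str, rule_cmd: str) -> str:
    """Redundant decision path: a command is issued automatically only when
    the AI channel and an independent deterministic channel agree; any
    disagreement is escalated to the human operator rather than resolved
    by silently trusting a single channel."""
    if ai_cmd == rule_cmd:
        return ai_cmd
    return "ESCALATE_TO_OPERATOR"
```

The escalation path keeps the human in the loop precisely in the ambiguous cases where single-channel automation is most likely to fail.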