⚠️ Security Alert: The Silent Risk of Assembler Code in IBM z/OS Environments
A real, current, and largely invisible operational risk
In most large European financial institutions, the IBM z/OS mainframe remains the backbone of the business. Payments, settlement, core banking, custody, clearing, and regulatory reporting all depend on it.
However, there is a critical risk that is rarely addressed and even more seldom monitored: the execution of apparently legitimate Assembler programs capable of causing immediate and severe operational impact, without exploiting known vulnerabilities or configuration weaknesses.
This risk is not theoretical. It is real, executable, and fully aligned with insider-threat and supply-chain scenarios: precisely the types of risk that DORA and NIS2 explicitly require institutions to manage.
A real scenario, validated in a controlled environment
After identifying malicious Assembler code documented in unofficial technical forums and clandestine repositories, we decided to reproduce the scenario in a fully controlled laboratory environment.
The program left the LPAR in a software wait state, from which the only possible recovery was a full IPL.
Why traditional mainframe resilience is not enough
IBM z/OS mainframe environments are designed to tolerate almost any hardware failure:
- The failure of a single LPAR does not interrupt service; other Sysplex members continue operating.
- Even the loss of an entire Sysplex can trigger predefined contingency plans and alternative sites.
However, these environments are not designed to protect against malicious or negligent software failures executed with sufficient privileges.
In a multi-LPAR freeze scenario, such a program could simultaneously impact:
- ALL production LPARs in the primary data center
- ALL production LPARs in the secondary or disaster recovery site
- ALL development, pre-production, QA, and training LPARs in the primary data center
- ALL equivalent non-production LPARs in the secondary site (where such environments exist)
A program of this nature can remain dormant, waiting for a specific date and time, and then execute simultaneously across multiple LPARs.
The core issue: small code, systemic impact
Assembler on z/OS is:
- Extremely powerful
- Executed very close to the operating system
- Capable of performing irreversible actions if poorly written or intentionally malicious
Today’s reality is that:
- There are very few Assembler experts left
- Thousands of legacy programs run in production
- Manual code review is practically nonexistent
- Trust is based on code age, not on continuous analysis
A program consisting of just a few lines, properly structured and apparently legitimate, can:
- Stop a logical partition (LPAR)
- Freeze critical processes
- Force a production IPL
- Cause immediate business unavailability
- Trigger a major operational crisis without any visible “cyberattack.”
From a regulatory perspective, this is not a technical incident. It is a failure of operational resilience.
Why this scenario is a significant concern under DORA and NIS2
Neither DORA nor NIS2 focuses exclusively on external malware.
Both regulations emphasize:
- Insider risk
- Privileged code
- Lack of preventive controls
- Early detection capabilities
- Systemic impact of a single event
A malicious or negligent Assembler program fits squarely within:
- ICT Risk (DORA)
- Operational disruption
- Insider threat
- Absence of continuous controls
- Inability to demonstrate due diligence to regulators
In practice, many institutions cannot demonstrate:
- Which critical Assembler programs actually exist
- Who executes them
- What their real operational impact would be if they behaved abnormally
- How their execution would be detected in near real time
From a regulatory standpoint, the absence of preventive and continuous detection controls in this scenario is not a technical weakness, but a severe deficiency in ICT risk governance, directly linked to:
- Articles 5–6 of DORA (ICT risk governance)
- Article 21 of NIS2 (risk management measures)
The most significant risk: a false sense of security
The issue is not that these programs exist. The issue is the assumption that “nothing will ever happen” because:
- “It has been running for 20 years.”
- “It has always worked.”
- “Nobody really knows how to modify it anymore.”
- “It is historical code.”
From a risk perspective, this is the opposite of a control.
In many cases:
- Only one person holds the knowledge
- There is no substitution or succession
- There is no impact analysis
- There is no continuous monitoring
This represents a single point of failure, both human and technical.
Key questions CISOs and CIOs should ask today
If your institution operates IBM z/OS, these questions are immediate:
- Do we know which Assembler programs run in production?
- Do we have real visibility into their behavior?
- Could we detect the execution of an abnormal program within minutes?
- Can we demonstrate continuous control to regulators?
- What would happen if one of these programs caused a total service outage?
If any of these questions cannot be answered clearly, the risk already exists.
Conclusion: this is not hacking — it is resilience
This is not an article about hacking. It is not an exploit. It is not a CVE.
It is a structural operational risk at the core of the European financial system.
DORA and NIS2 will not ask how an incident occurred. They will ask:
“What controls were in place to prevent or detect it?”
And in many mainframe environments today, the honest answer is: insufficient.
📌 Final note
At Bsecure, we help European financial institutions identify, monitor, and demonstrate effective control over these types of risks in IBM z/OS environments through a continuous, auditable approach fully aligned with DORA and NIS2.
If you wish to assess your real exposure to this risk, you can contact us confidentially via go2bsecure.com.