Following our latest DORA webinar, we are pleased to share with you the answers to the questions you asked us during the session.
The requirements of DORA vary widely, so it will impact you directly in some situations and indirectly in others. DORA's main spirit is to radically upgrade the cyber security and cyber resilience maturity of the whole EU financial sector, and that also includes ICT service providers (whether you are a cloud, software, or telecommunications provider). The main indirect impact will be in the contractual, legal, and compliance areas. Many financial entities will start reviewing their contractual agreements with ICT service providers, and ICT service providers should carry out a similar review to ensure their contracts are aligned with DORA requirements (e.g., cyber resilience specifications, information security controls, legal disclaimers, incident management requirements, privacy statements, data-lifecycle details).
Moreover, if an ICT supplier is deemed a critical supplier by the competent authorities, then the supplier will be directly subject to DORA and its oversight requirements. We believe that DORA will bring a wide range of ICT service providers of software, cloud services, and telecommunications under the regulatory authority of the EU and local national regulators such as the MFSA. ICT service providers must ensure that all DORA requirements are in place to comply with the new regulation by collaborating closely with the financial entities to which they supply products and services.
The existence, applicability, scope, and content of the Guidance Document on Technology Arrangements, ICT and Security Risk Management and Outsourcing Arrangements will be assessed, and the Authority will consider what appropriate action to take based on such assessment. The Authority will communicate the outcome in due course through its official channels.
Threat intelligence has its place in both aspects. On their own, threat intelligence feeds and subscriptions enrich the organisation's awareness of the cyber threat landscape by aggregating threat data from across hundreds of organisations, arming security teams with external knowledge about indicators of attack and indicators of compromise, and ensuring that they are more proactive and predictive and can make better decisions overall.
With respect to threat led penetration testing, as the name suggests, threat intelligence is a necessary component for a penetration test to be considered "threat led". This means that, beyond traditional reconnaissance methods and open-source intelligence techniques, the penetration test attack scenarios are informed by targeted threat intelligence and information about the organisation in scope and the sector it operates in. This makes such an exercise more realistic in nature and provides added value to the overall security programme.
Yes, PwC has its own specialised and global cyber threat intelligence (CTI) team consisting of individuals with various backgrounds including industry, law enforcement, military, and intelligence communities. Our CTI team creates, uses, and enriches its own threat intelligence, across a wide variety of security services. This provides us with first-hand knowledge and experience about generating and consuming threat intelligence.
In this respect, PwC provides threat intelligence services in different formats and deliverables, including subscriptions to our threat intelligence reports and feeds, dark web monitoring, as well as access to our custom-built threat intelligence portal.
Threat led penetration testing, or TLPT, is essentially a threat intelligence-driven red team assessment. Red teaming capabilities have been available within the local market for numerous years; however, in our experience only a few companies opt for such an advanced form of security testing. On the other hand, the provision of bespoke threat intelligence is perhaps a more recent development in the cybersecurity ecosystem, and more so within the Maltese cyber security market. The combination of both is what makes TLPT a more realistic and value-adding type of security test that surpasses traditional penetration tests and vulnerability assessments.
At PwC, we are able to provide threat led penetration testing services using a combination of local and international resources. In this way, we can leverage the knowledge and skills of testers physically situated in Malta and their understanding of the local context, as well as the experience of testers and analysts who have worked on complex TLPT assignments for large financial institutions, in line with frameworks such as TIBER-EU and CBEST.
The examples of cyber risk reporting and dashboards shown within the webinar are part of PwC's Cyber Risk Reporting Platform (CRRP). CRRP is PwC's latest turnkey solution to optimise your cyber risk maturity and easily engage the Board and other stakeholders on the matter of cyber risks. The examples shown in the webinar are just a small snapshot of the several cyber risks, Key Risk Indicators (KRIs), and risk reporting elements that CRRP can offer. CRRP is a fully customisable solution built upon renowned industry frameworks such as MITRE ATT&CK, FAIR, PwC's Cyber Security Framework (built upon NIST CSF, CIS, and ISO 27001), and IRAM2, among others.
Running on Microsoft Power BI, CRRP can help you close the gap between cyber risks and the business by helping you understand:
The Digital Operational Resilience Act (the “Regulation”) is expected to constitute lex specialis to Directive (EU) 2016/1148 and the upcoming NIS 2. The Regulation’s proposal states that in relation to financial entities identified as operators of essential services pursuant to national rules transposing Article 5 of Directive (EU) 2016/1148, the Regulation shall be considered a sector-specific Union legal act for the purposes of Article 1(7) of that Directive, which states that “Where a sector-specific Union legal act requires operators of essential services or digital service providers either to ensure the security of their network and information systems or to notify incidents, provided that such requirements are at least equivalent in effect to the obligations laid down in this Directive, those provisions of that sector-specific Union legal act shall apply”. Article 2(6) of the NIS 2 proposal also states that “Where provisions of sector-specific acts of Union law require essential or important entities either to adopt cybersecurity risk management measures or to notify incidents or significant cyber threats, and where those requirements are at least equivalent in effect to the obligations laid down in this Directive, the relevant provisions of this Directive, including the provision on supervision and enforcement laid down in Chapter VI, shall not apply”.
The present understanding is that the Regulation’s proposed requirements on insurance cover relate to professional indemnity and are attributed to testers under Article 24.
ICT-related incidents can occur for many different reasons, some of which are completely unrelated to cyber-attacks. Such incidents may still affect the provision of services to clients and therefore might need to be reported to the regulator, depending on the effects on the confidentiality, integrity and availability (CIA) of the information being processed by the IT systems.
On the other hand, a cyber-attack is a direct, malicious attack that is purposely designed to compromise information or other digital assets. The identification of such an attack should trigger the appropriate security procedures (depending on the attack scenario), as well as the necessary information sharing procedures. The latter may vary depending on the scale and success of the cyber-attack. The financial institution may opt to share the attack information with the relevant competent authority so that this information can then be shared with other organisations within the industry as a way to protect the ecosystem. In the event that an attack results in a breach, the organisation shall also report the incident to the competent authority so that the necessary investigations and damage management process can begin. Other obligations for reporting incidents under other regulations (such as the GDPR) still apply.
Article 28 (9) of the Commission's original proposal states "Financial entities shall not make use of an ICT third-party service provider established in a third country that would be designated as critical pursuant to point (a) of paragraph 1 if it were established in the Union".
The Council has recommended the following changes "Financial entities shall not make use of a critical ICT third-party service provider established in a third country unless that ICT third-party service provider has an undertaking constituted in the Union under the law of a Member State and has concluded contractual arrangements in accordance with Article 27(2b)" whilst the EP has recommended the following changes "Financial entities shall refrain from using an ICT third-party service provider established in a third country that would be designated as critical pursuant to paragraph 1 that did not establish a subsidiary in the Union within 12 months following the designation".
Whilst the above are the latest texts available at the time of writing, since the text is not yet final, one cannot at this stage provide an explicit confirmation on the matter.
This is a very interesting observation. The Regulation seems to be looking at the concept of "ICT Third Party Risk" whilst making use of the word "outsourcing" less frequently. In its mandate, the EP has indeed recommended the following amendment to the definition of ICT Third Party Risk: "‘ICT third-party risk’ means ICT risk that may arise for a financial entity in relation to its use of ICT services provided by ICT third-party service providers or by subcontractors of the latter, including through outsourcing arrangements", but none of the mandates includes the word "outsourcing" within the definition of "ICT Third-Party Service Provider".
To date, there are no concrete dates set for when DORA will go live. The European Commission has only published the initial proposal for DORA and consultation work is currently taking place to come up with the specific requirements which will be mandated by the regulation.
Following this, the regulation will be published in the Official Journal of the EU - this will provide specific details regarding DORA and it will officially specify the adoption day. Following this day, a grace period of 12 months will apply, after which all components of DORA shall become applicable, with the exception of Articles 23 and 24. The latter Articles will only become applicable 36 months following the adoption day. Note that such provisions may change with new revisions of the regulation text.
The influence of the ESA guidelines on outsourcing can definitely be observed, with the main difference being the oversight framework for critical ICT third-party service providers (Section II).
Yes, external testers refers to individuals external to the organisation being tested, meaning that an external party will be contracted to perform the TLPT. This requirement is consistent with what is seen within other security frameworks and standards, such as the SWIFT CSP.
Whilst the word “formal”, and indeed the terms “codes of conduct” and “ethical frameworks”, are not specifically defined within the Regulation, “formal” here would be taken to mean “officially recognised”.
Essentially, the EP proposal recommends adding the text “whether the testers are from within the Union, or from a third country” to Article 24(1)(c). The meaning here is that the text (“certified by an accreditation body in a Member State or adhere to formal codes of conduct or ethical frameworks”) applies whether testers are from within the Union or from a third country. In answer to the question “What are hence the implications for the location of data requirements?”, taking for instance TIBER-EU, the framework document specifies that “it is the responsibility of the entities, TI providers and RT providers to ensure that they conduct tests within the remit of all laws and regulations, and appropriate risk management controls (e.g. contracts) are in place to enforce this”.
DORA will require all critical ICT systems and applications to be tested annually; however, it does not require such testing to be performed within the scope of a TLPT. While it is ideal for a TLPT scope to cover all or most of the organisation's high-value assets so that the test is as realistic as possible, the objective of a TLPT is not to test critical systems and applications in a comprehensive manner, but rather to test the organisation's people, processes, and technologies. In this respect, periodic vulnerability assessments and narrow-scoped penetration testing should be leveraged to ensure that such systems and applications are regularly checked for security issues.
As per the MFSA Supervision Priorities 2022, the MFSA will start working in 2022 on the establishment of an Advanced Digital Operational Resilience Testing Framework.
The MFSA Supervision Priorities can be accessed here.
It is recommended to consult the exact definitions of, and further information and clarifications about, micro-enterprises, small enterprises and medium-sized enterprises within the Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (2003/361/EC) and within the updated user guide, respectively, all available here. Please note that the terms micro, small and medium-sized enterprises (SMEs) are expected to be used sparingly within the upcoming Digital Operational Resilience Act, specifically in relation to proportionality, and therefore a clear understanding of the size of one’s respective financial entity, as well as any prospective changes, is very important.
The regulation's text is indeed yet to be finalised and, therefore, some changes are expected. However, in the interim we could focus on leveraging mechanisms that are already in place. Many service organisations (especially cloud solution providers) allow us to gauge the design and operating effectiveness of their controls through Third Party Assurance reports. Specifically for ICT risks, SOC 2 reports can be used to provide comfort over one, or many, of the Trust Services Criteria: Security, Availability, Processing Integrity, Confidentiality and/or Privacy. While this may not necessarily be a requirement coming out of DORA, it is one effective way to start bridging the gap.
The regulation's text is indeed yet to be finalised and approved and, therefore, some changes are expected. However, in the interim it is advised to rely on existing guidance documents issued by the MFSA, notably the 'Guidance on Technology Arrangements, ICT and Security Risk Management, and Outsourcing Arrangements' issued in December 2020. A gap assessment against these guidelines will provide a view of the organisation's as-is state for addressing and managing ICT and security risks, in terms of implemented capabilities, processes, and related technical controls, against what is expected by the local regulator for an organisation of its size and risk profile. Bridging the gaps identified by such an assessment will serve as an excellent way of preparing for DORA, ensuring a smoother and quicker alignment process with the newly established regulation.
The understanding is that the Digital Operational Resilience Act and the respective accompanying Directive, along with measures at other levels (e.g. Regulatory Technical Standards), are expected to eventually supersede the current ecosystem of guidelines (and their respective transpositions, where/if applicable) as they stand at present. However, given that the text is not yet final, it is too early to specify the exact changes.
There is no direct link between DORA rules and ICAAP reporting. However, since banks are required to describe IT systems used to gather, store, aggregate and disseminate risk data used for ICAAP and ILAAP, any enhancements in the IT systems as a result of DORA will naturally result in changes in such description.
In order to provide some examples, we first need to discuss how the risk appetite can be translated into actionable and measurable risk criteria. The risk appetite statement and risk tolerance for the organisation will be set by the first tier, the Board. The second tier, focused on risk ownership, will be responsible for translating the risk appetite into actionable risk criteria. There are various methods to achieve this objective; however, one good place to start is establishing risk criteria boundaries (as defined within the OCTAVE Allegro risk framework). Once the risk criteria are defined, the third tier will use them to decide which risk treatment strategy to implement for each specific risk. Once this is done, an organisation is ready to start setting up risk metrics and indicators.
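To make the idea of risk criteria boundaries more tangible, below is a minimal sketch, assuming a handful of hypothetical impact areas and monetary thresholds, of how the second tier could encode and consistently apply such boundaries; the areas, figures, and labels are illustrative assumptions and are not prescribed by OCTAVE Allegro or DORA.

```python
# Hypothetical sketch: encoding risk criteria boundaries (in the spirit of
# OCTAVE Allegro impact areas) so that risk owners classify impacts consistently.
# All impact areas, thresholds and labels are illustrative assumptions.

RISK_CRITERIA = {
    # impact area: (low upper bound, moderate upper bound) in EUR of estimated loss
    "financial":    (50_000, 250_000),
    "reputation":   (10_000, 100_000),   # e.g. estimated remediation/PR cost
    "productivity": (20_000, 150_000),   # e.g. cost of lost staff hours
}

def classify_impact(area: str, estimated_loss_eur: float) -> str:
    """Return 'Low', 'Moderate' or 'High' for a given impact area and loss estimate."""
    low_max, moderate_max = RISK_CRITERIA[area]
    if estimated_loss_eur <= low_max:
        return "Low"
    if estimated_loss_eur <= moderate_max:
        return "Moderate"
    return "High"

# Example: a scenario estimated to cost EUR 180,000 in the financial impact area
print(classify_impact("financial", 180_000))  # -> "Moderate"
```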
A possible methodology is to use a bottom-up approach. KCIs (Key Control Indicators) give us an indication of how well a certain control is implemented. Each control could be mapped to many KCIs covering, for example, control performance, coverage, or effectiveness. KPIs (Key Performance Indicators) aggregate many KCIs and mainly focus on measuring the risk programme's performance and status. Lastly, KRIs (Key Risk Indicators) aggregate KPIs while providing a high-level report on how well risks are being managed, as they are usually mapped to multiple risks. To facilitate alignment between these different types of metrics, one could look at the ISO 27004 and FAIR frameworks.
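As a simple illustration of this bottom-up roll-up, the sketch below aggregates a few hypothetical KCI measurements into KPIs and then into a single KRI exposure score; the metric names, weights, and aggregation rules are assumptions for illustration, not a method prescribed by ISO 27004 or FAIR.

```python
# Hypothetical sketch of the bottom-up approach: individual KCI measurements
# roll up into KPIs, and several KPIs roll up into a KRI.
# Metric names, weights and aggregation rules are illustrative assumptions only.

from statistics import mean

# KCIs: how well individual controls perform (0.0 = failing, 1.0 = fully effective)
kcis = {
    "patching_coverage_emea": 0.82,    # share of EMEA hosts with current patches
    "patching_timeliness_emea": 0.65,  # share of patches applied within SLA
    "endpoint_av_coverage": 0.95,      # share of endpoints with up-to-date AV
}

# KPIs: programme-level performance, each aggregating a subset of KCIs
kpis = {
    "vulnerability_management": mean([kcis["patching_coverage_emea"],
                                      kcis["patching_timeliness_emea"]]),
    "endpoint_protection": kcis["endpoint_av_coverage"],
}

# KRI: a high-level view of how well a risk is being managed, aggregating KPIs.
# Here expressed as residual exposure (1 - performance), weighted per KPI.
weights = {"vulnerability_management": 0.7, "endpoint_protection": 0.3}
kri_sensitive_data_leakage = sum((1 - kpis[k]) * w for k, w in weights.items())

print(f"KRI 'leakage of sensitive information' exposure score: {kri_sensitive_data_leakage:.2f}")
```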
An example of a KCI could be the "mean time to implement Microsoft security patches across the EMEA business unit", with the related KPI being the "percentage (%) of critical information assets with all security patches up to date", and the related KRI being "leakage of sensitive information". One should note that the KRI will be placed within a loss exceedance curve (if the organisation opts to quantify risks) or within a probability/impact heat map (if the organisation opts for a qualitative approach instead) to gauge whether the KRI falls within the organisational risk tolerance boundaries. Some other examples of KRIs could be "disruption of business operations", "loss of trust and reputation", or "theft/loss of sensitive assets (e.g., information, human resources, funds)". KRIs may be seen as vague or highly abstract; this is intentional, as KRIs group various risks and the associated KCIs while communicating the risk management initiatives to the higher levels of the organisation.
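For organisations that opt for the quantitative route, the sketch below shows one simple way of estimating a loss exceedance probability for a KRI via Monte Carlo simulation and comparing it against a tolerance boundary; the frequency, severity, and tolerance figures are hypothetical assumptions rather than a definitive FAIR implementation.

```python
# Hypothetical sketch: placing a KRI on a loss exceedance curve using a simple
# Monte Carlo simulation, then checking it against a risk tolerance boundary.
# Frequency/severity parameters and the tolerance are illustrative assumptions.

import random

random.seed(42)
SIMULATIONS = 10_000
ANNUAL_EVENT_PROBABILITY = 0.3     # assumed chance of at least one loss event per year
LOSS_MU, LOSS_SIGMA = 11.0, 1.0    # assumed lognormal parameters for loss severity (EUR)

annual_losses = []
for _ in range(SIMULATIONS):
    event_occurs = random.random() < ANNUAL_EVENT_PROBABILITY
    annual_losses.append(random.lognormvariate(LOSS_MU, LOSS_SIGMA) if event_occurs else 0.0)

def probability_of_exceeding(threshold_eur: float) -> float:
    """Fraction of simulated years in which the annual loss exceeds the threshold."""
    return sum(1 for loss in annual_losses if loss > threshold_eur) / SIMULATIONS

# Risk tolerance (assumed): no more than a 10% chance of losing over EUR 200,000 in a year
TOLERANCE_THRESHOLD_EUR = 200_000
TOLERANCE_PROBABILITY = 0.10

exceedance = probability_of_exceeding(TOLERANCE_THRESHOLD_EUR)
within_tolerance = exceedance <= TOLERANCE_PROBABILITY
print(f"P(annual loss > EUR {TOLERANCE_THRESHOLD_EUR:,}) = {exceedance:.2%}; within tolerance: {within_tolerance}")
```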