EU - Country Commercial Guide
Cybersecurity

Information on the Security of Network and Information Systems (NIS) Directive

Last published date: 2021-10-19

Network and Information Systems (NIS) Directive

The Directive on security of network and information systems (NIS Directive), applicable since 2016, sets baseline requirements to ensure better protection of critical infrastructure in the European Union.  The NIS Directive sets basic principles for Member States for common minimum capacity building and strategic cooperation.  It also directs operators of essential services and digital service providers to apply basic common security requirements.  Obligations for both groups of operators include taking technical and organizational measures for risk management; preventing and minimizing the impact of security incidents; and notifying, without undue delay, incidents having a significant impact on the continuity of the essential services they provide.  Member States have implemented the directive in different ways, particularly with respect to operators of essential services, which led to a proposed legislative revision of the NIS Directive (the NIS 2 Directive) in December 2020.  If adopted into law, the NIS 2 Directive would bring more entities and sectors into scope, strengthen security requirements, address the security of supply chains, streamline reporting obligations, and introduce more stringent supervisory measures and stricter enforcement requirements.

Cybersecurity Act

The March 2019 Cybersecurity Act set up a framework for developing voluntary certification schemes for information and communications technology (ICT) products, processes, and services.  The European Commission has not yet proposed the specific areas that would benefit from certification schemes, and the European Union Agency for Cybersecurity has created ad hoc stakeholder groups to help it create certification schemes; in accordance with the Act, these groups include industry participants.

Draft Regulation on Artificial Intelligence

On April 21, 2021, the European Commission published its draft regulation on artificial intelligence, known as the Artificial Intelligence (AI) Act, the first proposed regulation on artificial intelligence in the world.  The AI Act would promote the development of such technologies; harness their potential benefits; and protect individuals against potential threats to their health, safety, and fundamental rights that artificial intelligence systems might pose.  The AI Act is part of a package of Commission initiatives aimed at positioning the European Union as a world leader in trustworthy and ethical technological innovation.  The AI Act would take a risk-based approach to regulating artificial intelligence and would apply to any artificial intelligence system that affects the European Union’s single market, irrespective of the provider’s location; its scope covers artificial intelligence used in online platforms, financial services, vehicles, machinery, industrial tools, toys, and medical devices.  A pilot program for regulating artificial intelligence will be tested in Spain in 2022, a year before the regulation is expected to enter into force in the European Union.

European Strategy for Data

On November 25, 2020, the European Commission introduced the Data Governance Act, followed on December 15, 2020 by the Digital Services Act and the Digital Markets Act, under the rubric of the European Strategy for Data, the Commission’s vision for a single market that supports global competitiveness and data sovereignty, among other goals.

Data Governance Act

The Data Governance Act would establish a legal framework for the reuse of public sector data covered by intellectual property rights, confidential non-personal data, and personal data.  While the General Data Protection Regulation regulates international transfers of personal data, the Data Governance Act would regulate international transfers of non-personal data by a user who was granted access to such data by the public sector.  In addition, the Act would establish a supervisory framework for data sharing service providers; facilitate the collection and processing of data made available by individuals or private entities for altruistic purposes, including through a voluntary registration system for “data altruism organizations”; and create a European Data Innovation Board to enable the sharing of best practices by Member States and to advise the Commission on cross-sector interoperability standards.

Digital Markets Act

The Digital Markets Act would regulate the market power of large online platforms to achieve fairer and more open digital markets within the European Union.  The Act would apply to certain “gatekeeper” firms – large online platforms that shape how other companies interact with users online through digital services such as search, social networking, cloud computing, and advertising – with more than 45 million active users, services in at least three Member States, and either 6.5 billion euros in annual turnover over the last three years or 65 billion euros in market value in the last year.  The Digital Markets Act would prohibit these gatekeepers from engaging in self-preferencing, from restricting access to services connected to their platforms (such as online marketplaces like app stores), and from preventing users from removing pre-installed software or apps.  Under the proposed act, EU regulators could levy fines of up to ten percent of a firm’s global annual turnover and, in limited circumstances, break up certain parts of its corporate operations.

Digital Services Act

The Digital Services Act aims to harmonize mechanisms throughout the European Union for the removal of illegal content and to impose due diligence obligations on certain online service providers, including internet access providers, domain name registrars, cloud and webhosting services, and online platforms.  The Digital Services Act would also regulate “very large online platforms,” defined as online platforms that reach at least ten percent of the population of the European Union.  The Act would require those platforms to conduct annual risk assessments of the dissemination of illegal content through their services and its effects on fundamental rights, public health, and public security; to provide greater transparency about their operations, including the algorithms used, advertising, and content; to report to authorities information giving rise to suspicion of a serious criminal offense; and to suspend users that frequently provide illegal content.