EU Archives | TrustArc https://trustarc.com/topic-resource/eu/ Mon, 12 Aug 2024 15:10:40 +0000

Your Guide for Smooth Cross-Border Data Transfers and Global CBPRs https://trustarc.com/resource/webinar-your-guide-for-smooth-cross-border-data-transfers-and-global-cbprs/ Thu, 30 May 2024 17:45:17 +0000
Webinar

Your Guide for Smooth Cross-Border Data Transfers and Global CBPRs

  • On-Demand

Global data transfers can be tricky due to different regulations and individual protections in each country. Sharing data with vendors has become such a normal part of business operations that some may not even realize they’re conducting a cross-border data transfer!

The Global CBPR Forum launched the new Global Cross-Border Privacy Rules framework in May 2024 to ensure that privacy compliance and regulatory differences across participating jurisdictions do not block a business’s ability to deliver its products and services worldwide.

To benefit consumers and businesses, Global CBPRs promote trust and accountability while moving toward a future where consumer privacy is honored and data can be transferred responsibly across borders.

This webinar will review:

  • What a data transfer is and the risks it carries
  • How to manage and mitigate your data transfer risks
  • How different data transfer mechanisms, like the EU-US DPF and Global CBPRs, benefit your business globally
  • Which cross-border data transfer regulations and guidelines apply around the world

Webinar Speakers

Val Ilchenko General Counsel & Chief Privacy Officer, TrustArc
Noël Luke Chief Assurance Officer, TrustArc
Beatrice Botti VP, Chief Privacy Officer, DoubleVerify
Guadalupe Sampedro Partner, Cooley
 
A week in Privacy – EDPB/Meta plus TikTok, Nebraska, and more https://trustarc.com/resource/spp-s5-ep12/ Thu, 25 Apr 2024 15:02:16 +0000

UK privacy law update: Proposed changes to UK GDPR / Data Protection Act https://trustarc.com/resource/uk-privacy-law-update-uk-gdpr/ Tue, 16 Apr 2024 12:01:00 +0000
Article

UK privacy law update: Proposed changes to UK GDPR / Data Protection Act

Four years after Brexit, the UK’s data protection laws are again under review by the UK Government – mostly to ensure that data rights in the country are governed under UK law, rather than by deference to EU law.

Organizations operating in multiple jurisdictions must comply with all applicable data protection laws for each territory. TrustArc’s Regulatory Guidance helps organizations stay abreast of ever-evolving privacy laws across multiple jurisdictions.

There is some urgency among UK lawmakers to drive these changes since the Retained EU Law (Revocation and Reform) Act 2023 became law on January 1, 2024, removing some post-Brexit obligations under European Union law as applied to the UK GDPR and UK Data Protection Act.

The UK Department for Science, Innovation and Technology (DSIT) highlighted this change in its draft Data Protection (Fundamental Rights and Freedoms) (Amendment) Regulations 2023, published on September 11, 2023.

In its explanatory note accompanying the draft, DSIT stated the regulations will:

  • “revoke and replace Article 4(28) of the UK General Data Protection Regulation and section 205(1A) of the Data Protection Act 2018 which relate to the meaning of references to fundamental rights and fundamental freedoms in data protection legislation”; and
  • “insert new definitions of fundamental rights and fundamental freedoms into the UK GDPR and DPA 2018 so that after the end of 2023 … [these references] … will be references to rights under the European Convention on Human Rights within the meaning of the Human Rights Act 1998.”

UK Data Protection Laws in the 21st Century

The UK Government has enforced data privacy and protection under three main sets of laws this century:

  1. Privacy and Electronic Communications Regulations 2003, which came into force on December 11, 2003, and focus on data confidentiality and the consequences of data breaches.
  2. UK General Data Protection Regulation (UK GDPR), which mirrors the EU General Data Protection Regulation (EU GDPR) adopted on April 27, 2016, and became applicable on January 1, 2021. The UK GDPR mostly reflects the fundamental personal data rights covered in the EU GDPR, but narrows their application to UK-based organizations and organizations outside the UK that process UK citizens’ personal data.
  3. UK Data Protection Act 2018 (DPA), which replaced the UK’s earlier Data Protection Acts of 1984 and 1998 and augments UK citizens’ privacy rights under the UK GDPR with stronger rules around special categories of personal information such as ethnic background, political opinions, and health.

Amendments to data protection laws in the UK are being reviewed by Parliament under a proposed bill titled Data Protection and Digital Information Bill (No.2).

Bill to Amend UK GDPR Intends to ‘Cut Paperwork’

The UK Parliament’s Data Protection and Digital Information Bill (No.2) is the second recent attempt in the UK Parliament to bring data rights under UK law, rather than EU law.

The original version of the Data Protection and Digital Information Bill was introduced in the House of Commons on July 18, 2022, and stalled for several months.

That proposed Bill was then withdrawn so the updated version could be introduced on March 8, 2023.

Later that day, the UK Information Commissioner’s Office issued a press release about the Data Protection and Digital Information Bill (No.2) headlined “British Businesses to Save Billions Under New UK Version of GDPR”, with the subheading promising “New data laws to cut down pointless paperwork for businesses and reduce annoying cookie pop-ups”.

While there is a proposal to reduce some requirements for cookie consent pop-ups, the Bill also proposes tougher penalties for ‘nuisance’ calls and texts of up to £17.5 million or 4% of global turnover, whichever is greater.

UK Information Commissioner John Edwards said he welcomed the reintroduction of the Bill and supported its ambition “to enable organizations to grow and innovate whilst maintaining high standards of data protection rights”, adding “data protection law needs to give people confidence to share their information to use the products and services that power our economy and society”.

On the latter aim – giving people the confidence to share their information – the Bill contains a commitment to establish a digital verification service framework so individuals can more easily and safely prove their identity digitally, speeding up their interactions with organizations.

Further amendments to the Data Protection and Digital Information Bill (No.2) were proposed in November and December 2023. Edwards released new commentary on the Bill on December 19, 2023.

He continues to seek changes to the text such as:

  • improving several definitions, particularly for activities considered ‘high-risk processing’;
  • greater independence for the ICO (“namely removing the Secretary of State approval over statutory ICO codes”);
  • updating rules about the ICO’s activities to allow the Office to serve information, enforcement and penalty notices electronically;
  • extending the reporting period for personal data breaches under Privacy and Electronic Communications Regulations from 24 to 72 hours (aligned with UK GDPR);
  • tightening rules around processing data when used for government audits or investigations of individuals, especially related to tax and social security – Edwards notes stronger safeguards are needed to protect individuals against arbitrary interference with their rights; and
  • clarifying rules for businesses responding to subject access requests, so that ‘vexatious’ requests are reduced and organizations need only run ‘reasonable and proportionate searches’.

Overview of Key Proposed Amendments to UK GDPR

Media releases from the UK Information Commissioner’s Office state that the Data Protection and Digital Information Bill’s proposed amendments to UK data protection laws will “introduce a simple, clear and business-friendly framework that will not be difficult or costly to implement”.

The intents and claims for these amendments are summarized below.

1. Simpler UK GDPR Compliance

Proponents of the amendments claim they will ‘cut pointless paperwork’ from current UK data protection laws by giving organizations more flexibility in how they meet compliance requirements. The changes especially target reporting requirements under UK GDPR, which the Information Commissioner’s Office noted were based on the existing EU GDPR’s “highly prescriptive, top-down approach to data protection regulation which can limit organizations’ flexibility to manage risks and places disproportionate burdens on small businesses.”

However, there is a caveat: organizations will need to appoint a member of senior management as ‘Senior Person Responsible’, a role which effectively replaces the previously required role of Data Protection Officer.

Claimed benefits: organizations will only need to maintain records of processing activities for personal data if those processing activities “pose high risks to individuals’ rights and freedoms”.

2. Continued Compliance for International Data Transfers

The ICO states the reforms are also intended to ensure the UK maintains data adequacy with the EU and build international confidence in the UK’s data protection standards to support “the free flow of personal data between like-minded countries”.

Claimed benefits: businesses operating in the UK that are already compliant with existing UK data laws will be allowed to continue using their existing international data transfer mechanisms to share personal data overseas. The ICO says “This will ensure British businesses do not need to pay more costs or complete new checks to show they’re compliant with the updated rules”.

[See section below: UK-US Data Bridge: International Data Transfer Adequacy]

3. Permitted Processing of Personal Data Without Consent

Organizations have always had to weigh their interests in collecting personal data against individuals’ privacy rights; the amendments provide some leeway for the collection of personal data if the insights from that data are in the public interest.

Claimed benefits: organizations may collect personal data without needing consent where they can prove collection and sharing of that data is necessary to “prevent crime, safeguard national security or protect vulnerable individuals”.

4. Broader Definition of Scientific Research

The ICO states “current data laws are unclear on how scientists can process personal data for research purposes, which holds them back from completing vital research that can improve the lives of people across the country”. The new Bill proposes an updated definition giving commercial organizations freedoms similar to those academics already have to collect, use, and reuse data for scientific research.

Claimed benefits: the Bill proposes reducing paperwork and legal costs for researchers, which the ICO claims will “encourage more scientific research in the commercial sector”. The new Bill contains a non-exhaustive definition of scientific research, which covers any processing that “could reasonably be described as scientific and could include activities such as innovative research into technological development”.

5. Safeguards Applied to AI

The ICO notes the current data protection laws in the UK are “complex and lack clarity for solely automated decision-making and profiling which makes it difficult for organizations to responsibly use these types of technologies”. The new Bill clarifies rules for businesses using automated decision-making. It includes requirements for businesses to make people aware they may be subject to automated decisions, explain the reason/s for processing, and notify them of their rights, including rights to “challenge and seek human review when those decisions may be inaccurate or harmful”.

Claimed benefits: the ICO says these updated rules will “Increase public and business confidence in AI technologies”, while giving businesses, AI developers, and individuals “greater clarity about when these important safeguards for solely automated decision-making must apply”.

Amendments Focused on National Security

A UK Government press release published on November 23, 2023, claimed a handful of proposed changes to the Bill “will safeguard the public, prevent fraud, and unlock post-Brexit opportunities”.

The main changes sought by the Government are:

  • Access to targeted individuals’ financial activities data – giving government agencies new powers to require data from third parties (such as banks and other financial institutions), which could be used to help identify fraud; and
  • Retention of targeted individuals’ biometrics data – allowing national security agencies (such as Counter Terrorism Police) to keep for longer the biometric data of individuals identified by an agency as ‘posing a potential threat to national security’. This update brings retention of biometric data such as fingerprints in line with INTERPOL’s data retention rules.

Although the UK GDPR isn’t being revoked by the Retained EU Law Act, it will be more tightly interpreted through UK case law, rather than EU case law.

In the EU, while each member state can pass legislation permitting some exemptions to personal data rights in cases of national security, the EU GDPR contains stronger safeguards for individual rights versus government organizations’ interests.

The proposed changes to UK data privacy and protection law generally keep many of the UK GDPR’s data protection principles that apply to all organizations processing personal data in the UK.

When the UK GDPR came into effect it carved out greater national security exemptions from some data protection rules around the collection, processing, and use of personal information than those allowed under the EU GDPR.

These carveouts for intelligence services, immigration control, and national security effectively limit personal data rights for citizens when government organizations choose to apply them.

UK-US Data Bridge: International Data Transfer Adequacy

The UK extension to the EU-US Data Privacy Framework came into force on October 12, 2023, allowing personal data of UK citizens to be transferred more readily to certified organizations in the US. It replaces previous requirements for safeguards such as international data transfer agreements or contract clauses.

The UK-US Data Bridge was established on September 21, 2023, by the UK Secretary of State for Science, Innovation, and Technology, the Rt Hon Michelle Donelan MP. The Secretary of State also laid adequacy regulations in Parliament, supported by the US Attorney General’s decision on September 18, 2023, to designate the UK as a ‘qualifying state’.

To use the UK-US Data Bridge, organizations must demonstrate compliance with UK GDPR rules on the protection of UK citizens’ personal data and be certified on the Data Privacy Framework (DPF) list.

International Data Transfers

Map your data and demonstrate compliance with applicable laws in each territory where you operate.

Learn more

Data Privacy Framework Verification

Get verified for EU-U.S. Data Privacy Framework and the UK Extension to the EU-U.S. DPF.

Start building trust

Demonstrating DPF verification is critical to your global compliance and data transfer mechanisms, and includes:

  • Privacy-compliant data flows
  • Operationalizing data mechanisms for accountability, such as strong privacy notices
  • Verified seal to show the organization has met compliance requirements and is committed to protecting personal data and privacy.

To participate in the UK Extension to the EU-U.S. DPF an organization must also participate in the EU-U.S. DPF, whereas it is possible to participate exclusively in either the EU-U.S. DPF or the Swiss-U.S. DPF.

Key Topics

Get the latest resources sent to your inbox

Subscribe
Everything you need to know on the EU AI Act https://trustarc.com/resource/everything-eu-ai-act/ Wed, 10 Apr 2024 13:21:03 +0000
Articles

Artificial intelligence: All you need to know about the new European Union AI Act

Passed in March 2024, the European Union’s Artificial Intelligence (AI) Act aims to ensure consumer rights are protected and AI applications are ethical, without placing undue burden on businesses.

Artificial intelligence is part of our daily lives, transforming industries from healthcare to entertainment, transport to education. Streaming services can use algorithms to suggest playlists and create personalized content; AI-powered digital assistants set reminders and help manage daily tasks; online shopping systems provide recommendations based on digital history; and AI helps identify patterns of fraudulent activity in banking transactions, among many other applications.

Artificial intelligence can help personalize, target, recognize and predict information. In many ways, it’s a huge asset to businesses and society in general and helps us solve many problems. But as AI becomes smarter and smarter, it also brings challenges, particularly when it comes to privacy, fairness, ethics, accountability, and safety.

While most AI systems will pose low to no risk, certain AI systems create risks that need to be addressed to avoid undesirable outcomes.

Setting the AI standard

The European Union has long been a trendsetter on privacy laws, establishing the General Data Protection Regulation (GDPR) – the toughest privacy and security law in the world – which took effect in 2018. Several countries and individual U.S. states have followed suit since.

Now, in the face of booming AI applications, the European Union has established the AI Act, passed in the European Parliament on 13 March 2024, becoming the first legislation of its kind in the world.

“Europe is NOW a global standard-setter in AI,” Thierry Breton, the European commissioner for internal market, wrote on X (formerly known as Twitter).

What is the AI Act?

The AI Act is the first-ever comprehensive legal framework on artificial intelligence, addressing the risks of AI and positioning Europe to play a leading role globally. It sets out strict requirements for both AI developers and deployers, and aims to reduce burdens on businesses while respecting fundamental rights, safety, and ethical principles.

Key principles of the AI Act include:

  1. Human-centric approach: The AI Act puts humans at the center of AI development and use. It emphasizes that AI systems should be designed to serve the best interests of people and society as a whole.
  2. Transparency: This is crucial for building trust in AI. The act requires that AI systems be transparent in their operations, meaning that users should be aware when they are interacting with an AI system, and they should understand how it works.
  3. Accountability: When something goes wrong with an AI system, there should be someone responsible. The AI Act introduces the concept of ‘provider accountability’, meaning that the individuals or organizations developing, deploying, or operating AI systems are held responsible for their actions.
  4. Safety and security: AI systems must be safe and secure for users and the broader public. The AI Act sets requirements for risk management, data quality, and cybersecurity to ensure that AI systems do not pose undue risks.
  5. Data governance: Data is the lifeblood of AI. The act establishes rules for the quality and governance of data used to train and operate AI systems, with a focus on protecting personal and sensitive information.

How does the AI Act work?

The AI Act divides tech into various categories of risk. The riskier the AI application, the more scrutiny it faces.

The levels of risk are:

  • Minimal risk: Think AI-enabled video games or filters, content recommendation systems, spam filters… It’s expected the vast majority of AI applications will fall into this category.
  • Limited risk: Risks associated with a lack of transparency in AI usage. For example, letting users know they are interacting with a machine when using chatbots, and labelling AI-generated content.
  • High risk: Tech used in critical infrastructure, essential services, educational training, law enforcement, voter behavior, administration of justice, migration and border control, among others. AI systems will always be considered high-risk if they perform profiling of humans.
  • Unacceptable risk: AI systems considered a threat to safety, ranging from social scoring by governments and emotion recognition to untargeted ‘scraping’ of the internet for facial images and voice-assisted toys that encourage dangerous behavior. These will be banned.
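The tiered structure above can be sketched as a simple lookup. This is only an illustrative paraphrase of the four tiers described here, not a legal classification tool; the tier names and obligation summaries are this sketch’s own wording, not text from the Act:

```typescript
// The AI Act's four risk tiers, as summarized in the article.
type RiskTier = "minimal" | "limited" | "high" | "unacceptable";

// Paraphrased regulatory treatment per tier: the riskier the
// application, the more scrutiny it faces.
function obligationsFor(tier: RiskTier): string {
  switch (tier) {
    case "minimal":
      return "no new obligations";
    case "limited":
      return "transparency duties, e.g. disclosing chatbots and labelling AI-generated content";
    case "high":
      return "strict requirements: risk management, conformity assessment, oversight";
    case "unacceptable":
      return "banned";
  }
}
```

Profiling of humans, for instance, always lands in the high-risk tier under the Act, regardless of the application domain.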

How do I know whether an AI system is high-risk?

The AI Act clearly defines what it considers to be ‘high risk’, and sets out a solid methodology that helps identify these systems within the legal framework. Given that this is a constantly and fast-evolving industry, the European Commission has stated that it will ensure what is on this list is updated regularly.

Who does the AI Act apply to?

The AI Act covers a broad spectrum of AI systems, ranging from simple chatbots to sophisticated autonomous vehicles. This legal framework extends its reach to both the public and private sectors within and beyond the EU borders, provided that the AI system is introduced into the Union market or its usage impacts individuals within the EU.

It pertains to both providers, such as developers of screening tools, and deployers of high-risk AI systems, like a bank acquiring said screening tool. Additionally, importers of AI systems must ensure that the foreign provider has completed the necessary conformity assessment process, bears a European Conformity (CE) marking, and is accompanied by the requisite documentation and usage instructions.

Providers of free and open-source models are mostly exempt from these requirements. Furthermore, the obligations do not cover research, development, and prototyping activities conducted before market release. Additionally, the regulation excludes AI systems intended solely for military, defense, or national security purposes, regardless of the entity carrying out these activities.

What does compliance with the AI Act involve?

For organizations developing or using AI systems within the EU, compliance with the AI Act means adhering to its requirements and following specific procedures.

Some aspects of compliance include:

  • Documentation and transparency: Organizations must keep detailed documentation on their AI systems, including how they work, their purpose, and potential risks. They also need to ensure transparency in their communication with users about AI involvement.
  • Risk assessment and mitigation: High-risk AI systems require thorough risk assessments to identify potential harms. Organizations must implement measures to mitigate these risks and ensure the safety and rights of individuals.
  • Data protection and privacy: Compliance with existing data protection regulations, such as the GDPR, is essential. Organizations must handle personal and sensitive data ethically and securely.
  • Testing and quality assurance: Before deploying AI systems, organizations need to conduct rigorous testing to ensure they operate as intended and meet safety standards. Ongoing monitoring and updates are also necessary.

Does the European AI Act impact the rest of the world?

The main goal of the new EU AI Act is not just to promote trustworthy AI within Europe, but also to spread this standard globally, ensuring that all AI systems uphold fundamental rights, safety, and ethical practices.

In China, companies are required to obtain proper approvals before offering AI services.

On the other hand, the United States is still developing its approach to regulating AI. Although Congress is considering new laws, some cities and states in America have already passed their own regulations. These laws restrict the use of AI in areas such as police investigations and employment practices.

How will the AI Act be enforced?

Implementing the AI Act comes with its challenges, including the need for resources, expertise, and ongoing monitoring. Additionally, as AI technologies evolve, the regulations will need to adapt to address emerging risks and opportunities.

For now, European Member States play a crucial role in making sure regulations are followed and enforced. To do this, each Member State needs to choose one or more national authorities to oversee how the rules are applied and put into action. These authorities will also be in charge of keeping an eye on the market to make sure everything is working as it should.

To make things smoother and have an official contact point for the public and others, each Member State will pick one national authority to supervise everything. This authority will also represent the country in the European Artificial Intelligence Board.

For extra knowledge and advice, there will be an advisory group made up of different kinds of people, like those from the industry, small businesses, civil society, and universities.

Additionally, the Commission will establish a new European AI Office within the Commission itself. This office will oversee AI models used for general purposes, working closely with the European Artificial Intelligence Board and supported by a group of independent experts with scientific knowledge.

How will the AI Act impact innovation?

While the AI Act introduces new responsibilities and regulations, it also aims to foster innovation and competitiveness within the EU. By providing a clear framework for ethical AI development, businesses can build trust with consumers and investors, leading to greater adoption of AI technologies.

When does the AI Act come into force?

The European Union’s AI Act was adopted by the European Parliament in March 2024 and is expected to enter into force at the end of the legislature in May 2024, after passing final checks and receiving endorsement from the European Council. Implementation of the AI Act will then be staggered from 2025 onward.

What are the implications of breaking the AI Act?

Non-compliance with the rules can lead to fines of up to 35 million euros or 7% of global turnover for the most serious infringements, down to 7.5 million euros or 1.5% of turnover, depending on the infringement and the size of the company.
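For the top tier of infringements, the fine cap works out as the greater of the fixed amount and the turnover percentage. A small arithmetic sketch of that rule, with the turnover figure chosen purely for illustration (it is not from the Act):

```typescript
// Fine cap for the most serious infringements: the greater of a fixed
// euro amount and a percentage of global annual turnover.
function fineCapEur(fixedEur: number, turnoverPct: number, turnoverEur: number): number {
  return Math.max(fixedEur, (turnoverPct / 100) * turnoverEur);
}

// Hypothetical company with EUR 1 billion global turnover, top-tier
// infringement: 7% of turnover (EUR 70M) exceeds the EUR 35M fixed
// amount, so the higher figure sets the cap.
const topTierCap = fineCapEur(35_000_000, 7, 1_000_000_000);
console.log(topTierCap); // 70000000
```

For a smaller company, the fixed amount can dominate instead: at the lowest tier, 1.5% of a EUR 100M turnover (EUR 1.5M) is below the EUR 7.5M figure.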

Essential Guide to GDPR

Practical steps to manage the EU General Data Protection Regulation.

Download now

Responsible AI Certification

Demonstrate your organization’s commitment to data protection and governance.

Get certified
Key Topics

Get the latest resources sent to your inbox

Subscribe
EU Google Consent Changes: Meet Requirements with TrustArc’s Google-certified Consent Manager Platform https://trustarc.com/resource/eu-google-certified-consent-manager-platform/ Tue, 27 Feb 2024 21:07:39 +0000
Articles

EU Google Consent Changes: Meet Requirements with TrustArc’s Google-certified Consent Manager Platform

Google is introducing significant changes to the way its advertising and analytics products operate across EEA and UK markets. Using a Google-certified Consent Management Platform (CMP) partner ensures best practices are followed and functionality is maintained.

Starting March 2024, Google’s “EU Consent Mode V2” is mandatory for certain Google products, ensuring users’ consent is collected before certain functionality in those products can be used.

What’s the history of Google Consent Mode V1 and V2?

The EU Google Consent Mode V1, first introduced in 2020, was optional. It was designed to improve compliance with data privacy laws for advertising purposes and included a revision of how Google tracks and optimizes data for programmatic advertising strategies.

The EU Google Consent Mode V2 is now required for tracking, and using a Google-certified Consent Management Platform (CMP) ensures that your consent experience follows best practices. Google tracking takes place only when consent has been given via the enabled Google Consent Mode consent manager experience. It is important to ensure that the configuration and implementation of your consent experience are accurate in your Google Tag Manager setup.

TrustArc’s knowledgeable and highly skilled Technical Account Management team can ensure that your TrustArc Google Consent Mode experience is correctly configured and functioning as intended for compliance and optimal advertising experience.

How does Google Consent Mode work?

Google Consent Mode can be deployed on a site in one of two ways: a Basic or an Advanced deployment. With a Basic deployment, Google tags are not fired until the user opts in. With an Advanced deployment, Google tags fire cookieless pings until consent is given. You can learn more in Google’s documentation.
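As an illustrative sketch of the consent signals behind a Basic deployment (not TrustArc’s or Google’s actual implementation), the standard `gtag` consent commands can be modeled in a few lines. The consent type names (`ad_storage`, `analytics_storage`, `ad_user_data`, `ad_personalization`) are the ones Consent Mode V2 expects; the `dataLayer` stub here stands in for Google’s real tag script:

```typescript
// Minimal stand-in for Google's gtag bootstrap: commands are queued on
// a dataLayer array for the (absent) Google tag script to consume.
const dataLayer: unknown[] = [];
function gtag(...args: unknown[]): void {
  dataLayer.push(args);
}

type ConsentState = "granted" | "denied";
interface ConsentParams {
  ad_storage: ConsentState;
  analytics_storage: ConsentState;
  ad_user_data: ConsentState;       // added in Consent Mode V2
  ad_personalization: ConsentState; // added in Consent Mode V2
}

// Basic deployment: default every consent type to "denied" before any
// Google tags fire, so nothing runs until the user opts in.
const defaults: ConsentParams = {
  ad_storage: "denied",
  analytics_storage: "denied",
  ad_user_data: "denied",
  ad_personalization: "denied",
};
gtag("consent", "default", defaults);

// Later, once the user accepts via the CMP banner, the CMP issues an
// update to "granted" and Google tags may fire.
const granted: ConsentParams = {
  ad_storage: "granted",
  analytics_storage: "granted",
  ad_user_data: "granted",
  ad_personalization: "granted",
};
gtag("consent", "update", granted);
```

In a real deployment the `default` call must run before any Google tags load, and the `update` call is issued by the CMP at the moment the user makes a choice.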

Who is impacted by the mandatory EU Google Consent Mode V2 requirement?

Organizations deploying cookies or trackers for behavioral or targeted ad marketing/remarketing in Europe should pay attention! This impacts organizations using Google tools: all Google Ad services (AdMob, AdSense, Ad Manager), Google Analytics, and Google Tag Manager.

What is the impact?

Organizations not using Google Consent Mode V2 will experience measurement loss affecting marketing campaigns, impacting all your advertising activities, campaign optimization, and conversion metrics.

Why the change?

Google has made an important change to its advertising tools, including Google Ads. The Consent Mode will become mandatory for all users starting from March 2024. Companies utilizing Google Ads will need to implement Google Consent Mode to avoid the blocking of personalized ads such as remarketing. In the future, Google plans to block conversion tracking as well for those who don’t comply.

What are the benefits of using a Google Consent Manager Platform (CMP) partner?

You can rest assured that you provide the best advertising experience while meeting all technical requirements with Google. Save time with codeless implementation, and know that your CMP partner is continuously upgrading integrations to Google’s latest standards.

Key Topics

Get the latest resources sent to your inbox

Subscribe

Learn more about how you can take advantage of TrustArc’s Cookie Consent Manager, a Google CMP partner, today!

Learn more
RAW privacy and GrumpyGDPR with Rie Aleksandra Walle https://trustarc.com/resource/spp-s5-ep4/ Wed, 21 Feb 2024 21:44:00 +0000

EU Cloud Code of Conduct FAQs https://trustarc.com/resource/eu-cloud-code-of-conduct-faqs/ Thu, 18 Jan 2024 20:52:00 +0000
FAQs

EU Cloud Code of Conduct FAQs

What is the scope of the EU Cloud Code of Conduct?

The EU Cloud Code of Conduct is a self-regulation instrument that makes it easier to demonstrate compliance with the EU GDPR. It translates the legal requirements of the Regulation into operational controls that organisations can implement. The Code covers all aspects of the GDPR, from individual rights to data security, and also includes a governance section that is designed to support the effective and transparent implementation, management, and evolution of the Code.

 
Essential Guide to GDPR https://trustarc.com/resource/essential-guide-gdpr/ Mon, 01 Jan 2024 18:33:00 +0000
eBooks

Essential Guide to the GDPR

Practical Steps to Manage the EU General Data Protection Regulation

Years after its implementation, enforcement of the General Data Protection Regulation (GDPR) is in full swing, and fines commonly reach into the millions and even billions. To avoid significant losses, small, medium, and large businesses need a plan for GDPR compliance, fast! The Essential Guide to the GDPR distills over 200 pages of GDPR legal text into practical implementation steps that minimize risk, ensure compliance, build trust, and protect your brand.
 
 

Key takeaways include:
  • A five-phase GDPR compliance roadmap for implementation
  • Clear steps for ongoing GDPR compliance
  • Messaging to get the compliance program investment your team needs

The GDPR Has Worldwide Application

If your business offers goods or services, has employees, physical buildings, or a website accessible by data subjects in the 27 EU Member States, it’s most likely subject to GDPR. Because the GDPR protects the personal data of individuals, which includes anyone physically residing in the EU, even if they are not EU citizens, its applicability is extremely broad. Don’t get caught off guard, get GDPR compliant.

“As of October 2022, Data Protection Authorities have issued over 1,300 fines totaling over $2 billion for GDPR non-compliance.”

– CMS Enforcement Tracker

 
Privacy Biz and Liz (Liz Denham) https://trustarc.com/resource/spp-s4-ep43/ Fri, 01 Dec 2023 21:37:00 +0000

All messed up and places to go: a week in privacy plus Brussels (with Amit Ghadia) https://trustarc.com/resource/spp-s4-ep42/ Wed, 22 Nov 2023 21:37:00 +0000