Risk Management Archives | TrustArc
https://trustarc.com/topic-resource/risk-management/

AI Readiness Assessment https://trustarc.com/resource/ai-readiness-assessment/ Tue, 20 Aug 2024 20:32:23 +0000
How to Build a Vendor Risk Management Program https://trustarc.com/resource/webinar-how-to-build-a-vendor-risk-management-program/ Mon, 22 Jul 2024 15:51:35 +0000
Webinar

How to Build a Vendor Risk Management Program

  • On-Demand

Developing a robust vendor risk management program is critical for safeguarding your organization against potential threats arising from third-party relationships. In an era where businesses increasingly rely on external vendors to deliver essential services, understanding and managing the associated risks have never been more important. This webinar will explore the essentials of creating a comprehensive framework to identify, assess, and mitigate risks linked to your vendors.

Our panel of experts will guide you through the indispensable steps to establish an effective vendor risk management strategy. They’ll address key questions such as: What are the primary risks associated with third-party vendors? How can you evaluate and monitor vendor performance to ensure compliance and security? What practices should be implemented to maintain ongoing risk assessments and resilience?

This webinar will review:

  • The critical components of a successful vendor risk management program
  • Practical steps to evaluate and manage vendor risks effectively
  • Strategies for continuous monitoring and performance assessment of third-party vendors
  • How to integrate vendor risk management into your overall risk strategy and business operations

Join us for an in-depth exploration of vendor risk management and learn how TrustArc can support your journey toward improved third-party risk oversight.

Webinar Speakers

Cathleen Doyel Deputy General Counsel, TrustArc
Whitney Schneider-White Partner, BakerHostetler
 
Testing Artificial Intelligence (AI) Systems https://trustarc.com/resource/testing-artificial-intelligence-ai-systems/ Wed, 05 Jun 2024 19:27:39 +0000 https://trustarc.com/?post_type=resource&p=4860
Templates

Testing Artificial Intelligence (AI) Systems

Data Protection and Responsible Generative AI Use: A Comprehensive Guide https://trustarc.com/resource/data-protection-responsible-generative-ai-use/ Tue, 05 Mar 2024 17:04:00 +0000 https://trustarc.com/?post_type=resource&p=3081
Articles

Data Protection and Responsible Generative AI Use: A Comprehensive Guide

Casey Kuktelionis

In 2023, artificial intelligence (AI) crashed into organizations like a tidal wave. By the year’s end, ChatGPT reached 100 million weekly active users, and Goldman Sachs strategists observed 36% of S&P companies discussing AI on conference calls. And now you can’t open an email without the mention of AI. From the front lines to the boardroom, AI discussions are happening everywhere.

While AI isn’t new (think Siri or Alexa), new tools and uses have recently accelerated. For example, AI is used heavily in creating superior customer experiences – 92% of businesses are driving growth with AI-driven personalization. Furthermore, the AI market is expected to grow by over 13x over the next decade.

Yet, despite the increasing value and potential of AI, consumers’ trust in organizations using AI is declining. The IAPP reports that 60% of consumers have already lost trust in organizations over their AI use.

Why is AI use causing a loss of trust in organizations?

Consumer concern stems from a lack of attention to responsible AI use. While AI is being touted by boards, not enough companies have established guidelines and training for its use.

Salesforce research demonstrates that despite 28% of workers using AI at work, 69% of workers reported they haven’t received or completed training to use generative AI safely. And 79% of workers say they don’t have clearly defined policies for using generative AI for work.

Workday’s latest global study agrees, with 4 in 5 employees saying their company has yet to share guidelines on responsible AI use.

Additionally, consumers are no strangers to the risks and downsides of AI use. Many have tested generative technologies and come away disappointed. Whether you've watched generative AI fail to properly render a hand or provide accurate information, you're likely familiar with some of its limitations.

In fact, workplace AI use is already making headlines. For example, Samsung banned the use of ChatGPT after employees accidentally leaked confidential company information. Or consider this headline: "Most employees using AI tools for work aren't telling their bosses."

Lastly, concerns and legal considerations surrounding the collection, use, and storage of personal data continue. The use of large language models, like ChatGPT, is already in question. The New York Times recently filed a copyright infringement lawsuit against OpenAI, and other prominent authors have also followed suit.

AI use and business relationships

And it’s not just about consumers. As businesses adopt AI, third-party vendors and partners question AI use and data practices during vendor screening and risk management. Understanding and addressing these concerns is vital to building trust in the age of AI.

Ultimately, the goal for businesses is to balance innovation and trust. AI delivers positive business outcomes and efficiency when harnessed and used responsibly.

Still, many organizations are wrestling with this challenge. TrustArc’s 2023 Global Privacy Benchmarks Survey revealed that “artificial intelligence implications in privacy” ranked as the #1 global concern.

Are organizations required to use AI responsibly?

Data protection laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) cover much of the world’s population. Comprehensive privacy laws aim to protect individuals’ privacy rights and regulate how organizations handle personal data. Thus some of these regulations already include AI use.

For example, the CCPA, as amended by the California Privacy Rights Act (CPRA), gives the California Privacy Protection Agency the authority to regulate automated decision-making technology (ADMT). And draft regulations are underway.

In Europe, Article 22 of the GDPR protects individuals from automated decision-making, including profiling. It prohibits subjecting individuals to decisions “based solely on automated processing”. This means that in certain instances, human intervention is required for decisions about individuals, not just technology. The UK GDPR has similar rules.

What’s more, lawmakers are trying to keep up with technological advances like AI. Privacy professionals must watch closely as various legislation is proposed and enacted. Some examples include:

  • EU AI Act (enforcement expected in 2025)
  • Canada’s Artificial Intelligence and Data Act (AIDA)

Bookmark the International Association of Privacy Professionals’ Global AI Law and Policy Tracker to stay up to date on global AI regulations. And review a summary of some key AI-focused regulations and governance frameworks around the world: AI Regulations: Prepare for More AI Rules on Privacy Rights, Data Protection, and Fairness.

The FTC is watching

In the United States, the FTC closely monitors AI companies and their use of AI. In early 2024, the FTC warned: “Model-as-a-service companies that fail to abide by their privacy commitments to their users and customers may be liable under the laws enforced by the FTC.”

Later, the FTC announced it launched inquiries into five companies regarding their recent AI investments and partnerships. And on February 13, 2024, it reminded AI (and other) companies that quietly changing your terms of service could be unfair or deceptive.

What is responsible generative AI use?

The glitz of generative AI has caused some to forget that it’s just a new tool. And even though it changes how people work, the basics of data protection haven’t changed. What data is being collected, stored, and used? How is it being used? Can you control it? Is there a service provider agreement?

The data protection foundations of yesterday are still relevant today when considering AI use.

Data protection foundations

  • Transparency and Consent: Be transparent about how the organization collects, uses, and shares personal data. Obtain explicit consent from individuals before processing their data.
  • Data Minimization: Collecting more data than necessary in the digital expanse is tempting. But it’s often best to adopt a “less is more” approach. Collect only the data that is necessary for a specific purpose and limit the retention period to minimize the risk of unauthorized access or misuse. Consequently, data minimization is a standard in most privacy regulations.
  • Data Security: Implement robust security measures to protect personal data from unauthorized access, disclosure, alteration, or destruction. This includes encryption, access controls, and regular security audits. It’s about building a fortress that safeguards privacy.
  • Accountability: Understand, be responsible for, and be able to demonstrate compliance with data protection and security principles.
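To illustrate, the minimization and security foundations above can be sketched in a few lines of Python. The field names, record shape, and salt handling here are hypothetical examples, not a prescribed implementation; real pseudonymization requires careful key and salt management.

```python
import hashlib

# Hypothetical allow-list: collect only the fields the stated purpose needs.
ALLOWED_FIELDS = {"user_id", "country", "signup_date"}

def minimize(record: dict) -> dict:
    """Drop every field not required for the stated purpose (data minimization)."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace the direct identifier with a salted hash so downstream analytics
    can proceed without exposing the raw ID (one basic security measure)."""
    out = dict(record)
    digest = hashlib.sha256((salt + str(record["user_id"])).encode()).hexdigest()
    out["user_id"] = digest[:16]
    return out

raw = {
    "user_id": "alice@example.com",
    "country": "DE",
    "favorite_color": "blue",   # not needed for the purpose, so it is dropped
    "signup_date": "2024-01-15",
}
safe = pseudonymize(minimize(raw), salt="rotate-me-quarterly")
print(sorted(safe))  # favorite_color is gone; user_id is no longer the raw email
```

The same idea extends to retention: the less raw data a record carries, the less there is to secure, audit, and eventually purge.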

Leading responsible generative AI use in your organization

There’s still much to learn about generative AI and privacy. As technology and regulations continue to evolve, so do privacy programs.

To start, encourage responsible AI use proactively by using a framework, developing employee guidelines, fostering a culture of privacy, and updating your third-party risk management process.

Adopt a privacy framework

Rather than getting lost in the alphabet soup of global privacy laws and regulations, a framework approach can operationalize your privacy program. Some frameworks and resources worth considering include:

  • The Nymity Privacy Management and Accountability Framework
  • TrustArc’s Nymity Research

As a baseline, a framework will recommend updating policies and notices to include AI use. For instance, your acceptable use of information resources policy, internal data privacy policy, and your data privacy notice (included at all points where personal data is collected).

Develop employee AI use guidelines

AI use in organizations looks like the Wild West right now. Employees are admittedly using unapproved AI tools at work. Now is the time to rein in the horses with some risk-based guidelines.

Based on your organization’s risk tolerance and the purpose of AI use in the workplace, develop employee guidelines for AI use. Include use cases, examples, and specific restrictions. What shouldn’t go into generative AI models?

At a minimum, most recommend that no personal data or sensitive organizational data is inputted into public AI tools. If employees use other generative AI tools that come with a service agreement, determine how those tools will be assessed, approved, and implemented.
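One lightweight way to enforce the "no personal data into public AI tools" baseline is to screen prompts before they leave the organization. The patterns below are illustrative assumptions, not a complete PII detector; production deployments typically rely on dedicated detection tooling rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- a real deployment needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings): block the prompt if any pattern matches."""
    findings = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    return (not findings, findings)

allowed, findings = screen_prompt(
    "Summarize the ticket from jane.doe@example.com about SSN 123-45-6789"
)
print(allowed, findings)  # blocked: an email address and a US SSN were detected
```

A screen like this pairs naturally with the guidelines themselves: the block message can link employees to the approved-tool list and the policy explaining why the prompt was stopped.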

Continue to connect with privacy professionals to discuss how they manage AI data governance in their organizations. Because this is an evolving industry there’s much to learn from each other.

Train employees and foster a culture of privacy

Once employee guidelines for responsible AI use are established, it’s time to train your employees. To help your employees understand the importance of responsible AI use, start by establishing a common language.

Keeping employees informed is the best defense against the limitations of generative AI. Because the landscape is continuously changing, plan to do frequent training as you update the guidelines and responsible AI use cases.

Fostering a culture of privacy in your organization reduces risk, builds trust, and even helps with privacy regulation compliance!

Update your third-party risk management processes and privacy risk assessments

If they haven’t already, your business partners and vendors will likely question how your organization is managing AI data governance. Likewise, you should update your third-party data privacy risk assessment processes to include AI governance.

What updates need to be made to assess external AI systems and vendors? How does this impact data flows and sharing with current and future partners and vendors? What defined roles and responsibilities of third parties have changed or need to be updated?

Conduct due diligence around the data privacy and security posture of all current and potential vendors and processors. Routinely reassess current vendors and partners with updated guidelines. To do so, leverage the Privacy Impact Assessments (PIAs) you already know. While traditional PIAs may not address AI challenges, they can be elevated to account for the specific characteristics and risks of AI.
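As one way to elevate a traditional PIA for AI, you might layer a small set of weighted AI-specific questions onto your existing vendor questionnaire. The questions, weights, and threshold below are purely illustrative assumptions; your own risk tolerance should drive the real values.

```python
# Hypothetical AI-specific questionnaire items and their risk weights.
AI_QUESTIONS = {
    "trains_on_customer_data": 3,    # does the vendor train models on your data?
    "no_human_review_option": 2,     # automated decisions without human intervention?
    "subprocessors_undisclosed": 2,  # unknown parties in the data flow?
    "no_data_deletion_sla": 1,       # no commitment to purge data on request?
}

def ai_risk_score(answers: dict) -> int:
    """Sum the weights of every 'yes' answer; a higher score means more follow-up."""
    return sum(weight for q, weight in AI_QUESTIONS.items() if answers.get(q, False))

vendor = {"trains_on_customer_data": True, "no_data_deletion_sla": True}
score = ai_risk_score(vendor)
print(score)  # 4 -- above a hypothetical threshold of 3, so escalate for review
```

Scores like this don't replace judgment; they simply flag which vendors merit the deeper, human-led assessment the elevated PIA describes.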

Also, consider how you will prove your responsible use of AI to your partners and vendors. For some AI adopters, the TRUSTe Responsible AI certification is the best way to demonstrate accountable AI use and transparent data practices.

Join the vanguard of responsible AI

Lead the charge in responsible AI adoption and data governance. Become a part of our community of AI adopters and position your organization as a trailblazer in privacy innovation and data protection.

NymityAI

Do you want to learn the law faster and easier?

Try NymityAI Beta, your personalized privacy legal navigator. Save time in your research process with expert answers in seconds. Obtain precise privacy answers with citations that pinpoint your topics while our fine-tuned AI search engine does the work. Work smarter with Nymity content, powered by more than 25 years of trusted privacy and legal expertise.

Keeping a [low] Profile: Looking at Profiling https://trustarc.com/resource/spp-s5-ep3/ Thu, 15 Feb 2024 19:05:23 +0000
Managing Online Tracking Technology Vendors: A Checklist for Compliance https://trustarc.com/resource/webinar-managing-online-tracking-technology-vendors-a-checklist-for-compliance/ Thu, 15 Feb 2024 15:48:03 +0000
Webinar

Managing Online Tracking Technology Vendors: A Checklist for Compliance

  • On Demand

Unlock the definitive guide to managing your online tracking technology vendors effectively. This webinar delves into a comprehensive and actionable set of best practices that every organization needs. From meticulous website scans to in-depth contract reviews, from precise consent categorization to harmonizing diverse frameworks, our checklist ensures you cover all the crucial touchpoints. Equip yourself with this essential framework and confidently navigate the complex landscape of online tracking compliance, using our step-by-step roadmap as your trusted reference.

Join our panel of experts in the webinar as they equip you with the knowledge and strategies for navigating vendor relationships under CPRA.

This webinar will review:

  • Insights into key US and EU laws affecting tracking technology practices
  • Best practices for managing tracker risk, including website scans, banner behavior, consent categorization, and tag manager alignment
  • Implementing internal processes for cross-collaboration
  • How contract requirements affect tracker categorization

Webinar Speakers

Andrew Scott Privacy Counsel, TrustArc
Ryan Ostendorf Product Manager, TrustArc
Taylor A. Bloom Partner, BakerHostetler
 
PII Data: Implications for your Business Goals https://trustarc.com/resource/pii-data-personally-identifiable-information/ Tue, 30 Jan 2024 22:13:00 +0000 https://trustarc.com/?post_type=resource&p=2074
Article

PII Data: Implications for your Business Goals

All organizations collect various types of data (information), including personally identifiable information (PII). PII data can be sensitive or non-sensitive, and more often than not it is exposed through employee mistakes or targeted in a data breach. In some situations, the breached data ends up exposed on the Dark Web.

As a consumer, you’ve likely received some type of alert that information like your email address or telephone number has been exposed in a data breach. This is often just the tip of the iceberg regarding the consequences of PII data getting into the wrong hands.

If regulators can track down the source of the breach, there are often penalties and financial consequences for businesses. Additionally, when PII data is exposed, consumers lose trust in the organization that didn’t properly protect that information from both internal mishandling and external bad actors.

What is Personally Identifiable Information (PII) Data?

As technology progresses, some argue that the definition of Personally Identifiable Information (PII) must progress as well.

PII data is any information about an individual that can be used to identify that individual, including information that can be combined with other personal or non-personal information to identify the individual.

The National Institute of Standards and Technology (NIST) defines PII as “information that can be used to distinguish or trace an individual’s identity – such as name, social security number, biometric data records – either alone or when combined with other personal or identifying information that is linked or linkable to a specific individual (e.g., date and place of birth, mother’s maiden name, etc.).”

PII data includes religion, geographical indicators, employment information, personal health information, and behavioral characteristics such as activities and schools attended. In some situations, IP addresses, passport or license numbers, and financial account numbers, combined with other data points, further enrich an individual’s “online” profile.

As more data types are introduced, more questions about how to define PII data arise. Are usernames or social media handles PII? Is information collected by cars and IoT devices treated as PII?

The answers to these questions have important business implications to consider. Misusing or mishandling PII data can be costly, both financially and reputationally, particularly when consumer trust is lost.

Personally Identifiable Information vs. Personal Data

While Personally Identifiable Information and Personal Data may seem similar, they’re not the same thing. The GDPR doesn’t use the term Personally Identifiable Information and instead uses the term Personal Data.

As defined in the GDPR, personal data is “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;”

The European Commission provides personal data examples such as:

  • Name and surname
  • Home address
  • Email address
  • Identification card numbers
  • Location data
  • Health data (prescriptions, mental health)
  • Financial data (bank accounts, credit cards)
  • Passports
  • IP address
  • A cookie ID
  • The advertising identifier of your phone
  • Data held by a hospital or doctor

While both PII and personal data include common attributes (names, email addresses, home addresses, passports, and license/identification card numbers), personal data explicitly covers a few categories PII leaves out (cookie IDs, the advertising identifier of your phone (device ID), and location data).

At a higher level, PII is used to distinguish an individual, and personal data includes any information related to the individual, whether it identifies them specifically or not.

What Qualifies as PII?

Specifically, this data is considered to be PII:

  • Name, maiden name, mother’s maiden name, alias
  • Passport number, Social Security number, driver’s license number, taxpayer identification number
  • Address (personal or business)
  • Email address
  • Telephone numbers
  • Vehicle registration number, vehicle title number, or Vehicle Identification Number
  • Financial Account Numbers, Credit Card Numbers
  • Personal Health Information (PHI), Patient Identification Number
  • Biometric Records – Personal characteristics, including a photographic image of faces or other distinguishing characteristics, x-rays, fingerprints, or other biometric image or template data (retina scan, voice signature, facial geometry)

Other information can also become PII when combined with publicly available information used to specifically “identify” an individual. This data is considered linked or linkable to one of the examples above.

For example, non-PII that can become PII under certain conditions:

  • Internet Protocol (IP) address or Media Access Control (MAC) address
  • Web cookies, trackers
  • Date of Birth
  • Place of Birth
  • Religion
  • Weight
  • Activities
  • Geographical Indicators
  • Employment or Educational Information, such as where someone works, worked in the past, or where they attended school
  • Financial Information

Sensitive PII is information that, when disclosed, would jeopardize one’s individual rights and thus result in some harm to the individual. This includes financial information (like credit card numbers), health information, criminal records, and the like. Depending on the jurisdiction, some PII may have greater sensitivity.

Under GDPR these data are classified as special category data (race, ethnicity, political opinions, religion, etc.) and warrant the highest level of security, integrity, and explicit consent to be “processed.”

It’s important to note that while all sensitive PII is PII, not all PII is considered sensitive. But no matter the type, safeguarding PII data is vital to maintaining privacy and trust.

PII in the Context of Cybersecurity

Cybercriminals use simple phishing, vishing, and smishing scams to gain access to one’s PII. Furthermore, cybercriminals know that PII data gets them one step closer to their ultimate goal: one’s sensitive personal information (SPI), which has significant value on the Dark Web.

Despite increased cybersecurity technology, cybercrime continues to mount as more data is shared due to the benefits of the Internet of Things. Moreover, the exponential growth and ubiquitous access to AI have increased cybercrime’s sophistication. This in turn has increased the risk of internal or external data breaches. Therefore, taking measures to secure one’s PII from the outset is critical to breaking this vicious cycle.

The Impact of PII Data on Identity Theft

Identity theft occurs when criminals use PII data to impersonate individuals, typically for financial gain. By accessing PII data, a criminal could open new credit card accounts, apply for loans, or even file fraudulent tax returns in your name.

One infamous example is the 2017 Equifax data breach, in which the personal information of 147 million people was exposed, leading to widespread identity theft. More recently, there have been several notable breaches:

  • In 2023, the genetics testing company 23andMe was hacked, exposing the genetic information and PII of 6.9 million people.
  • Earlier in 2023, Progress Software’s MOVEit Transfer enterprise file transfer tool was exploited, with over 2,000 organizations reportedly attacked and data thefts affecting 62 million people and counting.

Top Considerations for Protecting PII

Protecting PII data is more than just a best practice—it’s a necessity. Here are eight proactive steps you can take to emphasize PII protection:

  1. Establish a Data Privacy and Security Program: Build a Program that fosters collaboration between privacy compliance and infosec teams and ensures support from senior leadership.
  2. Data Minimization: Only collect the PII you need for the intended purpose, and when that purpose is fulfilled, permanently purge the data from your environment (including backup systems).
  3. Know Your Data and Risks: Understand what PII data you collect, where it’s stored, who has access, and how it’s used and shared.
  4. Limit Access: Only give access to PII data to those who need it to perform their job function.
  5. Keep Hardware Current: Keep all your devices, including smartphones, computers, and tablets, up to date with the latest software and security patches.
  6. Train Your Team: Ensure everyone in your organization understands their role in protecting PII data and provide specific job training for those “processing” PII.
  7. Stay Compliant and Vigilant: Follow relevant privacy laws and regulations, and keep your policies and procedures up to date; conduct ongoing system penetration testing to ensure data security.
  8. Prepare for Data Incidents: Have a plan for dealing with data incidents and breaches, including notification procedures; consider performing breach simulation exercises annually to remain vigilant and ready to act in extreme circumstances.
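To make step 2 above concrete, here is a minimal sketch of a retention-schedule check. The record categories and retention periods are hypothetical examples only, and a real purge would also have to reach backup systems, as the step notes.

```python
from datetime import date, timedelta

# Hypothetical retention schedule: days to keep each record category.
RETENTION_DAYS = {"marketing_lead": 365, "support_ticket": 730, "job_application": 180}

def is_expired(category: str, collected_on: date, today: date) -> bool:
    """A record past its retention window should be purged, not kept 'just in case'."""
    return today - collected_on > timedelta(days=RETENTION_DAYS[category])

records = [
    {"id": 1, "category": "marketing_lead", "collected_on": date(2022, 1, 10)},
    {"id": 2, "category": "support_ticket", "collected_on": date(2024, 5, 1)},
]
today = date(2024, 8, 20)
to_purge = [r["id"] for r in records if is_expired(r["category"], r["collected_on"], today)]
print(to_purge)  # record 1 is well past its 365-day window
```

Running a check like this on a schedule, and logging what was purged and why, also supports step 1's collaboration between privacy and infosec teams and the accountability principle more broadly.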

Get Support to Protect Your Business PII Data

Protecting PII data is not just about compliance—it’s about safeguarding trust, privacy, and your reputation. As privacy professionals, it’s our responsibility to ensure that PII data is treated with the respect it deserves. TrustArc is a partner in this journey, offering expert guidance and cutting-edge solutions in PII data protection.

How to Mitigate Third-Party Vendor Risk for Your Privacy Program https://trustarc.com/resource/vendor-risk-management-guide/ Mon, 15 Jan 2024 19:21:00 +0000 https://trustarc.com/?post_type=resource&p=3362
eBooks

How to Mitigate Third-Party Vendor Risk for Your Privacy Program

Managing third-party vendors to ensure compliance with regulatory requirements can seem frustrating and unmanageable. With laws around the world (CCPA, GDPR, and PIPL, to name a few) cracking down on how data is managed between your organization and third-party vendors, having a vendor privacy program is essential. To avoid non-compliance and punitive measures, it is important to properly track and monitor the flow of data.

Key takeaways include:
  • The risks third-party vendors pose for your organization under the different global regulations

  • What elements a vendor risk program should have to efficiently mitigate unnecessary risk

  • Tips and best practices to implement within your privacy program for best results

 
So Many States, So Many Privacy Laws https://trustarc.com/resource/so-many-states-so-many-privacy-laws/ Wed, 10 Jan 2024 19:43:00 +0000 https://trustarc.com/?post_type=resource&p=3521
Whitepaper

So Many States, So Many Privacy Laws

How to Keep Up with US State Privacy Law Updates

The rapidly changing landscape of customer data and privacy laws has challenged marketers and business legal teams for years. Just when they think they’re all caught up with new regulations, another one is thrown into the mix. With so many bills moving in and out of legislative sessions, marketing and legal teams need a way to keep track of consumer data and privacy acts to stay in compliance.

Key takeaways include:
  • Comparisons between proposed, rejected, and approved U.S. state privacy laws

  • Recent developments in U.S. state privacy laws

  • Best practices and tips on how your business can keep up with the rapid changes in privacy laws

 
Essential Guide to GDPR https://trustarc.com/resource/essential-guide-gdpr/ Mon, 01 Jan 2024 18:33:00 +0000 https://trustarc.com/?post_type=resource&p=3286
eBooks

Essential Guide to the GDPR

Practical Steps to Manage the EU General Data Protection Regulation

Years after its implementation, enforcement of the General Data Protection Regulation (GDPR) is in full swing, and fines commonly reach into the millions and even billions. To avoid significant losses, small, medium, and large businesses need a plan for GDPR compliance, fast! Using the Essential Guide to the GDPR, you can decipher over 200 pages of GDPR legal text into practical implementation steps that minimize risk, ensure compliance, build trust, and protect your brand.
 
 

Key takeaways include:
  • A five phase GDPR compliance roadmap for implementation

  • Comprehensible steps for ongoing GDPR Compliance

  • Messaging to get the compliance program investment your team needs

The GDPR Has Worldwide Application

If your business offers goods or services, has employees, physical buildings, or a website accessible by data subjects in the 27 EU Member States, it’s most likely subject to the GDPR. Because the GDPR protects the personal data of anyone physically in the EU, even if they are not EU citizens, its applicability is extremely broad. Don’t get caught off guard. Get GDPR compliant.

“As of October 2022, Data Protection Authorities have issued over 1,300 fines totaling over $2 billion for GDPR non-compliance.”

– CMS Enforcement Tracker

 