Back in 2008, Illinois became the first state to pass legislation specifically protecting individuals' biometric data. Following years of legal challenges, some of the major questions about the law are about to be resolved (hopefully). Two major legal challenges, one now at the Illinois Supreme Court and another before the Court of Appeals for the Ninth Circuit, seek to clarify the foundational issues that have been a battleground for privacy litigation: standing and injury. To understand the stakes, consider the statute itself. Illinois' Biometric Information Privacy Act requires companies that obtain a person's biometric information to: (1) obtain a written release before that information is collected and stored; (2) provide notice that the information is being collected and stored; (3) state how long the information will be stored and used; and (4) disclose the specific purpose for its storage and use. The law further provides individuals with a private right of action. However, in order to trigger that private right, an individual must be "aggrieved."

When a data breach occurs at a company, not only is customer data vulnerable but so is employee information. But what obligations do employers owe their employees?

This issue was recently decided in part, at least with respect to Pennsylvania employers, in Dittman v. UPMC, 43 WAP 2017, 2018 WL 6072199, at *14 (Pa. Nov. 21, 2018).  In Dittman, a group of employees sued their employer, the University of Pittsburgh Medical Center, for failing to take reasonable care to protect their private personal information.  On appeal, the Supreme Court of Pennsylvania overturned the decision of the lower court and held that an employer owes its employees a common law duty to exercise reasonable care to safeguard their sensitive personal information stored on the employer's internet-accessible computer system. Notably, the employees' position was not that the employer engaged in misfeasance, but rather nonfeasance, a failure to prevent the harm from occurring. The Supreme Court found that the mere fact that third parties committed the wrongdoing (the data breach) did not negate the employer's duty to safeguard the sensitive information that employees were required to provide as a condition of employment.

The Dittman case is certainly not the first time a group of employees sued an employer based upon a data breach of the employer's computer system that resulted in the disclosure of the employees' personally identifiable information. In Sackin v. TransPerfect Global, Inc., 278 F. Supp. 3d 739 (S.D.N.Y. 2017), the employer moved to dismiss a class action filed by its employees; the motion was denied in part. Among other things, the district court found that the complaint sufficiently stated a cause of action for breach of the common law duty of care, alleging that the employer violated its duty to take reasonable steps to protect the employees' data. The court also found that a viable cause of action existed for breach of the implied contract between the employer and employees, but not for breach of the terms of the employment contract. With respect to the former, the parties' conduct and course of dealing were deemed to rise to the level of an implied contract: as a prerequisite of employment, the employees were required to provide the employer with certain sensitive data, and given how commonplace data breaches and identity theft have become, the court found an implied assent by the recipient to protect that data.

As the number of data breaches increases, so does the number of data breach-related lawsuits, whether styled as class actions or individual lawsuits. To the extent these lawsuits are commenced in the federal courts, the question arises of what satisfies Article III standing. The mere fact that a data breach may have occurred and personally identifiable information may have been exposed, or is at risk of being exposed, does not necessarily confer standing on the party whose information has been compromised in the absence of actual harm. As with most litigation, the answer also depends, at least in part, on the jurisdiction in which the lawsuit is commenced.

In Gilot v. Equivity, 18-CV-3492 (WFK), 2018 WL 3653150, at *1 (E.D.N.Y. July 31, 2018), the district court reinforced the Second Circuit's position on what is required for a plaintiff to have Article III standing. In Gilot, an action commenced by an individual was dismissed for lack of standing where she alleged only that the unauthorized release of her personally identifiable information to a third party without her consent could lead to potential identity theft. The words "could" and "potential" are important because in the Second Circuit, as in the First, Third, and Eighth Circuits, having been put at risk, without actual harm, is insufficient to confer Article III standing upon a plaintiff.

The Eleventh Circuit generally follows the First, Second, Third, and Eighth Circuits; however, the threshold of harm needed to confer standing is lower. In Muransky v. Godiva Chocolatier, Inc., 905 F.3d 1200 (11th Cir. 2018), the plaintiff alleged that the merchant violated the Fair and Accurate Credit Transactions Act (FACTA) by printing a receipt displaying more than the last five digits of the customer's credit card number. This statutory violation was sufficient to withstand a motion to dismiss for lack of standing: it constituted harm in the form of the plaintiff having to bear the cost of safely keeping or disposing of the receipt to prevent someone from obtaining the credit card number.
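For readers curious what compliant truncation looks like mechanically, here is a minimal sketch (our own illustration, not drawn from the opinion or the statute's text) of masking a card number so that a printed receipt shows no more than the last five digits; the function name and receipt format are hypothetical.

```python
# Hypothetical illustration only: mask a card number so a printed receipt
# shows no more than the last five digits, consistent with FACTA's truncation rule.
def truncate_for_receipt(card_number: str, keep_last: int = 5) -> str:
    digits = "".join(ch for ch in card_number if ch.isdigit())
    visible = digits[-keep_last:] if keep_last > 0 else ""
    return "*" * (len(digits) - len(visible)) + visible

# A compliant receipt line prints only the trailing digits.
print(truncate_for_receipt("4111 1111 1111 1111"))  # ***********11111
```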

On October 18, 2018, the Food and Drug Administration (“FDA”) released draft guidance outlining its plans for the management of cybersecurity risks in medical devices. Commenters now have until March 17, 2019, to submit comments to the FDA and get their concerns on the record. More information about submitting comments can be found at the end of this post.

This draft revision will replace the FDA's existing guidance from 2014, which includes recommendations but does not attempt to classify devices by risk. The new draft guidance takes a more aggressive posture and separates devices into those with a Tier 1 "Higher Cybersecurity Risk" and those with a Tier 2 "Standard Cybersecurity Risk."

Tier 1 devices are those that meet the following criteria:

1) The device is capable of connecting (e.g., wired, wirelessly) to another medical or non-medical product, or to a network, or to the Internet; and

2) A cybersecurity incident affecting the device could directly result in harm to multiple patients.

Tier 2 devices are any medical device that does not meet the criteria in Tier 1.
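To make the two-part test concrete, the short sketch below (our own illustration; the draft guidance contains no code and defines no data model) applies the Tier 1 criteria to a hypothetical device description. The attribute names are assumptions made for readability.

```python
from dataclasses import dataclass

@dataclass
class Device:
    # Hypothetical attributes mirroring the two Tier 1 criteria above.
    can_connect: bool               # to another product, a network, or the Internet
    incident_could_harm_many: bool  # an incident could directly harm multiple patients

def cybersecurity_tier(device: Device) -> int:
    """Tier 1 ("Higher Cybersecurity Risk") only if both criteria are met; otherwise Tier 2."""
    return 1 if device.can_connect and device.incident_could_harm_many else 2

print(cybersecurity_tier(Device(can_connect=True, incident_could_harm_many=True)))   # 1
print(cybersecurity_tier(Device(can_connect=True, incident_could_harm_many=False)))  # 2
```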

The FDA's recommendations vary depending on a device's tier. For both Tier 1 and Tier 2 devices, the guidance addresses applying the NIST Cybersecurity Framework, providing appropriate cybersecurity documentation, and adhering to labeling recommendations.


Cathay Pacific recently disclosed a data breach exposing information for as many as 9.4 million people, the largest airline data breach ever. The extent of the information obtained varied, from credit card information (although it is reported that only partial card details were obtained or that the cards were expired) to telephone numbers, dates of birth, frequent flier numbers, passport numbers, government ID numbers, and past travel information.

Shortly after Cathay Pacific revealed its breach, British Airways announced that the data breach it suffered last month may have included information for an additional 185,000 customers beyond the 380,000 initially disclosed (although British Airways now says the original figure may be lower). While an investigation is ongoing, the breach is believed to have included, among other things, payment details that, for at least some customers, included the CVV number.

Our take

No sector is safe from data breaches, and some are more vulnerable, or more attractive to cyber criminals, than others because of the types of information they store. Airlines are likely to hold a treasure trove of personally identifiable information. This is a valuable reminder that, as a business, it is important to be cognizant of the types of customer data in your possession and to take the necessary steps to keep that data secure.

Shipman & Goodwin attorney Daniel Schwartz will co-present on data privacy issues and the necessary steps employers must take to protect employee data, as part of the firm’s 2018 Labor and Employment Fall Seminar.

During the session, "If You Collect It, You Must Protect It: Dealing with Employee Data Privacy Issues," Dan will discuss the data protection concerns facing human resources and review state and federal laws and regulations pertaining to workplace privacy, including the Personnel Files Act, the GDPR, California statutes, and HIPAA-compliant releases.

When: October 25, 2018, 8:00 AM – 12:00 PM EDT

Where: Hartford Marriott Downtown, 200 Columbus Boulevard, Hartford, CT

Click here for more information on the event.


Effective January 1, 2020, California will require manufacturers of “connected devices” to equip those devices with reasonable security features. An example of a reasonable security feature (provided in the bill) would be to assign each device a unique password or to prompt the user to generate a password on setup.
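As a rough sketch of what the statute's example feature might look like in a device's setup flow, the snippet below (our own hypothetical; the bill prescribes no particular implementation) either generates a password unique to each device or requires the user to set a new one before first use.

```python
import secrets
from typing import Optional

def initial_credential(device_serial: str, user_password: Optional[str] = None) -> str:
    """Hypothetical first-boot credential logic for a connected device."""
    if user_password is not None:
        # Approach 2: the setup flow prompts the user to generate their own password.
        if len(user_password) < 8:
            raise ValueError("Password too short; prompt the user again.")
        return user_password
    # Approach 1: a default password unique to this device, rather than a
    # shared factory default like "admin".
    return f"{device_serial}-{secrets.token_urlsafe(8)}"
```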

This new law follows a trend that has been gathering steam since 2015, when the FTC provided security guidance to Internet of Things device manufacturers. Just a year later, the Mirai botnet used a DDoS attack to take down a number of popular web services, in one of the first major Internet of Things attacks. DDoS attacks leverage the internet connections (bandwidth) of large numbers of unsuspecting people. First, the bad actor infects a person's device with malware. Then these devices can be remotely forced to connect simultaneously to various targets (think Netflix), overwhelming their ability to communicate and shutting down the service. These types of large-scale attacks are especially dangerous in the Internet of Things context, where otherwise innocuous devices such as light fixtures, DVRs, toasters, pet feeders, and countless others begin to come online.

While this new bill asks very little of manufacturers, it is a crucial first step that will force manufacturers of internet-connected devices to put in place at least some common-sense security features.

Our take

This new bill requires very little of manufacturers and provides very little in terms of security for consumers. To address Internet of Things security, both regulators and companies need to provide platforms and standards that are easy to integrate, update, and adopt.

Just last month, the National Institute of Standards and Technology ("NIST"), together with the National Cybersecurity Center of Excellence ("NCCoE"), published a behemoth guide to securing Electronic Health Records ("EHR") on mobile devices.

The guide is a reaction to the growing number of issues with EHR in the mobile application context, as healthcare organizations often have poor EHR integration with their mobile apps. Mobile devices have so many obvious benefits, from patient communication to care coordination, that organizations take an implement-first, secure-later approach, creating major headaches down the road when the inevitable security incident occurs. In their guide, NIST and NCCoE provide a full analysis of provider-side access risks, where a provider adds patient information into an EHR system through a mobile device and that same EHR data is accessed elsewhere by another provider via a separate mobile device.

The guide provides a roadmap for healthcare organizations that:

  • maps security characteristics to standards and best practices from NIST and other standards organizations, and to the HIPAA Security Rule
  • provides a detailed architecture and capabilities that address security controls
  • facilitates ease of use through automated configuration of security controls
  • addresses the need for different types of implementation, whether in-house or outsourced
  • provides a how-to for implementers and security engineers seeking to re-create the reference design in whole or in part

We recommend reviewing the guide during the planning phase of any EHR-related mobile application implementation. For a quick overview of the guide, see the one-page fact sheet here.

Our take

The guide provides a timely and valuable starting point for CIOs and Privacy Officers that are considering a mobile app implementation. At a high level, §8’s Risk Questionnaire (page 216) provides a great resource for those organizations looking to understand the types of questions they need to ask when selecting a cloud-based EHR vendor. The tables that follow these questionnaires will help an engaged leader to understand the universe and severity of the risks that come with the move to mobile.

As the lazy days of summer wind down slowly at first, and then all at once, now is a good time for a reminder that your own employees returning to work full steam may pose the biggest threat to your cybersecurity. According to the U.S. Department of Health and Human Services Office for Civil Rights, July was the worst month this year for healthcare data breaches. So far in 2018, more individual records have been exposed than in all of 2017, including 1.4 million individual records exposed in July's biggest breach, which was attributed to a phishing attack. These statistics back up a Verizon report on PHI data breaches that came out earlier this year and found that 58% of PHI data breaches involved insiders, and that healthcare is the only industry in which internal actors pose the biggest threat to organizations.

But that doesn’t mean healthcare alone is vulnerable to insider threats, as a Department of Justice criminal complaint filed in June and released earlier this month demonstrates. That complaint alleges that an $81 million heist suffered by a Bangladeshi bank was carried out by North Korean cybercriminals and began with spearphishing emails sent to targeted individuals. In those emails, a purported job applicant would ask for a personal interview and attach a .zip file that the applicant claimed was a resume. When opened, the .zip file automatically downloaded malware to the recipient’s computer, which ultimately made its way to the bank’s IT system. This allegedly allowed the hackers to impersonate bank employees, access the SWIFT network, and transfer funds from the bank’s account to an account in the Philippines. Additional malware was used to cover their tracks.

Our take

While the kind of network manipulation seen in the Bangladeshi bank heist may take some skill and expertise, phishing and its targeted variant, spearphishing, are straightforward exploitations of human error. They demonstrate that allocating budget to pay for cybersecurity technology may not be enough, and that resources also need to be spent on employee training and culture shifting. Certainly, layered technology solutions that address different weak points, including two-factor authentication, are important and helpful, but organizations need to take a wider view of cybersecurity and risk reduction to both account for, and attempt to correct, human error.

Nielsen, the famed global information and measurement company, was hit last week with a shareholder lawsuit in the Southern District of New York alleging that the EU’s new privacy regulation is to blame for missed targets in its Q2 earnings report, and that Nielsen should have known the hit was coming. The proposed class action claims that Nielsen and two top executives not only made false and misleading statements regarding the company’s preparation for the implementation of the GDPR and the increased restrictions it places on the collection of personal data, but also concealed the adverse effects these restrictions would have on Nielsen’s market position. The lawsuit also argues that Nielsen’s reliance on and access to large data set providers, such as Facebook, was far more important for its financial growth than previously disclosed. Nielsen admitted in its reporting of second quarter results that consumer data privacy considerations placed pressure on it, its clients, and its partners, and specifically cited the GDPR as one such consideration. Nielsen also announced in its second quarter earnings report that its current CEO would retire at the end of 2018. In addition to this proposed class action filed last week, several other law firms have posted notices in the financial press indicating they have filed class actions against Nielsen on behalf of investors, and notifying potential class members of deadlines to act or participate.

One of those law firms has also posted in the financial press that it has commenced class action lawsuits on behalf of shareholders against Facebook, mirroring somewhat the claims against Nielsen. The suits allege in particular that Facebook made materially false or misleading claims and failed to disclose that the GDPR’s implementation would have a negative impact on the use of Facebook and on its revenue growth and profitability, due to new restrictions on data collection and the imposition of an informed consent requirement in some contexts. Those suits also allege that Facebook failed to disclose that the costs of complying with the GDPR would have a materially adverse effect on its revenue, projected growth, and overall financial health.

Our take

While traditional shareholder suits related to data privacy and security tend to allege that a company failed to comply with data privacy regulations, such as following a data breach, the allegations in these recently announced suits alter the formulation: these companies were unprepared for the negative business impacts of proper compliance, and then lied about it. If these suits are successful, they will have far-reaching implications for the ways that publicly traded companies and their boards conceptualize and assess “cyber risk” and the impacts of new data privacy regulations on their business models. Whether or not they succeed, however, they reiterate the need for companies across the business spectrum to pay attention to data privacy and to begin assessing both the burdens and benefits of complying with new data privacy regulations as soon as possible after they are announced.