Back in 2008, Illinois became the first state to pass legislation specifically protecting individuals’ biometric data. Following years of legal challenges, some of the major questions about the law are about to be resolved (hopefully). Two major legal challenges, one now at the Illinois Supreme Court and another before the Court of Appeals for the Ninth Circuit, seek to clarify the foundational issues that have been a battleground for privacy litigation: standing and injury. To understand the stakes: Illinois’ Biometric Information Privacy Act requires companies that obtain a person’s biometric information to: (1) obtain a written release before collecting and storing the information; (2) provide notice that the information is being collected and stored; (3) state how long the information will be stored and used; and (4) disclose the specific purpose for its storage and use. The law further provides individuals with a private right of action. However, in order to trigger that private right, an individual must be “aggrieved.”
The Upper San Juan Health Service District d/b/a Pagosa Springs Medical Center (“PSMC”), a critical access hospital in Colorado, has agreed to a $111,400 settlement with the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) to resolve a complaint alleging that a former PSMC employee continued to have remote access to PSMC’s web-based scheduling calendar because PSMC failed to deactivate the former employee’s username and password following termination of employment. OCR investigated the complaint and discovered that PSMC impermissibly disclosed the protected health information (“PHI”) of 557 patients to the former employee. Moreover, OCR determined that PSMC did not have a Business Associate agreement in place with the vendor of the web-based scheduling calendar.
The Resolution Agreement also includes a two-year Corrective Action Plan. Under the Corrective Action Plan, PSMC must: (i) revise its policies and procedures relating to Business Associates and uses and disclosures of PHI; (ii) submit proposed training materials on the revised policies and procedures for OCR’s review and train workforce members in accordance with the approved training materials; (iii) develop a current Risk Analysis and submit such analysis to OCR for review; and (iv) upon OCR’s approval of the Risk Analysis, provide OCR with a risk management plan that addresses and mitigates the security risks and vulnerabilities identified in the Risk Analysis and documentation that the risk management plan is being implemented.
The Resolution Agreement and Corrective Action Plan are available here.
HIPAA requires covered entities and business associates to terminate a workforce member’s access to all systems and databases containing PHI upon the date the workforce member’s employment, or other arrangement with the entity, ends. The PSMC settlement serves as a reminder that the electronic health record is not the only database for which access must be terminated. HIPAA entities should develop a checklist that identifies all systems and databases containing PHI to ensure all access to PHI is terminated upon a workforce member’s separation from the entity.
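As a purely illustrative sketch of that recommendation (the system names and the function here are hypothetical examples, not part of HIPAA or the PSMC settlement), such a checklist can be captured as a simple inventory of PHI-bearing systems, with a check that flags any system where a departing workforce member’s access was not confirmed terminated:

```python
# Illustrative offboarding checklist for PHI-bearing systems.
# All system names and this helper are hypothetical examples.

PHI_SYSTEMS = [
    "electronic health record",
    "web-based scheduling calendar",
    "billing system",
    "transcription portal",
]

def verify_offboarding(username, deactivated_in):
    """Return the PHI systems where the user's access was NOT yet deactivated.

    `deactivated_in` is the set of systems where the credentials were
    confirmed disabled; anything missing from it is a gap to remediate.
    """
    return [s for s in PHI_SYSTEMS if s not in deactivated_in]

# Example: access revoked in the EHR only -- the scheduling calendar,
# the very gap at issue in the PSMC settlement, would be flagged.
gaps = verify_offboarding("former_employee", {"electronic health record"})
```

The point of the sketch is simply that the checklist enumerates every system, not just the electronic health record, so a forgotten system surfaces as an unchecked item rather than a live credential.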
When a data breach occurs at a company, not only is customer data vulnerable but so is employee information. But what obligations do employers owe their employees?
This issue was recently decided in part, at least with respect to Pennsylvania employers, in Dittman v. UPMC, 43 WAP 2017, 2018 WL 6072199, at *14 (Pa. Nov. 21, 2018). In Dittman, a group of employees sued their employer, the University of Pittsburgh Medical Center, for failure to take reasonable care to protect their personal private information. On appeal, the Supreme Court of Pennsylvania overturned the decision of the lower court and held that an employer owes its employees a common law duty to use reasonable care to safeguard their sensitive data as stored on the employer’s internet-accessible computer system. Notably, the employees’ position was not that the employer engaged in any misfeasance, but rather nonfeasance for failure to prevent the harm from occurring. The Supreme Court found that the mere fact that third parties committed the wrongdoing – the data breach – did not negate the employer’s duty to safeguard the sensitive information the employees were required to provide as a condition of employment.
The Dittman case is certainly not the first time a group of employees has sued an employer over a data breach of the employer’s computer system that resulted in the disclosure of the employees’ personally identifiable information. In Sackin v. TransPerfect Global, Inc., 278 F. Supp. 3d 739 (S.D.N.Y. 2017), the employer moved to dismiss a class action filed by the employees; the motion was denied in part. Among other things, the district court found that the complaint sufficiently stated a cause of action for breach of a common law duty of care and that the employer violated its duty to take reasonable steps to protect the employees’ data. The court also found a viable cause of action for breach of an implied contract between the employer and employees, but not for breach of the terms of the employment contract. With respect to the former, the parties’ conduct and course of dealing was deemed to rise to the level of an implied contract because, as a prerequisite of employment, the employees were required to provide the employer with certain sensitive data, and given how commonplace data and identity theft have become, the court found an implied assent by the recipient to protect that data.
A few months ago we posted an update on the California Consumer Privacy Act, a mini-GDPR that contains serious privacy ramifications for the U.S. privacy landscape. Likely in response to the upcoming 2020 go-live for the California law, various groups have noticed an uptick in lobbying directed at the passage of a federal privacy law that would pre-empt the California law and help harmonize the various state laws. Pushing to the front of that effort is a new draft federal privacy law proposed by Intel.
Intel’s draft looks to be written specifically to pre-empt the California law, as it contains language that would pre-empt any state law with civil provisions designed to reduce privacy risk through the regulation of personal data. This pre-emption contains limited exceptions for state data breach, contract, consumer protection, and various other laws, but it would drive a hole through California’s law. Furthermore, Intel’s proposed law could pre-empt various specific laws such as Illinois’ biometric data protection law, and because it does not include any notice provision, it would rely on state breach-notification statutes to surface violations in the first place.
Beyond frustrating state attempts at personal information regulation, the law creates penalty caps that result in disproportionately harsh punishments for smaller and mid-size security incidents while allowing larger incidents, typical of a larger company, to operate on an eat-the-fine basis. For example: the 2017 Equifax breach affected 143 million Americans. If regulators chose to bring an action, the maximum penalties could be up to $16,500 per violation – roughly $2.36 trillion in total. The penalty cap, however, was set at $1 billion, meaning the largest data breaches will face the lowest penalty per impacted individual.
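The arithmetic behind that disproportion, using the 143 million affected individuals and the $16,500-per-violation figure above, can be checked directly:

```python
# Worked calculation: theoretical maximum penalty vs. the proposed cap.
affected = 143_000_000      # individuals affected in the Equifax breach
per_violation = 16_500      # maximum penalty per violation, in dollars
cap = 1_000_000_000         # proposed statutory penalty cap, in dollars

uncapped = affected * per_violation    # 2,359,500,000,000 -> ~$2.36 trillion
effective_per_person = cap / affected  # what the cap works out to per person

print(f"Uncapped maximum:  ${uncapped:,}")
print(f"Capped, per person: ${effective_per_person:.2f}")
```

The capped figure works out to under $7 per affected individual, which is the sense in which the largest breaches would face the lowest penalty per person.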
This proposed national privacy law would primarily serve the interests of the largest players in the tech and data industry, while providing harsher relative penalties to smaller and mid-size players. This law or something similar is likely to see serious political debate in the next few years as lobbying efforts intensify. Expect the heat to turn up as we near January 1, 2020.
As the number of data breaches increases, so does the number of data breach-related lawsuits, whether styled as class actions or individual lawsuits. To the extent these lawsuits are commenced in the federal courts, the question arises of what satisfies Article III standing. Merely because a data breach may have occurred and personally identifiable information may have been exposed, or is at risk of being exposed, does not necessarily confer standing on the party whose information has been compromised in the absence of actual harm. As with most litigation, the answer also depends, at least in part, on the jurisdiction in which the lawsuit is commenced.
In Gilot v. Equivity, 18-CV-3492 (WFK), 2018 WL 3653150, at *1 (E.D.N.Y. July 31, 2018), the district court reinforced the Second Circuit’s position on what is required for a plaintiff to have Article III standing. In Gilot, an action commenced by an individual was dismissed for lack of standing where it was only alleged that the unauthorized release of her personally identifiable information to a third party without her consent could lead to potential identity theft. The words “could” and “potential” are important because in the Second Circuit, as in the First, Third and Eighth Circuits, having been put at risk, without actual harm, is insufficient to confer Article III standing upon a plaintiff.
The Eleventh Circuit generally follows the First, Second, Third, and Eighth Circuits; however, the threshold for damages to confer standing is lower. In Muransky v. Godiva Chocolatier, Inc., 905 F.3d 1200 (11th Cir. 2018), the plaintiff alleged that the merchant violated the Fair and Accurate Credit Transactions Act (FACTA) by printing an untruncated receipt displaying more than the last five digits of the customer’s credit card number. This statutory violation was sufficient to withstand a motion to dismiss for lack of standing, since it constituted damages in the form of the plaintiff needing to bear the cost of safely keeping or disposing of the receipt to prevent someone from obtaining the credit card number.
On November 2, 2018, the Office of the NJ Attorney General and the NJ Division of Consumer Affairs (collectively, the “State”) announced a $200,000 settlement with the now-dissolved ATA Consulting, LLC, which did business as Best Medical Transcription (“Best Medical”), and its owner, Tushar Mathur. The settlement resolves allegations involving Best Medical’s role in a 2016 breach that affected more than 1,650 patients of Virtua Medical Group (“VMG”), a network of medical and surgical practices in southern New Jersey. Notably, in addition to civil penalties and reimbursement of attorneys’ fees and investigative costs, the settlement permanently bars Mathur from managing or owning a business in New Jersey.
VMG had contracted with Best Medical for the provision of transcription services. Specifically, three VMG practices submitted dictations of doctors’ letters, medical notes, and other reports to Best Medical through a telephone recording service. Best Medical would then upload the recorded sound files to a password-protected File Transfer Protocol (“FTP”) site and Best Medical’s subcontractor transcribed the dictations into text documents, which were subsequently posted on the FTP site.
In January 2016, it was discovered that Mathur had inadvertently misconfigured the FTP site during a software update, changing the security restrictions so that the site was accessible over the internet without any authentication. The files had been indexed by Google, meaning that an individual conducting a Google search using terms that happened to appear in the dictations could have obtained search results with links to access and download the exposed files. VMG learned of the incident when it received a phone call from a patient indicating that her daughter had found portions of her medical records through a Google web search. VMG had not received notice of the breach from Best Medical.
As of November 1, consumer credit reporting agencies Equifax, Experian and TransUnion are subject to the New York DFS cybersecurity regulations that first went into effect in March 2017. In October 2017, following Equifax’s 2017 data breach and smaller breaches suffered by Experian years earlier, DFS proposed new regulations applicable to consumer credit reporting agencies, which went into effect in June of this year. These regulations, at 23 NYCRR 201, require consumer credit reporting agencies to register with DFS, outline prohibited practices, and require compliance with DFS’ cybersecurity regulations at 23 NYCRR 500. Consumer credit reporting agencies were required to register with DFS by September 15, or within 15 days of becoming subject to the regulations. As with the Part 500 regulations, the Part 201 regulations have phased-in compliance dates, the first of which was November 1. Unlike the Part 500 regulations, however, consumer credit reporting agencies have less time between the first compliance date and the second, and less time overall from the first compliance date to the fourth and final compliance date on December 31, 2019.
On October 18, 2018, the Food and Drug Administration (“FDA”) released draft guidance outlining its plans for the management of cybersecurity risks in medical devices. Commenters now have until March 17, 2019, to submit comments to the FDA and get their concerns on the record. More information about submitting comments can be found at the end of this post.
This FDA guidance revision will replace existing guidance released in 2014, which includes recommendations but does not attempt to classify devices. The recent draft guidance takes a more aggressive posture and separates devices into those with a Tier 1 “Higher Cybersecurity Risk” and those with a Tier 2 “Standard Cybersecurity Risk.”
Tier 1 devices are those that meet the following criteria:
1) The device is capable of connecting (e.g., wired, wirelessly) to another medical or non-medical product, or to a network, or to the Internet; and
2) A cybersecurity incident affecting the device could directly result in harm to multiple patients.
Tier 2 devices are any medical device that does not meet the criteria in Tier 1.
The FDA’s recommendations vary depending on a device’s tier. For both Tier 1 and Tier 2 devices, the guidance addresses applying the NIST Cybersecurity Framework, providing appropriate cybersecurity documentation, and adhering to labeling recommendations.
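The two criteria above amount to a simple conjunction: a device is Tier 1 only if it is both connectable and capable of causing harm to multiple patients in a cybersecurity incident. As a sketch (the parameter names are illustrative, not terms from the FDA guidance):

```python
# Sketch of the draft guidance's two-part Tier 1 test.
# Parameter names are illustrative; the guidance states the criteria in prose.

def cybersecurity_tier(connects_to_product_network_or_internet: bool,
                       incident_could_harm_multiple_patients: bool) -> int:
    """Tier 1 ("Higher Cybersecurity Risk") requires BOTH criteria;
    a device failing either one falls into Tier 2 ("Standard Cybersecurity Risk")."""
    if connects_to_product_network_or_internet and incident_could_harm_multiple_patients:
        return 1
    return 2

# e.g., a networked device whose compromise could harm many patients is Tier 1;
# a connected device whose compromise affects no patients directly is Tier 2.
assert cybersecurity_tier(True, True) == 1
assert cybersecurity_tier(True, False) == 2
```

Because Tier 2 is defined purely as the complement of Tier 1, no separate criteria are needed for it.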
Cathay Pacific recently disclosed a data breach exposing information of as many as 9.4 million people – the largest airline data breach to date. The information obtained ranged from credit card data (although reportedly only partial card information was obtained, or the cards were expired) to telephone numbers, dates of birth, frequent flier numbers, passport numbers, government ID numbers, and past travel information.
Shortly after Cathay Pacific revealed its breach, British Airways announced that the data breach it suffered last month may have included information for an additional 185,000 customers beyond those initially disclosed (reported last month as 380,000 customers – although British Airways now says the number may be lower). While an investigation is ongoing, the breach is believed to have included, among other things, payment details, including – for at least some customers – the CVV number.
No sector is safe from data breaches, and some are more vulnerable and/or more attractive to cyber criminals than others because of the types of information they store. The airline industry is one in which companies are likely to hold a treasure trove of personally identifiable information. This is a valuable reminder that, as a business, it is important to be cognizant of the types of customer data in your possession and to take the necessary steps to keep it secure.