On March 1, 2017, the New York State Department of Financial Services’ (“DFS”) first-in-nation Cybersecurity Regulations, designed to protect consumers and financial institutions from cyber-attacks, went into effect (the “Regulations”). See 23 NYCRR Part 500. The “first-in-nation” nature of the Regulations is important to note: the Regulations apply not only to a “Covered Entity” (as defined in the Regulations) based in New York, but also to one that merely does business in New York. Nor do the Regulations cover only financial institutions; they reach any business entity covered by New York’s banking law, insurance law, or financial services law. As such, the impact of the Regulations is wide-sweeping. On August 22, 2017, we published an alert providing an overview of the Regulations, and on February 6, 2018 and August 28, 2018 we published follow-ups highlighting the next rounds of disclosures required under the Regulations. Shipman & Goodwin LLP Data Privacy Team members Bill Roberts and Damien Privitera also conducted a CLE webinar – Compliance Checkup: NY DFS Cybersecurity Regulations – on August 7, 2018, which can be accessed here. Continue Reading NYSDFS Upcoming Deadlines Fast Approaching: Next Key Dates are February 15, 2019 and March 1, 2019
Last week, the French data privacy authority fined Google €50 million (about $57 million) for what it called “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.” The Commission Nationale de l’Informatique et des Libertés (CNIL) said that it began its investigation of Google on June 1, 2018, after receiving complaints from two different digital rights advocacy groups on May 25 and May 28, 2018, just as the GDPR was entering into force. In response, the CNIL set out to review the documents available to a user when creating a Google account during Android configuration. Upon that review, the CNIL found two alleged violations of the GDPR: (1) a lack of transparency and specificity about essential information, such as the purposes of the data processing and the categories and retention periods of the personal data used for personalizing advertisements; and (2) a lack of valid consent for ads personalization.
Back in 2008, Illinois became the first state to pass legislation specifically protecting individuals’ biometric data. Following years of legal challenges, some of the major questions about the law are about to be resolved (hopefully). Two major legal challenges, one now before the Illinois Supreme Court and another before the Court of Appeals for the Ninth Circuit, seek to clarify the foundational issues that have been a battleground for privacy litigation — standing and injury. To understand the stakes: Illinois’ Biometric Information Privacy Act requires companies that obtain a person’s biometric information to: (1) obtain a written release before that information is collected and stored; (2) provide notice that the information is being collected and stored; (3) state how long the information will be stored and used; and (4) disclose the specific purpose for its storage and use. The law further provides individuals with a private right of action. However, in order to trigger that private right, an individual must be “aggrieved.” Continue Reading Biometric Data Risks on the Rise
After eleven years of litigation, including two decisions by the Connecticut Supreme Court, Byrne v. Avery Center for Obstetrics and Gynecology, P.C. has finally concluded with a jury verdict. Last month, the jury awarded the plaintiff $853,000 in damages in connection with her physician practice’s 2005 release of medical records in response to a non-HIPAA-compliant subpoena. The subpoena was issued in connection with a paternity suit brought by the plaintiff’s former boyfriend, a man with whom the plaintiff had specifically asked her physician practice not to share her medical information.
Without speculating too much about its judicial progeny, Byrne nevertheless highlights several areas of HIPAA compliance that warrant heightened review by physicians and medical providers now. Please click here for a detailed analysis of this verdict and its implications for providers.
On December 12, 2018, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) released a Request for Information (“RFI”) “to assist OCR in identifying provisions of the Health Insurance Portability and Accountability Act (“HIPAA”) privacy and security regulations that may impede the transformation to value-based health care or that limit or discourage coordinated care among individuals and covered entities (including hospitals, physicians, and other providers, payors, and insurers), without meaningfully contributing to the protection of the privacy or security of individuals’ protected health information.” Through this RFI, OCR seeks public comment regarding whether and how the HIPAA Privacy and Security Rules could be revised to promote value-based care and care coordination without jeopardizing individuals’ rights to privacy. OCR will accept comments through February 12, 2019.
Specifically, OCR has requested comments regarding the following four topics: Continue Reading OCR Seeks Public Comment on HIPAA Reform
The Upper San Juan Health Service District d/b/a Pagosa Springs Medical Center (“PSMC”), a critical access hospital in Colorado, has agreed to a $111,400 settlement with the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) to resolve a complaint alleging that a former PSMC employee continued to have remote access to PSMC’s web-based scheduling calendar because PSMC failed to deactivate the former employee’s username and password following termination of employment. OCR investigated the complaint and discovered that PSMC impermissibly disclosed the protected health information (“PHI”) of 557 patients to the former employee. Moreover, OCR determined that PSMC did not have a Business Associate agreement in place with the vendor of the web-based scheduling calendar.
The Resolution Agreement also includes a two-year Corrective Action Plan. Under the Corrective Action Plan, PSMC must: (i) revise its policies and procedures relating to Business Associates and uses and disclosures of PHI; (ii) submit proposed training materials on the revised policies and procedures for OCR’s review and train workforce members in accordance with the approved training materials; (iii) develop a current Risk Analysis and submit such analysis to OCR for review; and (iv) upon OCR’s approval of the Risk Analysis, provide OCR with a risk management plan that addresses and mitigates the security risks and vulnerabilities identified in the Risk Analysis and documentation that the risk management plan is being implemented.
The Resolution Agreement and Corrective Action Plan are available here.
HIPAA requires covered entities and business associates to terminate a workforce member’s access to all systems and databases containing PHI upon the date the workforce member’s employment, or other arrangement with the entity, ends. The PSMC settlement serves as a reminder that the electronic health record is not the only database for which access must be terminated. HIPAA entities should develop a checklist that identifies all systems and databases containing PHI to ensure all access to PHI is terminated upon a workforce member’s separation from the entity.
On December 4, 2018, New York Attorney General Barbara D. Underwood announced a $4.95 million settlement with Oath, Inc. (f/k/a AOL Inc.), a wholly-owned subsidiary of Verizon Communications, Inc., for alleged violations of the Children’s Online Privacy Protection Act (“COPPA”) as a result of its involvement with online behavioral advertising auctions. This settlement represents the largest penalty in a COPPA enforcement matter in U.S. history.
Through its investigation, the New York Attorney General’s Office discovered that AOL collected, used, and disclosed personal information of website users under the age of 13 without parental consent in violation of COPPA. Specifically, the company was charged with having “conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13.” The New York Attorney General found that AOL operated several ad exchanges and permitted clients to use its display ad exchange to sell ad space on COPPA-covered websites, despite the fact that the exchange was not capable of conducting a COPPA-compliant auction that involved third-party bidders. AOL was charged with having knowledge that these websites were subject to COPPA because evidence demonstrated that: (i) several AOL clients had provided AOL with notice that their websites were subject to COPPA and (ii) AOL had conducted a review of the content and privacy policies of client websites and had designated certain websites as being child-directed. Additionally, the New York Attorney General charged AOL with having placed ads through other exchanges in violation of COPPA. Specifically, whenever AOL participated and won an auction for ad space on a COPPA-covered website, AOL ignored any information it received from an ad exchange indicating that the ad space was subject to COPPA and collected information about the website users to serve a targeted advertisement to the users. Continue Reading Oath (f/k/a AOL) Agrees to Record $5 Million COPPA Settlement
When a data breach occurs at a company, not only is customer data vulnerable but so is employee information. But what obligations do employers owe their employees?
This issue was recently decided in part, at least with respect to Pennsylvania employers, in Dittman v. UPMC, 43 WAP 2017, 2018 WL 6072199, at *14 (Pa. Nov. 21, 2018). In Dittman, a group of employees sued their employer, the University of Pittsburgh Medical Center, for failure to take reasonable care to protect their personal private information. On appeal, the Supreme Court of Pennsylvania overturned the decision of the lower court and held that an employer owes a common law duty of care to its employees to use reasonable care to safeguard their sensitive data as stored on the employer’s internet-accessible computer system. Notably, the employees’ position was not that the employer engaged in any misfeasance, but rather nonfeasance for failure to prevent the harm from occurring. The Supreme Court found that the mere fact that third parties committed the wrongdoing – the data breach – did not negate the duty of the employer to safeguard the employees’ sensitive information, which they were required to provide the employer as a condition of employment.
The Dittman case is certainly not the first time a group of employees has sued an employer based upon a data breach of the employer’s computer system that resulted in the disclosure of the employees’ personally identifiable information. In Sackin v. TransPerfect Global, Inc., 278 F. Supp. 3d 739 (S.D.N.Y. 2017), the employer moved to dismiss a class action filed by the employees; the motion was denied in part. Among other things, the district court found that the complaint sufficiently stated a cause of action for breach of a common law duty of care and that the employer violated its duty to take reasonable steps to protect the employees’ data. The court also found that a viable cause of action existed for breach of an implied contract between the employer and employees, but not for breach of the terms of the employment contract. With respect to the former, the parties’ conduct and course of dealing were deemed to rise to the level of an implied contract because, as a prerequisite of employment, the employees were required to provide the employer with certain sensitive data, and given how commonplace data and identity theft have become, the court found an implied assent by the recipient to protect that data. Continue Reading Employers Beware and Take Reasonable Care
The Commerce Department’s Bureau of Industry and Security (“BIS”) recently published an advance notice of proposed rulemaking asking for public comment on criteria to identify “emerging technologies that are essential to U.S. national security,” for example because they have potential intelligence collection applications or could provide the United States with a qualitative intelligence advantage.
BIS is the federal agency that primarily oversees commercial exports. Over the summer, Congress passed the Export Control Reform Act of 2018 and authorized BIS to establish appropriate controls on the export of emerging and foundational technologies. Although by no means exhaustive or final, BIS has proposed an initial list of areas that may become “emerging technologies,” including artificial intelligence/machine learning technology, brain-computer interfaces, and advanced surveillance technology, such as faceprint and voiceprint technologies. If BIS ultimately determines a technology will be subject to export controls, it will likely receive a newly-created Export Control Classification Number on BIS’s Commerce Control List and would require a license before export to any country subject to a U.S. embargo, including arms embargoes (e.g., China). Continue Reading Is Your Technology an “Emerging Technology?”