Back in 2008, Illinois became the first state to pass legislation specifically protecting individuals’ biometric data. Following years of legal challenges, some of the major questions about the law are (hopefully) about to be resolved. Two major legal challenges, one now at the Illinois Supreme Court and another before the Court of Appeals for the Ninth Circuit, seek to clarify the foundational issues that have been a battleground for privacy litigation: standing and injury. To understand the stakes, Illinois’ Biometric Information Privacy Act requires companies that obtain a person’s biometric information to: (1) obtain a written release before the information is collected and stored; (2) provide notice that the information is being collected and stored; (3) state how long the information will be stored and used; and (4) disclose the specific purpose for its storage and use. The law further provides individuals with a private right of action. However, in order to trigger that private right, an individual must be “aggrieved.”
Alexander Cox is involved in a wide range of privacy issues, but focuses on cybersecurity and on communicating opaque technical information and challenges to clients and regulators. To help clients comply with the patchwork of privacy laws, Alex combines his technical and legal knowledge to create user-friendly compliance tools that put privacy principles into practice. In his downtime, Alex is a computer enthusiast who enjoys tinkering with various operating system and hardware configurations and donating his computing resources to distributed computing projects. Alex's complete biography can be found here.
A few months ago we posted an update on the California Consumer Privacy Act, a mini-GDPR that contains serious privacy ramifications for the U.S. privacy landscape. Likely in response to the upcoming 2020 go-live for the California law, various groups have noticed an uptick in lobbying directed at the passage of a federal privacy law that would pre-empt the California law and help harmonize the various state laws. Pushing to the front of that effort is a new draft federal privacy law proposed by Intel.
The Intel draft looks to be written specifically to pre-empt the California law, as it contains language that would pre-empt any state law with civil provisions designed to reduce privacy risk through the regulation of personal data. This pre-emption contains limited exceptions for state data-breach, contract, consumer-protection, and various other laws, but it would drive a hole through California’s law. Furthermore, Intel’s proposal could pre-empt specific statutes such as Illinois’ biometric data protection law, and because it does not include any notice provision, it would rely on state breach-notification statutes to surface violations in the first place.
Beyond frustrating state attempts at personal information regulation, the draft creates penalty caps that result in disproportionately harsh punishments for smaller and mid-size security incidents while allowing the largest incidents, typical of the largest companies, to operate on an eat-the-fine basis. For example: the 2017 Equifax breach affected 143 million Americans. If regulators chose to bring an action, the maximum penalty could be up to $16,500 per violation, which works out to a theoretical maximum of roughly $2.36 trillion. The penalty cap, however, was set at $1 billion, meaning the largest data breaches would face the lowest penalty per impacted individual.
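The arithmetic behind that comparison is worth laying out. A quick sketch using the figures above (the breach size and per-violation amount come from the post; the rest is simple multiplication):

```python
# Per-violation exposure vs. the proposed cap, using the figures above.
affected = 143_000_000   # Americans affected by the 2017 Equifax breach
per_violation = 16_500   # maximum penalty per violation, in dollars
cap = 1_000_000_000      # proposed penalty cap, in dollars

uncapped_total = affected * per_violation
print(f"Uncapped theoretical maximum: ${uncapped_total:,}")   # $2,359,500,000,000
print(f"Capped penalty per affected person: ${cap / affected:.2f}")  # $6.99
```

Under the cap, the effective penalty shrinks to about seven dollars per affected individual, which is the eat-the-fine dynamic described above.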
This proposed national privacy law would primarily serve the interests of the largest players in the tech and data industry, while providing harsher relative penalties to smaller and mid-size players. This law or something similar is likely to see serious political debate in the next few years as lobbying efforts intensify. Expect the heat to turn up as we near January 1, 2020.
On October 18, 2018, the Food and Drug Administration (“FDA”) released draft guidance outlining its plans for the management of cybersecurity risks in medical devices. Commenters now have until March 17, 2019, to submit comments to the FDA and get their concerns on the record. More information about submitting comments can be found at the end of this post.
This FDA guidance revision will replace existing guidance released in 2014, which includes recommendations but does not attempt to classify devices. The recent draft guidance takes a more aggressive posture and separates devices into those with a Tier 1 “Higher Cybersecurity Risk” and those with a Tier 2 “Standard Cybersecurity Risk.”
Tier 1 devices are those that meet the following criteria:
1) The device is capable of connecting (e.g., wired, wirelessly) to another medical or non-medical product, or to a network, or to the Internet; and
2) A cybersecurity incident affecting the device could directly result in harm to multiple patients.
Tier 2 devices are any medical device that does not meet the criteria in Tier 1.
The FDA’s recommendations vary depending on a device’s tier, but for both Tier 1 and Tier 2 devices the draft guidance addresses applying the NIST Cybersecurity Framework, providing appropriate cybersecurity documentation, and adhering to labeling recommendations.
Effective January 1, 2020, California will require manufacturers of “connected devices” to equip those devices with reasonable security features. An example of a reasonable security feature (provided in the bill) would be to assign each device a unique password or to prompt the user to generate a password on setup.
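For a sense of what the bill’s example might look like in practice, here is a minimal sketch of per-device default-password provisioning. The function name and serial numbers are hypothetical; the point is simply that a cryptographically random, per-unit credential avoids the shared hard-coded defaults that botnets scan for:

```python
import secrets
import string

def generate_device_password(length: int = 16) -> str:
    """Generate a unique, random default password for one device.

    The secrets module draws from a cryptographically secure source,
    so no two units ship with the same predictable credential.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Provisioning sketch: each unit off the line gets its own password,
# recorded against its serial number (serials here are placeholders).
device_passwords = {sn: generate_device_password() for sn in ("SN-0001", "SN-0002")}
```

The bill’s alternative, prompting the user to set a password on first setup, accomplishes the same goal by shifting credential creation to the consumer.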
This new law follows a trend that has been gathering steam since 2015, when the FTC provided security guidance to Internet of Things device manufacturers. Just a year later, the Mirai botnet used a DDoS attack to take down a number of popular web services in one of the first major Internet of Things attacks. DDoS attacks leverage the internet connections (bandwidth) of large numbers of unsuspecting people. First, the bad actor infects a person’s device with malware. Then those devices can be remotely forced to connect simultaneously to a target (think Netflix), overwhelming its ability to communicate and shutting down the service. These large-scale attacks are especially dangerous in the Internet of Things context, where otherwise innocuous devices such as light fixtures, DVRs, toasters, pet feeders, and countless others begin to come online.
While this new bill asks very little of manufacturers, and accordingly delivers little concrete security for consumers, it is a crucial first step: it will force makers of internet-connected devices to put in place at least some common-sense security features. To truly address Internet of Things security, both regulators and companies will need to provide platforms and standards that are easy to integrate, update, and adopt.
Just last month, the National Institute of Standards and Technology (“NIST”) and the National Cybersecurity Center of Excellence (“NCCoE”) jointly published a behemoth guide to securing Electronic Health Records (“EHR”) on mobile devices.
The guide is a reaction to the growing number of issues with EHR in the mobile application context, as healthcare organizations often integrate EHR poorly with their mobile apps. Mobile devices offer so many obvious benefits, from patient communication to care coordination, that organizations take an “implement first, secure later” approach, creating major headaches down the road when the inevitable security incident occurs. In their guide, NIST and NCCoE provide a full analysis of provider-side access risks, where a provider adds patient information into an EHR system through a mobile device and that same EHR data is accessed elsewhere by another provider via a separate mobile device.
The guide provides a roadmap for healthcare organizations that:
- maps security characteristics to standards and best practices from NIST and other standards organizations, and to the HIPAA Security Rule
- provides a detailed architecture and capabilities that address security controls
- facilitates ease of use through automated configuration of security controls
- addresses the need for different types of implementation, whether in-house or outsourced
- provides a how-to for implementers and security engineers seeking to re-create the reference design in whole or in part
We recommend reviewing the guide during the planning phase of any EHR-related mobile application implementation. For a quick overview of the guide, see the one page fact sheet here.
The guide provides a timely and valuable starting point for CIOs and Privacy Officers that are considering a mobile app implementation. At a high level, §8’s Risk Questionnaire (page 216) provides a great resource for those organizations looking to understand the types of questions they need to ask when selecting a cloud-based EHR vendor. The tables that follow these questionnaires will help an engaged leader to understand the universe and severity of the risks that come with the move to mobile.
On August 24, 2018, the California Legislature published the first round of proposed amendments to the California Consumer Privacy Act, which was signed into law on June 28, 2018 and would take effect January 2020. The full text with amendments can be found here. Here are our major highlights:
The proposal slightly narrows the previously expansive definition of “personal information,” which had swept in information such as a user’s IP address. “Personal information” will now require that the information be capable of being associated with a particular consumer or household. This reins in some of the runaway impacts of the original definition without entirely losing its expansive character. The proposal also pushes the Attorney General’s deadline to draft and implement the mandated regulations from January 2020 back to July 2020. This could create major compliance risk for organizations, as the law will be “effective” for some time without clear guidance from regulators.
On the health-privacy front, the law had provided an exemption only for covered entities under HIPAA, creating confusion and compliance concerns for holders of healthcare data about whether the exemption also covered business associates. The new amendments expand the exemption to include business associates. Financial privacy also received clarification: data subject to the GLBA now receives an exemption (while preserving consumers’ right to sue in the case of a breach), replacing the previous ambiguous exemption that applied only where there was a “conflict.”
The real story in these proposed amendments is that they change very little. Industry groups will be happy that the newly narrowed personal information definition is something they can work with, but consumers managed to preserve many of their major rights in this first revision. The right to opt-out will remain a serious battle going forward as the deletion of customer data is both difficult and expensive for industry to implement.
Members of Shipman & Goodwin’s Privacy and Data Protection team join their health law colleagues in explaining how health centers can protect their client data as health care transforms with the use of tools like patient portals and telemedicine in the breakout session The Digital Era: Ensuring Data Privacy in the Age of Transformation.
For more information, please click here.
2:30 PM – 3:15 PM EDT
A major trigger for passing the new California privacy law was the recent Cambridge Analytica scandal, wherein Facebook allowed Cambridge Analytica to gather large quantities of personal information from users without their consent and in many cases in conflict with Facebook’s own privacy policies. “California Consumer Privacy Act” Section 2 (g). These ill-gotten data were then harnessed to create a database for targeted advertisements, which Cambridge Analytica sold access to for various purposes.
Today, Facebook relies on user data to sell targeted ads that constitute the near-entirety of its revenue, as shown here. Facebook’s use of consumer data to target ads is its core business, and California’s new law appears to take aim directly at that heart by providing consumers a “right to delete” that could require Facebook and others to delete both directly gathered personal information and the indirect, inference-based personal information they rely on to target ads.