The U.S. Department of Health and Human Services (“HHS”) recently released a publication entitled “Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients,” which sets forth a “common set of voluntary, consensus-based, and industry-led guidelines, best practices, methodologies, procedures, and processes” to improve cybersecurity in the health care and public health sector. This publication was developed by a task group consisting of more than 150 health care and cybersecurity experts from the public and private sectors and focuses upon the “five most prevalent cybersecurity threats and the ten cybersecurity practices to significantly move the needle for a broad range of organizations” in the health care industry.

The five cybersecurity threats addressed in the publication are: (i) e-mail phishing attacks; (ii) ransomware attacks; (iii) loss or theft of equipment or data; (iv) insider, accidental or intentional data loss; and (v) attacks against connected medical devices that may affect patient safety.

The publication recognizes that cybersecurity recommendations will largely depend upon an organization’s size. Therefore, the publication is broken up into two separate technical volumes that are intended for IT and IT security professionals: (i) Technical Volume 1, which discusses ten cybersecurity practices for small health care organizations and (ii) Technical Volume 2, which discusses ten cybersecurity practices for medium-sized and large health care organizations. Specifically, the ten cybersecurity practices described in the Technical Volumes are as follows: Continue Reading HHS Warns Health Care Organizations of Cybersecurity Threats

The popular social media app Musical.ly (now known as TikTok), which allows users to make videos of themselves lip syncing to songs, recently entered into a record $5.7 million settlement with the Federal Trade Commission (“FTC”) to resolve allegations that it illegally collected children’s data in violation of the Children’s Online Privacy Protection Act of 1998 (“COPPA”).

To register for the Musical.ly app, users provide their email address, phone number, username, first and last name, short bio, and a profile picture. In addition to allowing users to create music videos, the Musical.ly app provides a platform for users to post and share the videos publicly. The app also had a feature whereby a user could discover a list of other users within a 50-mile radius with whom the user could connect and interact.

The FTC’s complaint alleged that Musical.ly was operating within the purview of COPPA in that (i) the Musical.ly app was “directed to children” and (ii) Musical.ly had actual knowledge that the company was collecting personal information from children. Specifically, the complaint alleged that the app was “directed to children” because the music library includes songs from popular children’s movies and songs popular among children and tweens. Furthermore, the FTC asserted that Musical.ly had actual knowledge that children under the age of 13 were registered users of the app because: (i) in December 2016, a third party publicly alleged in an interview with the cofounder of Musical.ly, Inc. that seven of the app’s most popular users appeared to be children under age 13; (ii) many users self-identify as under 13 in their profile bios or provide school information indicating that they are under the age of 13; and (iii) since at least 2014, Musical.ly received thousands of complaints from parents of children under the age of 13 who were registered users of the app. Continue Reading Fines for COPPA Violations Continue to Trend Upward

Last week, the French data privacy authority fined Google €50 million (about $57 million) for what it called “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.” The Commission Nationale de l’Informatique et des Libertés (CNIL) said that it began its investigation of Google on June 1, 2018 after receiving complaints from two different digital rights advocacy groups on May 25 and May 28, 2018, just as the GDPR was entering into force. In response, the CNIL set out to review the documents available to a user when creating a Google account during Android configuration. Upon that review, the CNIL found two alleged violations of the GDPR: (1) a lack of transparency and specificity about essential information such as the purposes of the data processing and the categories and retention periods of the personal data used for personalizing advertisements; and (2) a lack of valid consent for ads personalization.

The first alleged violation feeds the second, as the CNIL said users’ consent to ads personalization could not be sufficiently informed when the information presented to them was dispersed across several documents requiring “sometimes up to 5 or 6 actions.” Thus, it is not that Google fails to provide enough information, but that it does not present the information in one place for the roughly 20 services being offered. The CNIL also stated that the purposes of processing are described too vaguely, meaning a user cannot tell whether Google is relying on the user’s consent or on Google’s own legitimate interests as the legal basis for processing. Finally, the CNIL found that certain of Google’s ads personalization options were pre-checked, although the GDPR treats consent as unambiguous only when it comes from an affirmative action such as checking a non-pre-checked box, and that Google’s non-pre-checked boxes for accepting its Privacy Policy and Terms of Service were all-or-nothing consents covering all processing activities, whereas the GDPR requires specific consent for each purpose. Continue Reading Google Fined by French Regulators for GDPR Gaps

Back in 2008, Illinois became the first state to pass legislation specifically protecting individuals’ biometric data. Following years of legal challenges, some of the major questions about the law are about to be resolved (hopefully). Two major legal challenges, one now before the Illinois Supreme Court and another before the Court of Appeals for the Ninth Circuit, seek to clarify the foundational issues that have been a battleground for privacy litigation: standing and injury. To understand the stakes, Illinois’ Biometric Information Privacy Act requires a company that obtains a person’s biometric information to: (1) obtain a written release before the information is collected and stored; (2) provide notice that the information is being collected and stored; (3) state how long the information will be stored and used; and (4) disclose the specific purpose of its storage and use. The law further provides individuals with a private right of action. However, in order to trigger that private right, an individual must be “aggrieved.” Continue Reading Biometric Data Risks on the Rise

On December 4, 2018, New York Attorney General Barbara D. Underwood announced a $4.95 million settlement with Oath, Inc. (f/k/a AOL Inc.), a wholly-owned subsidiary of Verizon Communications, Inc., for alleged violations of the Children’s Online Privacy Protection Act (“COPPA”) as a result of its involvement with online behavioral advertising auctions. This settlement represents the largest penalty ever in a COPPA enforcement matter in U.S. history.

Through its investigation, the New York Attorney General’s Office discovered that AOL collected, used, and disclosed personal information of website users under the age of 13 without parental consent in violation of COPPA. Specifically, the company was charged with having “conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13.” The New York Attorney General found that AOL operated several ad exchanges and permitted clients to use its display ad exchange to sell ad space on COPPA-covered websites, despite the fact that the exchange was not capable of conducting a COPPA-compliant auction involving third-party bidders. AOL was charged with having knowledge that these websites were subject to COPPA because evidence demonstrated that: (i) several AOL clients had provided AOL with notice that their websites were subject to COPPA and (ii) AOL had conducted a review of the content and privacy policies of client websites and had designated certain websites as being child-directed. Additionally, the New York Attorney General charged AOL with having placed ads through other exchanges in violation of COPPA. Specifically, whenever AOL participated in and won an auction for ad space on a COPPA-covered website, AOL ignored any information it received from an ad exchange indicating that the ad space was subject to COPPA and collected information about the website users to serve them targeted advertisements. Continue Reading Oath (f/k/a AOL) Agrees to Record $5 Million COPPA Settlement

A little more than six months after that day in May when privacy policy updates flooded our inboxes and the GDPR came into force, a new study of small business owners in the UK has found that many people and businesses remain essentially “clueless” about the law and its requirements. Commissioned by Aon, the study found that nearly half of the 1,000 small business owners polled are confused about the privacy and security requirements of the law, which could lead many businesses to be in breach of the GDPR without even realizing it. Some examples of potential violations reported by the businesses included paper visitor books logging all visitors to the business and viewable to subsequent visitors, training materials featuring full details of real-life case studies, the use of personal devices by employees for work purposes, and inadequate storage and disposal of paper records. The study also found that business owners were not clear on what constitutes a data breach – thinking the term did not apply to paper records or personal data that was mistakenly posted or sent to the wrong person by email or fax – nor were they clear on the notification requirements, either to the UK’s data protection authority, the Information Commissioner’s Office (“ICO”), or to affected individuals. These small business owners should avail themselves of the ICO’s recent insight into its GDPR enforcement approach from earlier this month, which indicates that ignorant non-compliance likely won’t be looked at favorably. Continue Reading GDPR Guidance and Other Goings-On

The Commerce Department’s Bureau of Industry and Security (“BIS”) recently published an advance notice of proposed rulemaking asking for public comment on criteria to identify “emerging technologies that are essential to U.S. national security,” for example because they have potential intelligence collection applications or could provide the United States with a qualitative intelligence advantage.

BIS is the federal agency that primarily oversees commercial exports. Over the summer, Congress passed the Export Control Reform Act of 2018 and authorized BIS to establish appropriate controls on the export of emerging and foundational technologies. Although by no means exclusive or final, BIS has proposed an initial list of areas that may become “emerging technologies,” including artificial intelligence/machine learning technology, brain-computer interfaces, and advanced surveillance technology, such as faceprint and voiceprint technologies. If BIS ultimately determines a technology will be subject to export controls, it will likely receive a newly-created Export Control Classification Number on BIS’s Commerce Control List and would require a license before export to any country subject to a U.S. embargo, including arms embargoes (e.g., China). Continue Reading Is Your Technology an “Emerging Technology?”

A few months ago we posted an update on the California Consumer Privacy Act, a mini-GDPR that contains serious privacy ramifications for the U.S. privacy landscape. Likely in response to the upcoming 2020 go-live for the California law, various groups have noticed an uptick in lobbying directed at the passage of a federal privacy law that would pre-empt the California law and help harmonize the various state laws. Pushing to the front of that effort is a new draft federal privacy law proposed by Intel.

The Intel draft looks to be written specifically to pre-empt the California law, as it contains language that would pre-empt any state law with civil provisions designed to reduce privacy risk through the regulation of personal data. This pre-emption contains limited exceptions for state data-breach, contract, consumer-protection, and various other laws, but it would drive a hole through California’s law. Furthermore, Intel’s proposal could pre-empt various specific statutes such as Illinois’ biometric data protection law, and because the draft does not include any breach-notice provision of its own, regulators would be reliant on state breach-notification statutes to discover violations in the first place.

Beyond frustrating state attempts at personal information regulation, the law creates penalty caps that result in disproportionately harsh punishments for smaller and mid-size security incidents while allowing larger incidents, typical of larger companies, to be handled on an eat-the-fine basis. For example, the Equifax breach from earlier this year affected 143 million Americans. If regulators chose to bring an action, penalties of up to $16,500 per violation would imply a theoretical maximum of roughly $2.36 trillion. The penalty cap, however, was set at $1 billion, meaning the largest data breaches will face the lowest penalty per impacted individual.
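The cap’s effect is easiest to see on a per-person basis. A back-of-the-envelope calculation using the figures above (the code is purely illustrative arithmetic, not anything from the Intel draft itself):

```python
# Illustrative arithmetic only; the figures come from the discussion above.
AFFECTED = 143_000_000           # individuals affected in the Equifax breach
PER_VIOLATION_MAX = 16_500       # maximum penalty per violation
CAP = 1_000_000_000              # the draft's overall penalty cap ($1 billion)

uncapped = AFFECTED * PER_VIOLATION_MAX   # theoretical maximum, ~$2.36 trillion
capped = min(uncapped, CAP)               # what the cap actually allows
per_person = capped / AFFECTED            # effective exposure per affected individual

print(f"Uncapped maximum: ${uncapped:,}")
print(f"Capped penalty:   ${capped:,}")
print(f"Per person:       ${per_person:.2f}")
```

The cap reduces the effective exposure to under $7 per affected individual, which is the disproportion the paragraph above describes.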

Our take

This proposed national privacy law would primarily serve the interests of the largest players in the tech and data industry, while providing harsher relative penalties to smaller and mid-size players. This law or something similar is likely to see serious political debate in the next few years as lobbying efforts intensify. Expect the heat to turn up as we near January 1, 2020.

On October 18, 2018, the Food and Drug Administration (“FDA”) released draft guidance outlining its plans for the management of cybersecurity risks in medical devices. Commenters now have until March 17, 2019, to submit comments to the FDA and get their concerns on the record. More information about submitting comments can be found at the end of this post.

This FDA guidance revision will replace existing guidance released in 2014, which includes recommendations but does not attempt to classify devices by risk. The recent draft guidance takes a more aggressive posture and separates devices into those with a Tier 1 “Higher Cybersecurity Risk” and those with a Tier 2 “Standard Cybersecurity Risk.”

Tier 1 devices are those that meet the following criteria:

1) The device is capable of connecting (e.g., wired, wirelessly) to another medical or non-medical product, or to a network, or to the Internet; and

2) A cybersecurity incident affecting the device could directly result in harm to multiple patients.

Tier 2 devices are any medical device that does not meet the criteria in Tier 1.

The FDA has varying guidance for devices depending on the Tier of the device. The FDA provides guidance for Tier 1 and Tier 2 devices on applying the NIST Cybersecurity Framework, providing appropriate cybersecurity documentation, and adhering to labeling recommendations.
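Because the two Tier 1 criteria are conjunctive, the tiering decision reduces to a simple rule. A minimal sketch of that logic (the function and field names here are our own shorthand, not FDA terminology):

```python
from dataclasses import dataclass

@dataclass
class Device:
    # Hypothetical fields for illustration; not FDA-defined terms.
    connects_to_product_or_network: bool    # wired, wireless, or Internet connectivity
    incident_could_harm_multiple_patients: bool

def cybersecurity_tier(device: Device) -> int:
    """Return 1 (Higher Cybersecurity Risk) or 2 (Standard Cybersecurity Risk).

    Per the draft guidance, a device is Tier 1 only if it is connectable
    AND a cybersecurity incident could directly harm multiple patients;
    every other device falls into Tier 2.
    """
    if (device.connects_to_product_or_network
            and device.incident_could_harm_multiple_patients):
        return 1
    return 2
```

Under this rule, a networked infusion pump would land in Tier 1, while a standalone device with no connectivity would be Tier 2 regardless of its potential for patient harm.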

Continue Reading FDA Releases Draft Guidance on Cybersecurity for Health Devices

In a recent letter to the Federal Trade Commission (“FTC”), Senators Edward J. Markey (D-Mass) and Richard Blumenthal (D-Conn), expressed their concern regarding a recent study, which “indicates that numerous apps directed at children have been accessing geolocation data and transmitting persistent identifiers without parental consent” in violation of the Children’s Online Privacy Protection Act of 1998 (“COPPA”). In addition, the senators voiced concerns that parents are being misled by app developers, the advertising companies they work with, and app stores because such apps are placed in the “kids” or “families” sections of app stores. In other words, these apps should not be marketed as appropriate for children if they are engaging in activity that violates COPPA. The senators urged the FTC to review the extent to which app developers, advertising companies, and app stores are complying with COPPA. The senators requested a response from the FTC by October 31.

The study referenced in the senators’ letter consisted of a review of 5,855 “child-friendly” apps for compliance with COPPA. The researchers found that approximately 57% of these apps were engaging in activity prohibited by COPPA. For example, the researchers concluded that over 1,000 of the apps analyzed shared persistent identifiers with third parties. Furthermore, they found that 235 of the apps analyzed accessed geolocation information without verifiable parental consent, with a number of apps also sharing this information with advertising companies.

A copy of the senators’ letter to the FTC can be found here.

Our take

COPPA was designed to protect children under the age of 13 from overreaching by marketers by providing parents control over what information is collected from their young children online. This increased scrutiny by lawmakers of the data collection and use practices of child-friendly apps should serve as a reminder for app developers to review their products, and the terms of their agreements with the advertising companies they work with, for compliance with COPPA.