In a sign of growing unease with government use of facial recognition and other biometric identification, the San Francisco Board of Supervisors voted Tuesday to ban the use of facial recognition software by city agencies, including law enforcement. The move, the first such ban by a U.S. city, came as part of an ordinance that adds public oversight to the city’s procurement and deployment of surveillance technology more broadly. While the ordinance does nothing to regulate the development, use, or sale of facial recognition technology by private individuals or companies, privacy advocates are nevertheless praising the move as a stand against expanding government surveillance and as an opening salvo in the regulation of what some see as a currently flawed and potentially discriminatory technology.

On or around July 17, 2015, UCLA Health suffered a cyberattack that affected approximately 4.5 million individuals’ personal and health information. A week later, the Regents of the University of California were hit with a series of class action suits related to the breach. After four years of litigation, the matter is coming to a close: on June 18, 2019, the court will finally determine whether the settlement reached by the parties is fair, reasonable, and adequate. At present, the total cost of the settlement may exceed $11 million. This settlement is just one example of how a privacy incident can embroil an organization in costly litigation for years, and it underscores the benefits of implementing secure systems and procedures before an incident occurs.

The proposed settlement will require UCLA to provide two years of credit monitoring, identity theft protection, and insurance coverage for affected persons. UCLA will also set aside $2 million to settle claims for any unreimbursed losses associated with identity theft. UCLA will spend an additional $5.5 million, plus any balance remaining from the $2 million claims fund, on cybersecurity enhancements for the UCLA Health Network. In total, $7.5 million would be set aside to reimburse claims and enhance security procedures. On top of that, UCLA must cover up to $3.4 million in fees and costs for the class action plaintiffs’ attorneys.
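Based on the figures above (and setting aside the unquantified cost of the credit monitoring itself), the components add up as follows:

$$\$2{,}000{,}000\;(\text{claims fund}) + \$5{,}500{,}000\;(\text{security}) = \$7{,}500{,}000$$
$$\$7{,}500{,}000 + \$3{,}400{,}000\;(\text{attorneys' fees}) = \$10{,}900{,}000$$

Adding the cost of two years of credit monitoring and identity theft protection for roughly 4.5 million people is what could push the total past the $11 million mark.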

The U.S. Department of Health and Human Services (“HHS”) recently released a publication entitled “Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients,” which sets forth a “common set of voluntary, consensus-based, and industry-led guidelines, best practices, methodologies, procedures, and processes” to improve cybersecurity in the health care and public health sector. The publication was developed by a task group of more than 150 health care and cybersecurity experts from the public and private sectors and focuses on the “five most prevalent cybersecurity threats and the ten cybersecurity practices to significantly move the needle for a broad range of organizations” in the health care industry.

The five cybersecurity threats addressed in the publication are: (i) e-mail phishing attacks; (ii) ransomware attacks; (iii) loss or theft of equipment or data; (iv) insider, accidental or intentional data loss; and (v) attacks against connected medical devices that may affect patient safety.

The publication recognizes that cybersecurity recommendations will largely depend on an organization’s size. It is therefore broken into two separate technical volumes intended for IT and IT security professionals: (i) Technical Volume 1, which discusses the ten cybersecurity practices for small health care organizations, and (ii) Technical Volume 2, which discusses the same ten practices for medium-sized and large health care organizations. The ten cybersecurity practices described in the Technical Volumes are: (1) e-mail protection systems; (2) endpoint protection systems; (3) access management; (4) data protection and loss prevention; (5) asset management; (6) network management; (7) vulnerability management; (8) incident response; (9) medical device security; and (10) cybersecurity policies.

The popular social media app Musical.ly (now known as TikTok), which allows users to make videos of themselves lip syncing to songs, recently entered into a record $5.7 million settlement with the Federal Trade Commission (“FTC”) to resolve allegations that it illegally collected children’s data in violation of the Children’s Online Privacy Protection Act of 1998 (“COPPA”).

To register for the Musical.ly app, users provide their email address, phone number, username, first and last name, short bio, and a profile picture. In addition to allowing users to create music videos, the Musical.ly app provides a platform for users to post and share the videos publicly. The app also had a feature whereby a user could discover a list of other users within a 50-mile radius with whom the user could connect and interact.
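For a sense of what such a proximity feature involves under the hood, here is a minimal, hypothetical sketch of a 50-mile-radius lookup using the haversine great-circle formula. All names and coordinates are illustrative; nothing here reflects Musical.ly’s actual implementation.

```python
import math

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (latitude, longitude) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def nearby_users(me, others, radius_miles=50.0):
    """Return the users whose last known location lies within radius_miles of `me`."""
    return [u for u in others
            if haversine_miles(me["lat"], me["lon"], u["lat"], u["lon"]) <= radius_miles]

# Illustrative usage with fabricated coordinates:
me = {"name": "alice", "lat": 34.05, "lon": -118.24}        # Los Angeles
others = [
    {"name": "bob", "lat": 34.14, "lon": -118.15},          # roughly 8 miles away
    {"name": "carol", "lat": 37.77, "lon": -122.42},        # roughly 350 miles away
]
print([u["name"] for u in nearby_users(me, others)])        # ['bob']
```

The privacy concern is less the distance math than the data behind it: exposing even coarse proximity information for accounts that belong to children is part of what drew regulators’ attention.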

The FTC’s complaint alleged that Musical.ly was operating within the purview of COPPA in that (i) the Musical.ly app was “directed to children” and (ii) Musical.ly had actual knowledge that the company was collecting personal information from children. Specifically, the complaint alleged that the app was “directed to children” because the music library includes songs from popular children’s movies and songs popular among children and tweens. Furthermore, the FTC asserted that Musical.ly had actual knowledge that children under the age of 13 were registered users of the app because: (i) in December 2016, a third party publicly alleged in an interview with the cofounder of Musical.ly, Inc. that seven of the app’s most popular users appeared to be children under age 13; (ii) many users self-identify as under 13 in their profile bios or provide school information indicating that they are under the age of 13; and (iii) since at least 2014, Musical.ly received thousands of complaints from parents of children under the age of 13 who were registered users of the app.
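In practice, COPPA compliance typically starts with a neutral age screen at registration: if a user indicates an age under 13, the operator must obtain verifiable parental consent before collecting personal information. The following is a minimal, hypothetical sketch of that gate; the flow and names are assumptions for illustration, not the FTC’s prescribed mechanism.

```python
from datetime import date

COPPA_AGE_THRESHOLD = 13  # COPPA covers children under 13

def age_on(birth_date: date, today: date) -> int:
    """Age in whole years as of `today`."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday has not happened yet this year
    return years

def register(birth_date: date, has_verifiable_parental_consent: bool) -> str:
    """Gate registration on age: under-13 users need parental consent first."""
    if age_on(birth_date, date.today()) < COPPA_AGE_THRESHOLD:
        if not has_verifiable_parental_consent:
            return "blocked: verifiable parental consent required before any data collection"
        return "registered: child account with limited data collection"
    return "registered: standard account"

print(register(date(2012, 6, 1), has_verifiable_parental_consent=False))
```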

Last week, the French data protection authority fined Google €50 million (about $57 million) for what it called “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.” The Commission Nationale de l’Informatique et des Libertés (CNIL) said that it began its investigation of Google on June 1, 2018, after receiving complaints from two digital rights advocacy groups on May 25 and May 28, 2018, just as the GDPR was entering into force. In response, the CNIL set out to review the documents available to a user when creating a Google account during Android configuration. Upon that review, the CNIL found two alleged violations of the GDPR: (1) a lack of transparency and specificity about essential information, such as the purposes of the data processing and the categories and retention periods of personal data used for personalizing advertisements; and (2) a lack of valid consent for ads personalization.

The first alleged violation feeds the second, as the CNIL said users’ consent to ads personalization could not be sufficiently informed when the information presented to them was dispersed over several documents, requiring “sometimes up to 5 or 6 actions” to reach. Thus, the problem is not that Google provides too little information, but that it does not present the information in one place for the roughly 20 services being offered. The CNIL also found that the stated purposes of processing are too vague, meaning a user cannot tell whether Google is relying on his or her consent or on Google’s own legitimate interests as the legal basis for processing. Finally, the CNIL found that certain of Google’s ads personalization options were pre-checked, although the GDPR treats consent as unambiguous only when it comes from an affirmative action such as ticking a box that is not pre-checked, and that Google’s non-pre-checked boxes for accepting its Privacy Policy and Terms of Service were all-or-nothing consents covering all processing activities, whereas the GDPR requires specific consent for each purpose.
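To make the CNIL’s design point concrete, here is a minimal, hypothetical sketch of per-purpose consent capture: every purpose defaults to unchecked, and consent is recorded one purpose at a time rather than as a single all-or-nothing acceptance. The purpose names are invented placeholders, not Google’s actual processing purposes.

```python
from dataclasses import dataclass, field

PURPOSES = ("ads_personalization", "speech_recognition", "location_history")

@dataclass
class ConsentRecord:
    # Unambiguous consent under the GDPR: nothing is pre-checked, so every
    # purpose starts False and flips only on an affirmative user action.
    choices: dict = field(default_factory=lambda: {p: False for p in PURPOSES})

    def grant(self, purpose: str) -> None:
        """Record an affirmative opt-in for one specific purpose."""
        if purpose not in self.choices:
            raise ValueError(f"unknown purpose: {purpose}")
        self.choices[purpose] = True

    def may_process(self, purpose: str) -> bool:
        return self.choices.get(purpose, False)

consent = ConsentRecord()
consent.grant("ads_personalization")                 # user ticks exactly one box
print(consent.may_process("ads_personalization"))    # True
print(consent.may_process("location_history"))       # False: never bundled in
```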

Back in 2008, Illinois became the first state to pass legislation specifically protecting individuals’ biometric data. Following years of legal challenges, some of the major questions about the law are (hopefully) about to be resolved. Two major legal challenges, one now before the Illinois Supreme Court and another before the Court of Appeals for the Ninth Circuit, seek to clarify the foundational issues that have been a battleground for privacy litigation: standing and injury. To understand the stakes: Illinois’ Biometric Information Privacy Act requires companies that obtain a person’s biometric information to (1) obtain a written release before the information is collected and stored; (2) provide notice that the information is being collected and stored; (3) state how long the information will be stored and used; and (4) disclose the specific purpose of its storage and use. The law further provides individuals with a private right of action; however, to trigger that private right, an individual must be “aggrieved.”
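As an illustration of what the statute’s four requirements might look like inside a system, here is a hypothetical sketch of the record a company could retain before capturing any biometric data. The field names are assumptions for illustration, not statutory language.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BiometricConsent:
    subject_name: str
    data_type: str          # e.g., "fingerprint" or "faceprint"
    written_release: bool   # (1) signed release obtained before collection
    notice_given: bool      # (2) notice of collection and storage provided
    retention_until: date   # (3) how long the data will be stored and used
    purpose: str            # (4) the specific purpose for storage and use

    def collection_permitted(self) -> bool:
        """All four prerequisites must hold before capture begins."""
        return self.written_release and self.notice_given and bool(self.purpose)

record = BiometricConsent("J. Doe", "faceprint", True, True,
                          date(2021, 12, 31), "timekeeping authentication")
print(record.collection_permitted())  # True
```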

On December 4, 2018, New York Attorney General Barbara D. Underwood announced a $4.95 million settlement with Oath, Inc. (f/k/a AOL Inc.), a wholly-owned subsidiary of Verizon Communications, Inc., for alleged violations of the Children’s Online Privacy Protection Act (“COPPA”) arising from its involvement in online behavioral advertising auctions. The settlement represents the largest penalty in a COPPA enforcement matter in U.S. history.

Through its investigation, the New York Attorney General’s Office discovered that AOL collected, used, and disclosed personal information of website users under the age of 13 without parental consent, in violation of COPPA. Specifically, the company was charged with having “conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13.” The New York Attorney General found that AOL operated several ad exchanges and permitted clients to use its display ad exchange to sell ad space on COPPA-covered websites, even though the exchange was not capable of conducting a COPPA-compliant auction involving third-party bidders. AOL was charged with having knowledge that these websites were subject to COPPA because the evidence demonstrated that: (i) several AOL clients had notified AOL that their websites were subject to COPPA and (ii) AOL had reviewed the content and privacy policies of client websites and had designated certain websites as child-directed. Additionally, the New York Attorney General charged AOL with having placed ads through other exchanges in violation of COPPA: whenever AOL participated in and won an auction for ad space on a COPPA-covered website, AOL ignored any information it received from the ad exchange indicating that the ad space was subject to COPPA and collected information about the website users in order to serve them targeted advertisements.
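The last allegation describes a concrete protocol failure: real-time bidding requests can carry a flag marking inventory as child-directed (OpenRTB, for instance, defines a regs.coppa field for this), and a compliant bidder must honor it after winning the auction. Below is a minimal, hypothetical sketch of that check; apart from the flag itself, the handler and field names are assumptions.

```python
def handle_won_auction(bid_request: dict) -> dict:
    """Choose a targeting mode for won ad space, honoring the COPPA flag."""
    # OpenRTB-style signal: regs.coppa == 1 means the inventory is subject
    # to COPPA, so behavioral profiling and user-data collection must stop.
    is_coppa = bid_request.get("regs", {}).get("coppa", 0) == 1

    if is_coppa:
        return {"ad": contextual_ad(bid_request.get("site_category")),
                "collect_user_data": False}   # contextual ad only, no tracking
    return {"ad": targeted_ad(bid_request.get("user_id")),
            "collect_user_data": True}

# Stubs so the sketch is self-contained:
def contextual_ad(category):
    return f"contextual-ad-for-{category or 'general'}"

def targeted_ad(user_id):
    return f"targeted-ad-for-{user_id or 'anonymous'}"

print(handle_won_auction({"regs": {"coppa": 1}, "site_category": "games"}))
# {'ad': 'contextual-ad-for-games', 'collect_user_data': False}
```

Ignoring that flag, as AOL allegedly did, turns an otherwise routine auction win into a COPPA violation.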

A little more than six months after that day in May when privacy policy updates flooded our inboxes and the GDPR came into force, a new study of small business owners in the UK has found that many people and businesses remain essentially “clueless” about the law and its requirements. Commissioned by Aon, the study found that nearly half of the 1,000 small business owners polled are confused about the privacy and security requirements of the law, which could lead many businesses to be in breach of the GDPR without even realizing it. Some examples of potential violations reported by the businesses included paper visitor books logging all visitors to the business and viewable to subsequent visitors, training materials featuring full details of real-life case studies, the use of personal devices by employees for work purposes, and inadequate storage and disposal of paper records. The study also found that business owners were not clear on what constitutes a data breach – thinking the term did not apply to paper records or personal data that was mistakenly posted or sent to the wrong person by email or fax – nor were they clear on the notification requirements, either to the UK’s data protection authority, the Information Commissioner’s Office (“ICO”), or to affected individuals. These small business owners should avail themselves of the ICO’s recent insight into its GDPR enforcement approach from earlier this month, which indicates that ignorant non-compliance likely won’t be looked at favorably.

The Commerce Department’s Bureau of Industry and Security (“BIS”) recently published an advance notice of proposed rulemaking asking for public comment on criteria to identify “emerging technologies that are essential to U.S. national security,” for example because they have potential intelligence collection applications or could provide the United States with a qualitative intelligence advantage.

BIS is the federal agency that primarily oversees commercial exports. Over the summer, Congress passed the Export Control Reform Act of 2018, authorizing BIS to establish appropriate controls on the export of emerging and foundational technologies. Although its list is by no means exclusive or final, BIS has proposed an initial set of areas that may qualify as “emerging technologies,” including artificial intelligence/machine learning technology, brain-computer interfaces, and advanced surveillance technology, such as faceprint and voiceprint technologies. If BIS ultimately determines that a technology will be subject to export controls, the technology will likely receive a newly created Export Control Classification Number on BIS’s Commerce Control List and will require a license before export to any country subject to a U.S. embargo, including arms embargoes (e.g., China).
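The compliance logic that would follow such a determination is mechanical enough to sketch. In the hypothetical example below, the ECCN and country lists are invented placeholders, not the actual Commerce Control List or embargo lists, which are defined in the Export Administration Regulations.

```python
# Placeholder data; real determinations come from the Commerce Control List
# and the embargo provisions of the Export Administration Regulations.
CONTROLLED_ECCNS = {"0Y521"}   # stand-in for a newly created emerging-tech ECCN
EMBARGOED_COUNTRIES = {"CN"}   # stand-in for countries under a U.S. (arms) embargo

def license_required(eccn: str, destination: str) -> bool:
    """Simplified rule: controlled ECCN + embargoed destination => license needed."""
    return eccn in CONTROLLED_ECCNS and destination in EMBARGOED_COUNTRIES

print(license_required("0Y521", "CN"))  # True: obtain a license before export
print(license_required("0Y521", "GB"))  # False under this simplified rule
```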

A few months ago we posted an update on the California Consumer Privacy Act, a mini-GDPR with serious ramifications for the U.S. privacy landscape. Likely in response to the law's upcoming 2020 effective date, various groups have noticed an uptick in lobbying directed at the passage of a federal privacy law.