
Alexander Cox is involved in a wide range of privacy issues, but focuses on cybersecurity and on communicating opaque technical information and challenges to clients and regulators. To help clients comply with the patchwork of privacy laws, Alex combines his technical and legal knowledge to create user-friendly compliance tools that put privacy principles into practice. In his downtime, Alex is a computer enthusiast who enjoys tinkering with various operating system and hardware configurations and donating his computing resources to distributed computing projects.

On or around July 17, 2015, UCLA Health suffered a cyberattack that affected approximately 4.5 million individuals’ personal and health information.  A week later, the Regents of the University of California were hit with a series of class action suits related to the breach.  After four years of litigation, the matter is coming to a close.  On June 18, 2019, the court will finally determine whether the settlement reached by the parties is fair, reasonable, and adequate.  At present, the total cost of the settlement may exceed $11 million.  This settlement is just one example of how a privacy incident can embroil an organization in costly litigation for years after the initial incident and underlines the benefits of implementing secure systems and procedures before an incident occurs.

The proposed settlement will require UCLA to provide two years of credit monitoring, identity theft protection, and insurance coverage for affected persons. UCLA will also set aside $2 million to settle claims for any unreimbursed losses associated with identity theft. UCLA will spend an additional $5.5 million, plus any remaining balance of the $2 million claims fund, on cybersecurity enhancements for the UCLA Health Network. In total, $7.5 million would be set aside to reimburse claims and enhance security procedures. However, UCLA must also cover up to $3.4 million in fees and costs for the class action plaintiffs' attorneys.

Back in 2008, Illinois became the first state to pass legislation specifically protecting individuals' biometric data. Following years of legal challenges, some of the major questions about the law are about to be resolved (hopefully). Two major legal challenges, one now before the Illinois Supreme Court and another before the Court of Appeals for the Ninth Circuit, seek to clarify the foundational issues that have been a battleground for privacy litigation — standing and injury. To understand the stakes, Illinois' Biometric Information Privacy Act requires companies that obtain a person's biometric information to: (1) obtain a written release before the information is collected and stored; (2) provide notice that the information is being collected and stored; (3) state how long the information will be stored and used; and (4) disclose the specific purpose of its storage and use. The law further provides individuals with a private right of action. However, in order to trigger that private right, an individual must be "aggrieved."

A few months ago we posted an update on the California Consumer Privacy Act, a mini-GDPR that carries serious ramifications for the U.S. privacy landscape. Likely in response to the California law's upcoming 2020 go-live, various groups have noticed an uptick in lobbying directed at the passage of a federal privacy law.

On October 18, 2018, the Food and Drug Administration (“FDA”) released draft guidance outlining its plans for the management of cybersecurity risks in medical devices. Commenters now have until March 17, 2019, to submit comments to the FDA and get their concerns on the record. More information about submitting comments can be found at the end of this post.

This draft revision will replace the FDA's existing guidance from 2014, which includes recommendations but does not attempt to classify devices by risk. The recent draft guidance takes a more aggressive posture and separates devices into those with a Tier 1 "Higher Cybersecurity Risk" and those with a Tier 2 "Standard Cybersecurity Risk."

Tier 1 devices are those that meet the following criteria:

1) The device is capable of connecting (e.g., wired or wirelessly) to another medical or non-medical product, to a network, or to the Internet; and

2) A cybersecurity incident affecting the device could directly result in harm to multiple patients.

Tier 2 covers any medical device that does not meet the Tier 1 criteria.

The FDA's recommendations vary by tier. For both Tier 1 and Tier 2 devices, the guidance addresses applying the NIST Cybersecurity Framework, providing appropriate cybersecurity documentation, and adhering to labeling recommendations.
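The two-tier test above is a simple conjunctive rule: a device lands in Tier 1 only if it is both connectable and capable of causing multi-patient harm through a cybersecurity incident. As an illustrative sketch (not an FDA tool — the class and function names are our own), the classification can be expressed as:

```python
from dataclasses import dataclass


@dataclass
class Device:
    name: str
    can_connect: bool          # wired, wireless, network, or Internet connectivity
    multi_patient_harm: bool   # an incident could directly harm multiple patients


def cybersecurity_tier(device: Device) -> int:
    """Return 1 ("Higher Cybersecurity Risk") only when BOTH draft-guidance
    criteria are met; every other device falls into Tier 2 ("Standard")."""
    if device.can_connect and device.multi_patient_harm:
        return 1
    return 2


# A networked infusion-pump hub meets both criteria; an unconnected
# thermometer meets neither, so it defaults to Tier 2.
print(cybersecurity_tier(Device("infusion pump hub", True, True)))
print(cybersecurity_tier(Device("standalone thermometer", False, False)))
```

Note that connectivity alone is not enough: a connected device whose compromise could not directly harm multiple patients still falls into Tier 2 under the draft criteria.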



Effective January 1, 2020, California will require manufacturers of “connected devices” to equip those devices with reasonable security features. An example of a reasonable security feature (provided in the bill) would be to assign each device a unique password or to prompt the user to generate a password on setup.
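The bill's two example compliance paths — a unique preprogrammed password per device, or forcing the user to generate a new credential before first access — can be sketched as follows. This is a hypothetical illustration, not a compliance implementation; the function names and the eight-character minimum are our own assumptions.

```python
import secrets


def factory_unique_password() -> str:
    """Path (a): generate a distinct random password for each device at
    manufacture time, instead of shipping a shared default credential."""
    return secrets.token_urlsafe(12)


def first_boot_setup(shipped_default: str, user_supplied: str) -> str:
    """Path (b): on first setup, refuse to keep the shipped default and
    require the user to choose a new password before granting access."""
    if user_supplied == shipped_default:
        raise ValueError("the default password must be changed before use")
    if len(user_supplied) < 8:  # illustrative minimum, not from the bill
        raise ValueError("choose a password of at least 8 characters")
    return user_supplied
```

Either path addresses the same underlying problem the law targets: fleets of devices shipping with one shared, well-known default password.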

This new law follows a

Just last month, the National Institute of Standards and Technology ("NIST") and the National Cybersecurity Center of Excellence ("NCCoE") jointly published a behemoth guide to securing Electronic Health Records ("EHR") on mobile devices.

The guide is a reaction to the growing number of issues with EHR in the mobile application context, as healthcare

Members of Shipman & Goodwin's Privacy and Data Protection team join their health law colleagues in explaining how health centers can protect their client data as health care transforms with tools like patient portals and telemedicine, in the breakout session "The Digital Era: Ensuring Data Privacy in the Age of Transformation."

A major trigger for passing the new California privacy law was the recent Cambridge Analytica scandal, in which Facebook allowed Cambridge Analytica to gather large quantities of personal information from users without their consent and, in many cases, in conflict with Facebook's own privacy policies. See California Consumer Privacy Act § 2(g). These ill-gotten data were then harnessed to build a database for targeted advertising, access to which Cambridge Analytica sold for various purposes.

Today, Facebook relies on user data to sell targeted ads that make up nearly all of its revenue. Facebook's use of consumer data to target ads is its core business, and California's new law appears to take direct aim at that business by providing consumers a "right to delete" that could require Facebook and others to delete both directly gathered personal information and the inference-based personal information they rely on to target ads.

