Please join Shipman & Goodwin attorneys Bill Roberts and Alfredo Fernández for this complimentary CLE webinar, which will discuss the unique threats presented by personnel with legitimate access to controlled company information and how to proactively mitigate the risks of data misuse. Topics will include:

  • Insiders vs. intruders vs. knuckleheads
  • Motivators, risk factors and indicators
  • Technical and non-technical assessment factors
  • Understanding vulnerable categories of data
  • International factors
  • Best practices in a breach scenario
  • Proactive steps to protect the company
When: July 17, 2019, 12:00 PM – 1:00 PM EDT

Where: Webinar


Continuing Legal Education (CLE)

This CLE program has been approved in accordance with the requirements of the New York CLE Board for a maximum of 1.0 credit hour, of which 1.0 can be applied toward the Professional Practice requirement. This program is appropriate for both newly admitted and experienced attorneys.

Neither the Connecticut Judicial Branch nor the Commission on Minimum Continuing Legal Education approves or accredits CLE providers or activities. It is the opinion of Shipman & Goodwin that this activity qualifies for one hour toward your annual CLE requirement in Connecticut, including zero hour(s) of ethics/professionalism, but is subject to change based on actual instruction/attendance time.

Last Friday, OCR issued a new fact sheet that outlines the ten circumstances where a Business Associate would have direct liability under HIPAA.

  1. Failure to provide the Secretary with records and compliance reports; cooperate with complaint investigations and compliance reviews; and permit access by the Secretary to information, including protected health information (PHI), pertinent to determining compliance.
  2. Taking any retaliatory action against any individual or other person for filing a HIPAA complaint, participating in an investigation or other enforcement process, or opposing an act or practice that is unlawful under the HIPAA Rules.
  3. Failure to comply with the requirements of the Security Rule.
  4. Failure to provide breach notification to a covered entity or another business associate.
  5. Impermissible uses and disclosures of PHI.
  6. Failure to disclose a copy of electronic PHI (ePHI) to either the covered entity, the individual, or the individual’s designee (whichever is specified in the business associate agreement) to satisfy a covered entity’s obligations regarding the form and format, and the time and manner of access under 45 C.F.R. §§ 164.524(c)(2)(ii) and (c)(3)(ii), respectively.
  7. Failure to make reasonable efforts to limit PHI to the minimum necessary to accomplish the intended purpose of the use, disclosure, or request.
  8. Failure, in certain circumstances, to provide an accounting of disclosures.
  9. Failure to enter into business associate agreements with subcontractors that create or receive PHI on their behalf, and failure to comply with the implementation specifications for such agreements.
  10. Failure to take reasonable steps to address a material breach or violation of the subcontractor’s business associate agreement.

This list is a helpful resource for Business Associates looking to review existing compliance programs. While many of these points will be well understood by Business Associates, some will require a detailed analysis to make sure the Business Associate is compliant. For example, compliance with the Security Rule can differ based on both the scale and types of activities of a Business Associate. A robust compliance program will address all ten points and provide guidance for a Business Associate in ambiguous circumstances.

Our Take:

By providing an explicit list of direct risks, OCR is signaling to Business Associates that an acceptable compliance program must cover the above ten points. Considering the specificity of OCR’s fact sheet, take a moment and review your existing policies to confirm that they address the ten points. If they do not, we recommend taking action to address any holes in your existing program.

In what can be seen as a sign of growing unease with the use of facial recognition and other biometric identification, at least by the government, the San Francisco Board of Supervisors voted Tuesday to ban the use of facial recognition software by city agencies, including law enforcement. The move, the first for a U.S. city, came as part of an ordinance that added public oversight to the city’s procurement and deployment of surveillance technology more broadly. While the ordinance does nothing to regulate private individuals’ or companies’ development, use or sale of facial recognition technology, privacy advocates are nevertheless praising the move as a stand against growing government surveillance, and the opening salvo in the regulation of what some see as a currently flawed and potentially discriminatory technology. Continue Reading San Francisco Bans City’s Use of Facial Recognition Technology

On or around July 17, 2015, UCLA Health suffered a cyberattack that affected approximately 4.5 million individuals’ personal and health information.  A week later, the Regents of the University of California were hit with a series of class action suits related to the breach.  After four years of litigation, the matter is coming to a close.  On June 18, 2019, the court will finally determine whether the settlement reached by the parties is fair, reasonable, and adequate.  At present, the total cost of the settlement may exceed $11 million.  This settlement is just one example of how a privacy incident can embroil an organization in costly litigation for years after the initial incident and underlines the benefits of implementing secure systems and procedures before an incident occurs.

The proposed settlement will require UCLA to provide two years of credit monitoring, identity theft protection, and insurance coverage for affected persons.  UCLA will also set aside $2 million to settle claims for any unreimbursed losses associated with identity theft.  UCLA will spend an additional $5.5 million, plus any remaining balance of the $2 million claims fund, on cybersecurity enhancements for the UCLA Health Network.  In total, $7.5 million would be set aside to reimburse claims and enhance security procedures.  UCLA must also cover up to $3.4 million in fees and costs for the class action plaintiffs’ attorneys. Continue Reading UCLA Cyberattack, Four Years Later
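
To see how those components approach the $11 million figure noted above, here is a rough tally. The first two lines use the figures from the proposed settlement; attributing the remaining gap to the credit monitoring, identity-protection and administration costs, which are not itemized here, is our assumption.

```latex
% Rough tally of the proposed settlement's components (final line is an assumption)
\begin{align*}
\underbrace{\$2.0\,\text{M}}_{\text{claims fund}} + \underbrace{\$5.5\,\text{M}}_{\text{security enhancements}} &= \$7.5\,\text{M} \\
\$7.5\,\text{M} + \underbrace{\$3.4\,\text{M}}_{\text{attorneys' fees and costs (up to)}} &= \$10.9\,\text{M} \\
\$10.9\,\text{M} + \underbrace{\text{credit monitoring, identity protection, administration}}_{\text{not itemized; assumed}} &\gtrsim \$11\,\text{M}
\end{align*}
```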

Shipman & Goodwin attorney William Roberts will be joined by attorney Marco Mello Cunha of the Brazilian firm Tess Advogados, as they discuss data privacy and security issues and analyze global trends in Brazil and the United States, two of the world’s top ten economies.  Today’s businesses collect, transfer and store a wide range of data on a daily basis. These records often contain vast amounts of sensitive and personal information which, if lost or misused, could create significant business risk. To be successful in the modern environment, businesses need to know how to use, protect and transfer their data in an efficient and compliant manner.

Topics include:

  • An overview of data privacy laws in Brazil and the United States (including Brazil’s new general data protection law which will become effective in 2020)
  • Understanding global trends and cross-border transfer of data
  • Taking a proactive approach to privacy and data security issues

Glenn Cunningham, Shipman & Goodwin Partner and Chair of the Board of Directors of Interlaw, will introduce the program and explain how the Interlaw relationship allows a global network of member firms to offer clients high-quality legal advice via a single point of contact.

When: May 8, 2019, 2:00 PM – 3:00 PM EDT
Where: Webinar

Last week, the Supreme Court remanded a privacy class action settlement to the Ninth Circuit over concerns about the named plaintiffs’ standing. Specifically, the Court ordered the Ninth Circuit to conduct a Spokeo analysis to determine whether any of the three named plaintiffs suffered a concrete injury as a result of Google’s alleged violation of the Stored Communications Act. As a brief reminder, the Court held in Spokeo, Inc. v. Robins in 2016 that a technical or procedural violation of a statute is insufficient to meet the “concrete injury” requirement of Article III standing absent actual harm to the plaintiff. Even where Congress has created a private right of action for violations of a statute, the Court held, that does not mean a plaintiff has automatically suffered actual harm or an actual injury from the statutory violation. In the case at bar, the Court said it could not rule on the validity of the class action settlement before the Ninth Circuit addressed the standing issues presented by Spokeo, issues the Court itself declined to decide.

In another branch of government, freshman Representative Katie Porter highlighted the Spokeo standard without naming it last month in a hearing of the Financial Services Committee, and also seemed to call its conclusion into question. During a round of questioning of a CEO facing a data breach class action lawsuit, Rep. Porter asked him why the company’s lawyers were arguing in court filings that the data breach did not cause harm to consumers, when the CEO himself was clearly uncomfortable with the idea of sharing his own personal information with the Committee. Continue Reading Congress, SCOTUS and the States Consider Harm

The U.S. Department of Health and Human Services (“HHS”) recently released a publication entitled “Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients,” which sets forth a “common set of voluntary, consensus-based, and industry-led guidelines, best practices, methodologies, procedures, and processes” to improve cybersecurity in the health care and public health sector. This publication was developed by a task group consisting of more than 150 health care and cybersecurity experts from the public and private sectors and focuses upon the “five most prevalent cybersecurity threats and the ten cybersecurity practices to significantly move the needle for a broad range of organizations” in the health care industry.

The five cybersecurity threats addressed in the publication are: (i) e-mail phishing attacks; (ii) ransomware attacks; (iii) loss or theft of equipment or data; (iv) insider, accidental or intentional data loss; and (v) attacks against connected medical devices that may affect patient safety.

The publication recognizes that cybersecurity recommendations will largely depend upon an organization’s size. Therefore, the publication is broken up into two separate technical volumes that are intended for IT and IT security professionals: (i) Technical Volume 1, which discusses ten cybersecurity practices for small health care organizations and (ii) Technical Volume 2, which discusses ten cybersecurity practices for medium-sized and large health care organizations. Specifically, the ten cybersecurity practices described in the Technical Volumes are as follows: Continue Reading HHS Warns Health Care Organizations of Cybersecurity Threats

The popular social media app Musical.ly (now known as TikTok), which allows users to make videos of themselves lip-syncing to songs, recently entered into a record $5.7 million settlement with the Federal Trade Commission (“FTC”) to resolve allegations of illegal collection of children’s data in violation of the Children’s Online Privacy Protection Act of 1998 (“COPPA”).

To register for the Musical.ly app, users provide their email address, phone number, username, first and last name, short bio, and a profile picture. In addition to allowing users to create music videos, the Musical.ly app provides a platform for users to post and share the videos publicly. The app also had a feature whereby a user could discover a list of other users within a 50-mile radius with whom the user could connect and interact.

The FTC’s complaint alleged that Musical.ly was operating within the purview of COPPA in that (i) the Musical.ly app was “directed to children” and (ii) Musical.ly had actual knowledge that the company was collecting personal information from children. Specifically, the complaint alleged that the app was “directed to children” because the music library includes songs from popular children’s movies and songs popular among children and tweens. Furthermore, the FTC asserted that Musical.ly had actual knowledge that children under the age of 13 were registered users of the app because: (i) in December 2016, a third party publicly alleged in an interview with the cofounder of Musical.ly, Inc. that seven of the app’s most popular users appeared to be children under age 13; (ii) many users self-identify as under 13 in their profile bios or provide school information indicating that they are under the age of 13; and (iii) since at least 2014, Musical.ly received thousands of complaints from parents of children under the age of 13 who were registered users of the app. Continue Reading Fines for COPPA Violations Continue to Trend Upward

On March 1, 2017, the New York State Department of Financial Services’ (“DFS”) first-in-nation Cybersecurity Regulations, designed to protect consumers and financial institutions from cyber-attacks, went into effect (the “Regulations”). See 23 NYCRR Part 500. The “first-in-nation” nature of the Regulations is extremely important to note: the Regulations apply not only to a “Covered Entity” (as defined in the Regulations) based in New York, but also to those that merely do business in New York. The Regulations also do not cover just financial institutions, but any business entity covered by the banking law, insurance law, or financial services law. As such, the impact of the Regulations is wide-ranging. On August 22, 2017, we published an alert relating to, and providing an overview of, the Regulations, and on February 6, 2018 and August 28, 2018 we published follow-ups highlighting the next rounds of disclosures required under the Regulations. Shipman & Goodwin LLP Data Privacy Team members Bill Roberts and Damien Privitera also conducted a CLE webinar – Compliance Checkup: NY DFS Cybersecurity Regulations – on August 7, 2018, which can be accessed here. Continue Reading NYSDFS Upcoming Deadlines Fast Approaching: Next Key Dates are February 15, 2019 and March 1, 2019

Last week, the French data privacy authority fined Google €50 million (about $57 million) for what it called “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.” The Commission Nationale de l’Informatique et des Libertés (CNIL) said that it began its investigation of Google on June 1, 2018, after receiving complaints from two digital rights advocacy groups on May 25 and May 28, 2018, just as the GDPR was taking effect. In response, the CNIL set out to review the documents available to a user when creating a Google account during Android configuration. Upon that review, the CNIL found two alleged violations of the GDPR: (1) a lack of transparency and specificity about essential information such as the purposes of the data processing and the categories and retention periods of personal data used for personalizing advertisements; and (2) a lack of valid consent for ads personalization.

The first alleged violation feeds the second: the CNIL said users’ consent to ads personalization could not be sufficiently informed when the relevant information was dispersed over several documents, requiring “sometimes up to 5 or 6 actions” to reach. Thus, it is not that Google provides too little information, but that it does not present the information in one place for the roughly 20 services being offered. The CNIL also found the stated purposes of processing too vague, meaning a user cannot tell whether Google is relying on his or her consent or on Google’s own legitimate interests as the legal basis for processing. Finally, the CNIL found that certain of Google’s ads personalization options were pre-checked, even though the GDPR treats consent as unambiguous only when it results from an affirmative action such as checking a box that is not pre-checked, and that Google’s boxes for accepting its Privacy Policy and Terms of Service operated as all-or-nothing consents for all processing activities, whereas the GDPR requires specific consent for each purpose. Continue Reading Google Fined by French Regulators for GDPR Gaps
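
In practical terms, the decision describes what consent collection should look like under the GDPR: nothing pre-checked, a separate choice for each purpose, and processing for a purpose gated on that specific choice. As a rough illustration only (this is not Google’s implementation, and the names below – ConsentPurpose, grantConsent, canProcess – are hypothetical), here is a minimal TypeScript sketch of purpose-specific, opt-in consent:

```typescript
// Hypothetical sketch of purpose-specific, opt-in consent along the lines the CNIL describes.
// All names are illustrative; none of this reflects any real vendor's code.

type ConsentPurpose = "ads_personalization" | "speech_recognition" | "location_history";

interface ConsentRecord {
  // Every purpose defaults to false: no pre-checked boxes, consent only by affirmative action.
  readonly choices: Record<ConsentPurpose, boolean>;
  readonly collectedAt: Date;
}

function emptyConsent(): ConsentRecord {
  return {
    choices: {
      ads_personalization: false,
      speech_recognition: false,
      location_history: false,
    },
    collectedAt: new Date(),
  };
}

// Consent is recorded per purpose, never as an all-or-nothing bundle with the Terms of Service.
function grantConsent(record: ConsentRecord, purpose: ConsentPurpose): ConsentRecord {
  return {
    ...record,
    choices: { ...record.choices, [purpose]: true },
    collectedAt: new Date(),
  };
}

// Processing for a purpose is allowed only if that specific purpose was affirmatively opted into.
function canProcess(record: ConsentRecord, purpose: ConsentPurpose): boolean {
  return record.choices[purpose] === true;
}

// Usage: a user who accepts the Terms of Service but never ticks the ads box
// has not consented to ads personalization.
let consent = emptyConsent();
console.log(canProcess(consent, "ads_personalization")); // false
consent = grantConsent(consent, "ads_personalization");  // user ticks the (unchecked) box
console.log(canProcess(consent, "ads_personalization")); // true
```

The design point is simply that each purpose carries its own flag that starts false and flips only on an affirmative user action, so accepting the Terms of Service alone never authorizes ads personalization.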