California Consumer Privacy Act: A Rapid Q&A

DWT’s Privacy and Security Team provides an initial take (with further analysis to follow) on the recently enacted privacy measure in California that may have significant ramifications for the use and development of the datasets necessary to build robust AI systems. Known formally as the California Consumer Privacy Act of 2018, the measure creates extensive notice, opt-out/opt-in, access, and erasure rights for consumers vis-à-vis businesses that collect their personal information, as well as a private right of action in the case of a data breach. The opt-out procedures appear to permit differential pricing for consumers who choose to opt out, but they impose ambiguous constraints on the discounts and financial incentives that businesses may offer to discourage opting out. If the Act spurs widespread opt-out (or erasure), the data commonly used to build AI systems could become much less representative of the larger population – which could, in turn, lead to unintended bias in some of these AI systems.


Tech Firms Move To Put Ethical Guard Rails Around AI

Wired explores how Microsoft, Facebook, Google, and other technology firms are creating ethical processes and principles to guide the development and implementation of AI technology. For example, Microsoft has created an internal ethics board to help navigate ethical issues arising in the use of AI and to advise other business units on such issues, such as when it helped improve the company’s facial recognition service so that it analyzes faces without unintended bias. Some in academia are now advocating for the hiring of AI ethics officers and the creation of review boards within leading companies, even as these same academics argue that some form of governmental oversight and standards is necessary.


Microsoft is Creating an Oracle for Catching Biased AI Algorithms

The MIT Technology Review dives into Microsoft’s development of a tool to automatically identify bias in a range of different AI algorithms, thereby addressing the risk that bias could become automated and deployed at scale. These are the same risks highlighted in DWT attorney Robin Nunn’s blog post on how the financial services industry may address concerns of bias in that sector.


New York City Announces Task Force to Find Biases in Algorithms

New York City Mayor de Blasio announced the creation of an Automated Decision Systems Task Force, which will explore how New York City uses algorithms. The task force, the first of its kind in the U.S., will work to develop a process for reviewing “automated decision systems” through the lens of equity, fairness, and accountability. The Task Force arises from the City’s adoption, in December 2017, of a law to examine how city agencies use algorithms to make decisions, and how agencies may address instances where people are harmed by agencies’ automated decision systems. The newly created Task Force will fulfill the mandate of the new law and is expected to issue a report in December 2019.