Some Members of U.K.’s House of Lords Argue That Cambridge Analytica Scandal ‘Highlights Need for AI Regulation’
Certain members of the British House of Lords have argued that the Cambridge Analytica investigation has brought to light the need to address how data is used in private-sector AI systems, reports The Guardian. A House of Lords committee has put forth five ethical principles to guide AI regulation, expressing particular concern about “data monopolies” — companies “with such a grip on the data sources that they can build better AI than anyone else.”
A growing body of research has demonstrated that algorithms and other types of software can be discriminatory, yet the opaque nature of these tools makes it difficult to craft specific regulations. Working out the legal, ethical, and philosophical implications of these powerful decision-making aids, while still preserving their usefulness, is a complex challenge.
The European Union’s executive body, the European Commission, has taken a first pass at drawing up a strategy to respond to the myriad socio-economic challenges posed by artificial intelligence — including steps intended to boost investment, support education and training, and establish an ethical and legal framework for steering AI development by the end of the year.
VentureBeat takes on the trend in the healthcare industry of using AI to increase efficiency and decrease costs. The article dives specifically into how AI can help with “population health management,” “evidence-based medicine,” and “medication research and discovery.” However, it also recognizes certain hurdles, including digital-privacy compliance — a topic previously addressed by DWT partner Rebecca Williams in her article “Privacy Please: HIPAA and Artificial Intelligence – Part I.”