Tuesday, April 25, 2017

How future policy and regulations will challenge AI

I recently wrote about how radical the incorporation of artificial intelligence (AI) into cybersecurity will be. Technological revolutions, however, are frequently not as rapid as we think. We tend to see specific moments, from Sputnik in 1957 to the iPhone in 2007, and call them “game changing” – without appreciating the intervening stages of innovation, implementation and regulation that ultimately produce the breakthrough moment. What, then, can we expect from this iterative and less eye-catching part of AI’s development, looking not just at the technological progress but at its interaction with national policy-making processes?

I can see two overlapping but distinct perspectives. The first relates to the reality that information and communication technology (ICT) and its applications develop faster than laws. In recent years, social media and ride-hailing apps have seen this dynamic translate into a familiar four-stage regulatory experience:

  1. Innovation: R&D processes arrive at one or many practical options for a technology;
  2. Implementation: These options are applied in the real world, are refined through experience, and begin to spread through major global markets;
  3. Regulation: Governments intervene to defend the status quo or to respond to new categories of problem, e.g. cross-border data flows;
  4. Unanticipated consequences: Policy and technology’s interaction inadvertently harms one or both, e.g. the Wassenaar Arrangement’s impact on cybersecurity R&D.

AI could follow a similar path. However, unlike e-commerce or the sharing economy (but like nanotechnology or genetic engineering), AI actively scares people, so early regulatory interventions are likely. For example, a limited focus on using AI in certain sectors, e.g. defense or pharmaceuticals, might be positioned as more easily managed and controlled than AI’s general application. But could such a limit really be imposed, particularly in light of the transformative creative leaps that AI seems to promise? I would say that is unlikely – resulting in yet more controls. Leaving aside the fourth stage, the unknown unknowns of unanticipated consequences, the third phase, i.e. regulation, would almost inevitably run into trouble of its own by virtue of having to legally define something as unprecedented and mutable as AI. It seems to me, therefore, that even the basic phases of AI’s interaction with regulation could be fraught with problems for innovators, implementers and regulators.

The second, more AI-specific perspective is driven by the way its capabilities will emerge, which I feel will break down into three basic stages:

  1. Distinction: Creation of smarter sensors;
  2. Direction: Automation of human-initiated decision-making;
  3. Delegation: Enablement of entirely independent decision-making.

Smarter sensors will come in various forms, not least as part of the Internet of Things (IoT), and their aggregated data will have implications for privacy. 20th-century “dumb lenses” are already being connected to systems that can pick out number plates or human faces, but truly smart sensors could know almost anything about us, from what is in our fridge and on our grocery list to where we are going and whom we will meet. It is this aggregated, networked aspect of smarter sensors that will be at the core of the first AI challenge for policy-makers. As sensors become discriminating enough to anticipate what we might do next, e.g. in order to offer us useful information ahead of time, they create an inadvertent panopticon that the unscrupulous and the actively criminal can exploit.
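To make the point concrete, here is a toy Python sketch (the devices, timestamps and events below are entirely hypothetical) of how readings that are innocuous in isolation become predictive once fused into a single timeline:

```python
from collections import Counter

# Hypothetical fused sensor timeline: each reading is harmless on its own,
# but together they describe a person's daily routine.
readings = [
    ("fridge",   "08:01", "door_opened"),
    ("car_gps",  "08:30", "left_home"),
    ("car_gps",  "08:55", "arrived_office"),
    ("calendar", "12:00", "lunch_with_alex"),
    ("car_gps",  "17:40", "left_office"),
]

def predict_next(history, current_event):
    """Guess the event that most often follows `current_event` in history."""
    followers = Counter(
        history[i + 1][2]
        for i in range(len(history) - 1)
        if history[i][2] == current_event
    )
    return followers.most_common(1)[0][0] if followers else None

# A single fridge reading says little; the fused timeline predicts movement.
print(predict_next(readings, "left_home"))  # -> "arrived_office"
```

Even this crude frequency count anticipates behavior; a real aggregator with richer data would be far more discriminating, which is exactly the panopticon risk.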

Moving past this challenge, AI will become able to support and enhance human decision-making. Human input will still be essential, but it might be as limited as a “go/no go” on an AI-generated proposal. From a legal perspective, mens rea or the scope of liability might not be wholly thrown into confusion, as a human decision-maker remains in the loop. Narrow applications in certain highly technical areas, e.g. medicine or engineering, might be practical, but day-to-day users could be flummoxed if every choice came with unreadable but legally essential Terms & Conditions. The policy-making response may be to use tort/liability law, obligatory insurance for AI providers/users, or new risk-management systems to hedge the downside of AI-enhanced decision-making without losing the full utility of the technology.
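As a minimal sketch of what this might look like in practice – assuming nothing more than a hypothetical proposal object and an append-only audit log – the AI proposes, a human gives the go/no go, and every decision is recorded so that legal responsibility remains traceable to a person:

```python
import json
from datetime import datetime, timezone

def request_human_signoff(proposal):
    """Show the AI-generated proposal and capture a go/no-go decision."""
    print(f"AI proposal: {proposal['summary']} (confidence {proposal['confidence']})")
    return input("Approve? [go/no-go]: ").strip().lower() == "go"

def decide(proposal, audit_log):
    """Gate every AI proposal behind a human and log the decision."""
    approved = request_human_signoff(proposal)
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "proposal": proposal,
        "human_decision": "go" if approved else "no-go",
    })
    return approved

audit_log = []
proposal = {"summary": "Reroute delivery fleet around forecast storm",
            "confidence": 0.92}
if decide(proposal, audit_log):
    print("Executing approved proposal...")
print(json.dumps(audit_log, indent=2))
```

The design choice that matters here is the log, not the prompt: insurers and courts would need an evidentiary trail linking each outcome to a named human decision.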

Once decision-making is possible without human input, we begin to enter the realm of speculation. However, it is important to remember that high-frequency trading (HFT) systems in financial markets already operate independently of direct human oversight, following algorithmic instructions. Nonetheless, the suggested linkages between “flash crash” events and HFT highlight the problems policy-makers and regulators will face. It may be hard to foresee what even a “limited” AI might do in certain circumstances, and the ex-ante legal liability controls mentioned above may seem insufficient to policy-makers should a system get out of control, either in the narrow sense of escaping the control of those people legally responsible for it, or in the general sense of being beyond anyone’s control.
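To illustrate what such ex-ante controls might look like – with a deliberately naive stand-in strategy and entirely hypothetical thresholds – here is a sketch of a hard position cap and a drawdown circuit breaker that halt an algorithm regardless of what its strategy proposes:

```python
MAX_POSITION = 1_000    # hard cap on shares held, fixed before the run
MAX_DRAWDOWN = 0.05     # halt if equity falls 5% below starting capital

def run(prices, capital):
    position, cash, start = 0, capital, capital
    for price in prices:
        order = 100  # toy strategy: keep buying, whatever the market does
        # Ex-ante control 1: refuse orders that would breach the position cap.
        if position + order > MAX_POSITION:
            order = 0
        position += order
        cash -= order * price
        # Ex-ante control 2: circuit breaker on mark-to-market drawdown.
        equity = cash + position * price
        if equity < start * (1 - MAX_DRAWDOWN):
            print(f"Circuit breaker tripped at equity {equity:,.2f}; halting.")
            break
    return cash + position * prices[-1]

# A falling market: the breaker halts the algorithm before losses compound.
print(f"Final equity: {run([100, 95, 90, 80, 60, 50], 100_000):,.2f}")
```

Note that the breaker only trips after losses have already begun, which is precisely the policy-maker’s worry: such controls bound the damage an autonomous system can do, but cannot prevent it acting unexpectedly in the first place.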

These three stages suggest significant challenges for policy-makers, with existing legal processes losing their applicability as AI moves further away from direct human responsibility. The law is, however, adaptable, and solutions could emerge. In extremis we might, for example, be willing to extend the concept of “corporate persons” with a concept of “artificial persons”. Would any of us feel safer if we could assign legal liability to the AIs themselves and then sue them as we do corporations and businesses? Maybe.

In summary then, the true challenges for AI’s development may not lie solely in the big-ticket moments of beating chess masters or passing Turing tests. Instead, there will be any number of roadblocks caused by the needs of regulatory and policy-making processes still rooted in the 19th and 20th centuries. And, odd though this may sound coming from a technologist like me, that delay might be a good thing, given the potential transformative power of AI.

from Paul Nicholas
