VC funding flowing into artificial intelligence start-ups:
Over the past ten years, the US has become home to the most important hubs for artificial intelligence (AI) innovation, and Silicon Valley is one of them. The driving force behind this development is venture capital flowing almost limitlessly into US start-ups across verticals where AI is seen as having transformative potential. These verticals range from cross-sector process automation technology and autonomous vehicles, through tools for speech, image and facial recognition, to translation, automated trading and the detection of cybercrime and diseases. While in 2010 an average early-stage financing round for an AI start-up was in the region of $5m, the check size has since more than doubled, growing beyond $12m since 2017.
According to sector-specific industry reports, of the ten most funded AI start-ups worldwide, six are US companies and the remaining four are Chinese.
Examples of the most funded US AI companies include:
- Cylance, raised $297m;
- InsideSales.com, raised $251m;
- C3 IoT, raised $228m;
- Anki, raised $182m;
- Lemonade, raised $180m; and
- Sentient Technologies, raised $174m.
California hosts the majority of these top-funded AI companies, and in 2018 Silicon Valley produced new AI champions fresh off mega financing rounds: SoftBank’s Vision Fund invested $550m in San Jose-based Automation Anywhere, a software company dedicated to robotic process automation technology, and a further $375m in Mountain View-based Zume Pizza, a company using robots and AI to prepare, cook and deliver pizzas.
Trends and issues:
Notably, the US and Silicon Valley have meanwhile lost their undisputed predominance in this sector given the rise of Chinese AI companies. China is going head to head in providing a uniquely well-funded ecosystem for AI innovation. Pony.ai, a self-driving car start-up based in both Silicon Valley and China, which raised $214m in 2018, testifies to this development. And Europe – perhaps with the exception of Romania; think of the Romania-grown unicorn UiPath, which raised $153m in 2018 – is lagging behind.
It remains to be seen whether recent US law reforms, namely the Foreign Investment Risk Review Modernization Act of 2018 (FIRRMA), will slow down AI innovation in the US; other factors, such as trade tensions between the US and China, will also play a role. The reform’s objective is, inter alia, to further restrict the ability of foreign investors to invest, even with only a minority stake, in US businesses relevant to critical technology where national security is concerned. Relevant investments and sales are reviewed by the Committee on Foreign Investment in the United States (CFIUS). The investment risk review is extended to cover non-controlling investments in emerging and foundational technologies, again where national security is concerned. It seems likely that AI and robotics businesses may fall within this latter category in individual cases.
And while mega investments in AI technologies continue to push various AI solutions into commerce, society and legislators are still seeking to fully grasp the issues that come with them, let alone to develop solutions for dealing with them. These are political and social issues, e.g. the predicted loss of millions of driving jobs to autonomous vehicles and the risk of replicating discrimination, but of course also legal issues.
Rule-makers’ responses to AI issues:
Interestingly, while on 16 February 2017 the EU Parliament recommended that the EU Commission harmonize the EU regulatory framework for AI and requested no less than a proposal for a new EU directive on civil law rules on robotics, the US and Silicon Valley have so far offered only some, and certainly less concrete, regulatory responses to the fundamental issues posed by AI.
On September 7, 2018 the California legislature expressed its support for the 23 Asilomar AI Principles as guiding values for the development of AI and of related public policy. These 23 non-enforceable principles, which are meant to promote the safe and beneficial development of AI, stem from a collaboration between AI researchers, economists, legal scholars, ethicists and philosophers in Asilomar, California in January 2017.
Comparing the rule-makers’ responses to a key issue of AI – how to allocate liability for damage caused by robots – one is struck by how much more nuanced the EU Parliament’s views on the AI liability dilemma are than those of California’s legislature. On this issue the EU Parliament considers that
“in principle, once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and of its degree of autonomy, so that the greater … [its] autonomy, and the longer a robot’s training, the greater the responsibility of its trainer should be;…” and that “… at least at the present stage the responsibility must lie with a human and not a robot”.
The EU Parliament further suggests reducing the complexity of allocating responsibility by introducing an obligatory insurance scheme (see reference above, paras 56 and 57).
With its support of the 23 Asilomar AI Principles, the California legislature limits itself to stating that “Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse and actions, with a responsibility and opportunity to shape those implications” (see reference above, principle 9).
AI learning path and individual ownership:
The pattern is reminiscent of what happened to the data economy driven by online search engines and social media: Silicon Valley provides the capital and the talent to bring a thriving data economy to life, and the EU seeks to discipline its excesses within its territory by implementing the GDPR.
And will we see this again with AI? Silicon Valley lets the creative forces of its powerful, venture-backed ecosystem bring the AI economy to life while lagging behind in regulation, whereas the EU is more concerned with setting the regulatory scene while lagging behind in innovation.
The interaction of these two forces will hopefully produce great results for our global society. But as long as the rules applicable to AI technology are not yet carved into the stone of national legislation, it is all the more important for those applying it in business to take individual ownership. It will be up to the manufacturers, programmers, owners and users of AI technology to tackle the issues that come with it by developing bespoke contractual solutions that fairly allocate among them those risks that can be managed by agreement.