AI algorithms are only as good as their human creators, and early missteps have already raised concerns over cultural and racial bias and discrimination. This New York Times piece on Paul Allen and his initiative to teach “common sense” to machines illustrates a related limitation of current AI technology: although neural networks and other machine learning tools excel at narrowly defined tasks, these systems have not yet matured to the point where they can reason their way through otherwise simple problems.
This article from the Robot Report reveals serious concerns about the antiquated framework (developed in the 1970s) that the US Food and Drug Administration (FDA) uses to review new AI-powered medical devices and software: “It was the last question of the night, and it hushed the entire room. …”
Law360 reports on efforts to use the Computer Fraud and Abuse Act (CFAA), along with privacy and competition claims, to challenge the use of bots to scrape data from public sites, raising important policy questions about how the data sets that power AI are assembled.
Early AI missteps have already stirred regulators’ interest in intervening in the market. The Wall Street Journal reports on current efforts to create self-governance and ethical standards for AI, as well as early moves in the EU and US to impose regulatory constraints.
Ever wonder how all of those cutting-edge machine learning tools make their way into commercial operations? This Forbes article explains how auto insurer Progressive is using Microsoft’s Cognitive Services to build its customer-service chatbot, Flo.