Early data models lacked the volume of clean data required to perfect their models and effectively "learn." Only recently, with the surge of data readily available via the internet, do the models have access to the data they need to get smarter. In 2009, Stanford University computer scientist Andrew Ng and Google Fellow Jeff Dean led a Google research team to create a massive "neural network" modeled after the human brain, comprising thousands of processors and more than 1 billion connections. Then, they fed the machine random images of cats, pulled from millions of online videos. By identifying commonalities and filtering the images through its brainlike neural network, the machine essentially taught itself how to identify an image of a cat. It was an astounding achievement for AI, and one that wouldn't have been possible just a few years earlier, without easy access to those millions of thumbnail images.

But there was another limiting factor: processing power. In the earliest days of computing, machines filled entire rooms in university buildings. The improved ability to put more transistors on integrated circuits meant processing capacity doubled every two years (thank Gordon Moore and his handy law for that observation), which packed more power into smaller boxes, taking computers out of universities and businesses and putting them in consumers' hands.

"Feed enough cat photos into a neural net, and it can learn to identify a cat. Feed it enough photos of a cloud, and it can learn to identify a cloud."
— Wired, January 2016: "Artificial Intelligence Finally Entered Our Everyday World"

AI for CRM: Everything You Need to Know 10