Artificial Intelligence Is Everywhere: A History
Our Inside Axon blog, updated every Wednesday, features posts written by Axon executives. This week's post was written by Todd Basche, Axon's EVP of Worldwide Product. Check back next week for more experiences and advice from Axon's leadership.
In 1950, the English mathematician Alan Turing published a paper entitled “Computing Machinery and Intelligence”, which was perhaps the very beginning of what would become the field of Artificial Intelligence (“AI”). For the next 40 years, computer scientists pursued the creation of intelligent software. Most thought the way to make software intelligent was to write code to handle each and every possibility the program might encounter. This hand-coded “knowledge”, imparted by a programmer who wrote out each and every case, worked fine for something simple like checkers, but the approach quickly proved to be a dead end, and the ’70s and ’80s became a bit of a dark period for the field.
In the 1990s, the approach scientists took to machine learning began to shift from knowledge-driven to data-driven. This was a huge “aha!” moment in the field of AI. It turned out there is a nearly limitless amount of data in the world, and if we could figure out how to use all that data to train a machine, we would be on to something. Scientists began creating programs that let computers analyze large amounts of data and “learn” from the results. In 2006, Geoffrey Hinton, the Canadian cognitive psychologist and computer scientist, published the work that popularized the term “deep learning”, and we were off and running.
Very early on, Google realized the power of large data sets and set out to build AI that could understand human speech. But how could they get the gigantic data set of millions of different humans speaking millions of different phrases that was required to train the AI? Well, being the smart people they are, Google created a free telephone line in 2007 called Goog-411 (800-GOOG-411). It was a free service that competed with the regular directory-assistance hotline (411), but with Goog-411 you spoke to the AI and asked for the number of, let's say, “Safeway on El Camino.”
It was very cool and futuristic, and it generally gave you the correct answer. In fact, it would also text the number back to you, or, even more amazingly, just connect you. And while we all thought it was such a great service, Google was actually using all those billions of calls and human voices from around the country to build a huge, unprecedented data set (i.e., big data) to teach their AI to understand human speech. By 2010, big data had arrived.
Today the phrase “artificial intelligence” still conjures up images of sci-fi movies with humanoid robots, and that all seems very far in the future. In reality, AI is a technology we all interact with at some level every single day. It is already part of the fabric of our lives and will only become more ingrained over the coming years. The irony is that the better AI gets, the less futuristic it will seem and the more invisible it will become; ultimately we will not only use it in every aspect of our lives, we will come to depend on it.
Google uses AI to weed out spam in your email. The traffic app Waze and Google Maps use AI to find the shortest routes and avoid traffic. These technologies seemed incredible the first time we used them, yet they quickly faded into the background as we got used to relying on them every day without much thought. This is what happens with technology over time: through repeated use, we come to depend on it without even thinking about it.
As discussed above, speech recognition has gotten pretty darn smart through training and large data sets. The technology has progressed from recognizing a single spoken word to powering the latest class of home and phone assistants. Alexa, OK Google, Siri, and Cortana are the current round of products using AI for speech recognition. These products go way beyond recognizing each word or phrase: they have evolved to understand the question the person is asking and to generate an answer that satisfies us, the humans.
Today those devices are fun to talk to, but you can easily tell you are not interacting with another person. Alexa often does not know how to answer my questions, and some of her answers are quite funny. Now imagine that Alexa is not just a little better than today, but 100X better. At that point Alexa will be a better listener than any human, and you likely will not be able to tell you are interacting with a computer. Given the exponential pace of change in this field, Alexa could be 100X smarter in only 7 years. As the technology evolves and is integrated into our homes and our cars, we will be conversing with AI machines all day, and as the novelty wears off, this too will become technology we use without a second thought.
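The “100X in only 7 years” figure implies a growth rate the post doesn't spell out. One way to reach it is to assume capability roughly doubles each year (a Moore's-law-style assumption made here purely for illustration); seven doublings give about 128X, the same order of magnitude:

```python
# Illustration only: assume capability doubles every year
# (an assumed, Moore's-law-style rate, not a measured one).
# Seven years of doubling yields 2^7 = 128, i.e. ~100X.
years = 7
improvement = 2 ** years
print(improvement)  # 128
```

A slower doubling rate, say every 18 months, would stretch the same 100X gain to roughly a decade, so the 7-year figure hinges entirely on that assumed annual doubling.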
Healthcare uses AI extensively today, and that use is also growing exponentially, from flagging drug interactions to assisting in the reading of X-rays and MRI scans. Through machine learning, these programs get smarter and more accurate with every scan they read. Think of what it takes to be an experienced radiologist. In the course of a day a radiologist may read 50 scans; if they did that every day, seven days a week, over a 20-year career, they would have knowledge gained from looking at roughly 365,000 scans!
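That career total is simple arithmetic, using the figures above (50 scans a day, every day, for 20 years, ignoring leap days):

```python
# Back-of-the-envelope count of scans read over a 20-year career,
# using the figures from the paragraph above.
scans_per_day = 50
days_per_year = 365
career_years = 20
total_scans = scans_per_day * days_per_year * career_years
print(total_scans)  # 365000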
Radiologists gain more experience with every scan they read. The same is true for AI. Machine learning algorithms can digest and learn from that number of scans in a fraction of the time, so it's easy to predict that within the next few years, AI specializing in radiology will be better at reading scans than any human on earth. Once software starts to learn, it learns faster, and from much larger data sets, than would be possible for any human.
Self-driving cars are, of course, another example of AI in our daily lives. Anyone who has seen a Tesla driving down the highway has had a moment of interacting with AI. It is not uncommon to see a Google self-driving car moving through traffic on regular neighborhood streets in the Bay Area in California. They stop at red lights and stop signs, they stop when a pedestrian walks in front of the car, and they do all of this without any human interaction. The Google cars do not even have a steering wheel. I often pull in front of them and hit my brakes just to see how well they drive, and I can report they are amazing. And this is not a hypothetical “someday”; it is happening right now, every day.
Consider that technology does not get “a little better” but 100X better, and you can see how an AI driver will soon outperform any human driver. And very quickly, we will adapt to that technology. In a few years we will summon a car with an app, and it will take us to our destination without any human interaction, all through evolved AI. Children born today will likely not even have a driver's license when they grow up. This will be their norm, and of course they will depend on the AI drivers.
Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.” I'm sure that will be true the first time you encounter a self-driving car and tell the AI driver where you want to go. But when you're watching a movie on the ride home and telling the AI about your day, you won't even be thinking about it. The same goes for our technologies. The Axon Network has become ubiquitous on police officers’ utility belts, and we’re all the safer for it.
Welcome to the future. 4-18-2017
To read about Axon's AI initiatives, check out our Axon AI page and our recent AI-related press releases: LAPD to Use Artificial Intelligence to Analyze Body Camera Footage, Former Uber Senior Tech Lead Mojtaba Solgi Joins Seattle Axon AI Team, and TASER Makes Two Acquisitions to Create "Axon AI".