The Future of Artificial Intelligence

Hey everyone, welcome to the Future of AI. We’ve covered a lot of ground together, from the basics of neural networks to game playing, language modeling, and algorithmic bias. We’ve even experimented with code in labs! And as we’ve been learning about different parts of artificial intelligence as a field, there have been a couple of themes that keep coming up. First, AI is in more places than ever before. The machine learning professor Andrew Ng says that “Artificial Intelligence is the New Electricity.”

This is a pretty bold claim, but lots of governments are taking it seriously and planning to grow education, research, and development in AI. China’s plan alone calls for over 100 billion U.S. dollars in funding over the next 10 years. Second, AI is awesome. It can help make our lives easier and sort of gives us superpowers. Who knows what we can accomplish with the help of machine learning and AI? And third, AI doesn’t work that well yet. I still can’t ask my phone or any “smart” device to do much, and we’re far away from personal robot butlers.

So what’s next? What’s the future of AI? One way to think about the future of AI is to consider milestones AI hasn’t reached yet. Current soccer robots aren’t quite ready to take on human professionals, and Siri still has a lot of trouble understanding exactly what I’m saying. For every AI system, we can try to list what abilities would take the current technology to the next level. In 2014, for example, the Society of Automotive Engineers attempted to do just that for self-driving cars.

They defined five levels of automation. At each additional level, the AI controlling the car can do more without human help. At level 1, cruise control automatically accelerates and decelerates to keep the car at a constant speed, but everything else is on the human driver. At level 3, the car is basically on its own. It’s driving, monitoring its surroundings, navigating, and so on… but a human driver will need to take over if something goes wrong, like really bad weather or a downed power line.
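The levels described above can be sketched as a tiny lookup table. This is a minimal, illustrative paraphrase of the scheme as summarized in this episode, not the official SAE J3016 wording, and the `human_attention_required` helper is a made-up name for this sketch.

```python
# Rough paraphrase of the five SAE-style driving-automation levels (not official wording).
SAE_LEVELS = {
    1: "Driver assistance, like cruise control; the human does everything else",
    2: "Partial automation; the human monitors the road at all times",
    3: "Conditional automation; a human must take over when something goes wrong",
    4: "High automation; no human needed, but only in limited conditions",
    5: "Full automation; no human driver needed at all",
}

def human_attention_required(level: int) -> bool:
    """Through level 3, a human must stay ready to intervene."""
    return level <= 3

print(human_attention_required(1))  # True: cruise control still needs an alert driver
print(human_attention_required(5))  # False: sit back and have a smoothie
```

The same idea generalizes: for any AI system, you could try to write down what each "level" of capability would require, which is exactly the exercise the episode suggests for assistants and other systems below.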

And at level 5, the human driver can just sit back, have a smoothie, and watch Crash Course AI while the car takes them to work through rush-hour traffic. And obviously, we don’t have cars with the technology to do all this yet. But these levels are a way to evaluate how far we’ve come, and how far our research still has to go. We can even think about other AIs using “levels of automation.” Like, for example, maybe we have level 1 AI assistants right now that can set alarms for us, but we still need to double-check their work.

But what are levels 2 through 5? What milestones would need to be achieved for AI to be as good as a human assistant? What would be milestones for computer vision or recommender systems or any of the other topics in this course? We’d love to read your ideas in the comments! Sometimes it’s useful to think about different kinds of AI on their own as we make progress on each very difficult problem. But sometimes people try to imagine an ultimate AI for all applications: an Artificial General Intelligence, or AGI.

To understand why there’s such an emphasis on being “general,” it can be helpful to remember where all this AI stuff first started. For that, let’s go to the Thought Bubble. Alan Turing was a British mathematician who helped break the German Enigma codes during World War II, and helped define the mathematical theory behind computers.

In his paper “Computing Machinery and Intelligence” from 1950, he introduced the now-famous “Turing Test,” or “The Imitation Game.” Turing proposed an adaptation of a guessing game. In his version, there’s an “interrogator” in one room, and a human and a machine in the other.

The interrogator talks to the hidden players and tries to figure out which is a human and which is a machine. Turing even gave a series of talking points, like: “Please write me a sonnet on the subject of the Forth Bridge.” “Add 34,957 and 70,764.” “Do you play chess? I have K at K1 and no other pieces. You have only K at K6 and R at R1. It’s your move. What do you play?” The goal of The Imitation Game was to test a machine’s intelligence about any human thing, from math to poetry.

We wouldn’t just judge how “real” a robot’s fake human skin looks. As Turing put it: “We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane.” This idea suggests a unified goal for AI: an artificial general intelligence. But over the last 70 years, AI researchers have focused on subfields like computer vision, knowledge representation, economic markets, planning, and so on.

Thanks, Thought Bubble! And even though we’re not sure if an Artificial General Intelligence is possible, many communities are doing interdisciplinary research, and many AI researchers are taking baby steps to combine specialized subfields. This involves projects like teaching a robot to understand language, or teaching an AI system that models the stock market to read the news and better understand market fluctuations.

To be clear, most of AI is still science fiction… we’re nowhere near Blade Runner, Her, or any similar movies. Before we get too excited about combining everything we’ve built to achieve AGI, we should remember that we still don’t know how to make specialized AIs for most problems. Some subfields are making progress more quickly than others, and we’re seeing AI systems pop up in lots of places with awesome potential. To understand how AI might be able to change our lives, AI professors Yolanda Gil and Bart Selman put together the Computing Research Association’s AI Roadmap for the next 20 years.

They predict AI reducing healthcare costs, personalizing education, accelerating scientific discoveries, helping national defense, and more. Part of the reason they expect so much progress is that more people than ever (including us!) are learning how to build AI systems. And all of these problems have lots of data to train new algorithms.

It used to be hard to collect training data: you had to go to libraries to copy facts and transcribe books. But now, a lot of data is already digital. If you want to know what’s happening on the other side of the planet, you can download newspapers or grab tweets from the Twitter API. Interested in hyperlocal weather prediction? You can combine free data from the weather service with personal weather stations to help know when to water your plants.
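The watering idea above can be sketched in a few lines. This is a toy illustration, not a real integration: the forecast and sensor values are made-up stand-ins for what you would actually fetch from a weather-service API and a personal weather station, and the thresholds in `should_water` are arbitrary.

```python
# A minimal sketch of hyperlocal watering logic, with made-up illustrative data.
# A real version would fetch the forecast from a weather-service API and the
# soil reading from a personal weather station; both values here are pretend.

def should_water(forecast_rain_mm: float, soil_moisture_pct: float) -> bool:
    """Water only if little rain is expected and the soil is already dry."""
    return forecast_rain_mm < 2.0 and soil_moisture_pct < 30.0

forecast = {"rain_mm": 0.5}          # pretend public-forecast value
station = {"soil_moisture_pct": 22}  # pretend backyard-sensor reading

print(should_water(forecast["rain_mm"], station["soil_moisture_pct"]))  # True
```

Combining two free data sources and a simple rule is exactly the kind of small-scale project the next paragraph describes maker communities building.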

And if you feed that data into a robot gardener, you could build a fully automated, weather-knowing, plant-growing, food-making garden! Maker communities around the globe are combining data, AI, and cheap hardware to create the future and personalize AI technologies. While imagining an AI/human utopia is exciting, we have to be realistic, too. In many industries, automation doesn’t only enhance human activities; it can replace humans entirely.

Truck driver, delivery driver, and tractor driver are some of the most common jobs in the US as of 2014. If self-driving vehicles revolutionize transportation in the near future, will all those people lose their jobs? We can’t know for sure, but Gödel Prize-winning computer science professor Moshe Vardi points out that this is already the trend in some industries. For example, U.S. manufacturing output will likely keep rising, but manufacturing jobs have been decreasing a lot.

Plus, computers use energy, and that means we’re not getting any benefits from AI for free. Massive numbers of machines running these algorithms can have a substantial carbon footprint. On top of that, as we’ve discussed, you have to be pretty careful when it comes to trusting AI systems, because they often end up with all kinds of biases you may not want. So we have to weigh the benefits of massive AI deployment against the costs. In a now-famous story from a few years ago, Target figured out a woman was pregnant based on her shopping history, and sent her maternity coupons.

But she was still in high school, so her family saw the mail, even though she hadn’t told them. Do we want our data being used like this, potentially revealing personal details? Or what about the government? Should it be allowed to track people with facial recognition installed on cameras at intersections? When we provide companies location data from our phones, we could help them build better traffic models so we can get places faster. Cities could improve bus routes, but it also means… someone… is… always… watching you. AI could also track your friends and family, where you shopped, where you ate, and who you hung out with.

If statistics have shown that people who leave home late at night are more likely to commit a crime… and an AI knows you left (even though it’s just for some late-night cookie dough), should it call the police to watch you, just in case? Sooo, we can go down any number of scary thought experiments. And there’s a lot to consider when it comes to the future of AI.

AI is a really new tool, and it’s great that so many people have access to it, but that also means there are very few laws or protections about what they can and can’t do. Innovations in AI have awesome potential to make positive changes, but there are also plenty of risks, especially if the technology advances faster than the average person’s understanding of it.

It’s probably most accurate to say that the future is… complicated. And the most important thing we can do is be educated and involved in AI as the field changes. Which we’re doing right now! In the Crash Course AI labs, we used some of the same machine learning technologies that the biggest companies use in their products, and that universities rely on for cutting-edge research.

So when we see a company or government rolling out a new technology, we know what questions to ask: Where did they get their data? Is this even a situation where we want AI to help humans? Is this the right tool to use? What privacy are we giving up for this cool new feature? Is anyone auditing this model? Is this AI really doing what the developers hoped it would?
