Artificial Intelligence & Personhood: Crash Course Philosophy #23

Today Hank explores artificial intelligence, including the distinction between weak AI and strong AI, the various ways thinkers have tried to define strong AI, such as the Turing Test, and John Searle’s response to the Turing Test, the Chinese Room. Hank also tries to figure out one of the more personally daunting questions yet: is his brother John a robot?

Get your own Crash Course Philosophy mug from DFTBA: http://store.dftba.com/products/crashcourse-philosophy-mug

The Latest from PBS Digital Studios: https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV

All other images and video are either in the public domain or from VideoBlocks or Wikimedia Commons, licensed under Creative Commons BY 4.0: https://creativecommons.org/licenses/by/4.0/

Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios

Crash Course Philosophy is sponsored by Squarespace.
http://www.squarespace.com/crashcourse

Want to find Crash Course elsewhere on the internet?
Facebook – http://www.facebook.com/YouTubeCrashC
Twitter – http://www.twitter.com/TheCrashCourse
Tumblr – http://thecrashcourse.tumblr.com
Support CrashCourse on Patreon: http://www.patreon.com/crashcourse

CC Kids: http://www.youtube.com/crashcoursekids

4 Comments

  1. The only danger with A.I. is if we teach it to be moral and to calculate social welfare for us. The day we do that is the day we perfect stupidity, because we are automatically not on the list for moral existence: no ecosystem relies on us. Teaching A.I. social welfare is what will be the end of us; we must leave that decision-making in our hands only, even if we want to prevent political corruption by replacing it with A.I. Everything else we can use A.I. for will be just fine and will improve our lives greatly; we can program it to make decisions about anything except morality. And if you think that teaching it morality AND the will to survive will fix that, then get ready for the Matrix. We have to set watertight boundaries with A.I. If we want access to superintelligent decision-making, we should hold off until we are able to modify our own brains, and even then only on the condition that DNA can be screened before birth for every person, so that no illnesses can be inherited and psychopathy can be eliminated.
