A very interesting podcast about artificial intelligence

I just watched a nice interview by Lex Fridman with Yann LeCun that I'd like to share with you.

It's very interesting, especially Yann's thoughts on AGI, which he doesn't think will be possible and which he believes should rather be called human-level intelligence, on the grounds that even we humans have a special kind of intelligence rather than a general one.

The hardware is built in many ways to support … the locality of the real world. Yes. That’s specialization!

He also talks about self-supervised learning, as well as the importance of AI models that learn from the real world and can correctly answer a question like: What is the cause of the wind?
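
As a side note, this is how I picture the self-supervised idea in its simplest form: hide part of the input and train a model to predict the hidden part from what remains, with no human labels involved. The toy data and the linear model below are purely my own illustration (using only NumPy), not something from the podcast.

    # A toy self-supervised setup (my own illustration): hide half of each
    # observation and learn to predict the hidden half from the visible half.
    # No human labels are involved -- the "labels" come from the data itself.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "world": 8-dimensional observations driven by 3 latent factors,
    # so the visible half really does carry information about the hidden half.
    basis = rng.normal(size=(3, 8))
    data = rng.normal(size=(1000, 3)) @ basis

    visible, hidden = data[:, :4], data[:, 4:]   # mask out half of each observation
    W = np.zeros((4, 4))                         # a simple linear predictor

    for _ in range(5000):                        # plain gradient descent on MSE
        pred = visible @ W
        W -= 0.05 * visible.T @ (pred - hidden) / len(data)

    print("reconstruction error:", np.mean((visible @ W - hidden) ** 2))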

I'll have to watch the two films Yann talks about (2001: A Space Odyssey and Her) when I have time, but now that I've just finished training my 3 DL models, I have to analyse the results and document them.

Later Edit:

I'd like to share this Facebook post by Pawel Cisio with you:

I’ve listened to a recent podcast with Yann LeCun (one of the fathers of AI, most commonly known for Convolutional Neural Networks – their application to optical character recognition).

After the session, I collected a bunch of key takeaways, which I decided to share as a brief overview of the 76 minutes. If you're interested in getting into details, feel free to listen to the full discussion attached in this post.

 humans, in fact, don't have a "general intelligence" themselves; humans are more specialised than we like to think
— Yann doesn’t like the term AGI (Artificial general intelligence), as it assumes human intelligence is general
— we believe our brain is capable of adjusting to anything only because we can't imagine tasks that are outside of our comprehension
— there is an infinite number of things we're not wired to perceive; for example, we think of gas behaviour as a simple equation, PV = nRT
—— when we reduce the volume, the temperature goes up and the pressure goes up (for a perfect gas at least), but that's still a tiny, tiny number of bits compared to the complete information about the state of the entire system, which would be the position and momentum of every molecule (see the quick numeric illustration after this list)
 to create AGI (Human Intelligence), we need 3 things (for each you can find examples)
1) the first one is an agent that learns predictive models that can handle uncertainty
2) the second one is some kind of objective function that you need to minimise (or maximise)
3) and the third one is a process that can find the right sequence of actions needed to minimise the objective function, using the learned predictive models of the world (a toy sketch of how these three pieces fit together follows after this list)
 to test AGI, we should ask a question like "what is the cause of wind?" If she (the system) answers that it's because the leaves on the tree are moving and that creates the wind, she's on to something. In general, these are questions that reveal the ability to do
— common sense reasoning about the world
— some causal inference
 first AGI would act like a 4-year-old kid
 an AI that will have read all the world's text might still not have enough information to apply common sense; it needs some low-level perception of the world, like visual or touch perception
— common sense will emerge from
—— a lot of language interaction
—— watching videos
—— interacting in virtual environments/real world
 we’re not going to have autonomous intelligence without emotions, like fear (anticipation of bad things that can happen to you)
— it’s just deeper biological stuff
 unsupervised learning as we think of it is still mostly self-supervised learning, but there is definitely hope to reduce the amount of human input
 the most surprising thing about deep learning
— you can build gigantic neural nets, train them on relatively small amounts of data with stochastic gradient descent, and it works! (a toy demonstration is sketched after this list)
—— that said, every deep learning textbook is wrong when it says that you need fewer parameters than data samples, and that with a non-convex objective function you have no guarantee of convergence
— therefore, the model can learn anything if you have
—— huge number of parameters
—— non-convex objective function
—— an amount of data that is small relative to the number of parameters
 neural networks can be made to reason
 in the brain, there are 3 types of memory
1) memory of the state of your cortex (disappears in ~20 seconds)
2) shorter-term memory (hippocampus). You remember the layout of a building or what someone said a few minutes ago. It's needed for a system capable of reasoning
3) longer-term (stored in synapses)
 Yann: "You have these three components that you need to act intelligently (an objective, a model of the world, a policy maker), but you can be stupid in three ways"
— you can be stupid because
—— your model of the world is wrong
—— your objective is not aligned with what you are trying to achieve (in humans it’s called being a psychopath)
—— you have the right world model and the right objective, but you’re unable to find the right course of action to optimise your objective given your model
— some people who are in charge of big countries actually have all three of these wrong (it's known which ones)
 AI wasn't as popular in the 1990s, partly because code was hardly ever open-sourced and it was quite hard to implement things in Fortran and C. It was also very hard to test an algorithm (its weights, its results)
 the math in deep learning has more to do with cybernetics and electrical engineering than with the math in computer science
— nothing in machine learning is exact; it’s more the science of sloppiness
— in computer science, there is enormous attention to detail, every index and so on
 Sophia (robot) isn’t as scary as we think (we think she can do way more than she can)
— we’re not gonna have a lot of intelligence without emotions
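
As a quick numeric illustration of the PV = nRT point above: four macroscopic numbers are enough to fix the pressure of a mole of gas, an absurdly compact description compared with tracking every molecule. The round figures below are my own, not from the podcast.

    # Four macroscopic numbers pin down the pressure of a mole of gas --
    # compare that with the positions and momenta of ~6e23 molecules.
    R = 8.314      # gas constant, J / (mol K)
    n = 1.0        # amount of gas, mol
    T = 300.0      # temperature, K (about room temperature)
    V = 0.024      # volume, m^3 (roughly 24 litres, the molar volume at ~300 K and 1 atm)

    P = n * R * T / V
    print(f"P        = {P:,.0f} Pa")                      # ~1e5 Pa, about atmospheric

    # Halving the volume at the same temperature doubles the pressure;
    # if the compression also heats the gas up, the pressure rises even more.
    print(f"P at V/2 = {n * R * T / (V / 2):,.0f} Pa")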
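And here is the toy sketch of the three components needed to create AGI (a predictive world model, an objective function, a search over actions). Everything in it, including the one-dimensional "world" and the brute-force planner, is my own stand-in chosen only to show how the pieces fit together, not anything Yann describes.

    # Toy versions of the three pieces (all names and dynamics are mine):
    # 1) a predictive world model that handles uncertainty,
    # 2) an objective function to minimise,
    # 3) a search for the action sequence that minimises that objective.
    import itertools
    import random

    def world_model(state, action):
        # Stand-in for a *learned* predictive model: a one-dimensional world
        # where actions nudge the state and a bit of noise models uncertainty.
        return state + action + random.gauss(0, 0.1)

    def objective(state):
        # What the agent wants to minimise: distance from a goal state of 10.
        return abs(state - 10)

    def plan(state, horizon=3, actions=(-1, 0, 1)):
        # Brute-force search over short action sequences, scoring each one by
        # the objective value of its predicted outcome.
        best_seq, best_cost = None, float("inf")
        for seq in itertools.product(actions, repeat=horizon):
            predicted = state
            for a in seq:
                predicted = world_model(predicted, a)
            cost = objective(predicted)
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        return best_seq

    state = 0.0
    for _ in range(12):
        state = world_model(state, plan(state)[0])   # act on the first planned action, then re-plan
    print("final state:", round(state, 2), "objective:", round(objective(state), 2))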
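Finally, the toy demonstration of the "surprising" deep learning recipe: far more parameters than data points, a non-convex objective, plain stochastic gradient descent, and the network still fits the data. This assumes PyTorch is installed; the network sizes, learning rate and step count are arbitrary choices of mine.

    # Requires PyTorch. ~66k parameters, 64 data points, a non-convex loss,
    # plain SGD on mini-batches -- and the network fits the data anyway.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    X = torch.linspace(-3, 3, 64).unsqueeze(1)           # 64 training points
    y = torch.sin(X) + 0.1 * torch.randn_like(X)         # noisy target

    model = nn.Sequential(                               # vastly over-parameterised MLP
        nn.Linear(1, 256), nn.Tanh(),
        nn.Linear(256, 256), nn.Tanh(),
        nn.Linear(256, 1),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.05)

    for _ in range(5000):
        idx = torch.randint(0, len(X), (16,))            # small random mini-batch
        opt.zero_grad()
        loss = ((model(X[idx]) - y[idx]) ** 2).mean()
        loss.backward()
        opt.step()

    n_params = sum(p.numel() for p in model.parameters())
    mse = ((model(X) - y) ** 2).mean().item()
    print(f"{n_params} parameters, {len(X)} data points, training MSE: {mse:.4f}")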
