5 minute summaries

1 quote, 3 ideas & 1 question from each episode

__________

#255 — The Future of Intelligence

Making Sense with Sam Harris

9 Jul 2021

58mins

Quote

"It is the ability to make predictions about the future that is the crux of intelligence."

Ideas

1

Everything requires a reference frame for us to understand it.
Our brain subconsciously predicts what we are about to experience before we experience it.

Take the coffee cup example: if you reach for a cup, your brain predicts what it is going to feel. As you move your hand around the cup, your brain notes each finger's location relative to the cup and predicts the sensation you will feel there.

If you grab what you think is a cup and it is in fact a hologram, your brain immediately becomes confused and surprised.

This idea itself isn't novel, but what we discovered is that the brain uses these same reference frames to arrange all of our knowledge of the world.

All of our thinking and knowledge relies on the same kind of reference frame we use when touching a coffee cup.
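
To make the idea concrete, here is a tiny sketch, entirely my own illustration rather than Hawkins's actual model: expected sensations are keyed by locations in the cup's own reference frame, predicted before contact, and a mismatch (the hologram case) registers as surprise.

```python
# A toy sketch (my illustration, not Hawkins's model) of prediction
# against an object-centred reference frame. Sensations are stored by
# location on the CUP itself -- not in body- or world-centred
# coordinates -- and a failed prediction registers as surprise.

cup_model = {
    "rim": "smooth, thin edge",
    "handle": "curved, rigid loop",
    "side": "warm, smooth ceramic",
}

def touch(model, location, actual_sensation):
    """Predict what a location should feel like, then compare with reality."""
    predicted = model.get(location, "unknown")
    if actual_sensation == predicted:
        return f"{location}: prediction confirmed ({predicted})"
    # A mismatch is the 'confused and surprised' moment; in the brain,
    # this prediction error is what drives attention and learning.
    return f"{location}: surprise! expected {predicted!r}, felt {actual_sensation!r}"

print(touch(cup_model, "handle", "curved, rigid loop"))  # matches the model
print(touch(cup_model, "side", "nothing at all"))        # the hologram case
```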

2

Intelligence doesn't need to come with desires and motives.
The neocortex, which is only one part of the brain, simply processes information and models the world, without any intent.

It's like a map: powerful if used properly, but with no desires of its own.

There is no alignment problem if we replicate only intelligence, rather than the whole human brain, when building intelligent machines.

Any Artificial General Intelligence (AGI) we build is not going to spontaneously produce new goals or motivations, and therefore we won't face alignment conflicts with AGIs.
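
As a rough illustration of the "map with no desires" point, here is a minimal sketch under hypothetical names (WorldModel, Agent), not Numenta's architecture: the model component only predicts consequences, while goals live in a separate component and must be injected from outside.

```python
# A minimal sketch (hypothetical names, my own illustration) of the
# "map, not motive" claim: the world model only predicts what actions
# lead to; it never invents a goal of its own.

class WorldModel:
    """Pure knowledge: predicts what an action leads to, holds no goals."""

    TRANSITIONS = {
        ("home", "drive"): "office",
        ("office", "drive"): "home",
    }

    def predict(self, state, action):
        return self.TRANSITIONS.get((state, action), state)

class Agent:
    """Motivation lives here, outside the model, injected explicitly."""

    def __init__(self, model, goal):
        self.model = model
        self.goal = goal  # supplied by us; the model cannot change it

    def choose(self, state, actions):
        for action in actions:
            if self.model.predict(state, action) == self.goal:
                return action
        return None

agent = Agent(WorldModel(), goal="office")      # we set the goal, not the map
print(agent.choose("home", ["wait", "drive"]))  # -> 'drive'
```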

3

There are many ways to do things wrong or badly with artificial intelligence, but to me, none of them are existential.
My definition of an existential threat is that we lose control of the systems, not that they are smarter than us and can understand things we can't.

We imagine that AGI is going to be like a human, whose motivations shift and change.

But I think it's going to be much more like having a really smart computer: unless you put bad things into it, it's not going to create goals that haven't been put in by us.

Questions

1

Can you think of something that you only understand through a reference point relative to something else?

What else is in the episode

1

The complex processes going on within our brains

2

Why brains are hard to study compared to other parts of the body

3

Why we shouldn't be worried about Artificial General Intelligence

Who is Jeff Hawkins?

1

The founder of Palm Computing, which built a line of personal digital assistants and mobile phones. He has since turned to neuroscience full-time, founding the Redwood Center for Theoretical Neuroscience in 2002 and Numenta, a machine intelligence company.
