Joe Asks John Carmack "How Close Are We to Artificial Intelligence?"

JRE Clips
28 Aug 2019 · 06:06

TLDR: In this clip from The Joe Rogan Experience, guest John Carmack discusses the potential timeline for achieving artificial general intelligence (AGI). He is optimistic, suggesting that signs of AGI could emerge within a decade, even though many scientists estimate it will take longer. The conversation compares the human brain's neurons and connections with computer memory and processing power, and considers whether current supercomputers could already support AI work. Carmack is skeptical of quantum computing's direct usefulness for AI, highlighting instead its potential to break encryption methods and its currently limited practical applications.

Takeaways

  • 🤖 Optimism about AGI: The speaker is optimistic about the potential for artificial general intelligence (AGI) within a decade, despite varying expert opinions.
  • 🧠 Neuron Comparison: The speaker compares the complexity of the human brain's neurons and connections to current computer memory and processing capabilities, suggesting a possible convergence in the future.
  • 🔄 Materialism and Simulation: As a strict materialist, the speaker believes that the human mind can be simulated, implying a potential for AGI.
  • 💻 Supercomputers and AI: The speaker suggests that some of today's supercomputers might already be capable of supporting AI work, given the right algorithms and training.
  • 🎮 Gaming Computers: The speaker highlights that overclocked gaming computers can outperform expensive supercomputers for certain applications, which could have implications for AI development.
  • 🚀 Top 500 Computers: The speaker reflects on the historical uses of Top 500-class supercomputers, noting a shift from traditional supercomputing workloads toward more practical applications such as AI.
  • 🔧 Engineering and Hardware: The speaker emphasizes the importance of engineering in finding useful applications for new hardware, even if the applications are not immediately obvious.
  • 🌐 Quantum Computing: The speaker admits a lack of expertise in quantum computing but expresses skepticism about its direct usefulness for AI tasks, focusing more on its potential for breaking cryptography.
  • 🔒 Encryption and Security: The speaker raises concerns about the potential negative consequences of quantum computing, such as breaking encryption and security protocols.
  • 🌟 Quantum Supremacy: The speaker contemplates the implications of achieving quantum supremacy, which could lead to significant security risks rather than advancements in AI or other beneficial technologies.
  • 🧩 Cryogenic Labs: The speaker notes that quantum computing is still largely confined to large labs with specialized cryogenic equipment, so it is not yet widely accessible for practical applications.

Q & A

  • What is John Carmack's perspective on the timeline for achieving artificial general intelligence (AGI)?

    -Carmack is optimistic and believes there could be signs of AGI within a decade, while acknowledging that he tends to underestimate how long technological advances take.

  • What is the general consensus among scientists working on AGI regarding its timeline?

    -The majority of scientists think AGI is at least a few decades away, with some holdouts believing it's impossible to achieve.

  • How does John Carmack's materialist view contribute to his belief in the possibility of AGI?

    -As a strict materialist, Carmack believes the human mind is just the body in action, so it should in principle be possible to simulate it.

  • How does the human brain compare with computer memory and processing capability?

    -The human brain has around 85 billion neurons with approximately 10,000 connections each. Comparing those figures to computer memory and processing capabilities, Carmack suggests the curves might cross within a decade, pointing to the feasibility of AGI.

  • Why does John Carmack think that not all biological complexity is necessary for AGI?

    -Carmack points out that tasks like visual processing can be handled with far fewer transistors than the neurons the brain devotes to them, suggesting that some biological complexity may not be essential for AGI.

  • What is John Carmack's opinion on the usefulness of the top 500 supercomputers for AI work?

    -Carmack initially thought these machines were not very useful for his practical applications, but he now believes they could be remarkably useful for AI workloads built on general matrix multiplications.

  • How does John Carmack view the potential of quantum computing in relation to AI and Moore's Law?

    -Carmack is skeptical that quantum computing will directly help most AI tasks. He acknowledges it is more relevant to breaking cryptography and encryption, but he sees more downsides than benefits in that application.

  • What are John Carmack's thoughts on the potential negative consequences of quantum computing?

    -Carmack is concerned that quantum computing could break existing encryption methods, with serious security implications such as impersonating public keys and undermining secure communications.

  • How does John Carmack feel about his lack of expertise in quantum computing?

    -Carmack admits he is not an expert and feels he should learn more about it, since he believes in understanding how to apply new technologies usefully.

  • What is John Carmack's definition of engineering?

    -Carmack defines engineering as figuring out how to achieve what you want with the resources actually available, and he believes that, given new hardware, he can usually find a useful application for it.

Outlines

00:00

🤖 Optimism and Estimates on AGI

The speaker discusses their optimistic view on the development of artificial general intelligence (AGI), acknowledging that while they often underestimate the time it takes for technological advancements, they believe that signs of AGI could be visible within a decade. They mention that many scientists disagree, estimating it could take several more decades or may not be possible at all. The speaker, being a materialist, believes that the human mind can be simulated, comparing the brain's neurons and connections to computer memory and processing capabilities. They suggest that current supercomputers might already be capable of supporting AI work, despite their initial skepticism about their practicality for specific tasks. The speaker also touches on the potential of quantum computing, expressing uncertainty about its direct usefulness for AI but acknowledging its potential in breaking cryptographic methods.

05:02

🔒 Quantum Computing's Impact on AI

The speaker reflects on their limited knowledge of quantum computing and expresses a desire to learn more about it. They discuss the potential risks of quantum computing, such as breaking encryption and posing a threat to cybersecurity. The speaker is skeptical about the immediate benefits of quantum computing for AI, as it does not seem to offer a solution to the current bottlenecks in AI development. They also mention that quantum computing is still in the domain of large laboratories with specialized equipment, which is why it has not been a priority for them.


Keywords

💡Artificial General Intelligence (AGI)

AGI refers to a type of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. In the video, the speaker discusses the potential for AGI development within a decade, highlighting the ongoing debate among scientists about the timeline for achieving such a milestone.

💡Optimism

Optimism is a positive outlook or hope for the future. The speaker identifies as an optimist, which influences their belief in the potential for rapid advancements in AI. This perspective is contrasted with the more cautious views of other scientists who predict a longer timeline for AGI development.

💡Materialism

Materialism is the philosophical belief that everything, including the human mind, can be explained in terms of physical matter and its interactions. The speaker's materialist stance underpins their confidence in the eventual simulation of human intelligence through AI, as they see no inherent barrier to replicating the physical processes of the brain.

💡Neurons and Connections

Neurons are the basic units of the nervous system, and connections refer to the synapses between them. The human brain has approximately 85 billion neurons with complex interconnections. The speaker uses these figures to compare the potential computational power of the brain to that of current computer systems, suggesting that the development of AGI might be closer than some scientists believe.
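
As a rough back-of-envelope illustration of that comparison (a sketch: the 85 billion neurons and ~10,000 connections per neuron are the figures from the conversation, while the bytes-per-synapse value is an assumption added here):

```python
# Rough arithmetic comparing brain "parameters" to computer memory.
# Neuron and connection counts are the figures cited in the conversation;
# the 4-bytes-per-connection weight is an illustrative assumption, not a claim from the video.

neurons = 85e9                 # ~85 billion neurons
connections_per_neuron = 1e4   # ~10,000 synaptic connections per neuron
synapses = neurons * connections_per_neuron   # ~8.5e14 connections

bytes_per_synapse = 4          # assume one 32-bit weight per connection
total_bytes = synapses * bytes_per_synapse

print(f"Connections: {synapses:.1e}")                          # ~8.5e+14
print(f"Memory at 4 B/connection: {total_bytes/1e15:.1f} PB")  # ~3.4 petabytes
```

On these assumptions the brain's connectivity corresponds to a few petabytes of weights, which is large but not wildly beyond today's biggest computing installations; that is the sense in which the "curves might cross."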

💡Supercomputers

Supercomputers are the most powerful class of computers, capable of performing complex calculations at high speeds. The speaker discusses the historical context of supercomputers and their evolution into systems that are now more accessible and useful for AI applications, indicating a shift in the computing landscape that could facilitate AGI development.

💡Gaming Computers

Gaming computers are high-performance machines designed for playing video games. The speaker points out that, contrary to common belief, gaming computers can often outperform supercomputers for certain tasks, suggesting that the fastest way to process single-threaded applications may not always be through the most expensive hardware.

💡AI Systems

AI systems are computational frameworks designed to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. The speaker describes today's large AI systems as football-field-sized arrays of GPUs and CPUs, indicating the scale of computational resources required for advanced AI development.

💡Algorithms

Algorithms are step-by-step procedures for calculations or problem-solving. In the context of AI, algorithms are crucial for training and improving AI models. The speaker suggests that the development of the right algorithms, along with appropriate training schedules and hardware, is a key factor in achieving AGI.
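
Because the conversation ties AI work on big machines to "general matrix multiplications," here is a minimal sketch (an illustration added here, not code from the video) of why: a single neural-network layer is essentially one large matrix multiply plus a nonlinearity, and stacking such layers is what GPU clusters spend their time on.

```python
import numpy as np

# A dense neural-network layer is, at its core, a general matrix multiplication (GEMM):
#   outputs = activation(inputs @ weights + bias)
# Sizes below are arbitrary, chosen only to illustrate the shape of the computation.

rng = np.random.default_rng(0)
batch, n_in, n_out = 64, 1024, 1024

inputs  = rng.standard_normal((batch, n_in))
weights = rng.standard_normal((n_in, n_out))
bias    = np.zeros(n_out)

outputs = np.maximum(inputs @ weights + bias, 0.0)   # GEMM followed by a ReLU
print(outputs.shape)   # (64, 1024)
```

Training repeats this multiply (and its gradient counterpart) an enormous number of times, which is why hardware that excels at dense linear algebra, the same kind of workload used to rank the top 500 supercomputers, maps so naturally onto AI.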

💡Quantum Computing

Quantum computing is a type of computing that uses quantum bits (qubits) to perform operations on data. It has the potential to solve certain problems much faster than classical computers. The speaker expresses skepticism about the direct applicability of quantum computing to AI tasks, noting that its primary use cases seem to be in cryptography and not necessarily in accelerating AI development.
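
To make the cryptography point concrete, here is a toy sketch with deliberately tiny numbers (an illustration added here, not from the video): RSA-style public-key schemes are secure only because factoring the public modulus is infeasible for classical computers, and a sufficiently large quantum computer running Shor's algorithm would make that factoring tractable.

```python
# Toy RSA with tiny primes, purely to show why the ability to factor breaks the scheme.
# Real keys use primes hundreds of digits long; Shor's algorithm on a large quantum
# computer could factor such moduli efficiently, exposing the private key.

p, q = 61, 53                  # secret primes (illustrative only)
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

message = 42
cipher = pow(message, e, n)    # anyone can encrypt with the public key (n, e)
assert pow(cipher, d, n) == message

def crack(n, e):
    """Recover the private exponent by brute-force factoring (trivial only for tiny n)."""
    for cand in range(3, n, 2):
        if n % cand == 0:
            return pow(e, -1, (cand - 1) * (n // cand - 1))

assert crack(n, e) == d        # with the factors known, the private key falls out
```

The concern raised in the clip is exactly this asymmetry: breaking encryption and impersonating public keys is a clear quantum use case, while a corresponding speedup for mainstream AI workloads is much less obvious.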

💡Moore's Law

Moore's Law is the observation that the number of transistors on a microchip doubles approximately every two years, leading to exponential growth in computing power. The speaker discusses the possibility of quantum computing breaking the limitations imposed by Moore's Law, although they express doubt about the immediate benefits of quantum computing for AI.
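
As a small illustration of the doubling rule described above (the two-year doubling period is the standard statement of the law; the starting transistor count is an arbitrary assumption):

```python
# Moore's Law as described here: transistor counts double roughly every two years,
# i.e. N(t) = N0 * 2 ** (t / 2) with t in years. The starting count is illustrative.

n0 = 1_000_000_000             # assume a chip with 1 billion transistors today
for years in (2, 10, 20):
    print(f"after {years:2d} years: {n0 * 2 ** (years / 2):.2e} transistors")
# prints roughly 2.00e+09, 3.20e+10, and 1.02e+12
```

The open question in the clip is whether quantum hardware bends this curve for AI; the speaker's working view is that, for now, it does not.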

Highlights

Joe Rogan asks John Carmack about the potential timeline for achieving artificial general intelligence (AGI).

The speaker is an optimist but tends to underestimate the time it takes for technological advancements.

As a programmer, the speaker says he tends to underestimate timelines by around 50%, whereas others tend to overestimate by 100%.

There are varying opinions among scientists about when AGI will be achieved, with estimates ranging from a few decades to never.

The speaker is a strict materialist and believes that simulating the human mind is possible.

The brain has approximately 85 billion neurons and trillions of connections, which can be compared to computer memory and processing capabilities.

The speaker suspects that some of the brain's complexity may not be necessary for AGI, noting that computers handle tasks like vision with far fewer transistors than the brain devotes neurons to them.

Government supercomputers, ranked in the top 500, may already be useful for AI work, contrary to previous beliefs.

Overclocked gaming computers are often faster for single-threaded applications than expensive supercomputers.

The speaker's experience with supercomputers for specific tasks, like map building, was initially underwhelming.

AI systems today are large arrays of GPUs and CPUs, which were once thought to be less useful for certain tasks.

The speaker believes that with the right algorithms and training, AGI might be possible on current systems, but it will take time to develop these algorithms.

Quantum computing is not considered directly useful for most AI tasks, but it could potentially break cryptography and encryption methods.

The speaker expresses concern about the potential negative consequences of quantum computing, such as breaking encryption and impersonating public keys.

Quantum computing may not have a significant positive impact on tasks like video encoding or AI, according to the speaker's current understanding.

The speaker acknowledges the importance of learning about new technologies but admits that quantum computing has not been a priority due to its current domain in specialized labs.