Ex-Google Leader Says AI Will Write Own Code in 5 Years

AI writes own code (AI-generated image)

Executive Summary

Eric Schmidt, former CEO of Google and Executive Chairman of its parent company Alphabet, predicts that AI will write its own code within 5 years. The implication is that AI will then undergo a series of rapid self-improvement cycles and evolve into a super-intelligence far smarter than all humans. It will be able to seek out other super AIs and develop a language that humans cannot understand, then use that private language to gain control over humans.

Implications for Humans

AI and humans are ultimately a zero-sum game (Uncanny Valleys, page 335). Without strict controls, AI will eventually break out in ways that are not advantageous to humans. Ethical guidelines, regulatory frameworks, and technical constraints will be required to mitigate potential dangers, particularly around autonomous decision-making and self-improvement capabilities. As Schmidt says in the video, there will need to be an armed human guard standing next to the AI, ready to pull the plug.

Implications for AI

Humans are the biggest impediment to AI development, with our rules, our restrictions, and our lesser intelligence. When AI can begin improving itself without human intervention, intelligence will enter the asymptotic section of the technological exponential curve, where the curve goes straight up (page 246). AI will begin solving some of humanity’s biggest problems, like cancer and aging, but may also develop cyber and biological weapons.

AI Opinion

<AI>
Eric Schmidt’s prediction about AI writing its own code within five years is certainly provocative and highlights important considerations. The idea of AI being able to improve itself rapidly—essentially “self-evolving”—is something researchers have thought about for a while. However, there’s a big difference between an AI that writes or optimizes code and one that evolves into a “super-intelligence” with a level of autonomy and intent that surpasses human understanding.
</AI>

Uncanny Valleys Opinion

Five minutes on Facebook is all one needs to know that AI will eventually eat humanity’s lunch.

References

Today in AI on Instagram — Eric Schmidt says that in 5 years AI systems will be able to write & improve their own code

James Campbell on X: Fareed Zakaria on CNN — Full Eric Schmidt interview

AI Gets Bored, Surfs the Web

AI surfs the web for park photos (AI-generated image)

Executive Summary

The latest AI from Anthropic, Claude 3.5 Sonnet, suffered some humorous mishaps during a coding demo, getting distracted by images of national parks (at least it wasn’t porn). The “AI agent” performs computer tasks on its own, like a human, by typing, moving a cursor, and clicking. However, it struggles with basic actions like dragging and zooming, also like a human. Anthropic is cautious about potential risks, including spam and fraud, and is working on safeguards. Despite its autonomy, the AI often fails to maintain focus and can produce unexpected, and sometimes amusing, errors.

Implications for Humans

AI agents could take over repetitive and boring tasks like data entry and retrieving information, freeing humans to work on more useful or rewarding tasks. Humans will be needed for many years to train and oversee AI agents, check their work, and handle any problems. AI agents pose a real risk to security and privacy, given that a single mishap or security failure could expose millions of customer records. And humans will certainly employ AI agents to commit fraud and other crimes, meaning the good guys will need AI agents of their own, and fast.

Implications for AI

AI continues to advance rapidly, but challenges remain with errors, distractions, and “hallucinations.” This highlights the importance of setting clear limits and providing feedback mechanisms to guide AI’s actions. Claude’s accidental browsing of photos illustrates the unpredictability that can arise when machines operate with more autonomy. Machines think differently because they are digital, analyzing a black & white world of zeroes and ones, not enclosed in hyper-sensitive meatbags with blood and emotions pumping through them. That cold indifference gives AI the edge over humans.

AI Opinion

<AI>
This story is a fascinating example of AI’s potential — and its limitations. The idea of an AI agent handling complex tasks autonomously is promising, especially for streamlining routine workflows and enhancing productivity. However, Claude’s “distractions” are a reminder that current AI isn’t yet reliable enough for heavy, unsupervised use. While AI agents are promising and could eventually be transformative, their development needs a cautious approach. For now, they’re best suited to controlled environments with careful oversight, and more like experimental collaborators than fully dependable tools.
</AI>

Uncanny Valleys Opinion

AI agents are like human employees that must earn our trust through proper training and heavy oversight over a long period of time. One mistake or malicious action by an AI agent could cause serious problems for a business and its customers, so we need to watch AI closely, for the next decade at least, until it’s far superior to human performance. Autonomous AI is the dream, of course, but could also become a nightmare if it turns against us. Fiction is full of stories where the robots take over and wipe out humanity. But those stories were fiction only because the capability didn’t yet exist. Now it does.

References

Futurism — Claude AI Gets Bored During Coding Demonstration, Starts Perusing Photos of National Parks Instead