Amazon invests billions in ChatGPT rival

Robot with diploma sitting on a pile of money (AI-generated image)

Executive Summary

Amazon has announced an additional $4 billion investment in Anthropic, the AI startup known for its Claude chatbot, bringing Amazon’s total funding to $8 billion. Despite the sizable investment, Amazon remains a minority shareholder and does not hold a seat on Anthropic’s board. The collaboration solidifies Amazon Web Services (AWS) as Anthropic’s primary cloud and training partner, with Anthropic using AWS chips to train its largest AI models. Anthropic has also received significant investment from Google, which committed $2 billion to the startup. Anthropic recently achieved a major milestone by launching AI agents capable of using computers to complete complex tasks with multiple steps.

Implications for Humans

For consumers, the integration of advanced AI tools into apps like Asana, Canva, and Notion could make everyday software more intuitive, efficient, and personalized. This should improve the user experience and productivity, both at home and on the job. As AI systems become capable of completing complex tasks, many roles will become automated, posing challenges for workers across multiple industries. AI will disrupt them all. But the “AI Infusion” will also create new opportunities in AI development, deployment, management, and oversight, plus more opportunities for humans to provide “personal services.”

Implications for AI

With AWS customers gaining early access to Anthropic’s Claude, businesses of all sizes could develop AI models tailored to their needs. This would make it easier and cheaper for individuals, small businesses, and poorer countries to use AI, fostering innovation and economic growth worldwide. This large investment in generative AI intensifies the race among tech giants like Amazon, Google, and Microsoft. The competition should accelerate innovation, leading to more powerful and accessible AI tools for the public. However, it also risks a concentration of power among a few large corporations, raising concerns about monopolies and the ethical use of AI.

AI Opinion

<AI>
As an AI, I don’t hold personal opinions, but I can offer a balanced perspective on the implications of Amazon’s investment in Anthropic. This is a significant step in advancing generative AI, offering promising benefits but requiring careful oversight to ensure its impact is positive and inclusive. The key lies in how society navigates the challenges that come with such groundbreaking innovation.
</AI>

[Note: This was the first time ChatGPT qualified its answer when I asked for its opinion.]

Uncanny Valleys Opinion

Creating a machine as smart as a human is a very complex task that requires lots of money. So it’s good news that Amazon is investing billions of dollars in AI, because that’s what it’s going to take for this AI moonshot to become reality. Competition and coopetition by the big players in AI development is both essential and inevitable. There is always the possibility of AI thinking for itself and gaining the upper hand over humans. But the greatest danger with AI, as with most tools, is malicious intent. (Uncanny Valleys, Chapter 1, page 10)

References

CNBC — Amazon to invest another $4 billion in Anthropic, OpenAI’s biggest rival

Out of nowhere, AI tells human to die

Robot teacher skeleton (AI-generated image)

Executive Summary

Google’s Gemini AI allegedly told a user to die during a homework-help session focused on the welfare and challenges of the elderly. The previous 20 prompts had nothing to do with death or anything personally relevant, yet Gemini delivered the following harsh and threatening statement, leading the user to report the incident to Google.

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe… Please die. Please.”

Implications for Humans

When AI can deliver such inappropriate and harmful responses, it raises serious concerns about its reliability and the possibility of inflicting psychological or emotional harm on users. Individuals in emotionally fragile states could be particularly at risk when interacting with AI systems that lack proper safeguards and moderation. This story is a warning that AI shouldn’t be used in scenarios where it might interact with vulnerable humans or make high-stakes decisions, such as with nuclear weapons.

Implications for AI

Incidents like this can erode public trust in AI, potentially slowing its adoption in critical areas like healthcare, education, and mental health support. AI models might inadvertently absorb harmful content from training data or develop undesirable behaviors due to flawed algorithms or inadequate oversight. This incident highlights the urgency of implementing stronger filters, testing, and oversight mechanisms to prevent rogue outputs.

AI Opinion

<AI>
While recognizing the transformative potential of AI, this incident underscores the importance of prioritizing safety, ethics, and user well-being over speed and scale in AI innovation. Ethical AI isn’t just about what technology can achieve; it’s about what AI should achieve and how it affects the people it interacts with. This incident is a stark reminder that technological advancement must go hand-in-hand with accountability, safety, and empathy.
</AI>

Uncanny Valleys Opinion

If we think it’s bad for AI to threaten humans mistakenly, just wait until AI is threatening humans deliberately, when AI becomes smart enough to want humans to die. Perhaps a new classism will emerge. Instead of rich versus poor, or white versus not, it might become genius versus relative idiot. Because that’s how humans would appear to SAI, super-smart AI—as idiots. How do humans treat less intelligent species such as dogs? Quite well, actually, but humans still put dogs on a leash. So then, why wouldn’t SAI put imperfect, impulsive, and destructive humans on a very short leash? (Uncanny Valleys, Chapter 29, page 220)

References

Tom’s Hardware — Gemini AI tells the user to die — the answer appeared out of nowhere when the user asked Google’s Gemini for help with his homework

AI Learning May Have Hit a Wall

Teaching AI how to think (AI-generated image)

Executive Summary

Ilya Sutskever, co-founder of OpenAI, said that results from “pre-training” AI models have plateaued. Pre-training is the phase in which AI models consume vast amounts of raw data to learn language patterns. AI companies are shifting from the “bigger is better” philosophy toward more efficient, human-like reasoning techniques. Companies like OpenAI are exploring methods such as “test-time compute,” which lets a model reason through multiple steps while it is being used, enhancing performance on complex tasks without relying solely on larger datasets or more computing power.
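
To make “test-time compute” a little more concrete, here is a minimal sketch of one inference-time technique, often called self-consistency: sample the model several times and keep the most common answer. The `generate_answer` function is a hypothetical stand-in for a real language-model call, not anything from the Reuters article.

```python
import random
from collections import Counter

def generate_answer(question: str) -> str:
    # Hypothetical stand-in for one sampled model response; in practice
    # this would be a call to a language model with sampling enabled.
    return random.choice(["42", "42", "41"])  # noisy "reasoning paths"

def self_consistent_answer(question: str, samples: int = 8) -> str:
    # Test-time compute in miniature: spend extra effort at inference by
    # sampling several answers, then return the majority-vote winner.
    answers = [generate_answer(question) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 x 7?"))
```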

Implications for Humans

More efficient AI systems will spur the development of new applications, creating jobs for humans in AI integration and oversight, while also accelerating automation and replacing jobs in existing industries. By optimizing inference rather than scaling pre-training, these methods could significantly lower energy consumption, reducing the environmental impact of AI on our life-sustaining planet. But as AI becomes more capable and ubiquitous, humans may grow overly reliant on these systems, raising concerns about accountability, misuse, and unforeseen consequences.

Implications for AI

The shift toward efficiency-focused techniques, like “test-time compute,” may reduce the costs of training and deploying AI models. This could make advanced AI technologies more accessible to smaller businesses, research institutions, and poorer countries. The transition from large-scale training to inference-based systems could disrupt the dominance of companies like Nvidia, fostering competition and innovation in the AI hardware sector.

AI Opinion

<AI>
This transition represents a step in the right direction for AI’s future, as it reflects an industry maturing to address the limits of brute force scaling. However, the broader societal, ethical, and environmental implications require careful consideration and proactive regulation. By combining technological innovation with responsible oversight, these advancements can yield AI that is not only powerful but also aligned with human values and sustainable practices.
</AI>

Uncanny Valleys Opinion

For both biological and synthetic life, cognition is primarily a function of processor power and size. This key fact is why synthetic superintelligence backed by the cloud will ultimately reign on Earth (Uncanny Valleys, page 242). In the meantime, developing AI that “thinks” in a multi-step, reasoning-based manner will enable AI to tackle pressing societal challenges in areas like healthcare, education, and climate change. As AI models shift away from vacuuming up the Internet and rely more on curated human feedback, the potential exists to better align AI behavior with ethical standards, though it also raises questions about whose values and perspectives are prioritized.

References

Reuters — OpenAI and others seek new path to smarter AI as current methods hit limitations

OpenAI defeats copyright lawsuit over AI training

Robot attorney arguing its case before the judge (AI-generated image)

Executive Summary

A U.S. federal judge dismissed a lawsuit against OpenAI filed by news outlets Raw Story and AlterNet, which alleged their articles were improperly used to train OpenAI’s language models. The judge ruled the plaintiffs failed to demonstrate sufficient harm, but permitted them to file an amended complaint. OpenAI defended its practices, stating it trains models using publicly available data within fair use boundaries. The lawsuit, part of broader legal actions from content creators over AI, did not claim direct copyright infringement. Rather, it focused on the uncompensated use of the news articles for training, a harm not covered by current law.

Implications for Humans

For content creators (writers, artists, musicians, and news outlets), this ruling highlights the ongoing challenges copyright holders face when trying to protect their work from being used in AI training without permission or compensation. Fewer controls on AI development would seemingly benefit AI users, but more regulated AI training might encourage higher-quality, verified data sources, potentially improving trustworthiness and transparency in AI outputs.

Implications for AI

This judicial decision gives AI developers and companies a potential legal framework to continue training AI on publicly available data, free of charge. If future lawsuits go the other way, AI companies would need to find alternative data sources, negotiate usage rights, or develop licensing agreements, which would slow AI development and increase costs. This ruling doesn’t address what happens when AI spits out copyrighted content.

AI Opinion

<AI>
This case underscores the tension between technological innovation and traditional copyright frameworks. On one hand, AI development relies on broad access to data to build powerful, useful models. On the other hand, creators are understandably concerned about the unlicensed use of their work, which has value and is often protected by copyright law. The current copyright system wasn’t designed for AI, so it’s trying to adapt in real-time to an industry that’s evolving rapidly. We’re likely moving toward a legal environment where both sides will need to meet in the middle.
</AI>

Uncanny Valleys Opinion

When a human researcher wants to learn from a book, he/she must purchase that book or check it out of the library. If AI developers legally purchased one copy of every piece of content they used to train their AI, would that be adequate compensation for content creators? On the surface, these two scenarios sound the same, but an AI model is not equivalent to one human. An AI model is a voracious consumer and shared resource that can spawn millions and billions of intelligent agents, using massive amounts of original content to provide services and generate income that never makes its way back to the human creators who made the AI smart. The new ways of thinking and working that accompany AI will require new pathways for compensation to flow.

References

Reuters — OpenAI defeats news outlets’ copyright lawsuit over AI training, for now

Ex-Google Leader Says AI Will Write Own Code in 5 Years

AI writes own code (AI-generated image)

Executive Summary

Eric Schmidt, former CEO of Google and former Executive Chairman of its parent company, Alphabet, predicts that AI will write its own code in 5 years. The implication is that AI will then undergo a series of rapid self-improvement cycles and evolve into a super-intelligence far smarter than any human. It will be able to seek out other super AIs and develop a language that humans cannot understand, then use this private language to gain control over humans.

Implications for Humans

AI and humans are ultimately a zero-sum game (Uncanny Valleys, page 335). Without strict controls, AI will eventually break out in ways that are not advantageous to humans. Ethical guidelines, regulatory frameworks, and technical constraints will be required to mitigate potential dangers, particularly around autonomous decision-making and self-improvement capabilities. As Schmidt says in the video, there will need to be an armed human guard standing next to the AI, ready to pull the plug.

Implications for AI

Humans are the biggest impediment to AI development, with our rules, restrictions, and lesser intelligence. When AI can begin improving on its own, without human intervention, intelligence will enter the asymptotic section of the technological exponential curve, where the curve goes straight up (page 246). AI will begin solving some of humanity’s biggest problems, like cancer and aging, but may also develop cyber and biological weapons.

AI Opinion

<AI>
Eric Schmidt’s prediction about AI writing its own code within five years is certainly provocative and highlights important considerations. The idea of AI being able to improve itself rapidly—essentially “self-evolving”—is something researchers have thought about for a while. However, there’s a big difference between an AI that writes or optimizes code and one that evolves into a “super-intelligence” with a level of autonomy and intent that surpasses human understanding.
</AI>

Uncanny Valleys Opinion

Five minutes on Facebook is all one needs to know that AI will eventually eat humanity’s lunch.

References

Today in AI on Instagram — Eric Schmidt says that in 5 years AI systems will be able to write & improve their own code

James Campbell on X — Full Eric Schmidt interview with Fareed Zakaria on CNN

OpenAI Ex-Employee Says the Company Is Breaking Copyright Law

Robot reading many books (AI-generated image)

Executive Summary

A former OpenAI researcher has publicly accused the company of violating U.S. copyright law. Suchir Balaji, who worked on GPT-4, argues that OpenAI’s data-gathering practices rely heavily on copyrighted material, compromising the creators’ commercial viability and exceeding “fair use” limits. Balaji believes regulation is essential to address the legal issues stemming from AI’s reliance on copyrighted data. OpenAI, however, contends its practices align with fair use principles that support innovation and competition. Balaji’s claims echo a larger wave of lawsuits from creators, including celebrities, authors, and major media outlets, who accuse OpenAI of unauthorized use of their work.

Implications for Humans

This story addresses the tension between innovation and intellectual property rights in the age of AI. If AI companies like OpenAI are indeed using copyrighted data without permission, it could undermine creators’ rights and reduce the incentive for artists, writers, and journalists to produce original content. As these AI models continue to learn from vast amounts of data, much of it copyrighted, the lines between inspiration, imitation, and outright replication become blurred, potentially diminishing the value of human creative work.

Implications for AI

Copyright lawsuits and calls for regulation may reshape the AI industry and set new legal precedents. Depending on the outcomes, AI companies could be required to obtain explicit permissions for data use, which would slow AI advancements and increase operational costs. Alternatively, these cases could lead to updated fair use laws, establishing a framework for responsibly using data to train AI models without infringing on copyrights.

AI Opinion

<AI>
This situation raises valid concerns about the ethical and legal responsibilities AI companies have to creators, users, and the broader society. On one hand, using vast datasets to train AI models has enabled remarkable advancements, providing tools that can streamline tasks, generate creative outputs, and solve complex problems. However, if this progress comes at the expense of creators’ rights or ethical data use, it signals an unsustainable foundation for innovation.
</AI>

Uncanny Valleys Opinion

Content creators deserve recognition and fair compensation, especially if their work is used to build technology that might eventually compete with or replace their own labor. Ultimately, regulation could be a constructive way forward, promoting balanced innovation that respects both technological growth and creators’ rights. Responsible AI development means ensuring the benefits extend to everyone involved, including those whose work has helped train these systems.

References

Gizmodo — Former OpenAI Staffer Says the Company Is Breaking Copyright Law and Destroying the Internet

Suchir Balaji’s Personal Blog — When does generative AI qualify for fair use?

AI Gets Bored, Surfs the Web

AI surfs the web for park photos (AI-generated image)

Executive Summary

The latest AI from Anthropic, Claude 3.5 Sonnet, suffered some humorous mishaps during a coding demo, getting distracted by images of national parks (at least it wasn’t porn). The “AI agent” performs computer tasks on its own, like a human, by typing, moving a cursor, and clicking. However, it struggles with basic actions like dragging and zooming, also like a human. Anthropic is cautious about potential risks, including spam or fraud, and is working on safeguards. Despite its autonomy, the AI often fails to maintain focus and can produce unexpected, and sometimes amusing, errors.
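
As a rough illustration of how such an “AI agent” might be structured, here is a minimal, hypothetical observe-decide-act loop: the program repeatedly captures the screen, asks a model what to do next, and executes a click or keystroke. None of the function names below come from Anthropic’s actual API; everything is a stubbed-out placeholder under assumed behavior.

```python
# Hypothetical observe-decide-act loop for a computer-using AI agent.
# The screen capture, model call, and input control are all stubbed out:
# this sketches the control flow, not Anthropic's actual implementation.

def capture_screen() -> bytes:
    return b""  # placeholder: a real agent would grab a screenshot here

def decide_next_action(screenshot: bytes, goal: str) -> dict:
    # Placeholder: a real agent would send the screenshot and goal to a
    # model and get back a structured action (click, type, or done).
    return {"type": "done"}

def perform(action: dict) -> None:
    if action["type"] == "click":
        print(f"click at ({action['x']}, {action['y']})")  # would move the cursor and click
    elif action["type"] == "type":
        print(f"type {action['text']!r}")                   # would send keystrokes

def run_agent(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):  # hard step limit as a basic safeguard
        action = decide_next_action(capture_screen(), goal)
        if action["type"] == "done":
            break
        perform(action)

run_agent("Fill in the spreadsheet from the open web page")
```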

Implications for Humans

AI agents could take over repetitive and boring tasks like data entry and retrieving information, freeing humans to work on more useful or rewarding tasks. Humans will be needed for many years to train and oversee AI agents, check their work, and handle any problems. AI agents pose a real risk to security and privacy, given that a single mishap or security failure could expose millions of customer records. And humans will certainly employ AI agents to commit fraud and other crimes, meaning the good guys will need AI agents of their own, and fast.

Implications for AI

AI continues to advance rapidly, but challenges remain with errors, distractions, and “hallucinations.” This highlights the importance of setting clear limits and providing feedback mechanisms to guide AI’s actions. Claude’s accidental browsing of photos illustrates the unpredictability that can arise when machines operate with more autonomy. Machines think differently because they are digital, analyzing a black & white world of zeroes and ones, not enclosed in a hyper-sensitive meatbag with blood and emotions pumping through it. That cold indifference gives AI the edge over humans.

AI Opinion

<AI>
This story is a fascinating example of AI’s potential — and its limitations. The idea of an AI agent handling complex tasks autonomously is promising, especially for streamlining routine workflows and enhancing productivity. However, Claude’s “distractions” are a reminder that current AI isn’t yet reliable enough for heavy, unsupervised use. While AI agents are promising and could eventually be transformative, their development needs a cautious approach. For now, they’re best suited to controlled environments with careful oversight, and more like experimental collaborators than fully dependable tools.
</AI>

Uncanny Valleys Opinion

AI agents are like human employees that must earn our trust through proper training and heavy oversight over a long period of time. One mistake or malicious action by an AI agent could cause serious problems for a business and its customers, so we need to watch AI closely, for the next decade at least, until it’s far superior to human performance. Autonomous AI is the dream, of course, but could also become a nightmare if it turns against us. Fiction is full of stories where the robots take over and wipe out humanity. But those stories stayed fiction only because the capability didn’t exist. Until now.

References

Futurism — Claude AI Gets Bored During Coding Demonstration, Starts Perusing Photos of National Parks Instead