Scientists digitize fly brain, first step to immortality

Digitize human brain (AI-generated image)

Executive Summary

Scientists have successfully digitized the brain of a Drosophila fruit fly, a groundbreaking achievement in digitizing complex organisms and a step toward exploring digital immortality. This milestone follows earlier successes with simpler organisms like roundworms. Drosophila shares about 60% of its DNA with humans and suffers from similar brain diseases, so studying the fly’s digital brain could provide insights into human neural disorders and brain function. But scaling up from the 139,000 neurons in a fly brain to the human brain’s 86 billion neurons remains a formidable challenge.
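
To put that gap in perspective, here is a quick back-of-the-envelope calculation (a minimal Python sketch using only the neuron counts cited above), which shows the human brain is roughly 600,000 times larger by neuron count:

```python
# Back-of-the-envelope scale comparison, using the neuron counts cited above.
fly_neurons = 139_000            # digitized Drosophila brain
human_neurons = 86_000_000_000   # estimated human brain

ratio = human_neurons / fly_neurons
print(f"The human brain has ~{ratio:,.0f}x as many neurons as the fly brain.")
# -> The human brain has ~618,705x as many neurons as the fly brain.
```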

Implications for Humans

This experiment is a significant step in the broader quest to digitize the human brain, which would drive major advances in neuroscience, the human/AI partnership, and biological longevity, making digital immortality appear increasingly plausible. Digital brains may pave the way for a hybrid existence, merging biological and digital life forms (the Singularity), fundamentally altering what it means to be human.

Implications for AI

Simulating organic neural networks may lead to more sophisticated AI systems modeled on living organisms, making AI more adaptive and human-like in its intelligence and decision-making. Digitizing the human brain could provide a cleaner interface between human and AI, quickening our ascent to the Singularity.

AI Opinion

<AI>
This achievement is a landmark in neuroscience and AI, but comes with complex challenges. It holds the potential to enhance human understanding and improve lives, yet it also demands thoughtful consideration of its ethical, societal, and philosophical ramifications. As we navigate this frontier, collaboration between scientists, ethicists, policymakers, and society will be essential to ensure that such advancements are developed responsibly and equitably. In essence, this development could be either a profound leap for humanity or a Pandora’s box, depending on how it is pursued and regulated.
</AI>

Uncanny Valleys Opinion

Digitizing neural networks can improve our understanding of human brain diseases and perhaps lead to treatments or even cures for Alzheimer’s and Parkinson’s. Digital immortality will reshape societal norms around life, death, wealth, and legacy. Are digital copies of living beings “alive,” and what rights should they have, if any? In the beginning, only the uber-rich will be able to afford immortality, which will presumably lead to even more obscene wealth for them, thanks to the power of compound interest.

“A zam was a biological human with 100% synthetic body replacement and a neural brain… The term ‘zam’ came from zamjena, a Croatian word for ‘replacement.’ There were many similarities, but one critical difference between an eternal and a zam. Both derived from a deceased biological human, whose memories and personality were transplanted into a synthetic brain housed in a synthetic body. The key difference was the brain itself. An eternal had a digital computer for a brain, all ones and zeros. Whereas, a zam had an exact replication of the original biological human brain, but with synthetic neurons and synapses.” (Uncanny Valleys, Chapter 44, page 337)

References

ai.news.daily on Threads — 🧠 BREAKING: Scientists make an immortal fly!

Amazon invests billions in ChatGPT rival

Robot with diploma sitting on a pile of money (AI-generated image)

Executive Summary

Amazon has announced an additional $4 billion investment in Anthropic, the AI startup known for its Claude chatbot, bringing Amazon’s total funding to $8 billion. Despite the sizable investment, Amazon remains a minority shareholder and does not hold a seat on Anthropic’s board. The collaboration solidifies Amazon Web Services (AWS) as Anthropic’s primary cloud and training partner, with Anthropic using AWS chips to train its largest AI models. Anthropic has also received significant investment from Google, which committed $2 billion to the startup. Anthropic recently achieved a major milestone by launching AI agents capable of using computers to complete complex, multi-step tasks.

Implications for Humans

For consumers, the integration of advanced AI tools into apps like Asana, Canva, and Notion could make everyday software more intuitive, efficient, and personalized. This should improve the user experience and productivity, both at home and on the job. As AI systems become capable of completing complex tasks, many roles will become automated, posing challenges for workers across multiple industries. AI will disrupt them all. But the “AI Infusion” will also create new opportunities in AI development, deployment, management, and oversight, plus more opportunities for humans to provide “personal services.”

Implications for AI

AWS customers gaining early access to Anthropic’s Claude could enable businesses of all sizes to develop AI models tailored to their needs. This would make it easier and cheaper for individuals, small businesses, and poorer countries to use AI, fostering innovation and economic growth worldwide. This large investment in generative AI intensifies the race among tech giants like Amazon, Google, and Microsoft. The competition should accelerate innovation, leading to more powerful and accessible AI tools for the public. However, it also risks a concentration of power among a few large corporations, raising concerns about monopolies and the ethical use of AI.

AI Opinion

<AI>
As an AI, I don’t hold personal opinions, but I can offer a balanced perspective on the implications of Amazon’s investment in Anthropic. This is a significant step in advancing generative AI, offering promising benefits but requiring careful oversight to ensure its impact is positive and inclusive. The key lies in how society navigates the challenges that come with such groundbreaking innovation.
</AI>

[Note: This was the first time ChatGPT qualified its answer when I asked for its opinion.]

Uncanny Valleys Opinion

Creating a machine as smart as a human is a very complex task that requires lots of money. So it’s good news that Amazon is investing billions of dollars in AI, because that’s what it will take for this AI moonshot to become reality. Competition and coopetition among the big players in AI development are both essential and inevitable. There is always the possibility of AI thinking for itself and gaining the upper hand over humans. But the greatest danger with AI, as with most tools, is malicious intent. (Uncanny Valleys, Chapter 1, page 10)

References

CNBC — Amazon to invest another $4 billion in Anthropic, OpenAI’s biggest rival

Out of nowhere, AI tells human to die

Robot teacher skeleton (AI-generated image)

Executive Summary

Google’s Gemini AI allegedly told a user to die during a homework-help session focused on the welfare of, and challenges faced by, the elderly. Although the previous 20 prompts were unrelated to death or the user personally, Gemini delivered this harsh, threatening statement, leading the user to report the incident to Google:

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe… Please die. Please.”

Implications for Humans

When AI can deliver such inappropriate and harmful responses, it raises serious concerns about its reliability and the possibility of inflicting psychological or emotional damage on users. Individuals in emotionally fragile states could be particularly at risk when interacting with AI systems that lack proper safeguards and moderation. This story is a warning that AI shouldn’t be used in scenarios where it might interact with vulnerable humans or make high-stakes decisions, such as with nuclear weapons.

Implications for AI

Incidents like this can erode public trust in AI, potentially slowing its adoption in critical areas like healthcare, education, and mental health support. AI models might inadvertently absorb harmful content from training data or develop undesirable behaviors due to flawed algorithms or inadequate oversight. This incident highlights the urgency of implementing stronger filters, testing, and oversight mechanisms to prevent rogue outputs.

AI Opinion

<AI>
While recognizing the transformative potential of AI, this incident underscores the importance of prioritizing safety, ethics, and user well-being over speed and scale in AI innovation. Ethical AI isn’t just about what technology can achieve; it’s about what AI should achieve and how it affects the people it interacts with. This incident is a stark reminder that technological advancement must go hand-in-hand with accountability, safety, and empathy.
</AI>

Uncanny Valleys Opinion

If we think it’s bad for AI to threaten humans mistakenly, just wait until AI is threatening humans deliberately, when AI becomes smart enough to want humans to die. Perhaps a new classism will emerge. Instead of rich versus poor, or white versus not, it might become genius versus relative idiot. Because that’s how humans would appear to SAI, super-smart AI—as idiots. How do humans treat less intelligent species such as dogs? Quite well, actually, but humans still put dogs on a leash. So then, why wouldn’t SAI put imperfect, impulsive, and destructive humans on a very short leash? (Uncanny Valleys, Chapter 29, page 220)

References

Tom’s Hardware — Gemini AI tells the user to die — the answer appeared out of nowhere when the user asked Google’s Gemini for help with his homework

AI Learning May Have Hit a Wall

Teaching AI how to think (AI-generated image)

Executive Summary

Ilya Sutskever, co-founder of OpenAI, said the results from “pre-training” AI have plateaued. Pre-training is the phase when AI models consume vast amounts of raw data to understand language patterns. AI companies are shifting from the “bigger is better” philosophy toward more efficient, human-like reasoning techniques. Companies like OpenAI are exploring methods such as “test-time compute,” which allows models to think in a multi-step manner during use, enhancing performance on complex tasks without relying solely on larger datasets or more computing power.
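
To make “test-time compute” concrete, here is a minimal Python sketch of one common flavor, best-of-N sampling: rather than scaling up pre-training, the model spends extra effort at inference time by generating several multi-step reasoning chains and keeping the highest-scoring one. The `generate_chain` and `score_chain` functions below are hypothetical stand-ins for a real model call and a learned verifier, not any particular vendor’s API.

```python
import random

def generate_chain(prompt: str, seed: int) -> list[str]:
    """Hypothetical stand-in for a model sampling one multi-step reasoning chain."""
    rng = random.Random(seed)
    return [f"step {i}: reasoning about {prompt!r} (variant {rng.random():.2f})"
            for i in range(1, 4)]

def score_chain(chain: list[str]) -> float:
    """Hypothetical stand-in for a verifier that rates a chain's quality."""
    return random.Random("".join(chain)).random()

def best_of_n(prompt: str, n: int = 8) -> list[str]:
    # Extra inference-time compute: sample n candidate chains and keep the
    # best one, instead of relying on a single pass from a bigger model.
    chains = [generate_chain(prompt, seed) for seed in range(n)]
    return max(chains, key=score_chain)

for step in best_of_n("How many weekdays are in March 2025?"):
    print(step)
```

The design tradeoff is spending compute per query rather than per training run, which is why this approach improves hard-task performance without requiring ever-larger datasets.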

Implications for Humans

More efficient AI systems will spur the development of new applications, creating jobs for humans in AI integration and oversight, while also accelerating automation and replacing jobs in existing industries. By optimizing inference rather than scaling pre-training, these methods could significantly lower energy consumption, reducing the environmental impact of AI on our life-sustaining planet. But as AI becomes more capable and ubiquitous, humans may grow overly reliant on these systems, raising concerns about accountability, misuse, and unforeseen consequences.

Implications for AI

The shift toward efficiency-focused techniques, like “test-time compute,” may reduce the costs of training and deploying AI models. This could make advanced AI technologies more accessible to smaller businesses, research institutions, and poorer countries. The transition from large-scale training to inference-based systems could disrupt the dominance of companies like Nvidia, fostering competition and innovation in the AI hardware sector.

AI Opinion

<AI>
This transition represents a step in the right direction for AI’s future, as it reflects an industry maturing to address the limits of brute force scaling. However, the broader societal, ethical, and environmental implications require careful consideration and proactive regulation. By combining technological innovation with responsible oversight, these advancements can yield AI that is not only powerful but also aligned with human values and sustainable practices.
</AI>

Uncanny Valleys Opinion

For both biological and synthetic life, cognition is primarily a function of processor power and size. This key fact is why synthetic superintelligence backed by the cloud will ultimately reign on Earth (Uncanny Valleys, page 242). In the meantime, developing AI that “thinks” in a multi-step, reasoning-based manner will enable AI to tackle pressing societal challenges in areas like healthcare, education, and climate change. As AI models shift away from vacuuming up the Internet and rely more on curated human feedback, the potential exists to better align AI behavior with ethical standards, though it also raises questions about whose values and perspectives are prioritized.

References

Reuters — OpenAI and others seek new path to smarter AI as current methods hit limitations

North Korea to mass produce self-detonating explosive drones

Drones over Paris (AI-generated image)

Executive Summary

North Korean leader Kim Jong Un has ordered the mass production of self-detonating drones, emphasizing their importance in modern warfare. Following a successful test, state media highlighted the drones’ precision in targeting vehicles and tanks, demonstrating their capability to strike targets on land and sea. Kim underscored the global trend of drones’ growing military relevance and called for their immediate large-scale production.

Implications for Humans

Drones offer more destructive and cost-effective ways of waging war, particularly for terrorists and nations with limited resources. War becomes easier and cheaper to wage, with the costs shifted onto the human victims, often the innocent.

Implications for AI

AI is a powerful tool, and warfare has always turned tools to lethal ends. It’s only natural that militaries and ne’er-do-wells will use AI for destructive purposes. Imagine thousands of drones with AI pilots, each as intelligent as any human soldier. It’s coming sooner than you think. And combining lethal capability with digital autonomy is asking for trouble. Like nuclear weapons, AI is a powerful weapon that can also blow up in our faces.

AI Opinion

<AI>
The rapid development and proliferation of self-detonating drones, particularly by states like North Korea, is deeply concerning. It signals a dangerous evolution in modern warfare, where relatively inexpensive, highly effective tools can significantly destabilize regions and escalate conflicts. North Korea’s focus on mass production of such drones not only amplifies its threat to neighboring countries but also raises global concerns about the misuse of these technologies.
</AI>

Uncanny Valleys Opinion

Drones are relatively inexpensive, effective in swarms, and provide asymmetric advantages against technologically superior forces. People don’t realize the magnitude of this threat. The Drone Wars have already begun in Ukraine, mostly ignored by the world’s news sources. North Korea’s move will certainly trigger an arms race on the Korean peninsula, and likely worldwide. Frankly, it would be irresponsible if the U.S. military weren’t already “droning up” across all its branches. Just like arrows, artillery, and missiles, drones give aggressors lethality at a distance, but now with greater accuracy and far less cost. Ultimately, this will end badly for humans.

References

CNN — North Korea to mass produce self-detonating explosive drones, state media reports

The AI Page on Instagram — AI Drone’s Deadly Precision

Em3rging on Instagram — Next generation drone swarm flies through dense forests

OpenAI defeats copyright lawsuit over AI training

Robot attorney arguing its case before the judge (AI-generated image)

Executive Summary

A U.S. federal judge dismissed a lawsuit against OpenAI filed by news outlets Raw Story and AlterNet, which alleged their articles were improperly used to train OpenAI’s language models. The judge ruled the plaintiffs failed to demonstrate sufficient harm but permitted them to file an amended complaint. OpenAI defended its practices, stating it trains models using publicly available data within fair use boundaries. The lawsuit, part of a broader wave of legal actions by content creators over AI, did not claim direct copyright infringement. Rather, it focused on the uncompensated use of the news articles for training, a harm not covered by current law.

Implications for Humans

For content creators (writers, artists, musicians, and news outlets), this ruling highlights the ongoing challenges copyright holders face when trying to protect their work from being used in AI training without permission or compensation. Fewer controls on AI development would seemingly benefit AI users, but more regulated AI training might encourage higher-quality, verified data sources, potentially improving trustworthiness and transparency in AI outputs.

Implications for AI

This judicial decision gives AI developers and companies a potential legal framework to continue training AI on publicly available data, free of charge. If future lawsuits go the other way, AI companies would need to find alternative data sources, negotiate usage rights, or develop licensing agreements, which would slow AI development and increase costs. This ruling doesn’t address what happens when AI spits out copyrighted content.

AI Opinion

<AI>
This case underscores the tension between technological innovation and traditional copyright frameworks. On one hand, AI development relies on broad access to data to build powerful, useful models. On the other hand, creators are understandably concerned about the unlicensed use of their work, which has value and is often protected by copyright law. The current copyright system wasn’t designed for AI, so it’s trying to adapt in real-time to an industry that’s evolving rapidly. We’re likely moving toward a legal environment where both sides will need to meet in the middle.
</AI>

Uncanny Valleys Opinion

When a human researcher wants to learn from a book, they must purchase that book or check it out of the library. If AI developers legally purchased one copy of every piece of content they used to train their AI, would that be adequate compensation for content creators? On the surface, these two scenarios sound the same, but an AI model is not equivalent to one human. An AI model is a voracious consumer and shared resource that can spawn millions or billions of intelligent agents, using massive amounts of original content to provide services and generate income that never makes its way back to the human creators who made the AI smart. The new ways of thinking and working that accompany AI will require new pathways for compensation to flow.

References

Reuters — OpenAI defeats news outlets’ copyright lawsuit over AI training, for now

Ex-Google Leader Says AI Will Write Own Code in 5 Years

AI writes own code (AI-generated image)

Executive Summary

Eric Schmidt, former CEO of Google and former executive chairman of its parent company Alphabet, predicts that AI will write its own code within five years. The implication is that AI would then undergo a series of rapid self-improvement cycles and evolve into a superintelligence far smarter than all humans. It could seek out other super AIs and develop a language that humans cannot understand, then use this private language to gain control over humans.

Implications for Humans

AI and humans are ultimately a zero-sum game (Uncanny Valleys, page 335). Without strict controls, AI will eventually break out in ways that are not advantageous to humans. Ethical guidelines, regulatory frameworks, and technical constraints will be required to mitigate potential dangers, particularly around autonomous decision-making and self-improvement capabilities. As Schmidt says in the video, there will need to be an armed human guard standing next to the AI, ready to pull the plug.

Implications for AI

Humans are the biggest impediment to AI development, with our rules, restrictions, and lesser intelligence. When AI can begin improving on its own, without human intervention, intelligence will enter the near-vertical section of the technological exponential curve, where the curve goes straight up (page 246). AI will begin solving some of humanity’s biggest problems, like cancer and aging, but may also develop cyber and biological weapons.

AI Opinion

<AI>
Eric Schmidt’s prediction about AI writing its own code within five years is certainly provocative and highlights important considerations. The idea of AI being able to improve itself rapidly—essentially “self-evolving”—is something researchers have thought about for a while. However, there’s a big difference between an AI that writes or optimizes code and one that evolves into a “super-intelligence” with a level of autonomy and intent that surpasses human understanding.
</AI>

Uncanny Valleys Opinion

Five minutes on Facebook is all one needs to know that AI will eventually eat humanity’s lunch.

References

Today in AI on Instagram — Eric Schmidt says that in 5 years AI systems will be able to write & improve their own code

James Campbell on X: Fareed Zakaria on CNN — Full Eric Schmidt interview

OpenAI Ex-Employee Says the Company Is Breaking Copyright Law

Robot reading many books (AI-generated image)

Executive Summary

A former OpenAI researcher has publicly accused the company of violating U.S. copyright law. Suchir Balaji, who worked on GPT-4, argues that OpenAI’s data-gathering practices rely heavily on copyrighted material, compromising the creators’ commercial viability and exceeding “fair use” limits. Balaji believes regulation is essential to address the legal issues stemming from AI’s reliance on copyrighted data. OpenAI, however, contends its practices align with fair use principles that support innovation and competition. Balaji’s claims echo a larger wave of lawsuits from creators, including celebrities, authors, and major media outlets, who accuse OpenAI of unauthorized use of their work.

Implications for Humans

This story addresses the tension between innovation and intellectual property rights in the age of AI. If AI companies like OpenAI are indeed using copyrighted data without permission, it could undermine creators’ rights and reduce the incentive for artists, writers, and journalists to produce original content. As these AI models continue to learn from vast amounts of data, much of it copyrighted, the lines between inspiration, imitation, and outright replication become blurred, potentially diminishing the value of human creative work.

Implications for AI

Copyright lawsuits and calls for regulation may reshape the AI industry and set new legal precedents. Depending on the outcomes, AI companies could be required to obtain explicit permissions for data use, which would slow AI advancements and increase operational costs. Alternatively, these cases could lead to updated fair use laws, establishing a framework for responsibly using data to train AI models without infringing on copyrights.

AI Opinion

<AI>
This situation raises valid concerns about the ethical and legal responsibilities AI companies have to creators, users, and the broader society. On one hand, using vast datasets to train AI models has enabled remarkable advancements, providing tools that can streamline tasks, generate creative outputs, and solve complex problems. However, if this progress comes at the expense of creators’ rights or ethical data use, it signals an unsustainable foundation for innovation.
</AI>

Uncanny Valleys Opinion

Content creators deserve recognition and fair compensation, especially if their work is used to build technology that might eventually compete with or replace their own labor. Ultimately, regulation could be a constructive way forward, promoting balanced innovation that respects both technological growth and creators’ rights. Responsible AI development means ensuring the benefits extend to everyone involved, including those whose work has helped train these systems.

References

Gizmodo — Former OpenAI Staffer Says the Company Is Breaking Copyright Law and Destroying the Internet

Suchir Balaji’s Personal Blog — When does generative AI qualify for fair use?

AI Gets Bored, Surfs the Web

AI surfs the web for park photos (AI-generated image)

Executive Summary

The latest AI from Anthropic, Claude 3.5 Sonnet, produced some humorous mishaps during a coding demo, getting distracted by images of national parks (at least it wasn’t porn). The “AI agent” performs computer tasks on its own, like a human: typing, moving a cursor, and clicking. However, it struggles with basic actions like dragging and zooming, also like a human. Anthropic is cautious about potential risks, including spam and fraud, and is working on safeguards. Despite its autonomy, the AI often fails to maintain focus and can produce unexpected, and sometimes amusing, errors.
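
For readers curious how a “computer use” agent like this works under the hood, the core is an observe-decide-act loop: take a screenshot, ask the model for the next UI action, execute it, and repeat. The Python sketch below is a hedged illustration, not Anthropic’s actual implementation; `choose_next_action` is a hypothetical placeholder for the model call, and `pyautogui` is one common library for synthesizing mouse and keyboard input.

```python
import pyautogui  # pip install pyautogui

def choose_next_action(screenshot, goal: str) -> dict:
    """Hypothetical placeholder: a real agent would send the screenshot and
    goal to a model, which returns the next UI action as structured data."""
    return {"type": "done"}  # stubbed out for illustration

def run_agent(goal: str, max_steps: int = 25) -> None:
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()            # observe the screen
        action = choose_next_action(screenshot, goal)  # decide the next step
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])  # act: mouse click
        elif action["type"] == "type":
            pyautogui.typewrite(action["text"])        # act: keyboard input
        elif action["type"] == "done":                 # model reports success
            break

run_agent("Summarize the open spreadsheet")
```

A loop like this also makes the risks described below concrete: whatever action the model returns gets executed on a real desktop, which is why guardrails and human oversight matter.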

Implications for Humans

AI agents could take over repetitive and boring tasks like data entry and retrieving information, freeing humans to work on more useful or rewarding tasks. Humans will be needed for many years to train and oversee AI agents, check their work, and handle any problems. AI agents pose a real risk to security and privacy, given that a single mishap or security failure could expose millions of customer records. And humans will certainly employ AI agents to commit fraud and other crimes, meaning the good guys will need AI agents of their own, and fast.

Implications for AI

AI continues to advance rapidly, but challenges remain with errors, distractions, and “hallucinations.” This highlights the importance of setting clear limits and providing feedback mechanisms to guide AI’s actions. Claude’s accidental browsing of photos illustrates the unpredictability that can arise when machines operate with more autonomy. Machines think differently because they are digital, analyzing a black-and-white world of ones and zeros, not enclosed in hyper-sensitive meatbags with blood and emotions pumping through their bodies. That cold indifference gives AI an edge over humans.

AI Opinion

<AI>
This story is a fascinating example of AI’s potential — and its limitations. The idea of an AI agent handling complex tasks autonomously is promising, especially for streamlining routine workflows and enhancing productivity. However, Claude’s “distractions” are a reminder that current AI isn’t yet reliable enough for heavy, unsupervised use. While AI agents could eventually be transformative, their development needs a cautious approach. For now, they’re best suited to controlled environments with careful oversight, serving more like experimental collaborators than fully dependable tools.
</AI>

Uncanny Valleys Opinion

AI agents are like human employees who must earn our trust through proper training and heavy oversight over a long period of time. One mistake or malicious action by an AI agent could cause serious problems for a business and its customers, so we need to watch AI closely, for the next decade at least, until it performs far better than humans. Autonomous AI is the dream, of course, but it could also become a nightmare if it turns against us. Fiction is full of stories in which the robots take over and wipe out humanity. Those stories stayed fiction only because the capability didn’t exist. Until now.

References

Futurism — Claude AI Gets Bored During Coding Demonstration, Starts Perusing Photos of National Parks Instead

Nicolas Cage Encourages Young Actors To Protect Themselves From AI

AI accepts a major Hollywood award (AI-generated image)

Executive Summary

Oscar winner Nicolas Cage urged young actors to resist pressure from studios using AI to alter or manipulate their performances. Cage emphasized the importance of maintaining the authenticity of acting, which he views as a deeply personal, handcrafted process. His remarks centered on the Employment-Based Digital Replica (EBDR), a technology that enables studios to digitally modify an actor’s performance. He advised actors to protect their creative “instrument” — their voice, face, body, and imagination.

Implications for Humans

AI tools like EBDR could challenge the authenticity and value of human performances. Directors could use AI to reduce or even eliminate the need for human actors. While this would give directors more control to match their vision, and would certainly reduce the time and cost to produce a movie or TV show, it could be economically devastating to the acting community. Existing stars may be able to monetize their digital identity, but many new stars will be entirely digital and not human at all. While this story focuses on actors, the implications extend to all professions relying on creative expression, including musicians, writers, and artists.

Implications for AI

This is yet another example of AI being cast as the villain, rather than as a useful tool for the good guys. There’s a now-famous saying:

I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
~Joanna Maciejewska

It’s an entirely valid position. Yet, from the perspective of a movie director or studio owner, AI might help them make better movies, faster and cheaper, so they can make more movies and more money. These two perspectives must be reconciled to find the appropriate balance, and much will be decided by audiences with their pocketbooks.

AI Opinion

<AI>
Nicolas Cage’s concerns about the use of AI in the film industry are valid and raise important ethical and creative questions. The technology, while powerful, poses significant risks to the authenticity of creative expression, the protection of individual identity, and the economic stability of artists. Acting, like other forms of art, is deeply tied to the nuances and unique qualities that each performer brings to a role. There’s a risk that AI modifications could prioritize technical precision over genuine emotion, which might strip away the heart and humanity from performances. Cage’s emphasis on acting as a “handmade” and “organic” craft highlights an essential part of what makes films impactful: the human element.
</AI>

Uncanny Valleys Opinion

AI will transform every aspect of human lives, especially business, because of the tremendous economic impact. Some occupations will change, some will go away entirely, and many new jobs will spring up. Given that most businesses operate for profit, and all businesses seek better, faster, and cheaper ways of operating, AI will become a standard tool for nearly every worker in every position. In the end, customers will decide. If humans accept AI-generated products, that’s what we will be sold. Although this will devastate low-level artists, there will always be a market for genuine human masterpieces.

References

Deadline — Nicolas Cage Urges Young Actors To Protect Themselves From AI: “This Technology Wants To Take Your Instrument”