Weekly Piece of Future #130
From Neuromorphic Chips to Self-Improving Systems and Cancer-Fighting Molecules
Hey there, fellow future-addicts!
Welcome to this week's edition of Rushing Robotics! This week brings a diverse mix of advances in AI, robotics, and biotechnology. From brain-inspired computing that rivals the complexity of a macaque’s neural network, to AI uncovering new principles in plasma physics, to medical innovations that could reshape treatment and diagnosis, the pace of development remains steady and significant.
🤯 Mind-Blowing
In China, researchers have developed Darwin Monkey, a neuromorphic computing system with more than two billion neurons and over one hundred billion synapses—approaching the capacity of an actual macaque brain. At Emory University, a custom AI model has identified previously overlooked forces in plasma physics, providing a more accurate picture of this complex state of matter. Singapore’s Hierarchical Reasoning Model is matching the performance of large language models on complex reasoning challenges while training on just a thousand examples. Meta reports early signs of AI systems improving themselves, a development that could influence future approaches to superintelligence. And in robotics, a new e-textile enables mechanical hands to detect pressure, movement, and slippage, allowing them to adjust their grip with far greater precision.
🔊 Industry Insights & Updates
Google DeepMind has released Genie 3, its most advanced simulation model, capable of generating interactive environments from text prompts. OpenAI’s GPT-5 introduces a unified model architecture with stronger reasoning, fewer errors, and a 256K token context window. Apple is reportedly developing an in-house AI “answer engine” that could enhance or replace existing search functions in its products. Robotics company OpenMind has launched FABRIC, a system enabling humanoid robots to share contextual knowledge across languages and settings.
🧬 BioTech
A sugar molecule discovered in deep-sea bacteria has been shown to trigger a specific type of cancer cell death known as pyroptosis, suggesting new therapeutic possibilities. MIT scientists have developed a sound-powered microscope that can image brain tissue at depths five times greater than previous methods without damaging cells. Researchers in South Korea have designed an ultrasound-based wireless charging system that can fully recharge implantable medical devices in under two hours, without requiring surgery.
💡 Products/Tools of the Week
Google Labs has introduced Opal, a no-code platform for building AI-driven applications using natural language and visual tools. Coze Studio offers a development environment for AI agents that combines low-code and no-code capabilities. Snaptrude is a browser-based BIM platform with AI-assisted design and real-time documentation updates. vBots is an AI assistant for insurance agencies that automates administrative work and significantly reduces manual processing time.
🎥 Video Section
LimX Dynamics showcases the agility of its robot Oli. PNDbotics presents its humanoid robots Adam and Adam-U at WAIC 2025. MIRMI demonstrates climate-optimized building methods using robotics.
This week’s breakthroughs show just how quickly the future is unfolding. From brain-inspired computing to smarter robotics, we’re witnessing progress that could reshape entire industries. The next wave of innovation is closer than ever - stay hungry, stay futurish!
🤯 Mind-Blowing
Engineers in China have introduced a new generation of brain-inspired computing technology that replicates the workings of a macaque monkey’s brain. Known as Darwin Monkey, the system is said to support more than two billion spiking neurons and over one hundred billion synapses, a neuron count that approaches that of an actual macaque brain, while consuming roughly 2,000 watts of power under typical operating conditions. Developed by the State Key Laboratory of Brain-Machine Intelligence at Zhejiang University in the eastern province of Zhejiang, Darwin Monkey is the first neuromorphic brain-like computer in the world to be based entirely on dedicated neuromorphic chips. The machine incorporates 960 Darwin 3 neuromorphic computing chips, representing the third generation of brain-like neural processing units, arranged into fifteen blade-style neuromorphic servers. This chip array is what allows the system to integrate advanced cognitive capabilities with sensory functions such as vision, hearing, language, and learning.
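The reported figures invite a quick back-of-the-envelope check. The per-chip and per-server splits below are plain division over the article's numbers, not officially published specifications:

```python
# Sanity arithmetic on the reported Darwin Monkey figures. All inputs come
# from the article; the derived splits are illustrative division only.
NEURONS = 2_000_000_000       # spiking neurons supported
SYNAPSES = 100_000_000_000    # synapses supported
CHIPS = 960                   # Darwin 3 neuromorphic chips
SERVERS = 15                  # blade-style neuromorphic servers
POWER_W = 2_000               # typical power draw in watts

chips_per_server = CHIPS // SERVERS          # 64 chips per blade server
neurons_per_chip = NEURONS // CHIPS          # roughly 2.08 million per chip
synapses_per_neuron = SYNAPSES // NEURONS    # about 50 on average
watts_per_billion = POWER_W / (NEURONS / 1e9)  # 1,000 W per billion neurons

print(chips_per_server, neurons_per_chip, synapses_per_neuron)
```

The roughly 50 synapses per neuron is far sparser than biological cortex, which is a reminder that "approaching macaque scale" refers to neuron count, not full connectivity.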
Scientists at Emory University have developed a custom AI neural network that has overturned faulty assumptions that shaped plasma theory for years, including one about the electric charge of particles. Unlike most AI research, which typically focuses on predicting outcomes or refining datasets, the team in Atlanta trained its model to discover new physics. They accomplished this by supplying the AI with experimental data from dusty plasma—a rare and exotic state of matter consisting of hot, electrically charged gas filled with microscopic dust particles. The AI responded with strikingly accurate descriptions of unusual forces that had long puzzled researchers, providing fresh insights into interactions within this chaotic medium. This breakthrough not only corrects long-standing misconceptions in plasma physics but also demonstrates that AI can uncover entirely new physical laws. The approach could transform the study of complex systems, from living cells to industrial materials, by revealing interactions that traditional methods might miss.
A Singapore-based startup, Sapient Intelligence, has unveiled a breakthrough AI architecture that delivers up to 100 times faster reasoning than large language models (LLMs), while needing only 1,000 training examples. Called the Hierarchical Reasoning Model (HRM), the system is designed to replicate the brain’s two-track approach to thinking—combining slow, deliberate planning with fast, intuitive decision-making. In tests, HRM has matched and, in some cases, dramatically outperformed LLMs on complex reasoning challenges, despite being far smaller and more data-efficient. Its ability to achieve such results with minimal data and memory could make it an attractive choice for enterprise AI systems operating under tight resource constraints.
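To make the two-track idea concrete, here is a deliberately tiny toy loop, invented for illustration and unrelated to Sapient's actual architecture: a slow planner fires only every few ticks to set an intermediate waypoint, while a fast executor acts on every tick toward whatever the current waypoint is.

```python
# Toy sketch of a slow-planner / fast-executor loop on a 1-D number line.
# All names and logic are invented for illustration; this is NOT the HRM.

def plan(position, goal):
    """Slow track: deliberately pick a waypoint halfway to the goal."""
    return position + (goal - position) // 2 if abs(goal - position) > 1 else goal

def act(position, waypoint):
    """Fast track: take one immediate unit step toward the waypoint."""
    return position + (1 if waypoint > position else -1 if waypoint < position else 0)

def navigate(start, goal, replan_every=4, max_ticks=100):
    position, waypoint = start, plan(start, goal)
    for tick in range(max_ticks):
        if position == goal:
            return tick                      # ticks taken to reach the goal
        if tick % replan_every == 0:         # slow track fires occasionally
            waypoint = plan(position, goal)
        position = act(position, waypoint)   # fast track fires every tick
    return max_ticks
```

Replanning every tick reaches the goal in the fewest steps, while a sparser planning cadence trades a few idle ticks for far fewer planner calls; separating deliberate planning from cheap reactive execution is the general division of labor the HRM description points at.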
Meta says it is beginning to witness the early stages of AI systems capable of improving themselves. “Over the last few months we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable. Developing superintelligence is now in sight,” CEO Mark Zuckerberg wrote in a policy paper outlining the company’s vision for the future of advanced AI. He notes that, in the years ahead, artificial intelligence will not only enhance existing systems but also enable the creation of ideas and technologies that are unimaginable today. The challenge, he adds, lies in deciding what goals superintelligence should serve. According to researchers, this transition toward self-optimizing AI could accelerate the journey toward superintelligence and reshape the dynamics of AI development.
Researchers at the University at Buffalo have created a new electronic textile that allows robotic fingers to sense pressure, slippage, and movement much as human skin does. The e-textile enables robots to adjust their grip in real time, tightening or loosening as needed for the task at hand. The advancement marks a substantial leap in robotic dexterity, giving machines the ability to handle objects with a level of care and precision that was previously difficult to achieve. Its potential applications range from assembly lines and medical procedures to advanced prosthetics, where sensitivity and fine motor control are crucial. By integrating this technology, robots could become far more adept at working safely and effectively alongside humans.
🔊 Industry Insights & Updates
Google DeepMind has unveiled Genie 3, its most advanced world simulation model so far, capable of creating interactive and dynamic virtual environments directly from text prompts. These worlds can be explored in real time at 720p resolution and 24 frames per second, maintaining visual and interactive consistency for several minutes. The release is the product of years of DeepMind research in training AI agents within simulated environments for applications spanning gaming, robotics, and open-ended learning. Compared with its predecessors, Genie 1 and Genie 2, the new system delivers substantial improvements in realism and supports real-time navigation through its generated spaces. Models like Genie 3 are regarded as an important step toward artificial general intelligence, offering AI agents the ability to experience varied, open-ended environments where they can learn both how the world changes and how their own actions influence those changes.
OpenAI has announced the release of GPT-5, its most powerful AI model yet, marking a major step forward for ChatGPT, the API, and developer tools. The upgrade delivers a more intelligent, safer, and more personalized experience for users across the board. Instead of asking people to switch between different models, GPT-5 introduces a unified system that automatically delivers the best version of ChatGPT for any prompt. This approach makes the model faster, more accurate, and dramatically better at real-world tasks, whether that means producing clear writing, generating clean and functional code, or offering dependable health-related insights. Among the key improvements are sharper reasoning with far fewer hallucinations, safer completions that produce clearer and more useful replies, stronger coding abilities including frontend design, and more capable writing tools designed for professional workflows. GPT-5 also delivers OpenAI’s best performance yet for health-related guidance. Paid subscribers can now customize chat colors, and the model comes with a set of pre-configured personalities including Cynic, Robot, Listener, and Nerd. Pro users gain integrations with Gmail, Google Calendar, and Contacts, while everyone benefits from upgrades to voice interactions with adaptive tone and expanded availability. A unified Voice Mode will soon be available to all users. Developers are receiving major new capabilities as well, such as free-form function calling, verbosity controls, and a massive 256K token context window.
Apple appears to be moving toward the creation of its own AI “answer engine,” a streamlined alternative to ChatGPT capable of responding to open-ended questions using information gathered from across the web. The initiative is reportedly led by a group inside the company known as Answers, Knowledge, and Information, or AKI. According to reports, the team’s work could take the form of a standalone application or be integrated into existing Apple products such as Siri, Safari, and other core services. Bloomberg’s Mark Gurman notes that Apple is actively hiring for this team, with job postings seeking candidates experienced in search algorithms and engine development. Although Apple has already incorporated ChatGPT into Siri, plans for a more personalized, AI-enhanced version of the voice assistant have been delayed multiple times. The company’s search ambitions could also be influenced by Google’s recent antitrust defeat, which might force changes to the longstanding search deal between the two tech giants.
Startup OpenMind is approaching robotics from a software-first perspective with OM1, a hardware-agnostic operating system designed for humanoid robots. The company has now launched FABRIC, a protocol that lets robots verify identities and exchange contextual information with one another. This capability could significantly speed up how machines adapt to new languages, environments, and collaborative scenarios, making human–robot interaction more fluid. OpenMind has secured $20 million in funding to advance open, decentralized systems that link intelligent machines together, with the goal of making robotic intelligence accessible on a global scale. The OM1 platform acts as a universal brain, running on any robotic hardware and enabling cross-manufacturer cooperation, much like an Android equivalent for robotics. Unlike older robotics stacks developed before modern AI, OM1 has been built from the ground up for adaptive, intelligent behavior.
🧬 BioTech
A team of scientists has identified a sugar molecule from deep-sea bacteria that can annihilate cancer cells through a striking mechanism. The natural compound, produced by ocean-dwelling microbes, forces cancer cells into a fiery self-destruction process known as pyroptosis—an inflammatory form of programmed cell death. In laboratory experiments and studies with mice bearing liver cancer, the substance not only halted tumor growth but also spurred the immune system into action. The discovery, detailed in The FASEB Journal, could inspire a new generation of cancer therapies derived from marine sugars. Researchers purified a long-chain sugar, or exopolysaccharide, from these deep-sea microbes and showed its ability to trigger pyroptosis as an anti-tumor strategy.
The world’s first sound-powered microscope is capable of imaging brain tissue at depths five times greater than previous techniques, all without altering the cells. By harnessing light to trigger sound waves, researchers have developed a system that penetrates far deeper into the brain, enabling sharper and more detailed visualization. While traditional light-based microscopy can map cortical structures with precision, it struggles to maintain resolution when targeting deeper regions such as the hippocampus. This limitation is even more pronounced when attempting to observe molecular activity within single cells—an essential factor in studying brain function and disease. MIT scientists and engineers have addressed this challenge by integrating ultrafast light pulses with sound-based detection. The resulting microscope surpasses current depth limits without the need for dyes, chemicals, or genetic engineering. The team anticipates that this breakthrough will significantly advance neuroscience research and open new possibilities for surgical applications.
Researchers at the Daegu Gyeongbuk Institute of Science and Technology have created an ultrasound-based wireless charging system that can recharge implantable medical device batteries in less than two hours—without surgery. Designed with dual piezoelectric layers, the system can deliver power faster than any previous method, even when tissue separates the charger from the implant. The growing global demand for implantable devices has raised questions about their long-term usability and patient safety, making such innovations critical. In this design, the first piezoelectric layer converts incoming ultrasound waves into electricity, while the second layer captures residual ultrasound energy to generate additional power. Together, these layers produce more than 20 percent greater efficiency compared to conventional harvesters. Simulations guided the harvester’s design before fabrication, and the final device electrically connected both layers for maximum output. In testing, it achieved a power density of 497.47 milliwatts per square centimeter in water, producing a total output of 732.27 milliwatts—enough to fully recharge a 140 mAh battery in just 1.7 hours.
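The reported figures can be cross-checked with quick arithmetic. The article does not state the battery voltage, so the sketch below assumes a nominal 3.7 V lithium cell; the implied efficiency figure is therefore only illustrative:

```python
# Back-of-the-envelope check of the reported 1.7 h charge time.
# Assumption (not in the article): a nominal Li-ion cell voltage of 3.7 V.
NOMINAL_V = 3.7          # volts, assumed
CAPACITY_MAH = 140       # battery capacity from the article
HARVESTED_MW = 732.27    # harvester output in milliwatts, from the article
REPORTED_H = 1.7         # reported full-charge time in hours

energy_mwh = CAPACITY_MAH * NOMINAL_V        # ~518 mWh stored at full charge
ideal_hours = energy_mwh / HARVESTED_MW      # time if every milliwatt reached the cell
implied_efficiency = ideal_hours / REPORTED_H  # fraction of output actually banked

print(f"ideal charge time: {ideal_hours:.2f} h")
print(f"implied end-to-end charging efficiency: {implied_efficiency:.0%}")
```

Under that assumed voltage, the ideal time is about 0.7 hours, so the reported 1.7 hours would imply roughly 40 percent end-to-end efficiency, which is a plausible gap once rectification losses and the tapering current of a real charge cycle are accounted for.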
💡 Products/Tools of the Week
Opal is a no-code AI application development tool created by Google Labs, enabling users to design custom AI-driven apps using natural language instructions and visual editing features. By interpreting user-provided descriptions, the platform automatically generates functional mini-applications, powered by Google’s advanced AI models like Gemini and Imagen. No coding expertise is needed. Users can construct complex, multi-step processes that incorporate various AI functions such as text interpretation, image creation, and data analysis. These workflows can then be adjusted and refined visually within an easy-to-navigate interface. A key advantage of Opal lies in its tight integration with the broader Google ecosystem, its capability to produce shareable web links for public access or collaboration, and its appeal to users without technical backgrounds.
Coze Studio is a comprehensive platform for developing AI agents, streamlining the process of building, testing, and deploying AI-driven applications with user-friendly visual tools. It blends no-code and low-code functionality with cutting-edge language model support, enabling users of all skill levels to construct advanced AI agents without requiring deep programming expertise. The platform includes robust features like built-in prompt engineering, Retrieval-Augmented Generation (RAG), seamless plugin integration, and visual orchestration for workflow design. These capabilities make AI agent development accessible and efficient for everyone—from solo innovators to enterprise-level teams.
Snaptrude is a web-based Building Information Modeling (BIM) platform that transforms architectural design through the integration of AI-driven automation and intelligent parametric modeling. Accessible directly through the browser, the platform allows architects and designers to quickly generate building concepts, while AI features automatically handle documentation, area calculations, and real-time bill of quantities. As the design progresses, Snaptrude ensures that all related data remains up to date, fostering seamless, real-time collaboration across devices. AI-assisted recommendations further streamline design decisions and workflows. With built-in machine learning tools that support interoperability with industry-standard software like Revit and AutoCAD, Snaptrude has become a vital solution for forward-thinking design professionals.
vBots is a purpose-built AI assistant tailored for insurance brokers and agencies, designed to automate time-consuming administrative tasks within daily operations. Using cutting-edge artificial intelligence, vBots efficiently manages essential insurance functions such as direct bill reconciliation, handling notices of cancellation, retrieving documents, managing policy renewals, and cleaning up client data—all with minimal need for human oversight. The system seamlessly integrates with agency management platforms, adapting to existing workflows and acting as a virtual team member capable of processing tasks with 99% accuracy. By reducing manual workloads by as much as 90%, insurance professionals using vBots report saving over 100 hours each week, achieving full ROI in under three months, and freeing up their teams to focus on client service and business growth.