Weekly Piece of Future #160
From Visual Cortex to Robotic Fingers and Cancer-Killing Bacteria
Hey there, fellow future-addicts!
Welcome to this week's edition of Rushing Robotics! This week, researchers decoded a mouse's visual experience directly from its neurons. A robotic hand handled a raspberry without bruising it. A surgeon in London removed a tumor from a patient in Gibraltar — 2,400 kilometers away — in real time. And Germany broke ground on the world's largest gym for humanoid robots.
🤯 Mind-Blowing
UCL researchers reconstructed 10-second video clips purely from mouse brain signals, using single-cell recordings from the visual cortex — no fMRI required. Meanwhile, UT Austin's FORTE robotic hand grabbed potato chips and raspberries without damage, matching human receptor response speeds for the first time. Rhoda AI's FutureVision system lets robots predict what's about to happen in their environment every few hundred milliseconds and act on it instantly. Germany's TUM RoboGym is set to become the world's largest humanoid robot training center. And Lawrence Berkeley Lab turned thermodynamic noise — normally the enemy of computing — into a functional AI asset that could slash inference energy costs dramatically.
🔊 Industry Insights & Updates
China's XGSynBot unveiled the Z1, a wheeled humanoid that switches tools in under six seconds and is built for real industrial grime from day one. AIKO Energy launched its Gen 3 ABC solar panels in Australia, peaking at 545W with less than 0.35% annual degradation after the first year. Lucid Motors revealed Lunar, a steering-wheel-free robotaxi concept built on its upcoming Midsize platform, with Uber already in advanced deployment talks. And the US Department of Energy approved the safety documentation for the MARVEL microreactor, clearing the path for the first nuclear physics trials at Idaho National Laboratory.
🧬 BioTech
A RIKEN team in Japan identified the missing third cell type needed to grow a complete, cycling hair follicle in the lab — follicles that produced hair, cycled repeatedly, and even connected to host nerves after transplantation. At Baylor University, researchers bonded the toxin saporin onto Listeria bacteria, using the pathogen's natural ability to invade cells as a drug delivery vehicle to kill colorectal cancer cells from the inside. And in a historic telesurgery milestone, surgeon Prokar Dasgupta successfully removed a patient's prostate from 2,400 km away, guided by the Toumai Robotic System with just 48 milliseconds of lag.
💡 Products/Tools of the Week
Triall is an AI verification platform that runs three independent large language models in parallel and has them stress-test one another's outputs through blind peer review and adversarial refinement, delivering a single evidence-tagged verdict with confidence scores for legal, technical, and research professionals who can't afford a hallucination. Hebbrix gives AI agents genuine long-term memory through a 3-tier architecture and 5-layer hybrid search, retrieving context in under 50ms and plugging into any LLM stack via OpenAI-compatible APIs. Image Describer turns any uploaded image into SEO captions, WCAG-compliant alt text, and Midjourney-ready prompts in seconds, with batch processing built in. And Dashtera is a GPU-accelerated dashboarding platform that visualizes billions of data points in real time with millisecond-level updates — no code required.
🎥 Video Section
Figure's Helix 02 tidies a living room with an ease that genuinely unsettles. DEEP Robotics built a robot horse that moves with a fluidity that has no business existing yet. And BIGAI's OmniXtreme robot delivers agility so smooth it looks like CGI — except it isn't.
The most exciting thing about this moment isn't any single breakthrough — it's that all of them are happening at once. Brains being decoded, robots learning to feel, surgeons crossing continents, and atoms being split in microreactors the size of a room. The next decade won't just look different — it will be unrecognizable. Stay hungry, stay futurish!
🤯 Mind-Blowing
Videos were reconstructed from mouse brain signals alone by researchers at University College London, marking a significant step forward in neural decoding. Dr. Joel Bauer, principal investigator at the Sainsbury Wellcome Centre at UCL, led the team in using single-cell recordings from the visual cortex — rather than fMRI machines — to build a dynamic neural encoding model that predicts how individual neurons respond to specific video frames. The model also factored in the mouse's physical state, including movements and pupil dilation, to better reflect how internal conditions shape perception. Reconstructed clips ran to 10 seconds, and their accuracy improved as more individual neurons were tracked.
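Conceptually, the encode-then-decode trick can be sketched in a few lines of Python. This toy is entirely our illustration, not UCL's pipeline: it fits a linear model that predicts each neuron's response to a frame's features, then decodes by searching for the candidate frame whose predicted population response best matches the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 200 neurons, frames described by 16 visual features.
n_neurons, n_feat = 200, 16
W_true = rng.normal(size=(n_neurons, n_feat))   # ground-truth tuning (unknown in practice)
frames = rng.normal(size=(500, n_feat))         # candidate video frames

def responses(W, X, noise=0.5):
    """Noisy linear encoding: each neuron's rate is a weighted sum of frame features."""
    return X @ W.T + noise * rng.normal(size=(len(X), W.shape[0]))

# 1) Fit the encoding model from (frame, response) training pairs.
R_train = responses(W_true, frames)
W_hat, *_ = np.linalg.lstsq(frames, R_train, rcond=None)   # shape (n_feat, n_neurons)

# 2) Decode: given a fresh population response, pick the candidate frame
#    whose predicted response is closest (max-likelihood under Gaussian noise).
true_idx = 42
r_obs = responses(W_true, frames[true_idx:true_idx + 1])[0]
pred = frames @ W_hat                                      # predicted response per frame
decoded = int(np.argmin(((pred - r_obs) ** 2).sum(axis=1)))
print(decoded)
```

The same logic explains the accuracy scaling in the study: every extra neuron adds another constraint on which frame could have produced the observed activity.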
Grip sensitivity matching human hands was demonstrated by FORTE, a new robotic hand built at the University of Texas at Austin that successfully handled fragile items like potato chips and raspberries without damage. Siqi Shang, the doctoral candidate who led the research, said the key innovation is the use of 3D-printed fingers modeled on the fin-ray effect from fish fins, with internal air channels that function as pressure sensors and deliver real-time force feedback. Shang's team tested FORTE on 31 different objects and achieved a 91.9 percent grasping success rate, while the system accurately flagged 93 percent of slip events without a single false alarm. Assistant professor Lillian Chin noted that FORTE's sensors respond at speeds comparable to human hand receptors — a benchmark no prior robotic gripper had reached.
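The slip-detection side of this is intuitive enough to sketch: a slipping object produces high-frequency micro-vibrations in the pressure signal, so a detector can fire whenever short-window pressure variability spikes. The toy below is our own illustration with made-up thresholds, not FORTE's actual firmware.

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_slip(pressure, window=5, threshold=0.5):
    """Flag a slip when the short-window mean of |Δpressure| jumps above a
    threshold (micro-vibrations from the object sliding against the finger)."""
    dp = np.abs(np.diff(pressure))
    kernel = np.ones(window) / window
    activity = np.convolve(dp, kernel, mode="valid")   # rolling mean of |Δp|
    return bool(np.any(activity > threshold)), activity

# Stable grasp: slowly rising pressure plus small sensor noise.
t = np.linspace(0, 1, 200)
stable = 2.0 + 0.1 * t + 0.02 * rng.normal(size=t.size)

# Slip event: a burst of high-frequency vibration halfway through.
slipping = stable.copy()
slipping[100:120] += 1.5 * np.sin(60 * np.pi * t[100:120])

print(detect_slip(stable)[0], detect_slip(slipping)[0])
```

FORTE's reported numbers (93 percent of slips caught, zero false alarms) amount to tuning exactly this kind of threshold so the stable trace never trips it while the vibration burst always does.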
A robot AI that predicts physical motion from video and uses those predictions to guide machines in real time was unveiled by Rhoda AI as the company emerged from stealth. Called FutureVision, the system runs a Direct Video Action model that generates short video forecasts of what will happen next in the physical environment — every few hundred milliseconds — and immediately translates those forecasts into robot movements. Rhoda CEO Jagdeep Singh said the breakthrough is that robots no longer need exhaustive pre-programming for every scenario; instead, FutureVision gives them the ability to anticipate and adapt continuously. In a production trial, a robot guided by FutureVision completed a component-processing task in under two minutes per cycle with no human assistance.
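The predict-then-act pattern is easy to caricature in code. In this toy loop (all names and dynamics are ours, not Rhoda's model), the controller forecasts where a moving object will be a few ticks ahead and steers toward the forecast rather than the current observation:

```python
def forecast(history, horizon):
    """Naive constant-velocity forecast from the last two observations."""
    v = history[-1] - history[-2]
    return history[-1] + v * horizon

def control_loop(object_positions, horizon=3, gain=0.5):
    gripper = 0.0
    history = []
    for obs in object_positions:
        history.append(obs)
        if len(history) >= 2:
            target = forecast(history, horizon)   # where the object WILL be
        else:
            target = obs                          # no history yet: just react
        gripper += gain * (target - gripper)      # move toward the forecast
    return gripper

# Object drifting steadily right; the gripper settles ahead of its last position.
track = [0.1 * k for k in range(30)]
print(round(control_loop(track), 2))
```

Swap the two-point velocity estimate for a learned short-horizon video forecast and the structure of the loop is the same: predict, act, re-predict a few hundred milliseconds later.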
A massive robotics facility is set to open in Germany, built to become the world's largest training center for humanoid robots. Known as TUM RoboGym, the center is a joint venture between the Technical University of Munich and NEURA Robotics, a firm based in Metzingen, with a combined investment of approximately €198 million — €128 million of which comes from NEURA Robotics alone. Located near Munich Airport at the TUM Convergence Centre, the facility will span around 25,000 square feet and house hundreds of robots learning everyday tasks from human trainers. David Reger, founder and CEO of NEURA Robotics, stated that the competitive advantage in intelligent robotics is no longer hardware, but high-quality training data.
Thermal noise, the random electron movement that conventional computers spend enormous energy suppressing, has been turned into a computing asset by researchers at Lawrence Berkeley National Laboratory. The team, including Molecular Foundry scientist Stephen Whitelam and researcher Corneel Casert, created a framework that enables thermodynamic computers to handle nonlinear AI tasks — something previously out of reach for the field. Casert trained the system by running evolutionary simulations across 96 GPUs on the Perlmutter supercomputer, screening more than a trillion noisy trajectories using a genetic algorithm. Once deployed as physical hardware, the approach promises dramatically lower energy use for AI inference compared to conventional processors.
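The training recipe, evolutionary search over noisy evaluations, can be illustrated with a toy genetic algorithm. Here the "thermal noise" is a Gaussian kick injected into every fitness trajectory, and selection works on the noisy average rather than trying to suppress the noise; everything below (target function, population sizes, constants) is illustrative, not the Berkeley team's setup.

```python
import random
random.seed(0)

def noisy_fitness(x, trials=200):
    """Score a candidate parameter under thermal-like noise; lower is better."""
    total = 0.0
    for _ in range(trials):
        kick = random.gauss(0.0, 0.5)    # noise injected into every trajectory
        total += (x + kick - 3.0) ** 2   # toy objective: settle near 3
    return total / trials

# Evolve: keep the candidates with the best AVERAGE noisy score, mutate, repeat.
pop = [random.uniform(-10, 10) for _ in range(40)]
for generation in range(30):
    pop.sort(key=noisy_fitness)          # selection on noisy evaluations
    survivors = pop[:10]
    pop = [s + random.gauss(0.0, 0.3) for s in survivors for _ in range(4)]

best = min(pop, key=noisy_fitness)
print(round(best, 1))
```

The point of the Berkeley result is that this averaging-over-noise strategy still works when the noise lives in the physics of the hardware itself, not just in software.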
🔊 Industry Insights & Updates
Switching tools in under six seconds, the Z1 humanoid robot was introduced by China’s XGSynBot as a direct answer to the gap between agile robotics and genuinely tough industrial environments. Revealed at the company’s “More Than One Answer” launch event in both Silicon Valley and Beijing, the Z1 is a wheeled humanoid equipped with the world’s first Modular-End-Effector Quick Change System and in-house XG-High-Performance Joint Modules. XGSynBot’s CEO described the robot as a “blue-collar worker” built for oil-stained factory conditions from the first day of deployment, not a showpiece confined to controlled settings. Alongside the Z1, XGSynBot announced the STARFIRE ecosystem to open its hardware interfaces to third-party developers and progressively release proprietary SDKs to the broader research community.
Releasing its most advanced panel to date, AIKO Energy on March 11 introduced the Gen 3 ABC 60-Cell module in Australia, a product designed to squeeze more power from limited suburban rooftops. The module's Infinite ABC technology eliminates metal grid lines from the front surface, creating a pure black light-absorbing panel that peaks at 545W — enough to push a typical 660m² commercial factory roof from 100kW to 107kW using the same footprint as before. Degradation is minimal: just 1 percent in the first year and 0.35 percent annually thereafter, ensuring over 90.6 percent power retention after three decades. Bywater of AIKO Energy said the uniform format and installation process across all project types allows installers to scale their businesses more efficiently.
A two-seat robotaxi with no steering wheel and no pedals was unveiled by Lucid Motors at its investor day event in New York. Named Lunar, the concept is built on Lucid's upcoming Midsize electric vehicle platform — the same architecture that will underpin two consumer SUVs, the Lucid Cosmos and Lucid Earth, with vehicles starting under $50,000. Lucid is in advanced negotiations with Uber to deploy vehicles from this platform at scale, with Uber CEO Dara Khosrowshahi calling Lucid "a vital strategic partner" for rolling out autonomous vehicles globally. Interim Lucid CEO Marc Winterhoff said the company is applying greater scale and cost discipline while preserving its core technology DNA.
Safety approval for the MARVEL microreactor's foundational documentation was granted by the US Department of Energy, clearing the path for the first nuclear physics trials at Idaho National Laboratory. The compact reactor uses sodium-potassium cooling and is engineered to deliver between 85 and 100 kilowatts of thermal output alongside approximately 20 kilowatts of electrical power, making it one of the smallest grid-capable nuclear units ever designed. INL's Alla Ab-Jade described the approved safety framework as a reusable model for advanced nuclear developers seeking to speed up their own programs. The risk-informed approach used by MARVEL has already begun to influence other DOE-backed projects, including the Molten Chloride Reactor Experiment and VALKR.
🧬 BioTech
Scientists in Japan have cracked a long-standing puzzle in hair regeneration by identifying the third essential cell type needed to grow a complete, cycling hair follicle in a lab. The RIKEN team — including Takashi Tsuji, Koh-ei Toyoshima, and Miho Ogawa — showed that dermal mesenchymal cells marked PDGFRα+/Sca1+/CD34high+ act as the structural scaffolding of the growing follicle, differentiating into dermal sheath cells that physically wrap around and pull the follicle downward as it grows. Without this population, engineered follicles formed hair bulbs but could never elongate into full follicles — the equivalent of building a foundation without ever raising the walls. With all three populations in place in a 3D skin model, the follicles produced hair, cycled repeatedly, and even connected to host nerves and muscles after transplantation.
Colorectal cancer cells were killed more effectively than ever before in mouse models by a bacteria-based drug delivery system developed at Baylor University and published in Cell Chemical Biology. Professor Michael VanNieuwenhze and his team — including doctoral students Wyatt Paulishak and Jianan Lyu and a collaborator from Texas Tech University Health Sciences Center — chemically bonded the toxin saporin onto the surface of Listeria monocytogenes, a bacterium already famous for its ability to slip inside human cells undetected. Once Listeria invades a tumor cell, the saporin is released into the cytoplasm, where it shuts down the cell's protein-making machinery and kills it. VanNieuwenhze described the approach plainly: "hook saporin on the surface of a bug, let the bug get delivered into the cell, and use chemistry inside the cell to release saporin to kill it."
Remote surgery reached a new milestone when Prokar Dasgupta, head of robotic surgery at The London Clinic, successfully removed a patient's prostate from 2,400 kilometers away. Paul Buxton, a 62-year-old cancer patient at St Bernard's Hospital in Gibraltar, had originally expected to travel to the UK for treatment but instead took part in a telesurgery trial. Dasgupta guided the Toumai Robotic System in real time through a secure high-speed network provided by Presidio, achieving a lag of only 48 milliseconds between his London console and the robot in Gibraltar. A local surgical team was present at Buxton's side throughout the operation.
💡 Products/Tools of the Week
Launched to address the growing demand for auditable AI outputs, Triall is an AI-driven verification platform that runs three independent large language models in parallel, subjecting each model's answers to blind peer-review, stress-testing, and adversarial refinement by the others. The system then performs convergence analysis and devil's-advocate checks, topped by web-based claim verification, before delivering a single evidence-tagged verdict complete with confidence and over-compliance risk scores. Professionals in legal, technical, strategic, and research fields turn to Triall when a single model's output is simply not reliable enough, as its multi-model workflow is specifically designed to catch correlated hallucinations and leave a transparent reasoning trail that solo AI systems cannot match.
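The core multi-model idea reduces to a simple pattern: query several independent models, compare their answers, and attach an agreement-based confidence to the verdict. A minimal sketch with stub models (ours, not Triall's actual pipeline, which layers adversarial refinement and web verification on top):

```python
from collections import Counter

def verify(question, models):
    """Ask each model independently, then converge on the majority answer
    with a simple agreement-based confidence score."""
    answers = [m(question) for m in models]     # independent, "blind" passes
    tally = Counter(answers)
    verdict, votes = tally.most_common(1)[0]    # convergence analysis
    confidence = votes / len(models)
    return verdict, confidence

# Stub models standing in for three independent LLM backends.
model_a = lambda q: "Paris"
model_b = lambda q: "Paris"
model_c = lambda q: "Lyon"

verdict, conf = verify("capital of France?", [model_a, model_b, model_c])
print(verdict, conf)
```

The value of using genuinely independent models is that their hallucinations are less likely to be correlated, so disagreement becomes a usable error signal.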
Hebbrix is a persistent memory platform that gives AI agents and applications long-term, queryable context through a 3-tier memory architecture spanning short, mid, and long-term storage. The platform combines a 5-layer hybrid search — covering vector similarity, BM25, knowledge-graph traversal, time-decay, and ONNX reranking — with a knowledge graph engineered for temporal reasoning and contradiction detection, all at sub-50ms retrieval speeds. Developers can plug Hebbrix into any LLM stack via OpenAI-compatible APIs to build chatbots, agents, and AI apps that remember, reason, and personalize over time, without constructing custom state-management or memory systems from scratch.
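A hybrid retrieval score of this kind is easy to sketch: fuse a lexical-overlap signal, an embedding similarity, and a time-decay factor with fixed weights. The toy below is our illustration only — the weights, the crude stand-in for BM25, and the data are invented, not Hebbrix's implementation — but it shows why the fusion ranks a fresh, on-topic memory above a stale, off-topic one.

```python
import math
from collections import Counter

def lexical_score(query, doc):
    """Crude term-frequency overlap (stand-in for a real BM25 layer)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[t], d[t]) for t in q) / (1 + len(d))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def time_decay(age_days, half_life=30.0):
    return 0.5 ** (age_days / half_life)

def hybrid_score(query, q_vec, memory, w=(0.4, 0.4, 0.2)):
    text, vec, age = memory
    return (w[0] * lexical_score(query, text)
            + w[1] * cosine(q_vec, vec)
            + w[2] * time_decay(age))

# Two stored memories: one fresh and on-topic, one stale and off-topic.
memories = [
    ("user prefers dark mode in the editor", [0.9, 0.1, 0.2], 2),
    ("meeting notes from the offsite",       [0.1, 0.8, 0.3], 90),
]
q = "what editor mode does the user prefer"
q_vec = [0.85, 0.15, 0.25]
ranked = sorted(memories, key=lambda m: -hybrid_score(q, q_vec, m))
print(ranked[0][0])
```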
Released for creators, marketers, and accessibility teams who need image copy without the manual effort, Image Describer is an AI-powered tool that analyzes uploaded images and produces ready-to-use natural-language outputs instantly. Using computer vision and language models, it detects objects, scenes, mood, style, and composition to deliver short or detailed descriptions, WCAG-compliant alt text, SEO-friendly captions, OCR-extracted text, and image-to-prompt formats for Midjourney and Stable Diffusion. Image Describer supports batch processing and interactive image chat, giving teams a fast, scalable way to generate copy, prompts, and accessibility tags without touching a single line of manual description work.
Built for engineering, finance, IoT, healthcare, and other data-intensive domains, Dashtera is a GPU-accelerated, real-time dashboarding platform that visualizes massive, high-frequency datasets — from millions to billions of points — with smooth 2D and 3D charts and millisecond-level updates. The platform connects to SQL and streaming sources, supports file imports, and lets teams build and share interactive dashboards without writing a single line of code. Dashtera also includes anomaly detection, data-transformation tools, and ML/AI workflow integration to surface automated insights and alerts wherever ultra-low-latency observability is critical.