Weekly Piece of Future #145
From Transneurons to DNA Nanorobots and Gemini 3
Hey there, fellow future-addicts!
Welcome to this week's edition of Rushing Robotics—where we take a lightning‑fast tour through the most game‑changing research, industry moves, and biotech breakthroughs that are reshaping tomorrow. From brain‑inspired chips to autonomous CAD, from sonic water‑harvesters to DNA‑powered nanorobots, we’ve gathered the stories that will keep you on the edge of innovation.
🤯 Mind-Blowing
From a single chip that mimics the firing patterns of three distinct brain cells to a bandage‑like patch that turns a screen into a tactile canvas, this section showcases the wildest innovations pushing the boundaries of perception, robotics, and design automation. We’ll unpack how MIT’s “transneuron” could enable robots to think on the fly, how VoxeLite brings haptic textures to VR, and how an AI that watches you CAD could become your next creative co‑pilot. Each story illustrates how hardware and software are converging to blur the line between silicon and biology.
🔊 Industry Insights & Updates
Get the latest pulse on what the market and major players are moving. From Google’s Gemini 3 that redefines reasoning and multimodality, to NVIDIA’s Blackwell‑based supercomputers powering Japan’s next‑gen AI hub, this section tracks product launches, benchmark milestones, and strategic partnerships that are shaping the competitive landscape. We’ll also highlight breakthrough optical and exascale computing advances that promise to unlock new levels of AI performance and scientific discovery.
🧬 BioTech
Where biology meets computation, breakthroughs in DNA nanorobots, gene‑editing therapies, and regenerative medicine promise to re‑wire the very fabric of life. Discover how a self‑powered DNA origami machine can perform autonomous tasks, how a microneedle patch is healing hearts in preclinical trials, and how a new class of disease‑agnostic CRISPR editors could democratize genetic therapies. These stories illustrate the rapid convergence of synthetic biology and AI, opening doors to personalized medicine and beyond.
💡 Products/Tools of the Week
We spotlight the newest software that empowers creators and developers to harness AI without writing a single line of code. From Fuser’s visual model‑chaining canvas to Vezlo AI SDK’s code‑aware knowledge base, these tools are turning imagination into reality at unprecedented speed. The section also covers project‑management AI, 3D world generators, and robotics demos that bring tomorrow’s tech into today’s workflows.
🎥 Video Section
Catch the latest robot demos and industry showcases in our video highlights. Watch XPENG’s IRON Robot, Agile ONE, UBTECH’s Walker S2, and Sunday Robotics’ Memo as they push the envelope in humanoid, delivery, and assistive robotics. These videos bring the hardware side of AI to life, offering a glimpse into the physical manifestations of our digital advances.
The frontier of AI is expanding faster than ever, with each breakthrough promising to rewrite the rules of what’s possible. From brain‑inspired chips that could give robots true awareness, to DNA nanorobots that power themselves, the horizon is full of unprecedented opportunity. Stay hungry, stay futurish!
🤯 Mind-Blowing
Engineers have built a transneuron that copies the activity of three different brain cells, bringing robotic perception closer to genuine awareness. The single chip‑based neuron can switch between acting like a visual, motor, or pre‑motor brain cell, matching recorded patterns from macaque neurons with up to 100 % accuracy. The device uses a memristor that physically reconfigures as current flows, allowing it to adjust its firing rate and timing in response to different electrical inputs—behaviour normally achieved by many software‑defined neurons. By feeding the transneuron two signals simultaneously, the team showed it could discriminate the relative timing of the pulses, a key property for processing complex sensory streams. Future plans involve wiring many of these units into a “cortex‑on‑a‑chip” that could give robots real‑time adaptive sensing, low‑power learning, and even direct interface with human nervous tissue. The work, published in Nature Communications, suggests a path toward robots that think and react with biological‑like flexibility.
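For the tinkerers among you, here is a minimal Python sketch of the idea in software: a toy leaky integrate‑and‑fire unit with made‑up parameters, not the published memristor model. Swapping a couple of constants changes the firing profile, and the spike count shifts with the relative timing of two input pulses.

```python
# Toy software analogue of a reconfigurable neuron (hypothetical parameters,
# not the published transneuron): a leaky integrate-and-fire unit whose leak
# and threshold are swapped to mimic different firing profiles, plus a crude
# test of sensitivity to the relative timing of two input pulses.
import numpy as np

def lif_spikes(inputs, leak, threshold, dt=1e-3):
    """Return spike times for a leaky integrate-and-fire neuron."""
    v, spikes = 0.0, []
    for i, x in enumerate(inputs):
        v += dt * (-leak * v + x)          # leaky integration of the input current
        if v >= threshold:                 # fire and reset when the threshold is crossed
            spikes.append(i * dt)
            v = 0.0
    return spikes

# Two "modes" standing in for distinct firing profiles (values are illustrative).
modes = {"visual-like": dict(leak=50.0, threshold=0.02),
         "motor-like":  dict(leak=5.0,  threshold=0.08)}

t = np.arange(0, 0.5, 1e-3)
drive = np.full_like(t, 2.0)               # constant input current

for name, params in modes.items():
    print(name, "spike count:", len(lif_spikes(drive, **params)))

# Relative-timing discrimination: two brief pulses whose offset changes the response.
for offset_ms in (5, 50):                  # 5 ms vs 50 ms between pulse onsets
    x = np.zeros_like(t)
    x[100:105] += 40.0                     # first pulse at t = 100 ms
    x[100 + offset_ms:105 + offset_ms] += 40.0
    print(f"offset {offset_ms} ms ->", len(lif_spikes(x, leak=20.0, threshold=0.1)), "spikes")
```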
An ultra‑thin, bandage‑like patch wrapped around a fingertip turns the invisible surface of a screen into a textured reality. The device, VoxeLite, packs electroadhesive nodes every 1–1.6 mm, enabling 800‑Hz indentation that spans the full frequency range of human touch receptors. Participants wearing the patch correctly detected four directional textures up to 87 % of the time and matched real fabrics like leather and corduroy at 81 % accuracy. Lightweight and skin‑conforming, the patch offers a practical “human‑resolution” haptic interface for VR, accessibility tools, and robotic teleoperation.
A new AI system trained by MIT and Autodesk watches designers interact with CAD software and learns to replicate their steps, turning a simple 2‑D sketch into a fully‑formed 3‑D model by clicking buttons and navigating menus exactly as a human would. By compiling over 41,000 training videos in the VideoCAD dataset—each capturing every mouse click, drag, and selection—a neural network can now operate the software, automate repetitive tasks, and suggest next actions to the user. Early demonstrations show the model building basic shapes and more complex structures like house layouts with minimal input, hinting at a future CAD “co‑pilot” that could lower the learning curve and boost productivity for both novices and seasoned engineers.
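Conceptually this is imitation learning over UI interaction logs. The hedged sketch below uses synthetic data and a plain softmax policy (nothing from the actual VideoCAD dataset or the MIT/Autodesk model) just to show the framing: screen state in, next CAD action out.

```python
# A minimal behaviour-cloning sketch (synthetic data, hypothetical action names):
# learn a mapping from a screen-state feature vector to a discrete UI action,
# the same "watch the clicks, predict the next action" framing described above.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["click_sketch_tool", "drag_handle", "select_extrude", "confirm_dialog"]

# Synthetic demonstration log: each row is a 16-dim screen-state embedding,
# each label the action a human took in that state.
X = rng.normal(size=(400, 16))
true_W = rng.normal(size=(16, len(ACTIONS)))
y = (X @ true_W).argmax(axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train a linear policy with plain gradient descent on cross-entropy loss.
W = np.zeros((16, len(ACTIONS)))
onehot = np.eye(len(ACTIONS))[y]
for _ in range(500):
    p = softmax(X @ W)
    W -= 0.1 * X.T @ (p - onehot) / len(X)

acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(f"imitation accuracy on the demo log: {acc:.2f}")

# Suggest the next UI action for a new screen state.
state = rng.normal(size=(1, 16))
print("suggested action:", ACTIONS[int(softmax(state @ W).argmax())])
```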
A realistic bone‑marrow model was built entirely from human cells, using a hydroxyapatite scaffold that mimics the mineral core of real bone. The scaffold was seeded with induced pluripotent stem cells that were guided to differentiate into bone, blood vessels, nerves, and immune cells, recreating the complex microenvironments—especially the endosteal niche—inside a three‑dimensional structure. The resulting construct, roughly 8 mm across and 4 mm thick, supported continuous human blood formation in the lab for weeks, offering a platform that could reduce or replace animal experiments in hematology research. Although currently too large for high‑throughput drug screens, the model points the way toward patient‑specific marrow tissues that could test therapies in vitro before clinical use.
MIT unveiled an ultrasonic device that can extract drinking water from atmospheric moisture in just a few minutes, achieving a 45‑fold increase in efficiency over conventional heating methods. The system uses high‑frequency sound waves to vibrate the sorbent material, forcing water droplets to coalesce and drip into a collection chamber. In laboratory trials the device recovered 70 % of the stored moisture within eight minutes, a stark contrast to the hours required by existing technologies. The breakthrough promises a low‑energy, scalable solution for communities facing water scarcity, especially in arid regions where traditional desalination is impractical.
🔊 Industry Insights & Updates
Google rolled out Gemini 3, its most advanced reasoning model yet, claiming it surpasses prior versions on nearly every AI benchmark. The new Gemini 3 Pro scores an Elo of 1501 on the LM Arena leaderboard, achieves 93.8 % on GPQA Diamond, and hits 81 % on MMMU‑Pro, while also improving factual accuracy with 72.1 % on SimpleQA Verified. The model now powers a suite of products, from the Gemini app and AI Mode in Search to Vertex AI and the Antigravity platform, which lets AI agents autonomously use editors, terminals and browsers. A preview of Gemini 3 Deep Think pushes reasoning further, scoring 41 % on Humanity’s Last Exam and 45.1 % on ARC‑AGI‑2 with code execution. Google highlighted the model’s long‑context, multilingual and multimodal strengths for use cases ranging from video lecture analysis to interactive code generation, promising that more enhancements will follow in the Gemini 3 series.
Scientists unveiled a tabletop optical system that completes full tensor operations in a single pass of light, enabling AI computations at the speed of light and with efficiency rivaling GPU‑based processing. By encoding each matrix row with unique phase gradients and employing successive Fourier transforms, the device performs element‑by‑element multiplication and summation simultaneously, achieving mean absolute errors below 0.15 across diverse matrix sizes. Tests on MNIST, Fashion‑MNIST and a 256×9,216 U‑Net style‑transfer network showed predictions matching GPU outputs, while a two‑color wavelength extension demonstrated accurate complex‑valued multiplication.
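If you want to sanity‑check the arithmetic behind "multiply and sum in one pass", here is a tiny NumPy stand‑in (purely illustrative; the real system does this with phase‑encoded light, not code): the row sums fall out as the zero‑frequency term of a Fourier transform, and the result matches an ordinary matrix‑vector product.

```python
# Digital stand-in for the optical multiply-and-accumulate (illustrative only):
# form the elementwise products, then read the summation off as the
# zero-frequency (DC) term of a Fourier transform over each row.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128))   # "matrix rows", each encoded separately in the optics
x = rng.normal(size=128)         # input vector

products = A * x                                          # element-by-element multiplication
y_fourier = np.real(np.fft.fft(products, axis=1)[:, 0])   # DC term of the FFT equals the row sum

y_reference = A @ x                                       # conventional matrix-vector product
print("mean absolute error vs. standard matmul:",
      np.abs(y_fourier - y_reference).mean())
```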
Exascale supercomputers have just produced the most accurate quantum‑materials simulations to date, leveraging upgraded BerkeleyGW software to model electron interactions with unprecedented precision. The new calculations, run on leadership‑class machines, achieved a record‑breaking performance that pushes the limits of many‑body physics, enabling researchers to predict optical and electronic properties of complex semiconductors and two‑dimensional materials with a level of detail never before possible. The breakthrough demonstrates that exascale computing can resolve subtle excitonic effects and charge‑carrier dynamics that were previously out of reach, opening the door to rapid design of next‑generation photovoltaics, quantum devices, and high‑temperature superconductors. The effort showcases how a combination of algorithmic innovation, efficient parallelization, and massive hardware power can transform materials science, providing a powerful tool for both theoretical discovery and practical engineering.
NVIDIA’s Blackwell‑based systems landed at RIKEN this week, marking the start of Japan’s push toward AI‑accelerated scientific discovery and quantum research. The first machine will deploy 1,600 GPUs on the GB200 NVL4 platform and leverage Quantum‑X800 InfiniBand networking to train large AI models for life‑sciences, materials science, and climate forecasting. The second system, with 540 GPUs, will focus on accelerating quantum‑algorithm development and hybrid simulations, effectively acting as a pre‑quantum supercomputer. Together, these two 2,140‑GPU platforms will serve as development hubs for FugakuNEXT, the nation’s next‑generation supercomputer slated for launch by 2030, and will also support NVIDIA’s floating‑point emulation software that brings modern AI‑optimized GPUs to legacy HPC applications.
🧬 BioTech
A multidisciplinary team of scientists has engineered a DNA‑based nanorobot that stores energy within its own structure and then uses that energy to carry out a sequence of tasks without any external power. The robot is built from reconfigurable DNA origami arrays that can be programmed so each junction behaves like a programmable component—capable of locking a segment, acting as a delay timer, or releasing cargo. By inserting trigger strands that store mechanical strain, the array becomes a tiny battery, allowing it to “wind up” and then perform a multi‑step operation autonomously. The researchers demonstrated this autonomous behaviour by showing that a single junction could open, propagate a mechanical change through the array, and ultimately release a fluorescent cargo. The technology could be adapted for medical applications, such as targeted drug delivery or diagnostic sensing, where the nanorobot could navigate to a specific site, release a therapeutic agent, and signal completion—all powered by the energy stored in its DNA scaffold.
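As a mental model (a software caricature, not the origami chemistry), the cascade behaves like a chain of pre‑loaded junctions: trigger the first, and the stored "strain" propagates junction by junction until the release step fires. A toy sketch:

```python
# Toy finite-state sketch of the cascade described above (roles and names are
# hypothetical): each junction holds one unit of stored "strain"; triggering
# the first spends it, unlocking the next, until the final junction releases cargo.
from dataclasses import dataclass

@dataclass
class Junction:
    role: str            # "lock", "delay", or "release"
    locked: bool = True

def run_cascade(junctions):
    """Propagate a single trigger through the array, reporting each step."""
    for i, j in enumerate(junctions):
        if not j.locked:
            continue                              # already-spent junctions stay open
        j.locked = False                          # stored strain is spent here
        if j.role == "delay":
            print(f"junction {i}: delay timer elapsed, passing the signal on")
        elif j.role == "release":
            print(f"junction {i}: cargo released, cascade complete")
            return
        else:
            print(f"junction {i}: lock opened, strain propagates forward")

array = [Junction("lock"), Junction("delay"), Junction("lock"), Junction("release")]
run_cascade(array)   # one trigger, multi-step autonomous sequence
```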
A new microneedle patch, created by Texas A&M researchers, was shown to reduce scar tissue and improve heart function in animal models of myocardial infarction. The biodegradable patch delivers the anti‑inflammatory cytokine interleukin‑4 directly to damaged myocardium, encouraging muscle regeneration instead of fibrotic replacement. In preclinical trials the patch restored contractile strength in nearly 70 % of hearts and shortened recovery time by nearly half compared to untreated controls, suggesting a promising strategy for post‑attack cardiac repair.
A new class of disease‑agnostic gene‑editing therapies was announced today by the precision medicine community, promising to shift the focus of genetic treatment from specific conditions to the underlying pathogenic mutations that cause them. In the latest Inside Precision Medicine report, researchers described how modular CRISPR‑based editors can be reprogrammed to target virtually any deleterious DNA change, enabling a single therapeutic platform to address multiple inherited disorders. Preclinical studies highlighted in the article demonstrated that these editors successfully repaired a diverse panel of disease‑causing mutations in mouse models, restoring normal protein function and improving outcomes across several organ systems. The report also outlined the first human safety trials, which have been granted accelerated approval status by regulatory agencies due to the broad applicability and high unmet need for these conditions. Industry experts noted that this technology could reduce development timelines, lower costs, and ultimately broaden access to gene therapies for patients worldwide. The article concluded with a call for continued collaboration between biotech firms, academic laboratories, and payers to refine delivery vectors, enhance editing precision, and establish robust post‑market surveillance for these transformative treatments.
💡 Products/Tools of the Week
Fuser centralizes 150+ AI models and 300+ LLMs on a single canvas, offering a node‑based creative workspace that lets users visually chain models and templates to generate and iterate across text, images, video, audio, and 3D. It also supports quick prototyping, collaboration, and production output with private API access and curated, best‑in‑class model integrations, so designers, artists, and teams can experiment and ship multimodal work without juggling separate apps.
Vezlo AI SDK for Code Knowledge Base transforms your source code into a queryable, LLM‑ready knowledge base by using AST‑based analysis to auto‑generate documentation, create vector embeddings, and power semantic search along with a production API server and WebSocket chat; it also brings an AI response validator that detects hallucinations and verifies model outputs against the KB, enabling developers to build code‑aware AI assistants and onboarding bots that deliver accurate, source‑backed answers, boost developer productivity, and reduce the risk of AI misinformation in technical workflows.
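For developers curious about the general recipe rather than Vezlo's specific API (which we have not inspected, so everything below is a generic, hypothetical sketch): walk the AST, embed each function with its docstring, and answer questions by nearest‑neighbour search, here with a toy hashed bag‑of‑words embedding standing in for a real one.

```python
# Generic sketch of an AST-based code knowledge base with semantic search
# (not the Vezlo SDK's API): extract functions and docstrings, embed them,
# and retrieve the closest match to a natural-language question.
import ast
import re
import numpy as np

SOURCE = '''
def connect(url, timeout=5):
    """Open a connection to the given service URL."""
    ...

def retry(fn, attempts=3):
    """Call fn, retrying on failure up to attempts times."""
    ...
'''

def embed(text, dim=256):
    """Hash each token into a fixed-size vector (toy stand-in for real embeddings)."""
    v = np.zeros(dim)
    for tok in re.findall(r"[a-zA-Z_]+", text.lower()):
        v[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# Build the knowledge base: one entry per function definition found in the AST.
kb = []
for node in ast.walk(ast.parse(SOURCE)):
    if isinstance(node, ast.FunctionDef):
        doc = ast.get_docstring(node) or ""
        kb.append((node.name, embed(f"{node.name} {doc}")))

query = "how do I retry a failing call?"
q = embed(query)
best = max(kb, key=lambda item: float(item[1] @ q))   # cosine similarity on unit vectors
print("most relevant function:", best[0])
```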
Marble creates high‑fidelity, persistent 3D worlds from simple text prompts, images, video, or basic layouts using multimodal generative models, then lets creators—artists, game designers, architects, and researchers—edit and share those environments. The platform lets them rapidly prototype, iterate on, collaborate on, export, and publish immersive worlds without requiring deep 3D modeling skills, because the AI automates geometry, materials, and stylistic detail while still allowing manual refinement.
Requisor transforms messy ideas and scattered documents into polished project plans, Kanban boards, and timelines by deploying a Personal AI Project Manager alongside plug‑in AI agents that auto‑generate subtasks, set deadlines, estimate effort, score ROI, and prioritize work intelligently. Its no‑code workflow builder, smart bandwidth and resource planning tools, and seamless integrations with Jira, Asana, Trello, and ClickUp empower solopreneurs, freelancers, and small teams to automate execution, eliminate manual tracking across disparate tools, and concentrate on high‑impact tasks, letting users launch projects in minutes, slash planning overhead, and keep every task organized and on track with AI oversight.
The DNA nanorobot that stores energy in its own structure is wild. It's like having a tiny machine that winds itself up and then does its job without needing any external power source. The idea of using it for targeted drug delivery, where it navigates, releases, and signals completion all on stored energy, could be huge for precision medicine.