Weekly Piece of Future #162
From Quantum Yields to Humanoid Factories and Brain-Inspired Chips
Hey there, fellow future-addicts!
Welcome to this week's edition of Rushing Robotics! Another week, another batch of breakthroughs that make the future feel uncomfortably close. Solar cells just broke a century-old physics limit, MIT built a wristband that sees through your skin, and Google made AI memory 6x cheaper — all while humanoid robots quietly showed up to work in real factories.
Unlock truly uncensored AI—privately.
Venice.ai gives you instant access to top‑tier video generation (like Veo‑3.1), stunning image creation, and powerful text/code models—all with full privacy (your data never leaves your device).
Get a $10 credit when you register through my link. Upgrade to Venice Pro and apply the credit toward the most advanced models immediately. No filters, no logs—just pure creative freedom.
Claim your $10 credit & start creating!
[Disclosure: I earn a commission if you sign up via this link.]
🤯 Mind-Blowing
One photon, two electrons — Kyushu University shattered the Shockley–Queisser limit with a 130% quantum yield, with a theoretical ceiling of 200% now in sight. MIT's ultrasound wristband maps 22 degrees of hand freedom through your skin to control robots and VR interfaces. Cambridge's brain-inspired memristor chip stores and processes data in the same place, cutting AI energy waste at the source. Microsoft's GroundedPlanBench finally teaches robots to think spatially before acting. And Google's TurboQuant compresses LLM memory by 6x with zero quality loss — no retraining needed.
🔊 Industry Insights & Updates
Renault is deploying 350 humanoid robots across its factories in 18 months. Microsoft and NVIDIA cut nuclear permitting time by 92% with AI. HD Hyundai and Persona AI are bringing bipedal welding robots to shipyards by 2027. And DNA origami just enabled quantum chip emitter placement with 90% accuracy at 13-nanometer precision.
🧬 BioTech
China's YHB-01 surgical robot cut procedure time by 29% with a perfect success rate. Waterloo built a lymphedema sleeve that works for 8 hours untethered. And CytomX's Varseta-M posted a 32% response rate in late-stage colorectal cancer — roughly 3x the current standard — sending its stock up 66% in a single day.
💡 Products/Tools of the Week
Glyde turns any browser workflow into a polished SOP in one click — screenshots, steps, and AI descriptions included. Git AutoReview runs Claude, Gemini, and GPT in parallel to draft PR feedback that only goes live when you approve it. Jentic Mini is an open-source execution layer that handles API auth and credentials so your agents never touch sensitive keys. Agentplace lets teams build, deploy, and chat with task-specific AI agents — for research, HR, lead management, and more — in a single workspace.
🎥 Video Section
AheadForm's Origin F1 real-time interaction demo · San Jose Airport's new AI travel robot · KAIST Humanoid v0.7 field test.
The compounding is accelerating. Every breakthrough this week unlocks three more. Stay hungry, stay futurish!
🤯 Mind-Blowing
Setting a new record of 130% quantum yield — and pointing toward a theoretical maximum of 200% — a joint research effort by Kyushu University and Johannes Gutenberg University Mainz has demonstrated that solar cells need not be limited to a one-photon, one-electron relationship. The result was made possible by singlet fission, a process in which one energetic photon triggers the formation of two triplet excitons, and by a molybdenum-based spin-flip emitter that captures those excitons before the competing Förster resonance energy transfer (FRET) mechanism can waste them. Professor Yo Sasaki of Kyushu University described the need for an acceptor that could selectively extract multiplied triplet excitons after fission as the defining challenge the team solved. The collaboration grew from an exchange student connection: Adrian Sauer of JGU Mainz brought expertise in molybdenum compounds to Kyushu, and the two institutions' complementary knowledge enabled the breakthrough. Researchers are now working to move the technology from its current solution-based form into solid-state solar panels, LEDs, and next-generation quantum computing devices.
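Quantum yield here is just carriers generated per photon absorbed, expressed as a percentage — a quick sketch with the reported figures plugged in (function name and interface are illustrative, not from the paper):

```python
def quantum_yield(carriers_out: float, photons_in: float) -> float:
    """Quantum yield in percent: charge carriers generated per absorbed photon.
    Values above 100% indicate carrier multiplication, e.g. via singlet fission."""
    return 100 * carriers_out / photons_in

print(round(quantum_yield(1.3, 1.0)))  # 130 -- the record reported here
print(round(quantum_yield(2.0, 1.0)))  # 200 -- the singlet-fission ceiling
```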
Demonstrated by MIT engineers, a wearable ultrasound wristband uses internal imaging of the wrist's tendons and muscles to translate human hand movements into robotic and digital actions with 22 degrees of freedom — a capability no existing wearable has matched. The smartwatch-sized device is built around miniaturized "ultrasound stickers" that peer beneath the skin in real time, capturing anatomical changes as fingers move and feeding them to an AI that learned the patterns from thousands of simultaneously recorded camera and ultrasound data points. MIT's Xuanhe Zhao framed the broader significance: "We think this work has immediate impact in potentially replacing hand tracking techniques with wearable ultrasound bands in virtual and augmented reality." In live testing, volunteers used the wristband to perform delicate grips — mimicking holding scissors, a tennis racket, or a pencil — and to zoom and manipulate virtual objects on screen through natural pinching motions. MIT's team views the device as a foundational platform for VR/AR interfaces, robotic surgery training, and humanoid robot dexterity development.
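The training setup described — pairing ultrasound frames with camera-tracked hand poses — amounts to supervised regression from image features to joint angles. A toy sketch of that idea with assumed dimensions; nothing here reflects MIT's actual model:

```python
import numpy as np

# Toy version of the pipeline: ultrasound features in, 22 joint angles out,
# learned from paired (ultrasound, camera-tracked pose) samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))            # 64 ultrasound features per frame (assumed)
W_true = rng.normal(size=(64, 22))
Y = X @ W_true                             # camera-derived hand pose: 22 DOF

W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit the feature-to-pose map
pred = X @ W
print(np.allclose(pred, Y, atol=1e-6))     # exact recovery on noise-free toy data
```

In practice the real mapping is nonlinear and learned by a neural network, but the data-collection logic — record both modalities simultaneously, then regress one onto the other — is the same.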
Energy waste in AI hardware — caused by the constant shuttling of data between memory and processors — is directly targeted by a new neuromorphic chip developed by University of Cambridge researchers. Dr. Babak Bakhit and his Cambridge team engineered a hafnium oxide memristor that processes and stores data in the same location, eliminating the heat-generating data transfers that make conventional chips so power-hungry. The device replaces unstable conductive filaments with a smooth interface-switching mechanism, where strontium and titanium form p-n junctions that control electricity with precision across thousands of cycles. Laboratory tests confirmed the chip can replicate spike-timing dependent plasticity, the biological learning rule that governs how neurons reinforce or weaken connections — a feature Dr. Bakhit called essential for hardware that can "learn and adapt, rather than merely store bits." A 700°C production temperature currently blocks integration with standard chip fabrication lines, and the Cambridge team is actively working to bring that figure down.
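Spike-timing dependent plasticity follows a simple rule: a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, and weakens in the reverse order. A textbook sketch of that rule (parameters assumed, not the chip's measured curve):

```python
import math

def stdp_weight_change(t_pre: float, t_post: float,
                       a_plus: float = 0.1, a_minus: float = 0.12,
                       tau: float = 20.0) -> float:
    """Textbook STDP rule (times in milliseconds): potentiate when the
    presynaptic spike precedes the postsynaptic spike, depress otherwise.
    The change decays exponentially with the spike-time gap."""
    dt = t_post - t_pre
    if dt > 0:                               # pre before post -> strengthen
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)     # post before pre -> weaken

print(stdp_weight_change(10, 15))            # positive: connection reinforced
print(stdp_weight_change(15, 10))            # negative: connection weakened
```

A memristor implements this in hardware by letting its conductance play the role of the synaptic weight, so the update happens where the value is stored — no data shuttling required.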
Tasked with placing four napkins on a couch, a robotic AI repeatedly grabbed the same one — a failure that illustrates exactly the gap Microsoft and academic collaborators set out to close with their new GroundedPlanBench benchmark, published on arXiv. The root problem is architectural: conventional robotic systems plan in language first and attempt spatial grounding second, so errors in interpreting ambiguous human instructions — like "top-left napkin" — carry forward and break execution. GroundedPlanBench evaluates models that plan and spatially ground simultaneously, linking each action in a task to a precise image location rather than a text phrase, across more than 1,000 tasks drawn from real robot data. Alongside the benchmark, the researchers released V2GP (Video-to-Spatially Grounded Planning), a training method that mines robot task videos to generate more than 40,000 structured plans spanning 1 to 26 action steps, each step anchored to a specific physical location. The resulting models measurably outperformed traditional split systems on the benchmark, and the team identified integration with real-time predictive simulation as the likely next step toward robots that can catch their own mistakes before making them.
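The core idea — each action tied to an image coordinate instead of a text phrase — can be sketched as a data structure. Field names here are hypothetical, not the benchmark's actual schema:

```python
from dataclasses import dataclass

@dataclass
class GroundedStep:
    """One plan step anchored to a pixel location, not a text phrase.
    Hypothetical schema -- names and fields are illustrative only."""
    action: str          # e.g. "pick" or "place"
    target_xy: tuple     # (x, y) pixel coordinates in the scene image

plan = [
    GroundedStep("pick", (112, 48)),    # "top-left napkin", resolved spatially
    GroundedStep("place", (400, 310)),  # target spot on the couch
]
print(len(plan), plan[0].target_xy)
```

Because the referent is a coordinate rather than the phrase "top-left napkin," a second "pick" step cannot silently resolve to the same object twice — which is exactly the napkin failure mode described above.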
Revealed by Google Research, TurboQuant is a two-step AI compression algorithm that reduces the memory footprint of large language model key-value caches by 6x and speeds up attention score computation by 8x on Nvidia H100 accelerators — without degrading output quality. The key-value cache, which Google describes as a "digital cheat sheet" that stores pre-computed vector data to avoid redundant processing, is the primary bottleneck TurboQuant targets; high-dimensional vectors describing complex information inflate the cache and slow performance. Google's first step, PolarQuant, converts standard Cartesian vector coordinates into polar coordinates — reducing each vector from multi-dimensional XYZ values to just a radius and a direction, analogous to replacing "Go 3 blocks East, 4 blocks North" with "Go 5 blocks at 37 degrees." The second step, Quantized Johnson-Lindenstrauss (QJL), applies a 1-bit error-correction layer that reduces residual inaccuracies left by PolarQuant while preserving the essential vector relationships that drive attention scoring. Google tested TurboQuant across long-context benchmarks using Gemma and Mistral open models, reporting no measurable loss in downstream results and the ability to quantize the cache to just 3 bits with no additional model training required.
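The coordinate change behind PolarQuant can be reproduced with the article's own blocks-and-bearings example. This sketch shows only the Cartesian-to-polar step, not Google's actual quantization:

```python
import math

def to_polar(east: float, north: float) -> tuple:
    """Cartesian displacement -> (distance, compass bearing in degrees).
    Bearing is measured clockwise from North, matching the example above."""
    distance = math.hypot(east, north)
    bearing = math.degrees(math.atan2(east, north))
    return distance, bearing

d, b = to_polar(3, 4)                        # "3 blocks East, 4 blocks North"
print(f"{d:.0f} blocks at {b:.0f} degrees")  # 5 blocks at 37 degrees
```

The payoff for compression is that angles and radii have much more predictable ranges than raw coordinates, so they can be stored in far fewer bits.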
🔊 Industry Insights & Updates
Already operating on the factory floor at Renault's Douai facility in France, the Calvin-40 humanoid robot built by Wandercraft is the centrepiece of Renault Group's plan to roll out 350 humanoid robots across its manufacturing sites within 18 months. The French automaker's deployment targets logistics, material handling, and heavy industrial tasks — areas where the mobility and adaptability of a bipedal robot offer clear advantages over stationary robotic cells. Renault and Wandercraft engineered the Calvin platform specifically for brownfield factories, where existing layouts designed for human workers make it impractical to install the fixed infrastructure that conventional automation requires. The robots rely on AI-driven perception and navigation to move autonomously across production floors, interact with existing machinery, and integrate with digital manufacturing tools such as digital twins and real-time monitoring systems. Renault's initiative signals a broader shift in automotive manufacturing, where humanoid robots are beginning to move from research environments into full-scale, real-world industrial deployment.
Driven by surging AI data center energy demand, Microsoft and NVIDIA have joined forces to build a digital engineering ecosystem that uses AI and simulation to make nuclear reactor deployment faster, cheaper, and safer. The partnership, showcased at CERAWeek 2026 with Aalo Atomics, addresses construction delays that have historically stretched years beyond schedule due to manual regulatory workflows and siloed engineering data. Microsoft and NVIDIA's platform applies generative AI to permitting and licensing, 4D and 5D simulations to construction planning, and AI-driven predictive maintenance sensors to reactor operations — covering the full lifecycle from design approval to long-term uptime management. Aalo Atomics confirmed it achieved a 92% cut in permitting time using the platform, amounting to annual savings of around $80 million, while Idaho National Laboratory has started using the AI tools to automate the production of engineering and safety analysis reports. The entire ecosystem runs on Microsoft Azure, integrating NVIDIA's Omniverse, NeMo, Isaac Sim, and Metropolis with Microsoft's own Generative AI Permitting Accelerator and Planetary Computer.
Signed on March 23 at HD Hyundai's Global R Center in Korea, a new joint development agreement between HD Hyundai, its subsidiaries, and Persona AI sets out to bring bipedal humanoid robots into shipyard welding operations. Persona AI, a US robotics company drawing on NASA-derived technology, will lead the design of a modular humanoid platform featuring a highly dexterous robotic hand capable of operating in confined, unstructured spaces. HD KSOE will build AI-powered welding training systems from live shipyard data, and HD Hyundai Robotics will manage integration across production workflows. The deal follows a May 2025 prototype assessment and targets a phased rollout across multiple shipbuilding facilities starting in 2027. Labor shortages in high-risk industrial jobs, particularly welding, are the driving force behind this push toward smart shipyard automation.
Positioning quantum light emitters on chips has long been held back by imprecision — a problem now addressed by Nanjing University, Skolkovo Institute of Science and Technology, and LMU Munich researchers using DNA origami as a nanoscale programming tool. The team's method embeds thiol molecules into DNA origami triangles, which serve as templates guiding the deposition of MoS2 monolayers into precise single-photon emitter arrays on patterned surfaces. The thiol-MoS2 interaction creates localized exciton-trapping sites responsible for the bright single-photon emission, with the researchers reporting stable emission and photon-correlation values well below the threshold required to confirm true single-photon operation. An overall yield of approximately 90% emitter placement was recorded alongside an average positioning accuracy of 13 nanometers — far beyond what conventional defect-based fabrication can offer. The study, published in Light: Science & Applications, also notes that further tuning is possible by varying the molecules incorporated into the DNA templates, opening doors to hybrid organic-inorganic quantum devices.
🧬 BioTech
A 29% reduction in surgical time and a perfect success rate marked the debut clinical trial of China's YHB-01 surgical robot, designed to assist with cerebral angiography. Researchers at Peking Union Medical College Hospital tested the system across 50 patients, comparing 25 robotic-assisted procedures with 25 performed manually by a single novice neurosurgeon. The YHB-01 allowed the surgeon to work from a radiation-safe remote console, eliminating the physical strain of manually threading a wire from the thigh to the brain while wearing lead protection. Chinese researchers published the results on January 30 in the Chinese Neurosurgical Journal, reporting no differences between groups in fluoroscopy time, patient radiation dose, or contrast agent dosage. Dr. Zhao Yuan acknowledged the study's limited sample size and emphasized the need for larger trials.
Researchers at the University of Waterloo have created a groundbreaking compression sleeve that transforms lymphedema treatment from a stationary process to a mobile therapy option. The device, which integrates all key components into a unit roughly the size and weight of a smartphone, runs on a rechargeable battery that powers the sleeve for up to eight hours on a single charge. This innovation allows cancer survivors to move freely during therapy sessions, unlike existing systems that cost up to $3,000 and require patients to remain seated during treatment. The team aims to cut costs by simplifying the system and partnering with manufacturers to produce the control unit at scale, with a goal of delivering full therapy at roughly half the cost of current devices.
A masked antibody-drug conjugate developed by CytomX, Varseta-M, just delivered one of the most surprising datasets in colorectal cancer in years — a 32% confirmed response rate in heavily pre-treated late-stage patients who had already failed a median of 3 prior therapies, with ~90% disease control and 7.1 months of progression-free survival, roughly 3x the current standard. The drug's PROBODY masking technology keeps the toxic warhead inert in healthy tissue, only activating inside the tumor microenvironment — a mechanism that finally makes EpCAM druggable, after decades of failed attempts that caused unacceptable toxicity in normal cells. CytomX stock surged 66% in a single session following the Phase 1 data release on March 16, 2026, and with EpCAM expressed across multiple solid tumor types, the platform's implications extend well beyond colorectal cancer.
💡 Products/Tools of the Week
Turning browser work into structured documentation became a one-click operation after Glyde launched its AI-powered Chrome extension for recording workflows and generating SOPs. The extension sits in Chrome's sidebar or as a floating toolbar, records every click, input, and navigation across tabs, and takes automatic screenshots at each step without interrupting the user's flow. Once stopped, the Glyde web app applies AI to segment the session into clear steps, craft contextual descriptions rather than generic captions, add tips and warnings, validate quality, and strip out sensitive data automatically before producing a polished guide. With a free plan covering unlimited recordings and up to 25 published SOPs, and Pro options adding DOCX, Markdown, and direct Notion and Confluence export, Glyde gives teams of any size a fast path from doing work to documenting it.
An AI-powered code review tool built as a VS Code extension, Git AutoReview runs Claude, Gemini, and OpenAI GPT in parallel to generate draft PR feedback for GitHub, GitLab, and Bitbucket. Every suggestion stays in draft until a developer approves, edits, or rejects it, so nothing is ever published without explicit human sign-off. The extension analyzes full project context, runs security scans, verifies Jira acceptance criteria, and provides confidence scores alongside deep-agent reviews. Because it supports BYOK, code is sent only to the team's chosen AI provider, keeping sensitive repositories under full control.
Making real API calls from agents has long forced developers into a maze of embedded auth snippets, prompt-level secret juggling, and repetitive integration code. In response, Jentic Mini built a self-hosted, open-source execution layer that interposes itself between any agent and the outside world. Under this design, the agent focuses purely on describing what it wants done, while Jentic Mini determines which API suits that request from a massive catalog of more than 10,000 integrations, injects the appropriate credentials at runtime, and relays the call. By keeping all sensitive keys confined to this broker layer and never exposing them inside the agent context, Jentic Mini dramatically simplifies secure, large-scale API orchestration for AI systems.
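The broker pattern itself is simple: the agent hands over an intent, and the broker resolves it to an API and injects credentials at call time. A minimal sketch of the pattern — all names here are hypothetical, not Jentic Mini's actual interface:

```python
# Credential-broker pattern sketch. Secrets live only inside the broker;
# the agent supplies intent and payload and never sees a key.
SECRETS = {"crm": "token-abc123"}
CATALOG = {"look up a customer": ("crm", "https://crm.example/api/customers")}

def broker_execute(intent: str, payload: dict) -> dict:
    service, url = CATALOG[intent]  # choose the API that fits the request
    headers = {"Authorization": f"Bearer {SECRETS[service]}"}
    # a real broker would perform the HTTP call here; returning the prepared
    # request keeps this sketch runnable without network access
    return {"url": url, "headers": headers, "payload": payload}

request = broker_execute("look up a customer", {"name": "Ada"})
print(request["url"])  # the agent contributed only intent and payload
```

Because the secret is attached only inside `broker_execute`, a compromised or prompt-injected agent has nothing to leak — the key never enters its context.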
Powerful task-specific agents moved from concept to daily workflow through a platform that treats them like instantly available AI teammates. Agentplace created a system where users can build agents for research, lead management, product triage, HR support, admin work, and internal help, then deploy them immediately so they can start summarizing documents, routing leads, prioritizing requests, or scheduling meetings. Inside the product, teams design agents quickly with a “vibe code” approach, deploy to a secure cloud environment, and decide whether each agent should be open to everyone, locked to a team, or tightly restricted. A single workspace lets people chat with multiple agents, use voice interactions, and seamlessly switch into an edit mode to refine capabilities as patterns and edge cases emerge. Agentplace underpins all of this with an AI-native architecture that uses skills, a file system as memory, and integrations with MCPs and major model providers.