Weekly Piece of Future #165
From Printed Neurons to Quantum Chips and Surgical Nanoparticles
Hey there, fellow future-addicts!
Welcome to this week's edition of Rushing Robotics! This week the pace of progress didn't slow down — it accelerated. From artificial neurons that speak the brain's own language to a solar car outshining rooftop panels on a quiet Monday morning, the boundaries between biology, physics, and technology are blurring faster than ever. Whether you're here for the robotics breakthroughs, the quantum leaps, or the biotech revelations, this edition is packed with stories that will make you stop and say: wait, that's already real?
Unlock truly uncensored AI—privately.
While most AI platforms store and analyze your conversations on their servers, Venice keeps everything local to your browser—meaning your private thoughts and queries stay truly private with no server-side storage or data harvesting.
Venice leverages Trusted Execution Environments (TEEs) for end-to-end encrypted processing—ensuring that even during computation, your data remains cryptographically protected from prying eyes.
Get a $10 credit when you register through my link. Upgrade to Venice Pro and apply the credit toward the most advanced models immediately. No filters, no logs—just pure creative freedom.
Claim your $10 credit & start creating!
[Disclosure: I earn a commission if you sign up via this link.]
🤯 Mind-Blowing
Northwestern engineers printed flexible artificial neurons that communicate with living mouse brain tissue as naturally as biological neighbors — using molybdenum disulfide and graphene inks deposited on soft polymer substrates. Aptera's solar EV stunned onlookers by generating nearly three times more power than a home rooftop system on the same morning, thanks to its multi-surface solar cell layout. Google DeepMind dropped Gemini Robotics-ER 1.6, a physical-world reasoning model that pushed instrument reading accuracy from 23% to 93% — and Boston Dynamics wasted no time integrating it into Spot. Meanwhile, the QUASAR-CREATE project is building the world's first open-source post-quantum secure processor, embedding cryptographic protection directly into RISC-V hardware rather than relying on software patches.
🔊 Industry Insights & Updates
Toyota Research Institute confirmed what many suspected: robots trained on diverse, large-scale datasets don't just perform better — they stay better when real-world conditions drift from training. NVIDIA entered the quantum arena with Ising, the first open-source AI model family built to calibrate qubits and decode errors autonomously. Chinese researchers cracked a long-standing materials bottleneck by growing 2D semiconductor films nearly 1,000 times faster than current methods. And Aalto University's Log2Motion tool is now mapping the hidden physical strain behind every tap and swipe on your smartphone.
🧬 BioTech
Stanford researchers used focused ultrasound to trigger light-emitting nanoparticles deep inside living tissue — no surgery, no implants, just steerable light in the brain or gut on demand. Scientists at IBEC and collaborating institutions have for the first time programmed living tissues to fold into precise 3D shapes by placing topological defects at predetermined locations, mimicking nature's own origami. And MIT's Schlau-Cohen lab found that altering the lipid composition of a cell membrane can lock cancer-signaling proteins into a permanently active state — a discovery that could reshape how we think about tumor growth.
💡 Products/Tools of the Week
The builder tools this week are quietly impressive. Fluq gives engineering teams full observability over their AI agent fleets with a single line of setup — no SDK, no code changes, just a waterfall timeline of every LLM call, tool use, and cost. CapiBot brings agent orchestration to businesses at scale, with 280+ pre-configured agents across 12+ professional categories, reachable via Telegram, WhatsApp, Slack, and web. ConceptSeek lets researchers search across YouTube videos, podcasts, and documents by meaning rather than keywords, with every result pinned to an exact timestamp or passage. And AutoScaled closes the loop for B2B sales teams by turning CRM data into personalized, brand-compliant PowerPoint decks — automatically, at scale.
🎥 Video Section
This week's videos showcase robots pushing into new territory — from humanoid arms assembling consumer electronics with surgical precision, to an AI-powered robot challenging LeBron on the court, to a viral clip of a quadruped chasing wild boars through the Polish countryside. The future is surprisingly entertaining.
The road ahead has never looked more electric. Artificial neurons bridging silicon and biology, robots that reason rather than just react, and materials science unlocking chip manufacturing at scales once considered decades away — we are living through a compounding moment in human history. Every week the pieces click together a little faster, and the picture emerging is one of genuine transformation across medicine, energy, computing, and beyond. We can't wait to see what next week brings — and neither should you. Stay hungry, stay futurish!
🤯 Mind-Blowing
Living neurons in mouse brain tissue responded to signals from a printed electronic device exactly as they would to a biological neighbor, in a breakthrough achieved by Northwestern University engineers. Mark Hersam, Walter P. Murphy Professor of Materials Science and Engineering at Northwestern, led the team in developing flexible artificial neurons using a printing technique called aerosol jet printing, depositing inks made from molybdenum disulfide — a semiconductor — and graphene — a conductor — onto soft polymer substrates. The devices produce not just simple on-off pulses but a rich range of spiking patterns including single spikes, tonic firing, and bursting sequences, matching the timing and shape of real neuron voltage spikes. Hersam and Indira M. Raman of Northwestern's Weinberg College confirmed this biological compatibility by testing the devices directly on mouse cerebellar slices, where the artificial signals reliably activated neural circuits in a manner indistinguishable from natural inputs.
At 8:00 AM on April 14, 2026, Aptera's solar electric vehicle was outperforming a residential rooftop solar installation, in a real-world demonstration shared by co-CEO Steve Fambro on X. Fambro, who is also co-founder of Aptera Motors, posted a comparison showing the vehicle generating 363 watts while his home panels produced only 136 watts in the same morning light. The gap is explained by Aptera's multi-surface solar cell layout — panels are placed across the hood, dashboard, and hatch so that at least one section always faces incident sunlight, unlike fixed-angle rooftop systems that lose efficiency during early-morning low-angle conditions. Aptera, now publicly traded on NASDAQ under the ticker SEV, produced its first vehicle from its Carlsbad assembly line on March 3, 2026, and secured a lease extension through March 2028 as it continues its push toward full production.
A new AI model designed to give robots the ability to reason about the physical world — not just follow commands — was released by Google DeepMind on April 14, 2026, under the name Gemini Robotics-ER 1.6. The "ER" stands for Embodied Reasoning: built on Gemini 2.0 Flash, the model acts as a high-level brain that breaks down complex tasks, plans sequences of actions under physical constraints, and, crucially, determines when a task has actually been completed. Google DeepMind showed that instrument reading accuracy — the ability to read gauges and sight glasses in industrial settings — jumped from 23 percent in earlier versions to 93 percent when agentic vision is activated. Available now to developers via the Gemini API and Google AI Studio, Gemini Robotics-ER 1.6 also outperforms both its predecessor Gemini Robotics-ER 1.5 and Gemini 3.0 Flash on spatial reasoning, success detection, and physical safety compliance benchmarks.
A new push for quantum-resistant hardware has emerged through the QUASAR-CREATE project. This three-and-a-half-year initiative seeks to create a processor system that can withstand the encryption-breaking power of future quantum computers. The project is a collaboration between German academic institutions, including the Technical University of Munich, and Nanyang Technological University in Singapore, funded by the National Research Foundation. By utilizing the open-standard RISC-V architecture, the consortium is integrating Post-Quantum Cryptography directly into the physical hardware. This shift away from software-only security is intended to remove vulnerabilities and provide a more resilient foundation. Prof. Georg Sigl noted that the open-source nature of the design allows for public audit and independent verification. Additionally, Prof. Gerhard Kramer and Prof. Gwee Bah Hwee highlighted that such secure frameworks are necessary for the sustainable growth of digital communications and data storage.
An upgrade has been integrated into the Spot robot to enable reasoning-driven tasks. Boston Dynamics equipped the quadruped with Google DeepMind’s Gemini Robotics-ER 1.6 to move beyond scripted actions. The system combines vision and language understanding, allowing the robot to perform chores like organizing shoes or walking a dog. While home demos are flashy, the primary focus remains industrial inspection for hazards and gauge reading. This partnership between Boston Dynamics and Google DeepMind aims to improve autonomous reaction to real-world challenges.
🔊 Industry Insights & Updates
Robots trained on massive, diverse datasets outperform those trained on individual tasks — and the gap widens when real-world conditions drift from training scenarios, according to research from the Toyota Research Institute. TRI's team, led in part by researcher Jose Barreiros, developed and tested large behavior models by compiling approximately 1,700 hours of robot demonstrations collected from human operators, spanning over 500 tasks drawn from both proprietary and publicly available sources. After training, the models were evaluated across 1,800 real-world trials involving precision-demanding, multi-step challenges such as slicing an apple, assembling a breakfast tray, and fitting a bicycle brake rotor. TRI's researchers found that performance advantages became especially pronounced during distribution shifts — moments when conditions deviate from what the model encountered during training — suggesting that multitask pretraining builds more robust generalization.
Quantum processors capable of running real-world applications took a step closer to reality on April 13, 2026, when NVIDIA announced Ising — the world's first open-source AI model family designed specifically for quantum hardware development. Jensen Huang, NVIDIA's founder and CEO, described Ising as the AI control plane for quantum machines, built to tackle the two challenges that currently prevent quantum systems from scaling: noisy qubit calibration and real-time error correction. NVIDIA's Ising Calibration model, trained on data spanning superconducting qubits, quantum dots, ions, neutral atoms, and electrons on helium, allows AI agents to autonomously tune processors to within target specifications without human intervention. Ising Decoding, available in Fast and Accurate variants, demonstrated a 2.25x speedup and 1.53x improvement in logical error rate in testing on GB300 hardware at FP16 precision, and is designed to scale all the way to lattice surgery on million-qubit systems.
A growth method for 2D semiconductor materials nearly 1,000 times faster than existing techniques was developed by Chinese researchers, marking a major step toward industrial-scale chip production beyond silicon. The team, led by Zhu Mengjian of the National University of Defence Technology along with Wencai Xu from the Institute of Metal Research, modified the chemical vapor deposition (CVD) technique by introducing a liquid tungsten-based layer as a substrate. This innovation enabled the production of monolayer tungsten disulfide films at the wafer scale with adjustable doping characteristics, pushing single-crystal domain sizes to sub-millimeter dimensions. Zhu Mengjian noted that the absence of high-performance p-type materials has long been a critical bottleneck for sub-5-nanometer node 2D semiconductors, and that this advance directly addresses that barrier.
An AI tool that exposes the hidden physical cost of tapping and swiping on a smartphone was developed by researchers at Aalto University and Leipzig University. Antti Oulasvirta, professor at Aalto University and ELLIS Finland, led the team in creating Log2Motion — a system that takes the simple coordinate data in smartphone touch logs and converts it into realistic simulations of hand, finger, and arm motion, complete with muscle activation patterns and energy estimates. The tool integrates the Android emulator with the MuJoCo physics engine, allowing the biomechanical model to manipulate real apps in real time and replay recorded user sessions to reveal the physical strain behind each gesture.
🧬 BioTech
A noninvasive way to deliver targeted light anywhere inside the body was developed by researchers at Stanford University, overcoming a long-standing barrier in medicine. Guosong Hong, assistant professor of materials science and engineering at Stanford's School of Engineering, led the team in creating mechanoluminescent nanoparticles — tiny ceramic-derived particles that circulate through the bloodstream and emit a blue glow at 490 nanometers only when struck by focused ultrasound waves. Hong explained that with these materials, light can be generated in the brain, gut, spinal cord, or muscle without any physical implant. To prove the system works, the team placed a small ultrasound-emitting hat on mice and steered their behavior by activating different brain regions — no skull drilling, no wires, no surgery required.
For the first time, scientists have guided the self-organizing forces of living tissues to produce programmable three-dimensional shapes. IBEC, UPC, and CIMNE, in collaboration with EMBL Barcelona, developed the technique by exploiting nematic order — the tendency of elongated cells to align in the same direction, similar to textile fibers. Pau Guillamat, the study's first author and IBEC researcher, noted that the key innovation is the ability to place topological defects, where forces converge, at specific predetermined locations rather than letting nature place them randomly. When the programmed tissue is released from its flat substrate, stress redistributes and the tissue deforms rapidly into the intended shape, a process that Guillamat compared to releasing a stretched elastic sheet. Xavier Trepat and Marino Arroyo served as co-lead authors of this proof-of-concept study that also holds promise for understanding organ formation and tumor behavior.
Uncontrolled cell growth in cancer may be explained in part by the lipid composition of the cell membrane. Senior author Gabriela Schlau-Cohen and lead author Shwetha Srinivasan PhD '22, along with colleagues including MIT associate professor of chemistry Bin Zhang, used single-molecule FRET — a fluorescence-based technique that measures distances between protein segments — to track how EGFR changes shape under different membrane conditions. Normally, around 15 percent of the membrane's lipids are negatively charged; the team found that once that level reached 60 percent, EGFR locked into a permanently active state that continually signals cells to proliferate. The research, funded by the National Institutes of Health and MIT's Department of Chemistry, also found that elevated cholesterol levels made membranes more rigid and suppressed EGFR signaling, pointing to cholesterol as another potential regulatory lever.
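For readers curious how FRET turns fluorescence into a distance measurement: transfer efficiency falls off with the sixth power of the donor-acceptor separation, per the standard Förster equation. The sketch below is a minimal illustration of that relationship, not code or parameters from the MIT study — the Förster radius of 5 nm is a generic, hypothetical value.

```python
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Förster resonance energy transfer efficiency at donor-acceptor
    distance r_nm, given a Förster radius r0_nm (where E = 0.5)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def distance_from_efficiency(e: float, r0_nm: float = 5.0) -> float:
    """Invert the Förster equation: recover the distance implied by a
    measured transfer efficiency e (0 < e < 1)."""
    return r0_nm * ((1.0 / e) - 1.0) ** (1.0 / 6.0)

# At exactly the Förster radius, efficiency is 0.5; small distance
# changes near r0 swing the efficiency sharply, which is why FRET
# works as a nanometer-scale "molecular ruler" for protein segments.
print(fret_efficiency(5.0))                      # 0.5
print(round(distance_from_efficiency(0.5), 3))   # 5.0
```

The sixth-power dependence is what makes the technique sensitive to the sub-nanometer conformational shifts in EGFR that the study tracked.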
💡 Products/Tools of the Week
A one-line setup is all it takes for teams to gain complete observability over their AI agent operations with Fluq, a governance platform that tracks every LLM call, tool use, and file write across any agent framework without requiring SDK installation or code changes. Fluq's waterfall timeline view breaks down each agent decision with token counts and costs per event, giving engineering and operations teams the granular data needed to detect inefficient or misbehaving agents immediately. Policy controls and approval gates let organizations enforce access restrictions, set spending limits, and maintain compliance across their full fleet, whether running 10 agents or 10,000. Fluq built the platform to be OpenTelemetry-compatible and framework-agnostic, integrating with LangChain, CrewAI, AutoGen, and custom agent builds alike.
Scalable AI automation became significantly more accessible with the release of CapiBot, an agent orchestration platform providing businesses and developers with 280+ pre-configured agents, 600+ skills, and a hierarchical system of CEO, specialist, and intern agents organized across 12+ professional categories. The platform addresses fragmented AI tooling by offering a single environment where agents collaborate on tasks ranging from research to full business operations, with users retaining full control over how much autonomy agents receive. CapiBot reaches users across Telegram, WhatsApp, Slack, and web chat, and supports 16 one-click company templates for rapid deployment.
ConceptSeek finds ideas, quotes, and exact moments across YouTube videos, podcasts, transcripts, and long-form documents by meaning — not keywords — giving researchers a faster, more precise way to locate evidence inside the sources they trust. With semantic AI indexing user-curated libraries, the platform returns a grounded synthesis of how a concept appears across selected sources, with every result linked directly to a precise timestamp or transcript passage for immediate verification. Researchers, students, journalists, and writers use ConceptSeek to trace arguments, gather verifiable quotes, and prepare interviews without depending on broad AI summaries disconnected from real material. Users control scope entirely by searching only the sources they choose, keeping research evidence-first and fully traceable from the first query.
Sales decks that write themselves from customer data — AutoScaled built the agentic AI platform that makes this a daily workflow reality for B2B revenue teams. AutoScaled plugs into existing CRM and spreadsheet tools — HubSpot, Salesforce, Attio, Google Sheets, and Excel — and generates personalized PowerPoint or Google Slides presentations at scale using a single AI prompt and a pre-uploaded brand template. AutoScaled processes all customer information in line with industry-leading compliance standards and never stores data on its servers, keeping every piece of customer information under the client's control. Teams can share finished decks instantly via direct links or branded landing pages and track client engagement to know exactly when to follow up.