Using light as a neural network, as this viral video depicts, is actually closer than you think. In 5-10 years, we could have matrix multiplications in constant time O(1) with 95% less energy. This is the next era of Moore's Law. Let's talk about Silicon Photonics...
The core concept: replace electrical signals with photons. While current processors push electrons through metal pathways, photonic systems use light beams, operating at fundamentally higher speeds (electrical signals in copper are roughly 3x slower) with minimal heat generation.
It's way faster. While traditional chips operate at 3-5 GHz, photonic devices can achieve >100 GHz switching speeds. Current electrical interconnects max out at ~100 Gb/s; photonic links have demonstrated 2+ Tb/s on a single channel, and a single optical path can carry 64+ wavelength signals at once.
It's way more energy efficient. Current chip-to-chip communication costs ~1-10 pJ/bit. Photonic interconnects demonstrate 0.01-0.1 pJ/bit. For data centers processing exabytes, this ~100x improvement means the difference between megawatt and kilowatt power requirements.
The AI acceleration potential is revolutionary. Matrix operations, fundamental to deep learning, become near-instantaneous:
— Traditional chips: O(n²) operations.
— Photonic chips: O(1) in time, computing through parallel optical interference. A 1000×1000 matmul in picoseconds.
Where are we today? Real products are shipping:
— Intel's 400G transceivers use silicon photonics.
— Ayar Labs has demonstrated 2 Tb/s chip-to-chip links with AMD EPYC processors.
Performance scales with the number of wavelengths, not just clock frequency as in traditional electronics.
The manufacturing challenges are immense:
— Current yield is ~30%. Silicon is terrible at emitting light, and bonding III-V materials to it lowers yield.
— Temperature control is a barrier: a 1°C change shifts resonance frequencies by ~10 GHz.
— Cost per device is in the thousands of dollars.
To reach mass production we need 90%+ yield rates, sub-$100 per-device costs, automated testing solutions, and reliable packaging techniques. Current packaging alone can cost more than the chip itself. We're 5+ years from hitting these targets.
Companies to watch: ASML (manufacturing), Intel (data center), Lightmatter (AI), Ayar Labs (chip interconnects). The technology requires major investment, but the potential returns are enormous as we hit traditional electronics' physical limits.
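To make the "O(1) matmul" idea concrete, here is a minimal numerical sketch: the weight matrix is programmed once into an interferometer mesh, and every matrix-vector product is then a single pass of light through it. The numpy multiply below merely stands in for the physics, and the energy comparison reuses the per-bit figures quoted above as rough assumptions, not measurements.

```python
import numpy as np

# Toy sketch: why a photonic mesh is "O(1)" in time for matrix-vector products.
# The matrix is encoded once in the mesh settings; each product is one pass of
# light. All numbers below are illustrative assumptions, not measured values.

rng = np.random.default_rng(0)
n = 1000
W = rng.standard_normal((n, n))   # weights "programmed" into the mesh
x = rng.standard_normal(n)        # input vector encoded as optical amplitudes

y = W @ x  # electronically ~n^2 multiply-adds; optically, a single pass

# Rough interconnect-level energy comparison using the post's figures
# (assumed 1 pJ/bit electrical vs 0.01 pJ/bit photonic, 32-bit values).
bits_moved = (n * n + 2 * n) * 32
print(f"electrical: {bits_moved * 1e-12:.1e} J  photonic: {bits_moved * 1e-14:.1e} J")
```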
Hardware Development Trends
-
Virtual Reality can be about more than seeing and hearing - it can also include FEELING - which we call "Haptics". Usually this is achieved using special Haptic Gloves which create the illusion of pressure on the wearer's fingertips and resistance to their grip. There are even full-body suits and rigs for total immersion. So far, however, these have been either cumbersome, expensive or both, ruling out many users from these more physical experiences. Systems like the one shown instead use technologies like ultrasonic fields (basically high-frequency, high-intensity soundwaves) to "beam" the shape of virtual objects into the air, creating the illusion of touch without any need for gloves or other peripherals. Over the years I've tried several of these systems, and they have progressed from beaming vague impressions of very small, basic shapes to providing everything from movement to texture and even temperature (imagine being able to feel the difference between a cold glass of water and a hot clay mug of coffee - when neither is really there at all). If you've ever woken up having slept on your arm and tried to make your morning tea with a numb hand, you'll know the importance of being able to feel what you're doing. Perhaps this form of Virtual Touch technology could be the opportunity we need to bring feeling into the spatial experience. #virtualreality #vr #haptics
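For the technically curious, here is a rough sketch of the core trick behind mid-air ultrasonic haptics: a flat array of emitters is phase-delayed so that all wavefronts arrive in phase at a chosen focal point, concentrating pressure there. The array size, pitch and 40 kHz carrier below are plausible illustrative assumptions, not the specs of any particular system.

```python
import numpy as np

# Sketch of ultrasonic mid-air haptics (assumed setup): a grid of emitters
# focuses 40 kHz sound at a point in space by phase-delaying each emitter so
# all wavefronts arrive in phase at the focal point.

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # typical ultrasonic haptics carrier, Hz
WAVELENGTH = SPEED_OF_SOUND / FREQ

# Assumed 16x16 emitter array on a 10 mm pitch, centered at the origin
pitch = 0.010
coords = (np.arange(16) - 7.5) * pitch
emitters = np.array([(x, y, 0.0) for x in coords for y in coords])

focal_point = np.array([0.0, 0.0, 0.20])   # focus 20 cm above the array

# Each emitter's phase offset cancels its extra path length to the focus
distances = np.linalg.norm(emitters - focal_point, axis=1)
phases = (-2 * np.pi * distances / WAVELENGTH) % (2 * np.pi)

print(f"wavelength: {WAVELENGTH * 1000:.1f} mm")
print(f"phase of corner emitter: {phases[0]:.2f} rad")
```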
-
Don't reduce the carbon footprint of your products without understanding all the possible trade-offs. You could end up increasing your environmental impact instead. Here are 3 things to consider when designing sustainable sound experiences:
⚠ Lowest footprint ≠ Winning concept
Successful circular products don't have the lowest environmental burden by default. Modularity is considered a circular design practice, but it also contributes to an increased carbon footprint and the depletion of materials (mostly gold, beryllium, and neodymium). A modular product containing electronics has roughly 10% higher impact for both GWP (global warming potential) and ADP (abiotic depletion potential).
🛠 UX plays a core role, as much as CMF and ID
Functionality and usability have their own footprint: removing the battery from the earpods' charging case and using the smartphone battery instead decreases hardware volume and materials footprint (-25%). The same goes for magnets: it's fashionable to have an earpod snap onto the charging case, until you realize that 1/3 of the overall material impact is due to neodymium.
🔄 Trade-offs are inevitable
It is better to design for one core circular principle than to have a concept that covers all of them mediocrely. A concept can successfully be repairable and fit a circular ecosystem, but it can hardly be repairable, modular, recyclable, refurbishable, low-carbon, low-resource, long-lasting, energy-efficient, biodegradable and compostable all at once.
Sustainable design isn't about ticking every box. It's about making informed choices that truly minimize impact.
➡What's your take? Which design principle would you prioritize for a truly circular product? Drop your thoughts below and let's discuss! #sustainabledesign
-
What's a key innovation driver in leading-edge logic and memory chips? 𝐈𝐭'𝐬 𝐭𝐡𝐞 𝐭𝐡𝐢𝐫𝐝 𝐝𝐢𝐦𝐞𝐧𝐬𝐢𝐨𝐧. A bit of an explainer below, with spotlights on wafer bonding and backside power delivery.
🔎 For advanced logic chips, the third dimension is used to 𝐛𝐨𝐨𝐬𝐭 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 and to be able to 𝐬𝐭𝐚𝐜𝐤 𝐭𝐫𝐚𝐧𝐬𝐢𝐬𝐭𝐨𝐫𝐬. It started with FinFET transistors, taking planar transistors into the third dimension. This now extends into gate-all-around and nanosheet transistors, where chipmakers use 3D layers to boost performance, and into the next-gen "CFET" transistors, where chipmakers stack in order to scale. All of this is enabled by innovations such as 𝐁𝐚𝐜𝐤𝐬𝐢𝐝𝐞 𝐏𝐨𝐰𝐞𝐫 𝐃𝐞𝐥𝐢𝐯𝐞𝐫𝐲. More below! 👇 And for a little bit of transistor history and future, read Always Be Curious: https://lnkd.in/eQg3_gKF
For memory, the evolution to 3D has also been going on for a while. NAND memory has a super dense structure, so when real estate became scarce, chipmakers started to build up to further increase bit density. The industry is now mass-producing 3D NAND with layer counts in the high 200s, with a roadmap to more than 1,000 (!) layers by the end of this decade. In DRAM, the roadmap is also increasingly 3D-powered, with vertical cells followed by stacked layers of horizontal cell transistors and capacitors.
A key enabler for some of these innovations is 𝐰𝐚𝐟𝐞𝐫 𝐛𝐨𝐧𝐝𝐢𝐧𝐠. In wafer bonding, the chip manufacturing process is split over multiple wafers that have to come together as one. An example: 3D NAND originally combined the logic circuitry and the layers of memory cells on a single wafer. To scale further, chipmakers are now splitting the manufacturing process: the logic circuits are made on one wafer, and the memory stack on another. The surfaces are covered in oxide insulation and pads that link up the chips’ interconnect layers. The bonding process then brings the logic wafer and the flipped-over memory wafer together as one, after which the memory wafer is ground down to the memory array and gets an additional interconnect layer.
🔎 𝐌𝐨𝐫𝐞 𝐛𝐚𝐜𝐤𝐠𝐫𝐨𝐮𝐧𝐝 𝐨𝐧 𝐛𝐚𝐜𝐤𝐬𝐢𝐝𝐞 𝐩𝐨𝐰𝐞𝐫 𝐝𝐞𝐥𝐢𝐯𝐞𝐫𝐲
Today’s chips have power delivered from the top of the chip, which requires the power ‘lines’ to go through many layers of wiring to get to the transistors at the bottom of the stack. This means that precious chip real estate has to be used for power delivery, while power is lost as it travels through those many layers. Backside power delivery flips the script and routes power from the bottom (or ‘backside’) of the chip, gaining more direct access to the transistors. In turn, the ‘frontside’ real estate can be used to increase transistor density, while improving the overall power and performance of the chip.
Image source: ASML's Investor Day, November 2024
#3dintegration #gaafet #cfet #transistors #3dnand #dram #nand
-
5 key developments this month in Wearable Devices supporting Digital Health, ranging from current innovations to exciting future breakthroughs. And I made it all the way through without mentioning AI… until now. Oops! >>
🔘Movano Health has received FDA 510(k) clearance for its EvieMED Ring, a wearable that tracks metrics like blood oxygen, heart rate, mood, sleep, and activity. This approval enables the company to expand into remote patient monitoring, clinical trials, and post-trial management, with upcoming collaborations including a pilot study with a major payor and a clinical trial at MIT
🔘ŌURA has launched Symptom Radar, a new feature for its smart rings that analyzes heart rate, temperature, and breathing patterns to detect early signs of respiratory illness before symptoms fully develop. While it doesn’t diagnose specific conditions, it provides an “illness warning light” so users can prioritize rest and potentially recover more quickly
🔘A temporary scalp tattoo made from conductive polymers can measure brain activity without bulky electrodes or gels, simplifying EEG recordings and reducing patient discomfort. Printed directly onto the head, it currently works well on bald or buzz-cut scalps, and future modifications, like specialized nozzles or robotic 'fingers', may enable use with longer hair
🔘Researchers have developed a wearable ultrasound patch that continuously and non-invasively monitors blood pressure, showing accuracy comparable to clinical devices in tests. The soft skin-patch sensor could offer a simpler, more reliable alternative to traditional cuffs and invasive arterial lines, with future plans for large-scale trials and wireless, battery-powered versions
🔘According to researchers, a new generation of wearable sensors will continuously track biochemical markers such as hydration levels, electrolytes, inflammatory signals, and even viruses from bodily fluids like sweat, saliva, tears, and breath. By providing minimally invasive data and alerting users to subtle health changes before they become critical, these devices could accelerate diagnosis, improve patient monitoring, and reduce discomfort (see image)
👇Links to related articles in comments #DigitalHealth #Wearables
-
Researchers have made a significant breakthrough in AI hardware with a 3D photonic-electronic platform that enhances efficiency and bandwidth, potentially revolutionizing data communication. Energy inefficiencies and data transfer bottlenecks have hindered the development of next-generation AI hardware; recent advances in integrating photonics with electronics are poised to overcome these challenges.
💻 Enhanced Efficiency: The new platform achieves unprecedented energy efficiency, consuming just 120 femtojoules per bit.
📈 High Bandwidth: It offers a bandwidth of 800 Gb/s with a density of 5.3 Tb/s/mm², far surpassing existing benchmarks.
🔩 Integration: The technology integrates photonic devices with CMOS electronic circuits, facilitating widespread adoption.
🤖 AI Applications: This innovation supports distributed AI architectures, enabling efficient data transfer and unlocking new performance levels.
📊 Practical Advancements: Using photonics and quantum-level device physics to boost communication speed is feasible and practical today, unlike speculative proposals that rely on quantum entanglement for faster-than-light communication.
This breakthrough may feel long overdue, but the AI boom is creating a burning need for this technology. Quantum computing may be seen as a lot of hype, yet applying advanced quantum physics to enhance communication speed is far more down-to-earth than relying on quantum entanglement for faster-than-light communication, where the entanglement itself is fragile and short-lived. #AI #MachineLearning #QuantumEntanglement #QuantumPhysics #PhotonicIntegration #SiliconPhotonics #ArtificialIntelligence #QuantumMechanics #DataScience #DeepLearning
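A quick back-of-the-envelope check of what those headline figures mean in practice. This assumes the quoted 120 fJ/bit and 800 Gb/s apply to the same link, and takes ~1 pJ/bit as a rough figure for a conventional electrical interconnect; that comparison number is an assumption for illustration, not from the article.

```python
# Sanity check on the headline figures (assumed: 120 fJ/bit at 800 Gb/s).
energy_per_bit = 120e-15      # joules per bit (photonic link, as quoted)
data_rate = 800e9             # bits per second

link_power = energy_per_bit * data_rate
print(f"power per 800 Gb/s photonic link: {link_power * 1e3:.0f} mW")   # ~96 mW

# For comparison, a conventional electrical link at an assumed ~1 pJ/bit:
print(f"electrical equivalent:            {1e-12 * data_rate * 1e3:.0f} mW")  # ~800 mW
```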
-
Today, Science Robotics has published our work on the first drone performing fully #neuromorphic vision and control for autonomous flight! 🥳
Deep neural networks have led to amazing progress in Artificial Intelligence and promise to be a game-changer for autonomous robots 🤖 as well. A major challenge is that the computing hardware for running deep neural networks can still be quite heavy and power-consuming. This is particularly problematic for small robots like lightweight drones, for which most deep nets are currently out of reach.
A new type of neuromorphic hardware draws inspiration from the efficiency of animal eyes 👁 and brains 🧠. Neuromorphic cameras do not record images at a fixed frame rate; instead, each pixel tracks brightness over time and sends a signal only when the brightness changes. These signals can then be sent to a neuromorphic processor, in which the neurons communicate with each other via binary spikes, simplifying calculations. The resulting asynchronous, sparse sensing and processing promises to be both quick and energy efficient! 🔋
In our article, we investigated how a spiking neural network (#SNN) can be trained and deployed on a neuromorphic processor for perceiving and controlling drone flight 🚁. Specifically, we split the network in two. First, we trained an SNN to transform the signals from a downward-looking neuromorphic camera into estimates of the drone's own motion. This network was trained on data from our drone itself, with self-supervised learning. Second, we used artificial evolution 🦠🐒🚶♂️ to train another SNN for controlling a simulated drone. This network transformed the simulated drone's motion and state, such as its orientation, into motor commands. We then merged the two SNNs 👩🏻🤝👩🏻 and deployed the resulting network on Intel Labs' neuromorphic research chip "Loihi". The merged network immediately worked on the drone, successfully bridging the reality gap.
Moreover, the results highlight the promise of neuromorphic sensing and processing: the network ran 10-64x faster 🏎💨 than a comparable network on a traditional embedded GPU and used 3x less energy.
I want to first congratulate all co-authors at TU Delft | Aerospace Engineering: Federico Paredes Vallés, Jesse Hagenaars, Julien Dupeyroux, Stein Stroobants, and Yingfu Xu 🎉 Moreover, I would like to thank the Intel Labs' Neuromorphic Computing Lab and the Intel Neuromorphic Research Community (#INRC) for their support with Loihi (among others Mike Davies and Yulia Sandamirskaya). Finally, I would like to thank NWO (Dutch Research Council), the Air Force Office of Scientific Research (AFOSR) and Office of Naval Research Global (ONR Global) for funding this project.
All relevant links can be found below. Delft University of Technology, Science Magazine
#neuromorphic #spiking #SNN #spikingneuralnetworks #drones #AI #robotics #robot #opticalflow #control #realitygap
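For readers curious what "neurons communicating via binary spikes" looks like in practice, here is a minimal illustrative sketch of a leaky integrate-and-fire neuron, the basic unit most SNNs are built from. All constants are made up for illustration; this is not code from the paper or from the Loihi toolchain.

```python
import numpy as np

# Minimal sketch of a leaky integrate-and-fire (LIF) neuron: it integrates
# incoming binary spikes and emits a spike only when its membrane potential
# crosses a threshold. Constants are illustrative assumptions.

def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a binary input spike train."""
    v = 0.0
    out = []
    for s in input_spikes:
        v = leak * v + weight * s     # leaky integration of weighted input
        if v >= threshold:            # fire and reset when threshold is reached
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

# Sparse input: the neuron only does meaningful work when spikes arrive,
# which is where the energy savings of event-driven processing come from.
rng = np.random.default_rng(1)
spikes_in = (rng.random(20) < 0.3).astype(int)
print("in :", spikes_in.tolist())
print("out:", lif_neuron(spikes_in))
```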
-
After a decade at Intel, I learned something that will blow your mind about the semiconductor industry. The $600B chip market just changed forever. Here's why:
→ Generic chips are hitting a wall
→ AI workloads need custom silicon
→ One-size-fits-all is dead.
But Broadcom + OpenAI just revealed the solution: CUSTOM AI CHIPS.
• Tesla's FSD chip: 21x faster than GPUs
• Google's TPUs: 80% cost reduction
• Apple's M-series: 40% better efficiency
• Amazon's Graviton: 20% price improvement
Instead of forcing AI into generic hardware... what if we built hardware specifically for AI? The benefits are insane:
- 10x performance improvements
- 50% power reduction
- Custom architectures for specific models
- Direct chip-to-algorithm optimization
- Massive cost savings at scale
This is about RETHINKING THE ENTIRE STACK. From my manufacturing AI work, I've seen how custom silicon transforms production lines. Now we're seeing the same revolution in AI infrastructure. Sometimes the best solutions hide in plain sight 🌟
#AI #Semiconductors #Innovation #Manufacturing #TechTrends #DigiFabAI
-
Smart, connected, Software-Defined Products (SDP) are driving innovation in nearly every industry, from medical devices to aircraft. Software and semiconductors are at the foundation of every one of these software-defined products. Embracing the complexity this has introduced by optimizing semiconductors, software, electrical and mechanical systems in a Comprehensive Digital Twin (CDT) is the only way to gain a significant competitive advantage.
Semiconductors are at the heart of these new products, so let's dig a bit more into how the CDT can accelerate semiconductor development. But first, what is the CDT?
** A digital twin is a physics-based digital representation of an asset or process. To be comprehensive, the digital twin must include all the elements required to define a product, production process or business operations,
** incorporate information across all domains -- semiconductor, software, electrical and mechanical,
** and span the lifecycle from engineering to manufacturing to delivery and support.
Why is this important for the semiconductor industry? First, semiconductors exist within the context of a product, such as an automobile, which means they should be designed and verified in the context of the entire product. This includes the software, the wire harness and how they will connect to the other systems of the car. The CDT is the only way to do this and, in turn, to understand both the performance characteristics of the semiconductor and how quickly the semiconductor and software together can interact with the car's systems.
This interaction of software and semiconductors is critical for SDP, which means companies can no longer afford to select an off-the-shelf processor and then build around it. Given rapidly advancing product complexity, doing so would result in a suboptimal solution that ultimately limits the features that can be added in the future or, worse, creates a product not capable of handling all the software features. The CDT enables companies to codevelop the semiconductor and software architecture to deliver an optimized solution that meets the requirements of their product today and has room to upgrade with new software features in the future.
Finally, companies need to embrace new chip designs and architectures. 3D-IC helps accelerate the design of new chips, so companies can focus on incorporating the most advanced nodes in a chiplet and then build around it with existing solutions. This in turn can accelerate the design, testing and availability of new chip designs, but it does introduce new challenges for thermal management and the mechanical design of the chip, highlighting the need for the CDT and a multi-domain design environment.
If you are interested in learning more, I recently had an opportunity to discuss some of these challenges with my colleague Michael Munsey on a new podcast series. You can find the link to the series in the comments below. #digitaltransformation