Don’t miss tomorrow’s Educational Webinar: A Chemical Perspective on Liquid Cooled Data Centers!
Date: Thursday, November 13, 2025 | Time: 11:00 AM ET | Duration: 1 hour
Data center power densities are on an upward trajectory with no sign of slowing down. More power unavoidably means more waste heat that must be removed and dissipated. Current AI technologies have outgrown air-cooling designs and now require liquid cooling to fully exploit the potential of today’s powerful AI chips. This rapid shift in cooling requirements has created a knowledge gap for those directly involved in deploying performance fluids such as OAT PG-25 for data center thermal management. OCP is working to bridge this gap by publishing guidelines and specifications for liquid-cooling best practices. In this webinar, a fluid expert from the PFX Group will break down the chemistry behind the liquid “life-blood” of the AI data center and explain the importance of establishing an analytical testing cadence, with professional analysis, to monitor the health of the cooling system and prolong uninterrupted uptime.
Who should attend? Data center operators, CDU manufacturers, rack manufacturers, AI hyperscalers, data center engineers, data center service contractors, chemical distributors, chip designers/manufacturers, and data center hardware contractors (e.g., Foxconn).
Learn more and register for free here: https://bit.ly/3WSqjLC
Learn about Liquid Cooled Data Centers in Webinar
More Relevant Posts
-
Redefining the AI Data Center Power System - Independent Power Paths for the Compute System and Cooling System: To meet 60-100 kW/rack AI densities with higher reliability and maintainability, compute power and cooling power shall be electrically decoupled:
1. Compute: 13.8 kV AC → solid-state transformer → 800 V DC backbone → rack-level 48 V DC → on-board VRMs.
2. Cooling system: dedicated 400 V AC (or 480 V AC) or 400 V DC busbar, fed by Group A / Group B redundant, UPS-backed sources via STS; it serves centralized pump skids, CDUs, dry coolers, and rear-door heat-exchanger loops for multiple racks.
This separation reduces stress and heat on rack DC buses, improves fault isolation and serviceability, and aligns with high-density liquid-cooling practice (a rough bus-current comparison is sketched after this post). This summary outlines a new approach to AI data center electrical power design: separating compute and cooling power paths to improve efficiency and reliability. I haven’t gone into the full implementation details here, but if this topic resonates with your current projects, I’d be very interested in exchanging ideas. Feedback and comments from operations engineers and R&D specialists working on next-generation AIDC architectures are very welcome. Let’s discuss how this can be applied in practice.
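To make the “less stress and heat on the rack DC bus” argument concrete, here is a minimal back-of-the-envelope sketch in Python. The 100 kW compute load, the 15 kW cooling share, and the neglect of power factor and conversion losses are illustrative assumptions, not figures from the post.

```python
# Back-of-the-envelope comparison: current carried by the rack-level 48 V DC bus
# when cooling auxiliaries share it versus when they ride a separate 400 V feed.
# All load figures below are assumed for illustration.

def bus_current_a(power_w: float, voltage_v: float) -> float:
    """Ideal steady-state current draw for a given load and bus voltage."""
    return power_w / voltage_v

COMPUTE_KW = 100.0  # assumed compute load per rack (within the 60-100 kW range above)
COOLING_KW = 15.0   # assumed pump/CDU/fan share attributable to one rack (hypothetical)

# Case 1: compute and cooling both hang off the 48 V rack bus
shared_a = bus_current_a((COMPUTE_KW + COOLING_KW) * 1e3, 48.0)

# Case 2: cooling decoupled onto a dedicated 400 V busbar
compute_only_a = bus_current_a(COMPUTE_KW * 1e3, 48.0)
cooling_a = bus_current_a(COOLING_KW * 1e3, 400.0)  # power factor and losses ignored

print(f"48 V bus, shared:    {shared_a:,.0f} A")
print(f"48 V bus, decoupled: {compute_only_a:,.0f} A ({shared_a - compute_only_a:,.0f} A less)")
print(f"400 V cooling bus:   {cooling_a:,.0f} A")
```

Even with these rough numbers, decoupling takes roughly 300 A of continuous current (and the associated I²R heat) off the rack’s 48 V distribution, which is the serviceability and thermal benefit the post is pointing at.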
-
Houghton Chemical is excited to attend the upcoming webinar “A Chemical Perspective on Liquid Cooled Data Centers,” with speakers from the Open Compute Project Foundation, the Dell’Oro Group, and the Performance Fluid Experts Group. Join us on November 13th as we explore the evolving role of liquid cooling in next-generation data centers! Liquid cooling is now essential for today’s AI technologies and high-performance computing: to handle higher power densities with more efficient heat transfer, air-cooled data centers are giving way to liquid-cooled designs. This rapid shift brings new guidelines and operating practices for thermal management and for the removal and dissipation of waste heat.
Webinar content features:
◈ What types of fluids are used as data center coolants, and their advantages/disadvantages
◈ How chemistry affects fluid performance and life
◈ Preparing a system to accept liquid coolant (flushing and commissioning)
◈ What to look for during operation to ensure a healthy system
◈ How to test, remediate, flush, and fill aging fluid
Register here today! https://lnkd.in/gmd7Wkfw
-
💾 Data Centers – Liquid Cooling for High-Performance Computing
Industry Context: As AI workloads grow, traditional air cooling struggles to keep up with rising rack densities.
Problem: A tech firm’s data center was overheating during AI model training cycles, throttling performance and increasing downtime risk.
Solution: Direct-to-chip liquid cooling systems were implemented to replace inefficient air cooling in high-density racks.
Implementation:
• Installed modular liquid cooling units
• Integrated leak detection and safety valves
• Deployed monitoring dashboards for performance tracking
Results:
✔ 30% improvement in cooling efficiency
✔ Reduced noise and space footprint
✔ Increased uptime during peak workloads
Lessons Learned: Liquid cooling is not the future; for high-performance computing environments, it is the now.
Planning AI expansion? Liquid cooling can keep your performance running hot while staying cool.
📞 Quick Connect: 7720032487 | aniket@wcsipl.com
📞 Quick Connect: 9881719453 | yogiraj@wcsipl.com
🔗 Website: www.wcsipl.com // www.wcsipl.net
YouTube - https://lnkd.in/d2tvMtsD
#DataCenters #LiquidCooling #AIInfrastructure #Uptime #SmartHVAC
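As a hedged illustration of what the monitoring bullet above can amount to in practice, here is a minimal Python sketch of a coolant-loop health check built on Q = ṁ·cp·ΔT. The fluid property, thresholds, and readings are assumptions chosen for illustration, not values from this case study.

```python
# Minimal sketch of a loop-health check a liquid-cooling dashboard might run.
# Sensor values, thresholds, and the coolant property are hypothetical.

CP_COOLANT = 3900.0  # J/(kg*K), rough figure for a water/glycol mix (assumption)

def heat_removed_kw(flow_kg_s: float, t_supply_c: float, t_return_c: float) -> float:
    """Q = m_dot * cp * dT, reported in kW."""
    return flow_kg_s * CP_COOLANT * (t_return_c - t_supply_c) / 1e3

def check_loop(flow_kg_s: float, t_supply_c: float, t_return_c: float,
               expected_load_kw: float) -> list:
    alerts = []
    delta_t = t_return_c - t_supply_c
    q = heat_removed_kw(flow_kg_s, t_supply_c, t_return_c)
    if delta_t > 15.0:                      # illustrative delta-T limit
        alerts.append("High loop delta-T: check flow rate / pump health")
    if q < 0.8 * expected_load_kw:          # coolant not carrying the expected heat
        alerts.append("Heat pickup below expected load: possible bypass or leak")
    return alerts

# Example reading: 2.0 kg/s flow, 30 C supply, 42 C return, rack drawing about 90 kW
print(heat_removed_kw(2.0, 30.0, 42.0), check_loop(2.0, 30.0, 42.0, 90.0))
```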
-
Consider the increase in power demand in AI data centers: AI racks are now reaching 200–500 kW per rack. At these densities, air cooling is no longer viable, and even conventional liquid cooling approaches are reaching their physical limits. Data centers are transforming into industrial-scale energy systems! Accordingly, the market is demanding new approaches to power distribution, thermal management, and infrastructure design at an accelerating pace. I believe this is where high-quality, multidisciplinary professionals and engineers, with knowledge and experience across electrical, mechanical, and control systems, have a unique opportunity to make a significant impact in shaping the next generation of AI data centers.
-
Power Stability Redefined: Eulex XG Gap Capacitors for AI Data Centers ⚡
AI-driven data centers are redefining performance standards: power density per rack is surging from 20 kW to 30+ kW. With this escalation comes a critical need for stability, reliability, and peace of mind in power delivery.
👉 Enter Eulex XG Gap Capacitors, engineered for the next generation of ultra-high-power infrastructure. Purpose-built to handle today’s demanding workloads, they deliver industry-leading performance, longevity, and scalability.
🔑 Key Features & Benefits:
1️⃣ Extreme Power Handling – Stabilizes ultra-high current surges at the rack level, perfect for OCP Open Rack V3 48 V/54 V systems moving beyond 20 kW.
2️⃣ Superior Stability & Reliability – Maintains operational integrity even during unpredictable power swings, vital for AI workloads with relentless uptime requirements.
3️⃣ Cutting-Edge Design – Advanced materials and precision architecture minimize ESR, optimize thermal management, and ensure reliable performance under rapid cycling.
4️⃣ Seamless Integration – Compatible with modern rack systems, available in flexible mounting formats and custom configurations.
5️⃣ Future-Forward Scalability – Built to evolve with increasing AI workloads, reducing design rework and lowering long-term cost of ownership.
💡 As AI continues to push the boundaries of data center power, Eulex XG Gap Capacitors are redefining what’s possible in stability and endurance.
🔗 Learn more about how Eulex is powering the future of AI infrastructure and read the latest Signal Integrity Journal article penned by Steven Sandler at Picotest, offering Eulex Gap Capacitors as a next-gen solution for demanding AI Data Center workloads. https://lnkd.in/eF23Q5bY
#AIDatacenters #PowerElectronics #Capacitors #DataCenterInnovation #NextGenTechnology #EngineeringExcellence #Eulex
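To show why bulk capacitance and low ESR matter for the rack-level surge stabilization described above, here is a minimal first-order sizing sketch. The load step, hold-up time, droop budget, and ESR value are assumptions chosen for illustration; they are not Eulex XG part parameters.

```python
# First-order sketch: how much bulk capacitance a 48 V rack bus needs to bridge a
# sudden load step within an allowed voltage droop. All numbers are illustrative.

def required_capacitance_f(step_current_a: float, hold_up_s: float,
                           allowed_droop_v: float, esr_ohm: float) -> float:
    """C >= I * dt / (dV - I*ESR); the ESR drop consumes part of the droop budget."""
    resistive_drop_v = step_current_a * esr_ohm
    usable_droop_v = allowed_droop_v - resistive_drop_v
    if usable_droop_v <= 0:
        raise ValueError("ESR drop alone exceeds the droop budget")
    return step_current_a * hold_up_s / usable_droop_v

# Example: a 400 A load step bridged for 100 us until upstream regulation responds,
# 1 V allowed droop on the 48 V bus, 0.5 mOhm effective ESR (all assumed values).
c_farads = required_capacitance_f(400.0, 100e-6, 1.0, 0.5e-3)
print(f"~{c_farads * 1e3:.0f} mF of bulk capacitance required")
```

The point of the sketch is the trade-off the post alludes to: the lower the effective ESR, the more of the droop budget is left for capacitive hold-up, so less total capacitance is needed for the same stability target.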
-
We talk a lot about AI’s power hunger and cooling… but almost no one explains how AI decides when and where to run.
Behind the scenes, an AI scheduler (think “air traffic control” for compute) routes jobs across data centers based on real-time conditions:
• Latency: keep inference close to users/data
• Hardware: send work where GPUs are free
• Performance: rush jobs now, batch jobs later
• Energy & cost: run where power is cleaner and cheaper (for reference, see my previous post)
The new twist: schedulers are becoming energy-aware. Some already use forecasts and RL to line up compute with renewable peaks and low prices. Example: if solar in Texas is strongest at 1 p.m., a big training run can wait a bit; same job, lower cost, lower carbon. That’s compute finally speaking the language of the grid. (A toy version of this decision logic is sketched after this post.)
This isn’t just “green IT.” It’s resilience and economics: fewer price shocks, smarter utilization, more predictable capacity planning. Compute that understands power (and power that understands compute) is where the next performance gains come from.
Well, I guess watts and FLOPs are finally in the same meeting. Picture your AI scheduler and EMS actually talking: “Here’s my training queue.” “Here’s when power is clean and cheap.” Then the scheduler lines up jobs to hit those windows. Yeah, I know: hard to digest.
#datacenter #aidatacenter #energy #gridtransformation
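The post mentions schedulers that use forecasts and RL; the sketch below is a deliberately simple, non-RL Python illustration of the underlying decision: shift a deferrable training job into the hours where a price/carbon forecast looks best. The forecast values, the 50/50 weighting, and the “solar peak” shape are all made up for illustration.

```python
# Toy energy-aware scheduler: choose the start hour for a deferrable training job
# that minimizes a weighted blend of electricity price and carbon intensity.
# Forecast numbers and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class HourForecast:
    hour: int
    price_usd_mwh: float
    carbon_g_kwh: float

def best_start_hour(forecast, duration_h: int, deadline_h: int,
                    carbon_weight: float = 0.5) -> int:
    """Scan feasible start hours (the job must finish by the deadline) and pick the cheapest."""
    def window_score(start: int) -> float:
        window = forecast[start:start + duration_h]
        price = sum(h.price_usd_mwh for h in window)
        carbon = sum(h.carbon_g_kwh for h in window)
        return (1 - carbon_weight) * price + carbon_weight * carbon
    return min(range(deadline_h - duration_h + 1), key=window_score)

# Hypothetical day where a midday solar peak pushes price and carbon intensity down.
forecast = [HourForecast(h,
                         price_usd_mwh=60.0 - 25.0 * (11 <= h <= 15),
                         carbon_g_kwh=420.0 - 180.0 * (11 <= h <= 15))
            for h in range(24)]
print("Start the training run at hour:", best_start_hour(forecast, duration_h=3, deadline_h=20))
```

A production scheduler would add constraints the toy ignores (GPU availability, data locality, latency SLOs, preemption), which is exactly why the post frames it as the scheduler and the EMS “talking” to each other.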
-
Tech Tip Thursday: Don’t Let Your Infrastructure Melt Under the Heat 🔥
The latest wave of AI infrastructure growth is driving data centres to spend over $10 billion on liquid cooling and power optimisation this week alone. When compute density and data demands surge, your infrastructure must be ready, not reactive.
Tech Tip: Build systems that manage heat and scale intelligently so your tech delivers performance and longevity.
• Prioritise high-density compute designs with advanced cooling and power management.
• Opt for modular architectures that scale out rather than trigger costly overhauls.
• Embed monitoring to track thermal and power health in real time.
• Choose solutions designed for future loads, not just today’s peaks.
At Data Sciences Corporation, we engineer infrastructure that keeps cool under pressure and performs under scale, from GPU racks to data platforms. Unlock tomorrow’s demands, today.
👉 Learn more: https://datasciences.co.za
#techtipthursday #datasciencescorp #infrastructure #aiinfrastructure #datacentre #scalableit #techstrategy #innovation
-
The extraordinary demand from Artificial Intelligence (AI) and HPC data centers is redefining power quality challenges. It's no longer just about traditional harmonics. The pulsed, massive nature of the AI load generates subharmonics, oscillations at frequencies below the fundamental, which conventional filtering systems cannot address. This is a wake-up call for electrical engineers: subharmonic mitigation and ultra-fast reactive compensation are the new frontier. Innovative solutions, such as Capacitive Energy Storage Systems (CESS), are emerging to support and balance power supplies during load peaks, protecting both sensitive equipment and the electrical grid. #ElectricalEngineering #PowerQuality #Harmonics #ReactiveCompensation #ArtificialIntelligence #DataCenter https://lnkd.in/dznX4Gi9
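For readers who haven’t met subharmonics in measurements before, here is a small illustrative Python/NumPy sketch that flags spectral content below the 60 Hz fundamental in a sampled current waveform. The synthetic waveform, sampling rate, and detection threshold are assumptions, not data from an AI facility.

```python
# Illustrative detection of subharmonic content (components below the fundamental)
# in a sampled current waveform using a plain FFT. All values are synthetic.

import numpy as np

FS = 10_000          # sampling rate in Hz (assumed)
FUNDAMENTAL = 60.0   # grid fundamental in Hz

def subharmonic_components(current: np.ndarray, threshold_ratio: float = 0.02):
    """Return (frequency, magnitude relative to the fundamental) for subharmonic peaks."""
    spectrum = np.abs(np.fft.rfft(current))
    freqs = np.fft.rfftfreq(current.size, d=1.0 / FS)
    fundamental_mag = spectrum[np.argmin(np.abs(freqs - FUNDAMENTAL))]
    mask = (freqs > 0) & (freqs < FUNDAMENTAL - 1) & (spectrum > threshold_ratio * fundamental_mag)
    return list(zip(freqs[mask], spectrum[mask] / fundamental_mag))

# Synthetic current: 60 Hz fundamental plus a 12 Hz subharmonic from a pulsed load envelope.
t = np.arange(0, 2.0, 1.0 / FS)
i_meas = np.sin(2 * np.pi * 60 * t) + 0.1 * np.sin(2 * np.pi * 12 * t)
print(subharmonic_components(i_meas))   # expect a component near 12 Hz at ~10% of fundamental
```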
-
Revolutionizing Level 5 IST - From Rack to Rooftop
Across EMEA, hundreds of “AI-ready” data halls go through commissioning every quarter, yet most are still tested in pieces. A rack here. A CRAH loop there. Maybe a CDU on standby. And then… we call it Level 5.
But true Level 5 IST is a system test, not a checklist. It must validate how every component, from rack to rooftop, performs under real AI thermal stress. We’ve spent years simulating electrical load, but never the thermal reality of next-gen GPU racks. At Refroid, that’s exactly what we’re changing.
Our Hybrid Load Banks bridge the gap between electrical draw and coolant dynamics, giving commissioning teams the closest possible experience to live AI workloads:
💧 60:40 liquid-to-air load simulation (also 70:30 and 80:20)
⚙️ Up to 150 kW per rack footprint
🔌 Plug-and-play CDU integration (DI / PG25 compatible)
🏗️ End-to-end testing realism, from rack to rooftop
(What those liquid-to-air splits mean at rack scale is sketched after this post.)
Because the data centers shaping the AI decade won’t just be powered differently; they’ll be cooled and commissioned differently.
To everyone leading the next wave of Level 5 testing across the region:
👉 What part of your IST still feels piecemeal today? Let’s redefine commissioning together.
📩 Write to us at sales_emea@refroid.com
#RevolutionizingLevel5IST #HybridLoadBanks #LiquidCooling #DataCenters #AIInfrastructure #Refroid #EMEA #Commissioning #RackToRooftop
Refroid Technologies Private Limited Payal Dandige Satya Bhavaraju
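As a rough companion to the load-bank figures above, this Python sketch works out what a given liquid-to-air split means for a 150 kW rack and the coolant flow needed to carry the liquid share at a chosen delta-T. The coolant property and delta-T are assumptions, not Refroid equipment specifications.

```python
# What a liquid-to-air split means at rack scale, plus the coolant flow needed to
# carry the liquid share. Property values and delta-T are illustrative assumptions.

CP_COOLANT = 3900.0  # J/(kg*K), rough figure for a PG25-style mix (assumption)

def split_and_flow(rack_kw: float, liquid_fraction: float, delta_t_c: float):
    """Return (liquid kW, air kW, coolant flow in L/min assuming ~1 kg per litre)."""
    liquid_kw = rack_kw * liquid_fraction
    air_kw = rack_kw - liquid_kw
    flow_kg_s = liquid_kw * 1e3 / (CP_COOLANT * delta_t_c)
    return liquid_kw, air_kw, flow_kg_s * 60.0

for ratio in (0.6, 0.7, 0.8):
    liq, air, lpm = split_and_flow(rack_kw=150.0, liquid_fraction=ratio, delta_t_c=10.0)
    pct = round(ratio * 100)
    print(f"{pct}:{100 - pct} split -> {liq:.0f} kW to liquid, {air:.0f} kW to air, "
          f"~{lpm:.0f} L/min coolant at a 10 C delta-T")
```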
-
New Technical Paper: A powerful feature of Campbell Scientific data loggers is the ability to support a wide range of serial protocols, enabling seamless integration of custom or third-party sensors into data acquisition systems. Our latest technical paper walks through how to connect and configure serial sensors with Campbell Scientific data loggers. It offers practical guidance, general best practices, and program examples to help bridge the gap between sensor documentation and a working data logger program. 📄 Check it out here 👉 https://lnkd.in/gkASZnfH #CampbellScientific #CRBasic #Dataloggers #Sensors #DataAcquisition
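The technical paper itself provides CRBasic program examples; as a language-neutral taste of the same workflow, here is a short Python/pyserial sketch of the basic steps any serial-sensor integration involves: open the port with the sensor’s settings, read a frame, and parse it into values. The port name, baud rate, and the “TEMP=…,RH=…” line format are hypothetical stand-ins for whatever the sensor’s documentation actually specifies.

```python
# Generic serial-sensor read (Python/pyserial, not CRBasic, and not from the paper).
# Port, baud rate, framing, and field names below are assumptions for illustration.

import serial  # pyserial

def read_sensor_once(port: str = "/dev/ttyUSB0", baud: int = 9600) -> dict:
    """Open the port, read one ASCII line, and parse 'NAME=value' pairs into floats."""
    with serial.Serial(port, baudrate=baud, bytesize=8, parity="N",
                       stopbits=1, timeout=2) as ser:
        raw = ser.readline().decode("ascii", errors="replace").strip()
    # Assumed sensor output: a line such as "TEMP=21.37,RH=45.2"; the real framing,
    # terminator, and any checksum rules must come from the sensor's documentation.
    fields = dict(item.split("=", 1) for item in raw.split(",") if "=" in item)
    return {name: float(value) for name, value in fields.items()}

if __name__ == "__main__":
    print(read_sensor_once())
```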