Enhancing Page Load Speed

Explore top LinkedIn content from expert professionals.

  • Addy Osmani

    Engineering Leader, Google Chrome. Best-selling Author. Speaker. AI, DX, UX. I want to see you win.

    235,029 followers

    "How JavaScript is parsed, fetched and executed"

    JavaScript can be executed in different modes, primarily distinguished as classic scripts and module scripts. The behavior of these scripts is influenced by specific attributes - async, defer and nomodule, to name a few.

    Classic Scripts

    By default, scripts are render-blocking: the browser won't display any page content until the script has finished executing, which can slow down the loading of web pages. The async and defer attributes tell the browser that a script doesn't need to run immediately, so it can continue processing the rest of the page while the script loads.

    When present, the async attribute allows the script to be fetched in parallel with HTML parsing. The script is evaluated as soon as it's available, but before the window's load event. This non-blocking behavior can help improve page load times.

    If async is absent but defer is present, the script still fetches in parallel but waits until the entire page is parsed before executing. This ensures that the script runs with full knowledge of the DOM but doesn't hinder the parsing process.

    In the absence of both async and defer, the script becomes a blocking resource: it's fetched and executed immediately, pausing the parsing process until both steps are complete.

    Module Scripts

    Module scripts, a newer standard, behave slightly differently. If async is specified, both the module script and its dependencies are fetched in parallel with parsing, and execution happens as soon as possible. Without async, fetching still occurs alongside parsing, but execution waits until parsing is complete. Notably, the defer attribute has no effect on module scripts. Mathias Bynens and I wrote about JS Modules in much more detail over here: https://lnkd.in/gmw_q7DD

    The Role of the nomodule and crossorigin Attributes

    The nomodule attribute serves a compatibility function. It prevents the script from executing in browsers that support module scripts, enabling a fallback to classic scripts in older browsers. It's a way to write code that selectively executes depending on the user agent's capabilities. The crossorigin attribute controls whether a script is fetched with CORS; module scripts are always fetched with CORS semantics.

    Understanding Attribute Combinations

    These attributes are not mutually exclusive and can be combined for nuanced behaviors. For instance, specifying both async and defer on a classic script lets legacy browsers that support only defer fall back to that behavior, avoiding the default blocking mode.

    In summary, the efficient parsing, fetching, and execution of JavaScript hinge on understanding and correctly using these script attributes. There are of course JavaScript engine specifics worth keeping in mind, which we've also written about: https://lnkd.in/g72tCXuJ. DebugBear also has a great write-up on this topic: https://lnkd.in/gSurVygw

    #programming #developers #performance
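A minimal markup sketch of the module/nomodule fallback pattern described above (the file names app.mjs and app-legacy.js are illustrative):

```html
<!-- Module-supporting browsers fetch and run this, and skip the nomodule script. -->
<script type="module" src="app.mjs"></script>
<!-- Older browsers don't understand type="module", ignore the line above,
     and fall back to this classic script instead. -->
<script nomodule src="app-legacy.js"></script>
```

Each browser executes exactly one of the two scripts, so you can ship a modern bundle without breaking legacy visitors.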

  • Brij kishore Pandey

    AI Architect | Strategist | Generative AI | Agentic AI

    690,660 followers

    Let's speed things up! Here's a beginner-friendly guide to optimizing your SQL queries, starting with some tips from my infographic:

    1. Be Selective with DISTINCT: Only use DISTINCT when you absolutely need unique results. It can slow down your query if used unnecessarily.
    2. Rethink Scalar Functions: Instead of using functions that return a single value for each row in SELECT statements, try using aggregate functions. They're often faster!
    3. Cursor Caution: Avoid cursors when possible. They're like going through your data one row at a time, which can be slow. Set-based operations are usually faster.
    4. WHERE vs HAVING: Use WHERE to filter rows before grouping, and HAVING to filter after grouping. This can significantly reduce the amount of data processed.
    5. Index for Success: Think of indexes like the table of contents in a book. Create them on columns you frequently search or join on for faster lookups.
    6. JOIN Smartly: INNER JOIN is often faster than using WHERE for the same condition. It's like telling the database exactly how to connect your tables.
    7. CASE for Clarity: Use CASE WHEN statements instead of multiple OR conditions. It's clearer and can be more efficient.
    8. Divide and Conquer: Break down complex queries into simpler parts. It's easier to optimize and understand smaller pieces.

    But wait, there's more! Here are some extra tips to supercharge your queries:

    9. EXISTS vs IN: Use EXISTS instead of IN for subqueries. It's often faster, especially with large datasets.
    10. LIKE with Caution: Avoid using wildcards (%) at the beginning of your LIKE patterns. It prevents the use of indexes.
    11. Analyze Your Plans: Learn to read query execution plans. They're like a roadmap showing how your database processes your query.
    12. Partitioning Power: For huge tables, consider partitioning. It's like organizing your data into smaller, manageable chunks.
    13. Table Variables: Sometimes, using table variables instead of temporary tables can boost performance.
    14. Subquery Switcheroo: Try converting subqueries to JOINs or CTEs. In many cases, this can speed up your query.

    Remember, optimization is a journey, not a destination. Start with these tips and keep learning! What's your favorite SQL optimization trick? Share in the comments!

  • Mark Shust

    Founder, Educator & Developer @ M.academy. The simplest way to learn Magento. Currently exploring building production apps with Claude Code & AI.

    25,255 followers

    Your Magento store is slow. And it's not Magento's fault.

    After a couple decades building these sites, I've seen it all. The last Magento site I took over cost the merchant $500k. HALF A MILLION DOLLARS. And it had 70+ modules installed and looked like someone threw spaghetti at a wall and called it architecture. You don't always get what you pay for. I rebuilt it from scratch for a fraction of that price, and the load time went from 17 seconds to under 2. The original developers? They blamed Magento, of course. But here's what's really slowing down your store:

    // The "Kitchen Sink" Approach
    Your theme loads 50+ JavaScript files on every page. Your homepage needs maybe 4. But hey, why optimize when you can just... not?

    // Extension Addiction
    "There's an extension for that!" Sure, and now your breadcrumbs are running off of 18 different database queries. Sometimes 10 lines of custom code beats a 10,000-line extension.

    // "Caching? What's That?"
    I've literally heard developers say caching is optional. That's like saying engines are optional for cars. Your server isn't slow... it's just doing 100x more work than it needs to.

    // Cold Cache Syndrome
    First visitor at 9am gets the slow experience. Everyone else gets it fast. Guess which visitor was ready to buy? Cache warming takes a few minutes to set up, folks.

    // Bargain Basement Hosting
    "Magento Optimized Hosting - $29/month!" Sure, and I'm a Michelin-starred chef because I can microwave some eggs. Your hosting matters. A lot.

    The truth? I've seen Magento stores handle Black Friday traffic that would make other platforms sweat. The difference is that the developers actually knew what they were doing.

    Quick test: Ask your developer: "What's our Time to First Byte?" If they don't immediately answer with a number under 500ms, or if they ask "What's that?"... well, mystery solved.

    I know we've all written bad code, installed one too many modules, and taken a few things for granted. But the difference is learning from it. If your developer's response to every performance issue is "Magento just can't handle it," they're not learning. They're making excuses.

    // The good news?
    Your slow store can be fast. Really fast. Usually it just takes someone willing to:
    - Profile before pointing fingers
    - Remove more code than they add
    - Treat performance like a feature, not an afterthought

    That $500k disaster I mentioned? The merchant thought they needed to replatform. Turns out they just needed a developer who understood the platform they already had. So before you blame Magento and spend another fortune migrating to the next "faster" platform, maybe get a second opinion. Your store might just be a few optimizations away from being a blazing fast site.

    What's your easiest performance fix? Share it below and let's get some quick wins 👇

  • The simplest but most impactful optimisation you can do in your frontend app is enable compression.

    Story first: Our bundle was 3.5 MB. Users kept saying the site felt slow. Turned on gzip on the server → transfer dropped to 1.4 MB. No code changes. Just a config. Users instantly felt the site lighter.

    Why this works: JS, CSS, HTML, JSON, SVG are text-heavy. Text compresses well.
    • Without compression → full 3.5 MB travels.
    • With gzip/Brotli → repeated patterns shrink → browser auto-decompresses.
    • Same content, 60% fewer bytes → faster FCP, LCP, TTI.

    What to compress:
    ✅ HTML, CSS, JS, JSON, SVG, XML
    ❌ Images, videos, PDFs, fonts (already compressed)

    How to check: Chrome DevTools → Network → click a JS/CSS file → look for Content-Encoding. If blank, you're shipping raw bytes.

    Extra tip: Brotli compresses 15–20% smaller than gzip. Serve .br to modern browsers, gzip as fallback.

    ⚡ Go ahead and check if your app already has this enabled. If not, enable it today and feel the difference yourself.
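You can see why text compresses so well with a few lines of Python's built-in gzip module (the payload here is a synthetic, highly repetitive snippet, so treat the ratio as illustrative, not a benchmark of your real bundle):

```python
import gzip

# Markup is full of repeated patterns -- exactly what gzip exploits.
html = ('<li class="product-card"><a href="/item">Item</a></li>\n' * 2000).encode()

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)

print(f"raw: {len(html):,} bytes, gzipped: {len(compressed):,} bytes "
      f"({ratio:.0%} of original)")
```

Real-world JS/CSS isn't this repetitive, but the same mechanism is why a 3.5 MB bundle can drop to 1.4 MB with a one-line server config change.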

  • Shubham Saurabh

    Founder - Auditzy | Real-time Core Web Vitals & Conversion Monitoring 🚀 | Increasing Social Ads Conversion Rates via by-passing in-app browsers of social media apps using InApp Redirect | Jamsfy

    11,194 followers

    My honest opinion about monitoring #CoreWebVitals in real-time for #ecommerce brands.

    If you're still relying on "lab tools" like #PageSpeedInsights to gauge your site's performance, you're playing with fire. WTF, why? Your customers aren't shopping in your perfect lab conditions. They're shopping in the real world.
    - On a slow 4G Android phone with just 4GB RAM.
    - Inside Instagram or Facebook's in-app browser, which is often 1.5 times slower than Chrome or Safari.
    - On a shaky train connection, trying to complete checkout before the next tunnel.
    - On an old MacBook juggling 12 tabs, with the CPU struggling to keep up.
    And in countless other messy, unpredictable situations you can never replicate in your staging/lab environment. That's why monitoring what they actually experience in real time matters far more than your pristine lab scores. 😅

    I've spent almost a decade helping e-commerce brands monitor and fix website performance. And here's the hard truth: what you measure in your staging environment almost never matches what your real customers feel. 🤐

    In e-commerce, every 100ms counts:
    ✅ Faster LCP = more products seen, more carts filled
    ✅ Lower INP = smoother interactions, fewer drop-offs
    ✅ Stable CLS = trust retained, conversions protected

    Lab scores (like 80/100 on PSI 🧐) don't catch regional slowdowns, browser-specific issues, device bottlenecks, or sudden CDN hiccups. The brands we've worked with that switched to real-time CWV monitoring with Auditzy™ - Real Time Website Speed & Core Web Vitals Monitoring Tool saw:
    - Faster detection of performance regressions after deployments
    - Conversion lifts from fixing bottlenecks they didn't even know existed
    - The confidence to push campaigns, knowing the site could handle the load

    So if you're serious about revenue, stop obsessing over your PageSpeed score and start obsessing over your customers' actual experience. Because speed doesn't just win races — it wins hearts, carts, and revenue. 🚀

    Are you still waiting for Google Search Console to tell you 30 days later that your CWV tanked? Or are you watching your real users, in real time? What's your take?

  • Ryan Yu 🥨

    Follow for daily frontend tips - building high-performance, accessible and beautiful web apps | Lead Frontend Engineer

    5,877 followers

    Mastering the difference between script, async and defer

    A popular JavaScript interview question is to explain the difference between <script>, async, and defer. But beyond interviews, understanding this is essential for writing high-performance frontend code.

    1️⃣ <script>
    The browser stops parsing the HTML when it encounters the script. This blocks rendering and slows page load if the script file is large or the network is slow.

    2️⃣ <script async>
    The script is fetched in parallel with HTML parsing and executed as soon as it's ready. Parsing of the HTML pauses when execution starts. Execution order is not guaranteed relative to other scripts. Whichever finishes fetching first runs first.

    3️⃣ <script defer>
    The script is fetched in parallel with HTML parsing but executed only after the entire document has been parsed. Execution order is preserved. Scripts with defer run in the order they appear in the document.

    💡 Use cases
    1️⃣ <script> - When the script must run immediately before the HTML continues. Examples: polyfills, inline critical scripts
    2️⃣ <script async> - Independent scripts that don't depend on each other. Examples: analytics, ads, tracking scripts
    3️⃣ <script defer> - Scripts that rely on the DOM being complete, but don't need to block HTML parsing. Examples: most site JS files

    Understanding these concepts is essential for writing performant frontend code, and also for acing your next JavaScript interview.

    #frontend #javascript #interviewtips
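The three use cases above could look like this in a page's <head> (file names are illustrative):

```html
<head>
  <!-- 1. Must run before parsing continues (blocking): -->
  <script src="polyfills.js"></script>
  <!-- 2. Independent of other scripts; runs as soon as it's fetched: -->
  <script async src="analytics.js"></script>
  <!-- 3. Needs the complete DOM; order preserved; never blocks parsing: -->
  <script defer src="app.js"></script>
</head>
```

For most application code, defer is the safe default: it never blocks parsing and still guarantees in-order execution.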

  • Ghazi Khan

    Staff Software Engineer | Building Scalable Enterprise Solutions | 10+ Years in Agile & Fullstack Dev | Creator of iocombats.com & toolifyx.com

    3,469 followers

    Code Splitting & Lazy Loading: Make Your App Faster Instantly

    Here's one mistake I see too often: Everything bundled. Everything loaded. On page load. 💥 Result? Slow app. Frustrated users.

    The fix? Code splitting + lazy loading.
    ✅ Only load what the user needs immediately.
    ✅ Split rarely-used components into separate chunks.
    ✅ Lazy-load routes, modals, heavy libraries.

    In React, it's as simple as:
    const Chart = React.lazy(() => import('./Chart'));

    The result?
    ⚡ Faster initial loads
    ⚡ Better Core Web Vitals
    ⚡ Users see content before scripts choke their browser

    Performance isn't just backend magic. Frontend engineers own it too.

    👉 Are you code splitting in your projects—or shipping everything upfront?

    ---
    Follow Ghazi Khan & iocombats for frontend/full-stack development & jobs-related stuff.
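One caveat the one-liner hides: a React.lazy component must render inside a Suspense boundary, which supplies the UI shown while the chunk downloads. A minimal sketch (the Chart component and fallback text are illustrative):

```jsx
import React, { Suspense } from 'react';

// The Chart chunk is fetched only when <Chart /> first renders.
const Chart = React.lazy(() => import('./Chart'));

function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <Chart />
    </Suspense>
  );
}
```

Placing the Suspense boundary around just the lazy component keeps the rest of the page interactive while the chunk loads.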

  • Rahul Kaundal

    Head - Radio Access & Transport Network

    32,382 followers

    Capacity Optimization (Optimization Part-5)

    Efficient PRB (Physical Resource Block) usage is crucial for improving DL user throughput. High PRB utilization can lead to network congestion and degraded performance, especially in areas with high traffic demand. Here's a breakdown:

    High Utilization Challenges (example):
    • Carrier 1 - 800 MHz: 13% of samples show PRB utilization > 70%, resulting in DL user throughput < 4 Mbps.
    • Carrier 2 - 1800 MHz: 7% of samples show PRB utilization > 90%, with DL user throughput < 4 Mbps.

    Ways to Cater to High Utilization:
    1. Channel Optimization: Optimize channel allocation and resource scheduling to improve PRB efficiency.
    2. Add New Sectors in Sites / Load Balance: New sectors can help distribute traffic evenly across the network, reducing congestion and improving throughput.
    3. Enhance Antenna Technology: Leverage advanced antenna tech (e.g., MIMO) for better signal distribution and capacity handling.
    4. Add New Sites / Carriers / Spectrum Refarming: Deploy additional sites to expand coverage and capacity, and implement spectrum refarming to repurpose underutilized frequency bands for more efficient resource use.

    Key Takeaways:
    • High PRB utilization is directly linked to poor DL throughput, especially in congested areas.
    • Capacity optimization strategies, including channel optimization, sector addition, and spectrum management, are key to enhancing network performance and user experience.

    By applying these strategies, operators can reduce congestion, improve DL throughput, and better cater to high-utilization areas, ensuring optimal network performance. To learn more, refer to the course on RAN Engineering - https://lnkd.in/e9TpSHzF
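Figures like "13% of samples show PRB utilization > 70% with DL throughput < 4 Mbps" are just a ratio over measurement samples. A hypothetical sketch of that calculation (the sample values and thresholds are made up to mirror the example above):

```python
def congested_share(samples, prb_threshold=70.0, tput_floor=4.0):
    """Fraction of samples that are congested: PRB utilization (%) above
    the threshold AND DL user throughput (Mbps) below the floor."""
    hits = sum(1 for prb, tput in samples
               if prb > prb_threshold and tput < tput_floor)
    return hits / len(samples)

# (prb_util %, dl_throughput Mbps) -- made-up hourly samples for one carrier
samples = [(82, 3.1), (55, 9.8), (93, 2.4), (61, 7.5)]
print(f"{congested_share(samples):.0%} of samples congested")  # → 50% of samples congested
```

Tracking this share per carrier over time is what flags which cells need the capacity actions listed above.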

  • Pragyan Tripathi

    Clojure Developer @ Amperity | Building Chuck Data

    3,967 followers

    Our App Was Crawling at Snail Speed… Until I Made This One Mistake 🚀 A few months ago, I checked our Lighthouse scores—30s. That’s like running an F1 race on a bicycle. 🏎️➡️🚲 𝐀𝐧𝐝 𝐭𝐡𝐞 𝐰𝐨𝐫𝐬𝐭 𝐩𝐚𝐫𝐭? We did everything right—modern stack, top framework, best practices. Yet, our app was sluggish. ❌ AI-powered search engines ignored us. ❌ Users kept waiting. ❌ Something was off. So, we did what every dev does—optimize. 🔧 Cut dependencies 🔧 Shrunk bundles 🔧 Tweaked configs We went from 30s to 70s. Better, but still not great. Then, I made a 𝐦𝐢𝐬𝐭𝐚𝐤𝐞. A glorious, game-changing mistake. One deploy, I accidentally removed JavaScript. And guess what? Lighthouse: 91. 😳 Sure, nothing worked. No buttons, no interactivity. But it proved our app could be fast. 💡 The lesson? Stop making JavaScript do everything. 𝐒𝐨 𝐰𝐞 𝐫𝐞𝐛𝐮𝐢𝐥𝐭: ✅ JavaScript only where needed ✅ No unnecessary hydration ✅ No bloated client-side rendering 𝐓𝐡𝐞 𝐫𝐞𝐬𝐮𝐥𝐭? 🚀 From 30s to consistent 90+ scores 🚀 Faster load times 🚀 Better search engine visibility Sometimes, the problem isn’t a lack of optimization—it’s an excess of complexity. Not every app needs a heavy framework. Not every UI should be hydrated. If you’re struggling with performance, ask yourself: ❓ Do I really need this much JavaScript? ❓ Can I pre-render more? ❓ What happens if I strip everything back to basics? You might be surprised by what you find. 👀
