Enhancing Product Recommendations

Explore top LinkedIn content from expert professionals.

  • View profile for Damien Benveniste, PhD
    Damien Benveniste, PhD is an Influencer

    Founder @ TheAiEdge | Follow me to learn about Machine Learning Engineering, Machine Learning System Design, MLOps, and the latest techniques and news about the field.

    173,022 followers

    If you want to know where the money is in Machine Learning, look no further than Recommender Systems! Recommender systems are usually a set of Machine Learning models that rank items and recommend them to users. We care primarily about the top-ranked items; the rest are less critical. So if we want to assess the quality of a specific recommendation, typical ML metrics may be less relevant.

    Take the results of a Google search query, for example. All the results are somewhat relevant, but we need to make sure that the most relevant items are at the top of the list. To capture the level of relevance, it is common to hire human labelers to rate search results. It is a very expensive process and can be quite subjective since it involves humans. For example, we know that Google ran 757,583 search quality tests with human raters in 2021: https://lnkd.in/gYqmmT2S

    Normalized Discounted Cumulative Gain (NDCG) is a common metric for exploiting relevance measured on a continuous spectrum. Let's break that metric down.

    Using the relevance labels, we can compute different metrics to measure the quality of a recommendation. Cumulative gain (CG) answers the question: how much relevance is contained in the recommended list? To get a quantitative answer, we simply add up the relevance scores provided by the labeler:

    CG = relevance 1 + relevance 2 + ...

    The problem with cumulative gain is that it doesn't take the position of the results into account: any ordering gives the same value, yet we want the most relevant items at the top. Discounted cumulative gain (DCG) discounts each relevance score based on its position in the list. The discount is usually logarithmic, with the position shifted by one so the first position is not divided by log(1) = 0; other monotonically increasing functions could be used:

    DCG = relevance 1 / log2(1 + 1) + relevance 2 / log2(2 + 1) + ...

    DCG is quite dependent on the specific values used to describe relevance. Even with strict guidelines, some labelers may use high numbers and others low numbers. To put different DCG values on the same scale, we normalize by the highest value DCG can take, which corresponds to the ideal ordering of the recommended items. We call the DCG of that ideal ordering the Ideal Discounted Cumulative Gain (IDCG). The Normalized Discounted Cumulative Gain (NDCG) is the normalized DCG:

    NDCG = DCG / IDCG

    If the relevance scores are all positive, NDCG lies in the range [0, 1], where 1 means the ideal ordering of the recommendation (a small worked example follows below). #MachineLearning #DataScience #ArtificialIntelligence
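    To make the metric concrete, here is a minimal sketch in Python. It assumes the common log2(position + 1) discount; the relevance scores are made-up labeler ratings for illustration.

    ```python
    import numpy as np

    def dcg(relevances):
        """Discounted cumulative gain: each relevance score is discounted
        by log2(position + 1), so position 1 is divided by log2(2) = 1."""
        rel = np.asarray(relevances, dtype=float)
        positions = np.arange(1, len(rel) + 1)
        return float(np.sum(rel / np.log2(positions + 1)))

    def ndcg(relevances):
        """Normalized DCG: DCG divided by the DCG of the ideal ordering."""
        ideal = np.sort(relevances)[::-1]  # best possible ranking
        idcg = dcg(ideal)
        return dcg(relevances) / idcg if idcg > 0 else 0.0

    # Hypothetical labeler ratings for a ranked result list (higher = better).
    scores = [3, 2, 3, 0, 1]
    print(round(ndcg(scores), 3))              # ~0.972: good but not ideal
    print(ndcg(sorted(scores, reverse=True)))  # 1.0: ideal ordering
    ```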

  • View profile for Shubhangi Madan
    Shubhangi Madan is an Influencer

    Co-founder @The People Company | Linkedin Top Voice | Personal Brand Strategist | Linkedin Ghostwriter & Organic Growth Marketer 🚀 | Content Management | 200M+ Client Views | Publishing Daily for next 350 Days

    121,465 followers

    𝗬𝗼𝘂𝗿 𝗽𝗿𝗼𝗳𝗶𝗹𝗲 𝗶𝘀 𝘆𝗼𝘂𝗿 𝗳𝗶𝗿𝘀𝘁 𝗶𝗺𝗽𝗿𝗲𝘀𝘀𝗶𝗼𝗻 𝗮𝗻𝗱 𝗺𝗼𝘀𝘁 𝗽𝗲𝗼𝗽𝗹𝗲 𝗴𝗲𝘁 𝗶𝘁 𝘄𝗿𝗼𝗻𝗴. We all want to look good online. But there's a fine line between polished and forgettable. Here's how to stand out and stay true to you:

    📸 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝘆𝗼𝘂𝗿 𝗽𝗵𝗼𝘁𝗼. It's the first thing people notice, so make it count. Use a clear headshot. Show your personality. A genuine smile works wonders.

    📝 𝗖𝗿𝗮𝗳𝘁 𝗮 𝘀𝘁𝗿𝗼𝗻𝗴 𝗵𝗲𝗮𝗱𝗹𝗶𝗻𝗲. Think of it like your elevator pitch: what you do + who you help + how you help. Keep it clear and direct.

    📖 𝗪𝗿𝗶𝘁𝗲 𝗮𝗻 𝗔𝗯𝗼𝘂𝘁 𝘀𝗲𝗰𝘁𝗶𝗼𝗻 𝘁𝗵𝗮𝘁 𝘁𝗲𝗹𝗹𝘀 𝘆𝗼𝘂𝗿 𝘀𝘁𝗼𝗿𝘆. Talk about your journey, skills, and goals, but let your personality shine. People connect with people, not buzzwords.

    🎯 𝗣𝗮𝘆 𝗮𝘁𝘁𝗲𝗻𝘁𝗶𝗼𝗻 𝘁𝗼 𝘁𝗵𝗲 𝘀𝗺𝗮𝗹𝗹 𝘀𝘁𝘂𝗳𝗳. Claim a custom LinkedIn URL. Add a banner that reflects who you are. Use keywords thoughtfully, but never at the cost of authenticity.

    The best profiles do more than list achievements. They tell a story. They teach, inspire, and even entertain.

    𝗕𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲? Don't copy someone else's style. Be the profile that only you can write. When you show up as your real self, flaws and all, people notice. And they remember.

    𝗔𝗹𝘀𝗼, 𝗜 𝗮𝗺 𝗼𝗻 𝗮 𝘀𝘁𝗿𝗲𝗮𝗸 𝘁𝗼 𝗽𝘂𝗯𝗹𝗶𝘀𝗵 𝗱𝗮𝗶𝗹𝘆, 𝗮𝗻𝗱 𝘁𝗼𝗱𝗮𝘆 𝗶𝘀 𝗗𝗮𝘆 𝟭𝟵𝟭/𝟯𝟱𝟬.

    𝗣.𝗦. 𝗜 𝗵𝗲𝗹𝗽 𝗳𝗶𝗻𝗮𝗻𝗰𝗲 𝗰𝗿𝗲𝗮𝘁𝗼𝗿𝘀, 𝗳𝗼𝘂𝗻𝗱𝗲𝗿𝘀, 𝗖𝗫𝗢𝘀, 𝗮𝗻𝗱 𝗰𝗼𝗮𝗰𝗵𝗲𝘀 𝗴𝗿𝗼𝘄 𝗼𝗻 𝗟𝗶𝗻𝗸𝗲𝗱𝗜𝗻 𝘄𝗶𝘁𝗵 𝗽𝗼𝘄𝗲𝗿𝗳𝘂𝗹 𝗰𝗼𝗻𝘁𝗲𝗻𝘁. 𝗗𝗠 𝗺𝗲, 𝗮𝗻𝗱 𝗹𝗲𝘁'𝘀 𝗺𝗮𝗸𝗲 𝗶𝘁 𝗵𝗮𝗽𝗽𝗲𝗻.

  • View profile for Pau Labarta Bajo
    Pau Labarta Bajo is an Influencer

    Building and teaching AI that works > Maths Olympian > Father of 1... sorry, 2 kids

    68,211 followers

    What is an 𝗔/𝗕 𝘁𝗲𝘀𝘁 and why do you need to master it as an 𝗠𝗟 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿? An A/B test is a testing strategy that helps you decide if an ML model is good enough to be used in production.

    But wait a second. Isn't a low test error (e.g. low mean absolute error) enough to decide if the model is good or not? 🤔 Well, nope. Why? For two reasons.

    𝗥𝗲𝗮𝘀𝗼𝗻 𝟭 → An ML system is way more than just the model artifact you generated with your training script. Test metrics (like test mean absolute error) only verify that your model artifact is okay. The rest of the system remains untested.

    𝗥𝗲𝗮𝘀𝗼𝗻 𝟮 → An end-2-end ML app has to move business metrics. Otherwise, it produces no value for the business. Checking your standard ML metrics is often not enough to guarantee that your overall business impact is positive.

    So the question is: is there a way to evaluate the business impact of the entire ML system? Yes. This is where the A/B test comes in.

    𝗔/𝗕 𝘁𝗲𝘀𝘁 → An A/B test is a standard strategy to test software releases. It helps you answer the question: is the new version of my app better than the previous one?

    𝗛𝗼𝘄 𝗱𝗼𝗲𝘀 𝗶𝘁 𝘄𝗼𝗿𝗸? Here is an example. Imagine you work at Spotify and you have developed a new (and maybe better) ML system to recommend songs to users (aka a recommender system). Your ML system is better only if it increases click-through rate (CTR), that is, users like its recommendations more than the old system's, so they click and listen to the songs.

    To run an A/B test, you first randomly split the user base into 2 groups:
    → Group A (aka the control group) receives recommendations from the old system.
    → Group B (aka the test group) receives recommendations from the new system you developed.

    You set an experiment duration (e.g. 2 weeks) that is long enough for the CTRs you observe for groups A and B to be reliable, so you can compare them and draw statistically valid conclusions. The more samples you have, the more reliable the comparison, but the longer you need to wait. To find the right experiment duration you can use a calculator like this → https://lnkd.in/dUzp7mDw

    At the end of the experiment, if your test group shows a significantly better CTR than the control group, you can deploy the new version of the model to the entire user base (see the sketch below). 𝗧𝗵𝗶𝘀 𝗶𝘀 𝗵𝗼𝘄 𝘆𝗼𝘂 𝗱𝗲𝗽𝗹𝗼𝘆 𝗠𝗟 𝗺𝗼𝗱𝗲𝗹𝘀 𝘀𝗮𝗳𝗲𝗹𝘆.

    ----------

    Hi there! It's Pau 👋 Every week I share free, hands-on content on production-grade ML to help you build real-world ML products. 𝗙𝗼𝗹𝗹𝗼𝘄 𝗺𝗲 and 𝗰𝗹𝗶𝗰𝗸 𝗼𝗻 𝘁𝗵𝗲 🔔 so you don't miss what's coming next. #machinelearning #mlops #realworldml
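    To make the decision rule concrete, here is a minimal sketch of the final significance check as a one-sided two-proportion z-test in Python. The click and view counts are hypothetical.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def ab_test_ctr(clicks_a, views_a, clicks_b, views_b, alpha=0.05):
        """One-sided two-proportion z-test: is group B's CTR significantly
        higher than group A's?"""
        p_a, p_b = clicks_a / views_a, clicks_b / views_b
        # Pooled CTR under the null hypothesis (no difference between groups).
        p_pool = (clicks_a + clicks_b) / (views_a + views_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
        z = (p_b - p_a) / se
        p_value = 1 - norm.cdf(z)  # H1: CTR_B > CTR_A
        return p_a, p_b, p_value, p_value < alpha

    # Hypothetical two-week experiment: old recommender (A) vs. new one (B).
    ctr_a, ctr_b, p, ship_it = ab_test_ctr(4_300, 100_000, 4_650, 100_000)
    print(f"CTR A={ctr_a:.2%}, CTR B={ctr_b:.2%}, p={p:.5f}, deploy: {ship_it}")
    ```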

  • View profile for Omkar Sawant
    Omkar Sawant is an Influencer

    Helping Startups Grow @Google | Ex-Microsoft | IIIT-B | Data Analytics | AI & ML | Cloud Computing | DevOps

    14,981 followers

    In today's data-driven world, organizations are sitting on a treasure trove of information. But what good is all that data if you can't find what you need? That's where multimodal search comes in. Multimodal search lets you search for information using images and videos, which can be a game-changer for organizations with a lot of visual content, such as retail, media, and healthcare.

    𝐇𝐞𝐫𝐞'𝐬 𝐚 𝐬𝐜𝐞𝐧𝐚𝐫𝐢𝐨: Imagine you're a retail company with a large library of product images. A customer comes to you with a picture of a product they want to buy, but they don't know its name. With multimodal search, you can easily find the product in your inventory, even if it's not labeled correctly.

    𝐇𝐨𝐰 𝐝𝐨𝐞𝐬 𝐢𝐭 𝐰𝐨𝐫𝐤? Multimodal search combines natural language processing (NLP), BigQuery, and embeddings to build a system that can understand the meaning of images and videos (see the sketch after this list).

    👉 Embeddings: The system first creates numerical representations of the content, called embeddings. These capture the essential features of an image or video, such as the colors, shapes, and objects that appear in it.

    👉 Vector Index: A vector index is then built to allow efficient searching. The index is like a giant dictionary that maps embeddings back to the images and videos they represent.

    👉 Query Embeddings: Finally, the user's query is turned into an embedding and compared against the indexed embeddings to find similar images and videos.

    𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬 𝐟𝐨𝐫 𝐎𝐫𝐠𝐚𝐧𝐢𝐳𝐚𝐭𝐢𝐨𝐧𝐬:

    👉 Improved Search Accuracy: Multimodal search can help organizations find the information they need more quickly and accurately, leading to increased productivity and efficiency.

    👉 Enhanced Customer Experience: Multimodal search gives customers a more intuitive and engaging way to find the products they're looking for, which can increase customer satisfaction and loyalty.

    👉 New Insights from Data: Multimodal search can help organizations unlock new insights from their data. For example, you can use it to identify trends in customer behavior or to discover new product opportunities.

    𝐌𝐨𝐫𝐞 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧 𝐡𝐞𝐫𝐞: https://lnkd.in/dVwhDJkH If you're interested in learning more about multimodal search, check out the blog post linked above. It provides a step-by-step guide on how to implement a similar solution. Follow Omkar Sawant for more! #multimodalsearch #nlp #bigquery #embeddings #dataanalytics #bigdata #innovation #datascience #machinelearning #artificialintelligence
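    As a rough illustration of the three steps above, here is a toy sketch in Python. The `embed_image` function is a hypothetical stand-in for whatever multimodal embedding model you use, and real systems replace the brute-force scan with an approximate nearest-neighbor index.

    ```python
    import numpy as np

    def build_index(embeddings):
        """Toy 'vector index': a matrix of L2-normalized item embeddings."""
        m = np.asarray(embeddings, dtype=float)
        return m / np.linalg.norm(m, axis=1, keepdims=True)

    def search(index, query_embedding, top_k=5):
        """Rank indexed items by cosine similarity to the query embedding."""
        q = np.asarray(query_embedding, dtype=float)
        q = q / np.linalg.norm(q)
        scores = index @ q
        top = np.argsort(scores)[::-1][:top_k]
        return list(zip(top.tolist(), scores[top].round(3).tolist()))

    # Usage sketch (embed_image is hypothetical):
    # index = build_index([embed_image(img) for img in product_images])
    # results = search(index, embed_image(customer_photo))  # (item_id, score)
    ```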

  • View profile for Kuldeep Singh Sidhu
    Kuldeep Singh Sidhu is an Influencer

    Senior Data Scientist @ Walmart | BITS Pilani

    13,118 followers

    I just came across a groundbreaking paper titled "Benchmarking LLMs in Recommendation Tasks: A Comparative Evaluation with Conventional Recommenders" that provides comprehensive insights into how large language models (LLMs) perform in recommendation tasks. The researchers from The Hong Kong Polytechnic University, Huawei Noah's Ark Lab, Nanyang Technological University, and the National University of Singapore have developed RecBench, a systematic evaluation platform that thoroughly assesses the capabilities of LLMs in recommendation scenarios.

    >> Key Technical Insights:

    The benchmark evaluates several item representation forms:
    - Unique identifiers (the traditional approach)
    - Text representations (using item descriptions)
    - Semantic embeddings (leveraging pre-trained LLM knowledge)
    - Semantic identifiers (using discrete encoding techniques like RQ-VAE)

    The study covers two critical recommendation tasks:
    - Click-through rate (CTR) prediction (pair-wise recommendation)
    - Sequential recommendation (list-wise recommendation)

    Their extensive experiments evaluated 17 different LLMs across five diverse datasets from the fashion, news, video, books, and music domains. The results are eye-opening:
    - LLM-based recommenders outperform conventional recommenders by up to 5% AUC improvement in CTR prediction and a staggering 170% NDCG@10 improvement in sequential recommendation.
    - However, these performance gains come with significant computational costs, making real-time deployment challenging.
    - Conventional deep-learning recommenders enhanced with LLM support can achieve 95% of standalone LLM performance while being thousands of times faster.

    Under the hood, the researchers implemented a conditional beam search technique for semantic identifier-based models to ensure valid item recommendations. They also employed low-rank adaptation (LoRA) for parameter-efficient fine-tuning of the large models (a sketch of that setup follows below).

    Most interestingly, they found that while most LLMs have limited zero-shot recommendation abilities, models like Mistral, GLM, and Qwen-2 performed significantly better, likely due to exposure to more implicit recommendation signals during pre-training.

    This research opens exciting avenues for recommendation system development while highlighting the need for inference acceleration techniques to make LLM-based recommenders practical for industrial applications.
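    For readers curious what a LoRA setup like the one mentioned above typically looks like, here is a minimal sketch using Hugging Face's peft library. The model name and hyperparameters are illustrative assumptions, not the paper's settings.

    ```python
    # pip install transformers peft
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Illustrative base model; the paper benchmarks several different LLMs.
    base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

    config = LoraConfig(
        r=8,                                  # rank of the low-rank updates
        lora_alpha=16,                        # scaling factor for the updates
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # only the small adapter weights train
    ```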

  • View profile for Aakash Gupta
    Aakash Gupta is an Influencer

    The AI PM Guy 🚀 | Helping you land your next job + succeed in your career

    290,385 followers

    Your LinkedIn profile is a 24/7 inbound job magnet if you set it up right! It's an opportunity to have the hottest companies and hiring managers chasing you rather than you running after them. Impossible? Hell no. It's how I got my senior product position at Affirm, and the same story for VP of Product at Apollo. Here's the complete guide to converting your LinkedIn profile into a job-attracting asset:

    𝟭. 𝗛𝗘𝗔𝗗𝗟𝗜𝗡𝗘
    Don't use a generic headline template mentioning your job title and company name.
    ↳ Highlight your expertise or niche.
    ↳ Mention companies for credibility.
    ↳ Add a secondary offer: are you a coach, speaker, or consultant?
    ↳ Example: "Senior Product Manager @ TechCo | Driving B2B SaaS Growth 🚀 | Ex-Google, Ex-Amazon | Product Leadership Coach"

    𝟮. 𝗔𝗕𝗢𝗨𝗧 𝗠𝗘
    Think of your "About" section as your personal story.
    ↳ Open with an experience summary showcasing your value.
    ↳ Use storytelling to highlight your key achievements (don't forget to mention numbers/results) with a personal touch.
    ↳ Wrap up by stating what kind of roles or challenges you're interested in next.

    𝟯. 𝗣𝗥𝗢𝗙𝗜𝗟𝗘 𝗣𝗜𝗖𝗧𝗨𝗥𝗘 𝗔𝗡𝗗 𝗖𝗢𝗩𝗘𝗥 𝗜𝗠𝗔𝗚𝗘
    How people perceive you depends a lot on how you visually present yourself. Here's how to do it right:
    ↳ Use a high-quality, professional headshot. Use AI if you don't have a good photo.
    ↳ Don't waste your cover photo on vague quotes; use it to highlight your achievements, awards, reviews, products, etc.

    𝟰. 𝗘𝗫𝗣𝗘𝗥𝗜𝗘𝗡𝗖𝗘
    Your experience section is where the real depth comes in.
    ↳ Go beyond job duties and focus on the specific results and outcomes you achieved.
    ↳ Use the Situation, Action, Result (SAR) framework to highlight what you did and the impact it made (e.g., "Increased customer retention by 25% in 6 months").
    ↳ Use industry-specific keywords so recruiters can easily find you in searches.

    𝟱. 𝗔𝗗𝗩𝗔𝗡𝗖𝗘𝗗 𝗦𝗘𝗧𝗧𝗜𝗡𝗚𝗦
    ↳ Simplify your LinkedIn URL (e.g., linkedin.com/in/YourName) with a custom URL.
    ↳ Add a link to your portfolio, website, or a side project directly in your profile.
    ↳ Regularly review your contact info and make it easy for recruiters to reach out to you.

    𝟲. 𝗥𝗘𝗖𝗢𝗠𝗠𝗘𝗡𝗗𝗔𝗧𝗜𝗢𝗡𝗦
    Think of recommendations as built-in references that add credibility to your profile.
    ↳ Reach out to people who can specifically highlight your key skills and achievements.
    ↳ Aim for a variety of recommendations: managers, colleagues, and clients.
    ↳ Pin your top 2-3 recommendations.

    𝟳. 𝗦𝗞𝗜𝗟𝗟𝗦
    The "Skills" section helps you appear in searches and validates your expertise:
    ↳ Choose skills that define your professional strengths, and pin your top 3.
    ↳ Take LinkedIn skill assessments to add credibility with "verified" badges.

    If you want to dive deeper into how to do it all, with real examples and breakdowns, check out the guide below in the comments.

  • View profile for Leslie Venetz
    Leslie Venetz is an Influencer

    Sales Strategy & Training for Outbound Orgs | SKO & Keynote Speaker | 2024 Sales Innovator of the Year | Top 50 USA Today Bestselling Author - Profit Generating Pipeline ✨#EarnTheRight✨

    51,970 followers

    Stop pretending that a single data point from a prospect's LinkedIn profile defines them. When you do that, your outreach sounds like you're reading the news to them: "Congrats on XYZ" or "I see that you're the [title] at ABC." You can't tell your prospects sh*t they already know and expect them to care.

    When you use a single data point as the cornerstone of your outreach, it's a telltale sign that you don't really understand your buyers, the challenges they're facing, or the opportunities they are excited about. Elite sellers understand how to uncover a more complete view of their prospects. More importantly, they understand how those data points come together.

    My go-to way to understand how prospects are interacting with me across multiple channels is the Members Dashboard in Common Room. The 3 things I love most about this dashboard are:

    1. It ranks prospects based on their overall impact in my ecosystem. I can see which people or orgs are most engaged with my content, across multiple channels, in a meaningful way.

    2. I get a view beyond LinkedIn. I have my YouTube, X (Twitter), and company LinkedIn pages integrated, as well as Slack for my Business Book Club community AND HubSpot. I can pull in so much data that is relevant to me and the folks interacting with me to figure out what matters TO THEM! P.S. The enterprise integrations are even better than the stuff I use as a solopreneur. It's impressive.

    3. The tags. For instance, the first person in this list is tagged as an economic buyer. This happens automatically; I didn't have to do that work. They are also tagged as a pioneer, meaning they are the first person from that org to engage with my content.

    What this quick view tells me is that I have an economic buyer, a CRO, who is new in seat and is talking online about building a tech stack. They are engaging with me across LinkedIn, and they are a member of my Slack community. The timing is ideal to connect to better understand their vendor selection process.

    You can filter to only see economic buyers or other tags, or filter to only view the specific channels where you know most revenue is attributed.

    The result? Instead of reaching out to a prospect with disingenuous personalization, I have an immediate view of the conversations they are having across social channels that relate to me. It's advanced social listening + identity resolution + person-level AND account-level AND org-level enrichment based on a multitude of signals. It's a true 360 view. It allows me to have a more complete view of what's going on in a prospect's world before I reach out, which increases engagement and conversion rates significantly.

    If this has sparked your interest, read this blog about how to uncover the person behind the data points: https://lnkd.in/gEv26z6k

  • View profile for Pan Wu
    Pan Wu is an Influencer

    Senior Data Science Manager at Meta

    49,789 followers

    Recommendations are a powerful tool for e-commerce sites to boost sales by helping customers discover relevant products and encouraging additional purchases. By offering well-curated product bundles and personalized suggestions, these platforms can improve the customer experience and drive higher conversion rates.

    In a recent blog post, the CVS Health data science team shares how they explored advanced machine learning capabilities to develop new recommendation prototypes. Their objective is to create high-quality product bundles, making it easier for customers to select complementary products to purchase together. For instance, bundles like a "Travel Kit" with a neck pillow, travel adapter, and toiletries can simplify purchasing decisions.

    The implementation includes several components, with a key part being the creation of product embeddings using a Graph Neural Network (GNN) to represent product similarity. Notably, rather than using traditional co-view or co-purchase data, the team leveraged GPT-4 to directly identify the top complementary segments as labels for the GNN model. This approach has proven effective in improving recommendation accuracy. With these product embeddings in place, the bundle recommendations are further refined by incorporating user-specific data based on recent purchase patterns, resulting in more personalized suggestions (a toy sketch of embedding-based bundle retrieval follows below).

    As large language models (LLMs) become increasingly adept at mimicking human decision-making, using them to enhance labeling quality and streamline insights in machine learning workflows is becoming more popular. For those interested, this is an excellent case study to explore.

    #machinelearning #datascience #ChatGPT #LLMs #recommendation #personalization #SnacksWeeklyOnDataScience

    Check out the "Snacks Weekly on Data Science" podcast and subscribe; I explain the concepts discussed in this and future posts in more detail:
    -- Spotify: https://lnkd.in/gKgaMvbh
    -- Apple Podcast: https://lnkd.in/gj6aPBBY
    -- YouTube: https://lnkd.in/gcwPeBmR
    https://lnkd.in/gb6UPaFA
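    As a toy illustration of how learned product embeddings can seed bundles (this is not CVS's pipeline; the vectors below merely stand in for GNN-learned embeddings), nearest neighbors of an anchor product become bundle candidates:

    ```python
    import numpy as np

    def bundle_candidates(anchor_id, item_embeddings, top_k=2):
        """Rank other items by cosine similarity to the anchor product."""
        anchor = item_embeddings[anchor_id]
        anchor = anchor / np.linalg.norm(anchor)
        sims = [
            (item, float(anchor @ (vec / np.linalg.norm(vec))))
            for item, vec in item_embeddings.items() if item != anchor_id
        ]
        return sorted(sims, key=lambda t: -t[1])[:top_k]

    # Toy vectors standing in for learned complementary-product embeddings.
    emb = {
        "neck_pillow":    np.array([0.9, 0.1, 0.0]),
        "travel_adapter": np.array([0.8, 0.2, 0.1]),
        "toiletries_kit": np.array([0.7, 0.3, 0.0]),
        "lawn_mower":     np.array([0.0, 0.1, 0.9]),
    }
    print(bundle_candidates("neck_pillow", emb))  # travel items rank first
    ```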

  • View profile for Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice | Founder: AHT Group - Informivity - Bondi Innovation

    33,854 followers

    Most companies are using AI for efficiency. Some are accelerating value creation. A great case study is how Colgate-Palmolive is driving innovation. Here are specific ways they are embedding GenAI across innovation processes to substantially improve research and product development. These come from an excellent article in MIT Sloan Management Review by Tom Davenport and Randy Bean (link in comments).

    💡 AI-Driven Product Concept Generation Accelerates Ideation
    By linking one AI system that surfaces consumer needs with another that crafts product concepts, Colgate-Palmolive can swiftly generate creative ideas like novel toothpaste flavors. This AI-augmented workflow produces a broader product funnel and allows rapid iteration, enabling more employees to participate in the innovation process under guided human oversight.

    🔍 Retrieval-Augmented Generation Enhances Data Reliability
    The firm's use of retrieval-augmented generation (RAG) integrates company-specific research, syndicated data, and real-time trends from sources like Google search data. This approach minimizes the risk of hallucinations and ensures that responses are grounded in verified, internal content, delivering more accurate market analysis and trend detection (a minimal sketch of the RAG pattern follows at the end of this post).

    🤖 Digital Consumer Twins Validate and Refine Concepts
    Moving beyond traditional focus groups, the company has developed "digital consumer twins": virtual representations of real consumer behavior. These digital twins rapidly test hundreds of AI-generated product ideas. Early evaluations show a high level of agreement between virtual feedback and actual consumer responses. This innovation speeds up early-stage concept validation and reduces reliance on slower, more limited human panels.

    🔐 Democratizing AI Through a Secure Internal AI Hub
    Colgate-Palmolive's AI Hub provides employees with controlled access to advanced AI tools (including models from OpenAI and Google) behind corporate firewalls. Mandatory training on responsible AI use, including guardrails and prompt engineering best practices, ensures that employees harness these tools safely and effectively. Built-in surveys and KPI tracking further enable the company to measure improvements in creativity, productivity, and overall work quality.

    🌐 Bridging Traditional Analytics with Next-Gen AI for Measurable Impact
    By integrating traditional machine learning with cutting-edge generative AI, Colgate-Palmolive is not only boosting operational efficiencies but also driving strategic growth. This blend supports tasks ranging from market research and innovation to marketing content creation, demonstrating a holistic, value-driven approach to adopting AI that is a model for other organizations.
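    For readers who want the general shape of the RAG pattern described above, here is a minimal sketch. `embed` and `llm_complete` are hypothetical stand-ins for an embedding model and an LLM API, and the grounding prompt is an assumption, not Colgate-Palmolive's implementation.

    ```python
    import numpy as np

    def retrieve(query_vec, doc_vecs, docs, top_k=3):
        """Return the documents whose embeddings are closest to the query."""
        sims = (doc_vecs @ query_vec) / (
            np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
        )
        return [docs[i] for i in np.argsort(sims)[::-1][:top_k]]

    def answer(question, docs, doc_vecs, embed, llm_complete):
        """Ground the LLM's answer in retrieved internal content."""
        context = "\n".join(retrieve(embed(question), doc_vecs, docs))
        prompt = (
            "Answer using ONLY the context below. If the context does not "
            f"contain the answer, say so.\n\nContext:\n{context}\n\n"
            f"Question: {question}"
        )
        return llm_complete(prompt)  # grounded response, fewer hallucinations
    ```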

  • View profile for Prashanthi Ravanavarapu
    Prashanthi Ravanavarapu is an Influencer

    VP of Product, Sustainability, Workiva | Product Leader Driving Excellence in Product Management, Innovation & Customer Experience

    15,256 followers

    What if we reimagined the Double Diamond through the lens of Jobs-to-be-Done? 🤔

    Product management is about mastering various methodologies and knowing when to apply them. No single framework fits all scenarios; the key is understanding how different approaches can complement each other to drive better outcomes.

    I have been learning and practicing the art and science of innovation through the concepts of JTBD, Human-Centered Design, Design Thinking, Customer-Driven Innovation, Continuous Discovery, Product Discovery, Lean, and more. I've found these methodologies aren't just related; they're deeply interconnected pieces of the same puzzle.

    I took the classic Double Diamond design thinking framework and applied JTBD to it, and here is how it looks in my view. While the Double Diamond model divides the journey into Problem → Solution spaces, the evolved version speaks the language of jobs and outcomes.

    💎 Left Diamond: Transformed from problem-finding to "Jobs & Outcomes", focusing on understanding what customers are trying to achieve in their contexts.

    🌉 The Bridge: "Opportunity Statements" replace "Problem Definition", shifting from fixing issues to unlocking potential. Opportunity statements are what Tony Ulwick calls "Hidden Growth Opportunities". These statements guide our innovation direction.

    💎 Right Diamond: Maintains the Design/Develop and Iterate/Deliver phases, but shifts the validation focus to measuring how effectively we enable customers to achieve their desired outcomes.

    This framework moves beyond problem-solution thinking to create value through a deep understanding of customer progress and success metrics in the form of jobs and outcomes.

    Have you integrated different innovation frameworks in your work? What have you learned? Would love to hear your experiences! #innovation #JTBD #designthinking #productdiscovery
