Cloud Migration Challenges and Solutions

Explore top LinkedIn content from expert professionals.

  • View profile for Sean Connelly🦉

    Zscaler | Fmr CISA - Zero Trust Director & TIC Program Manager | CCIEx2, MS-IST, CISSP

    21,705 followers

    🚨NSA Releases Guidance on Hybrid and Multi-Cloud Environments🚨

    The National Security Agency (NSA) recently published an important Cybersecurity Information Sheet (CSI): "Account for Complexities Introduced by Hybrid Cloud and Multi-Cloud Environments." As organizations increasingly adopt hybrid and multi-cloud strategies to enhance flexibility and scalability, understanding the complexities of these environments is crucial for securing digital assets. This CSI provides a comprehensive overview of the unique challenges presented by hybrid and multi-cloud setups.

    Key Insights Include:
    🛠️ Operational Complexities: Addressing the knowledge and skill gaps that arise from managing diverse cloud environments and the potential for security gaps due to operational silos.
    🔗 Network Protections: Implementing Zero Trust principles to minimize data flows and secure communications across cloud environments.
    🔑 Identity and Access Management (IAM): Ensuring robust identity management and access control across cloud platforms, adhering to the principle of least privilege.
    📊 Logging and Monitoring: Centralizing log management for improved visibility and threat detection across hybrid and multi-cloud infrastructures.
    🚑 Disaster Recovery: Utilizing multi-cloud strategies to ensure redundancy and resilience, facilitating rapid recovery from outages or cyber incidents.
    📜 Compliance: Applying policy as code to ensure uniform security and compliance practices across all cloud environments.

    The guide also emphasizes the strategic use of Infrastructure as Code (IaC) to streamline cloud deployments and the importance of continuous education to keep pace with evolving cloud technologies. As organizations navigate the complexities of hybrid and multi-cloud strategies, this CSI provides valuable insights into securing cloud infrastructures against the backdrop of increasing cyber threats. Embracing these practices not only fortifies defenses but also ensures a scalable, compliant, and efficient cloud ecosystem.

    Read NSA's full guidance here: https://lnkd.in/eFfCSq5R

    #cybersecurity #innovation #ZeroTrust #cloudcomputing #programming #future #bigdata #softwareengineering
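
    A minimal sketch of the "policy as code" idea mentioned above, assuming resource inventories have already been exported as plain Python dicts; the field names (resource_type, encrypted, public_access) and the two rules are illustrative assumptions, not part of the NSA guidance.

    ```python
    # Illustrative sketch only: a minimal "policy as code" check, assuming
    # resource inventories are available as plain dicts. Field names
    # (resource_type, encrypted, public_access) are hypothetical.

    POLICIES = [
        ("storage must be encrypted",
         lambda r: r["resource_type"] != "storage" or r.get("encrypted", False)),
        ("no resource may allow public access",
         lambda r: not r.get("public_access", False)),
    ]

    def evaluate(resources):
        """Return a list of (resource_id, violated_policy) pairs."""
        violations = []
        for resource in resources:
            for description, rule in POLICIES:
                if not rule(resource):
                    violations.append((resource["id"], description))
        return violations

    if __name__ == "__main__":
        inventory = [
            {"id": "bucket-1", "resource_type": "storage", "encrypted": True},
            {"id": "bucket-2", "resource_type": "storage", "public_access": True},
        ]
        for resource_id, policy in evaluate(inventory):
            print(f"{resource_id}: violates '{policy}'")
    ```

    The same check can run in a CI pipeline against IaC plans or exported inventories from each cloud, which is what makes the practice uniform across environments.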

  • View profile for Chandresh Desai

    I help Transformation Directors at global enterprises reduce cloud & technology costs by 30%+ through FinOps, Cloud Architecture, and AI-led optimization | Cloud & Application Architect | DevOps | FinOps | AWS | Azure

    125,693 followers

    𝐎𝐧-𝐩𝐫𝐞𝐦𝐢𝐬𝐞 𝐭𝐨 𝐂𝐥𝐨𝐮𝐝 𝐌𝐈𝐠𝐫𝐚𝐭𝐢𝐨𝐧 𝐬𝐭𝐫𝐚𝐭𝐞𝐠𝐲❗

    Cloud migration strategy involves a comprehensive plan for moving data, applications, and other business elements from an on-premise computing environment to the cloud, or from one cloud environment to another. The strategy is crucial for organizations looking to leverage the scalability, flexibility, and efficiency benefits of cloud computing. A well-defined cloud migration strategy should encompass several key components and phases:

    𝟏. 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭 𝐚𝐧𝐝 𝐏𝐥𝐚𝐧𝐧𝐢𝐧𝐠
    Evaluate Business Objectives: Understand the reasons behind the migration, whether it's cost reduction, enhanced scalability, improved reliability, or agility.
    Assess Current Infrastructure: Inventory existing applications, data, and workloads to determine what will move to the cloud and how.
    Choose the Right Cloud Model: Decide between public, private, or hybrid cloud models based on the organization's requirements.
    Identify the Right Cloud Provider: Evaluate cloud providers (like AWS, Azure, Google Cloud) based on compatibility, cost, services offered, and compliance with industry standards.

    𝟐. 𝐂𝐡𝐨𝐨𝐬𝐢𝐧𝐠 𝐚 𝐌𝐢𝐠𝐫𝐚𝐭𝐢𝐨𝐧 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲
    The "6 R's" are often considered when deciding on a migration strategy:
    Rehost (Lift and Shift): Moving applications and data to the cloud without modifications.
    Replatform (Lift, Tinker and Shift): Making minor adjustments to applications to optimize them for the cloud.
    Refactor: Re-architecting applications to fully exploit cloud-native features and capabilities.
    Repurchase: Moving to a different product, often a cloud-native service.
    Retain: Keeping certain elements in the existing environment if they are not suitable for cloud migration.
    Retire: Decommissioning and eliminating unnecessary resources.

    𝟑. 𝐌𝐢𝐠𝐫𝐚𝐭𝐢𝐨𝐧 𝐄𝐱𝐞𝐜𝐮𝐭𝐢𝐨𝐧
    Migrate Data: Use tools and services (like AWS Database Migration Service or Azure Migrate) to transfer data securely and efficiently.
    Migrate Applications: Based on the chosen strategy, move applications to the cloud environment.
    Testing: Conduct thorough testing to ensure applications and data work correctly in the new cloud environment.
    Optimization: Post-migration, optimize resources for performance, cost, and security.

    𝟒. 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐚𝐧𝐝 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞
    Implement Cloud Security Best Practices: Ensure the cloud environment adheres to industry security standards and best practices.
    Compliance: Ensure the migration complies with relevant regulations and standards (GDPR, HIPAA, etc.).

    𝟓. 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠
    Prepare Your Team: Train staff on cloud technologies and the new operating model to ensure smooth transition and operation.
    Adopt a Cloud-Native Approach: Encourage innovation and adoption of cloud-native services to enhance agility and efficiency.

    Tools and Services

    #cloudcomputing #cloudarchitect #cloudmigration #cloud
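
    As a rough illustration of how a "6 R's" triage can be expressed in code, here is a minimal Python sketch; the decision criteria and application attributes are assumptions chosen for the example, not a prescribed rule set.

    ```python
    # Minimal sketch of a "6 R's" triage helper. The criteria below are
    # illustrative assumptions; real assessments weigh many more factors
    # (compliance, latency, licensing, team skills, data gravity).

    from dataclasses import dataclass

    @dataclass
    class App:
        name: str
        business_value: str      # "low" | "medium" | "high"
        cloud_compatible: bool   # runs unchanged on cloud infrastructure
        saas_alternative: bool   # a managed/SaaS product could replace it
        end_of_life: bool        # scheduled for decommissioning

    def suggest_strategy(app: App) -> str:
        if app.end_of_life:
            return "Retire"
        if app.business_value == "low":
            return "Retain"
        if app.saas_alternative:
            return "Repurchase"
        if app.cloud_compatible:
            return "Rehost"      # or Replatform with minor optimization
        return "Refactor"

    apps = [
        App("legacy-reporting", "low", False, False, False),
        App("customer-portal", "high", True, False, False),
        App("old-crm", "medium", False, True, False),
    ]
    for app in apps:
        print(f"{app.name}: {suggest_strategy(app)}")
    ```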

  • View profile for Milan Jovanović

    Practical .NET and Software Architecture Tips | Microsoft MVP

    262,004 followers

    Still relying on services in your Application layer to perform logic? That’s a sign your domain model is doing too little. If your entities look like plain data containers, you’re working with an anemic domain model.

    Here’s how to refactor toward behavior-driven design:
    ✅ Push business logic into the domain
    ✅ Use methods that enforce invariants
    ✅ Make invalid states unrepresentable
    ✅ Rethink service boundaries

    This makes your code easier to maintain, test, and extend.

    📘 I walk through a full refactor, step by step → https://lnkd.in/eW_83_jy
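
    The post targets .NET, but the refactoring idea is language-agnostic. Below is a minimal Python sketch of the same principle, with assumed invariants (no edits after shipping, positive quantities, no empty shipments) standing in for real business rules.

    ```python
    # Sketch of a behavior-rich entity: business rules live on the entity
    # itself, and the entity refuses to enter an invalid state. The specific
    # invariants here are illustrative assumptions.

    class Order:
        def __init__(self, customer_id: str):
            self.customer_id = customer_id
            self._lines = []          # list of (sku, quantity) pairs
            self._shipped = False

        def add_line(self, sku: str, quantity: int) -> None:
            if self._shipped:
                raise ValueError("cannot modify an order that has shipped")
            if quantity <= 0:
                raise ValueError("quantity must be positive")
            self._lines.append((sku, quantity))

        def ship(self) -> None:
            if not self._lines:
                raise ValueError("cannot ship an empty order")
            self._shipped = True

    # The application layer now just coordinates; it no longer re-implements rules:
    order = Order("cust-42")
    order.add_line("SKU-1", 2)
    order.ship()
    ```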

  • View profile for Pooja Jain

    Storyteller | Lead Data Engineer@Wavicle| Linkedin Top Voice 2025,2024 | Globant | Linkedin Learning Instructor | 2xGCP & AWS Certified | LICAP’2022

    181,721 followers

    𝗔𝗻𝗸𝗶𝘁𝗮: You know 𝗣𝗼𝗼𝗷𝗮, last Monday our new data pipeline went live in the cloud and it failed terribly. I literally had an exhausting week fixing the critical issues.

    𝗣𝗼𝗼𝗷𝗮: Ohh, so don’t you use cloud monitoring for your data pipelines? From my experience, always start by tracking these four key metrics: latency, traffic, errors, and saturation. They tell you whether your pipeline is healthy, running smoothly, or hitting a bottleneck somewhere.

    𝗔𝗻𝗸𝗶𝘁𝗮: Makes sense. What tools do you use for this?

    𝗣𝗼𝗼𝗷𝗮: Depends on the cloud platform. For AWS, I use CloudWatch: it lets you set up dashboards, track metrics, and create alarms for failures or slowdowns. On Google Cloud, Cloud Monitoring (formerly Stackdriver) is awesome for custom dashboards and log-based metrics. For more advanced needs, tools like Datadog and Splunk offer real-time analytics, anomaly detection, and distributed tracing across services.

    𝗔𝗻𝗸𝗶𝘁𝗮: And what about data lineage tracking? When something goes wrong, it's always a nightmare trying to figure out which downstream systems are affected.

    𝗣𝗼𝗼𝗷𝗮: That's where things get interesting. You can implement custom logging to track data lineage and create dependency maps. If the customer data pipeline fails, you’ll immediately know that the segmentation, recommendation, and reporting pipelines might be affected.

    𝗔𝗻𝗸𝗶𝘁𝗮: And what about logging and troubleshooting?

    𝗣𝗼𝗼𝗷𝗮: Comprehensive logging is key. I make sure every step in the pipeline logs events with timestamps and error details. Centralized logging tools like the ELK stack or cloud-native solutions help with quick debugging. Plus, maintaining data lineage helps trace issues back to their source.

    𝗔𝗻𝗸𝗶𝘁𝗮: Any best practices you swear by?

    𝗣𝗼𝗼𝗷𝗮: Yes, here’s my mantra for keeping my weekends free from pipeline struggles:
    Set clear monitoring objectives—know what you want to track.
    Use real-time alerts for critical failures.
    Regularly review and update your monitoring setup as the pipeline evolves.
    Automate as much as possible to catch issues early.

    𝗔𝗻𝗸𝗶𝘁𝗮: Thanks, 𝗣𝗼𝗼𝗷𝗮! I’ll set up dashboards and alerts right away. Finally, we'll be proactive instead of reactive when it comes to pipeline issues!

    𝗣𝗼𝗼𝗷𝗮: Exactly. No more finding out about problems from angry business users. Monitoring will catch issues before they impact anyone downstream.

    In data engineering, a well-monitored pipeline isn’t just about catching errors—it’s about building trust in every insight you deliver.

    #data #engineering #reeltorealdata #cloud #bigdata
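
    As one possible implementation of the CloudWatch approach described above, here is a minimal boto3 sketch that emits latency, traffic, and error metrics at the end of a pipeline run; the namespace, metric names, and dimensions are assumptions for illustration, and alarms and dashboards would be configured separately in CloudWatch.

    ```python
    # Minimal sketch: publish custom pipeline health metrics to CloudWatch.
    # Namespace and metric names below are hypothetical.

    import time
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def report_run(pipeline: str, records: int, errors: int, started_at: float) -> None:
        latency_seconds = time.time() - started_at
        dimensions = [{"Name": "Pipeline", "Value": pipeline}]
        cloudwatch.put_metric_data(
            Namespace="DataPipelines",  # hypothetical namespace
            MetricData=[
                {"MetricName": "Latency", "Value": latency_seconds,
                 "Unit": "Seconds", "Dimensions": dimensions},
                {"MetricName": "RecordsProcessed", "Value": records,
                 "Unit": "Count", "Dimensions": dimensions},
                {"MetricName": "Errors", "Value": errors,
                 "Unit": "Count", "Dimensions": dimensions},
            ],
        )

    # Usage: call report_run("customer-segmentation", records=10_000, errors=0,
    # started_at=run_start) at the end of each run, then alarm on Errors > 0.
    ```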

  • View profile for Kevin Donovan

    Empowering Organizations with Enterprise Architecture | Digital Transformation | Board Leadership | Helping Architects Accelerate Their Careers

    17,547 followers

    𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗻𝗴 𝗖𝗹𝗼𝘂𝗱-𝗡𝗮𝘁𝗶𝘃𝗲 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲𝘀 𝘄𝗶𝘁𝗵 𝗟𝗲𝗴𝗮𝗰𝘆 𝗦𝘆𝘀𝘁𝗲𝗺𝘀: 𝗟𝗲𝘀𝘀𝗼𝗻𝘀 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗙𝗶𝗲𝗹𝗱

    In a recent engagement with a large financial services company, the goal was ambitious: 𝗺𝗼𝗱𝗲𝗿𝗻𝗶𝘇𝗲 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗼𝗳 𝗲𝗻𝗴𝗮𝗴𝗲𝗺𝗲𝗻𝘁 𝘁𝗼 𝗽𝗿𝗼𝘃𝗶𝗱𝗲 𝗮 𝗰𝘂𝘁𝘁𝗶𝗻𝗴-𝗲𝗱𝗴𝗲 𝗰𝘂𝘀𝘁𝗼𝗺𝗲𝗿 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲.

    𝙏𝙝𝙚 𝙘𝙖𝙩𝙘𝙝? Much of the critical functionality resided on mainframes—reliable but inflexible systems deeply embedded in their operations. They needed to innovate without sacrificing the stability of their legacy infrastructure.

    Many organizations face this challenge as they 𝗯𝗮𝗹𝗮𝗻𝗰𝗲 𝗺𝗼𝗱𝗲𝗿𝗻 𝗰𝗹𝗼𝘂𝗱-𝗻𝗮𝘁𝗶𝘃𝗲 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲𝘀 𝘄𝗶𝘁𝗵 𝗹𝗲𝗴𝗮𝗰𝘆 systems. While cloud-native solutions promise scalability and agility, legacy systems remain indispensable for core processes. Successfully integrating these two requires overcoming issues like 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲, 𝗰𝗼𝗻𝘁𝗿𝗼𝗹, and 𝗰𝗼𝗺𝗽𝗮𝘁𝗶𝗯𝗶𝗹𝗶𝘁𝘆 𝗴𝗮𝗽𝘀.

    Drawing from that experience and others, here are 📌 𝟯 𝗯𝗲𝘀𝘁 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀 I’ve found valuable when integrating legacy functionality with cloud-based services:

    𝟭 | 𝗔𝗱𝗼𝗽𝘁 𝗮 𝗛𝘆𝗯𝗿𝗶𝗱 𝗠𝗼𝗱𝗲𝗹
    Transition gradually by adopting hybrid architectures. Retain critical legacy functions on-premises while deploying new features to the cloud, allowing both environments to work in tandem.

    𝟮 | 𝗟𝗲𝘃𝗲𝗿𝗮𝗴𝗲 𝗔𝗣𝗜𝘀 𝗮𝗻𝗱 𝗠𝗶𝗰𝗿𝗼𝘀𝗲𝗿𝘃𝗶𝗰𝗲𝘀
    Use APIs to expose legacy functionality wherever possible and microservices to orchestrate interactions. This approach modernizes your interfaces without overhauling the entire system (see the sketch after this post).

    𝟯 | 𝗨𝘀𝗲 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗧𝗼𝗼𝗹𝘀
    Enterprise architecture tools provide a 𝗵𝗼𝗹𝗶𝘀𝘁𝗶𝗰 𝘃𝗶𝗲𝘄 of your IT landscape, ensuring alignment between cloud and legacy systems. This visibility 𝗵𝗲𝗹𝗽𝘀 𝘆𝗼𝘂 𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗲 with Product and Leadership to prioritize initiatives and avoid redundancies.

    Integrating cloud-native architectures with legacy systems isn’t just a technical task—it’s a strategic journey. With the right approach, organizations can unlock innovation while preserving the strengths of their existing infrastructure.

    👍 Like if you enjoyed this. ♻️ Repost for your network. ➕ Follow @Kevin Donovan 🔔

    🚀 Join Architects' Hub! Sign up for our newsletter. Connect with a community that gets it. Improve skills, meet peers, and elevate your career! Subscribe 👉 https://lnkd.in/dgmQqfu2

    Photo by Raphaël Biscaldi

    #CloudNative #LegacySystems #EnterpriseArchitecture #HybridIntegration #APIs #DigitalTransformation
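
    As a sketch of practice 2 above (exposing legacy functionality through an API), the snippet below wraps a hypothetical mainframe balance lookup in a small REST endpoint. The FastAPI framework choice, the endpoint path, and the LegacyCoreBanking adapter are illustrative assumptions, not details from the engagement described in the post.

    ```python
    # Minimal "API facade over legacy" sketch. The LegacyCoreBanking client and
    # its get_balance call are hypothetical stand-ins for whatever adapter
    # actually reaches the mainframe (MQ, CICS transactions, etc.).

    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    class LegacyCoreBanking:
        """Stand-in for the legacy integration layer; stubbed for the example."""
        def get_balance(self, account_id: str):
            return 1234.56 if account_id == "ACC-1" else None

    legacy = LegacyCoreBanking()

    @app.get("/accounts/{account_id}/balance")
    def balance(account_id: str):
        amount = legacy.get_balance(account_id)
        if amount is None:
            raise HTTPException(status_code=404, detail="account not found")
        # Cloud-native consumers see a clean REST contract, not mainframe details.
        return {"account_id": account_id, "balance": amount}
    ```

    The facade keeps the mainframe untouched while giving new cloud services a stable, versionable interface to build against.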

  • View profile for Kai Waehner

    Global Field CTO | Author | International Speaker | Follow me with Data in Motion

    38,125 followers

    "Replacing Legacy Systems, One Step at a Time with Data Streaming: The Strangler Fig Approach" Modernizing #legacy systems does not need to mean a risky big bang rewrite. Many enterprises are now embracing the #StranglerFig Pattern to migrate gradually, reduce risk, and modernize at their own pace. When combined with #DataStreaming using #ApacheKafka and #ApacheFlink, this approach becomes even more powerful. It allows: - Real time synchronization between old and new systems - Incremental modernization without downtime - True decoupling of applications for scalable, cloud native architectures - Trusted, enriched, and governed data products in motion This is why organizations like #Allianz are using data streaming as the backbone of their #ITModernization strategy. The result is not just smoother migrations, but also improved agility, faster innovation, and stronger business resilience. By contrast, many companies have learned that #ReverseETL is only a fragile workaround. It introduces latency, complexity, and unnecessary cost. In today’s world, batch cannot deliver the real time insights that modern enterprises demand. Data streaming ensures that modernization is no longer a bottleneck but a business enabler. It empowers organizations to innovate without disrupting operations, migrate at their own speed, and prepare for the future of event driven, AI powered applications. Are you ready to transform legacy systems without the risks of a big bang rewrite? Which part of your legacy landscape would you “strangle” first with real time streaming, and why? More details: https://lnkd.in/erxrBJNn

  • View profile for Gurumoorthy Raghupathy

    Effective Solutions and Services Delivery | Architect | DevOps | SRE | Engineering | SME | 5X AWS, GCP Certs | Mentor

    13,701 followers

    𝗟𝗲𝘃𝗲𝗹 𝗨𝗽 𝗬𝗼𝘂𝗿 𝗢𝗯𝘀𝗲𝗿𝘃𝗮𝗯𝗶𝗹𝗶𝘁𝘆: 𝗪𝗵𝘆 𝗟𝗼𝗸𝗶 & 𝗧𝗲𝗺𝗽𝗼 𝗼𝗻 𝗖𝗹𝗼𝘂𝗱 𝗦𝘁𝗼𝗿𝗮𝗴𝗲 𝗢𝘂𝘁𝘀𝗵𝗶𝗻𝗲 𝗘𝗟𝗞 & 𝗝𝗮𝗲𝗴𝗲𝗿

    For teams hosting modern applications, choosing the right observability tools is paramount. While the ELK stack (Elasticsearch, Logstash, Kibana) and Jaeger are popular choices, I want to make a strong case for considering Loki and Tempo, especially when paired with Google Cloud Storage (GCS) or AWS S3. Here's why this combination can be a game-changer:

    🚀 Scalability Without the Headache:
    1. Loki: Designed for logs from the ground up, Loki excels at handling massive log volumes with its efficient indexing approach. Unlike Elasticsearch, which indexes every word, Loki indexes only metadata, leading to significantly lower storage costs and faster query performance at scale. Scaling Loki horizontally is also remarkably straightforward.
    2. Tempo: Similarly, Tempo, a CNCF project like Loki, offers a highly scalable and cost-effective solution for tracing. It doesn't index spans, but rather relies on object storage to store them, making it incredibly efficient for handling large trace data volumes.

    🤝 Effortless Integration:
    Both Loki and Tempo are designed to integrate seamlessly with Prometheus, the leading cloud-native monitoring system. This creates a unified observability platform, simplifying setup and operation. Imagine effortlessly pivoting from metrics to logs and traces within the same ecosystem! Integration with other tools like Grafana for visualization is also first-class, providing a smooth and intuitive user experience.

    💰 Significant Cost Savings:
    The combination with GCS or S3 buckets truly shines. By leveraging the scalability and cost-effectiveness of object storage, you can drastically reduce your infrastructure costs compared to provisioning and managing dedicated disks for Elasticsearch and Jaeger. The operational overhead associated with managing and scaling storage for ELK and Jaeger can be substantial. Offloading this to managed cloud storage services frees up valuable engineering time and resources.

    💡 Key Advantages Summarized:
    1. Superior Scalability: Handle massive log and trace volumes with ease.
    2. Simplified Integration: Seamlessly integrates with Prometheus and Grafana.
    3. Significant Cost Reduction: Leverage the affordability of cloud object storage.
    4. Reduced Operational Overhead: Eliminate the complexities of managing dedicated storage.

    Of course, every team's needs are unique. However, if scalability, ease of integration, and cost savings are high on your priority list, I strongly encourage you to explore Loki for logs and Tempo for traces, backed by the power and affordability of GCS or S3.

    The implementation screenshots shown below took me less than two nights to put together using argo-cd + helm + kustomize: https://lnkd.in/gZyB5VZj

    #observability #logs #tracing #loki #tempo #grafana #prometheus #gcp #aws #cloudnative #devops #sre
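
    To give a feel for how lightweight log shipping to Loki can be, here is a minimal sketch that pushes a single log line over Loki's HTTP push API; the Loki URL and labels are assumptions, and in practice an agent such as Promtail would normally handle this rather than hand-written HTTP calls.

    ```python
    # Sketch: push one log line to Loki's /loki/api/v1/push endpoint.
    # Host below is hypothetical; keep the label set small, since Loki
    # indexes only the labels, not the log text itself.

    import json
    import time
    import requests

    LOKI_URL = "http://loki.example.internal:3100/loki/api/v1/push"  # assumption

    def push_log(message: str, labels: dict) -> None:
        payload = {
            "streams": [{
                "stream": labels,                             # indexed labels
                "values": [[str(time.time_ns()), message]],   # ns timestamp + line
            }]
        }
        resp = requests.post(
            LOKI_URL,
            data=json.dumps(payload),
            headers={"Content-Type": "application/json"},
            timeout=5,
        )
        resp.raise_for_status()

    push_log("payment-service started", {"app": "payment-service", "env": "dev"})
    ```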

  • View profile for David Linthicum

    Top 10 Global Cloud & AI Influencer | Enterprise Tech Innovator | Strategic Board & Advisory Member | Trusted Technology Strategy Advisor | 5x Bestselling Author, Educator & Speaker

    190,745 followers

    Reconsidering Cloud Strategy: A Comprehensive Look into Key Factors and Solutions

    The move to cloud computing has been a significant trend in the IT industry, driven by the promise of scalability, flexibility, and cost-efficiency. However, recent findings reveal a shift in this trend, with notable reconsideration from companies about their cloud strategies. Critical challenges have led some UK organizations and IT leaders to reevaluate and even reverse their cloud migration decisions. Here's a detailed exploration of the factors influencing these decisions and proposed solutions to address these challenges.

    1. Application Suitability and Cloud Readiness
    Understanding Suitability: Not all applications or data sets are suitable for cloud environments. Companies have recognized that while cloud platforms offer significant advantages for certain applications—such as those benefiting from cloud-native features and scalability, including generative AI platforms and business analytics—other applications might not be as compatible due to their specific requirements or the nature of their data.
    Solution: Conducting comprehensive application assessments prior to migration can help identify which applications will thrive in the cloud and which should remain on-premise. Such assessments should consider technical compatibility, security requirements, and the potential for innovation and growth provided by moving to the cloud.

    2. Cost Considerations and Financial Implications
    Unanticipated Costs: The allure of cloud computing often centers on its perceived cost-efficiency. However, many businesses encountered operational costs that were substantially higher than anticipated. Initial cloud migration costs were reported to be 2.5 times higher than expected, exacerbated by challenges in acquiring the necessary skills for cloud operations and managing data integration costs.
    Solution: A detailed cost-benefit analysis that encompasses not only the initial migration costs but also ongoing operational, maintenance, and scalability costs is crucial. Businesses should also invest in training for their IT teams to ensure they possess the requisite skills for efficient cloud management.

    3. Future Needs and Performance Requirements
    Overlooking Future Needs: Companies have found that moving to the cloud without thoroughly considering future needs, such as security, compliance, and specific performance requirements, can lead to significant challenges. Unexpected requirements for data transmission, special security, governance, and compliance needs have forced some businesses to revert to on-premise solutions, incurring high costs and operational risks.
    Performance Issues: In particular, application latency in cloud setups and the inability of cloud services to match the performance of traditional mainframes and hig…
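
    To make the cost-benefit point concrete, here is an illustrative back-of-the-envelope comparison; every number is a made-up assumption used only to show the shape of the analysis (including a migration overrun in the spirit of the 2.5x figure mentioned above), not real benchmark data.

    ```python
    # Illustrative-only three-year cost comparison. All figures are assumptions.

    def three_year_cost(migration, monthly_run, monthly_ops, months=36):
        """Total cost of ownership: one-off migration plus recurring run and ops."""
        return migration + months * (monthly_run + monthly_ops)

    planned_cloud = three_year_cost(migration=200_000, monthly_run=30_000, monthly_ops=10_000)
    actual_cloud  = three_year_cost(migration=500_000, monthly_run=42_000, monthly_ops=15_000)
    on_premise    = three_year_cost(migration=0,       monthly_run=55_000, monthly_ops=12_000)

    for label, cost in [("planned cloud", planned_cloud),
                        ("actual cloud", actual_cloud),
                        ("on-premise", on_premise)]:
        print(f"{label:>13}: ${cost:,.0f} over 3 years")
    ```

    The point of running the numbers this way is that a migration overrun and higher-than-expected run costs can erase the projected savings, which is why the analysis needs to cover ongoing operational and scalability costs, not just the move itself.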

  • View profile for Frank Schwab

    Board Member I Advisor I Speaker

    33,693 followers

    Confronting Technical Debt: The Strategic Imperative for Banking IT

    Over the last 30 years, I have seen banking software where only 2% of the code is still active. A resulting mantra in many banks is: "never touch a running system." However, the accelerating increase in regulatory requirements and innovation forces banks to change faster and faster.

    As a consequence, technical debt in banking IT is a ticking time bomb that threatens the stability and future of the entire industry. Decades of neglect, quick fixes, and prioritization of short-term gains have left banks saddled with creaking legacy systems that are ill-equipped to handle the demands of the digital age. This accumulated debt is not just an IT problem; it's a strategic risk that can lead to catastrophic consequences. Security breaches, system outages, and missed opportunities for innovation are just some of the dangers that lurk beneath the surface. Banks must confront this debt head-on, investing in modernization and adopting agile practices to build a technology infrastructure that is resilient, secure, and adaptable. Failure to do so will leave them vulnerable in an increasingly competitive and technologically driven landscape, risking irrelevance and ultimately, extinction.

    In this labyrinthine world of banking IT, I see software refactoring as the unsung hero battling the looming specter of technical debt. It's a meticulous process of restructuring existing code, not to add new features, but to enhance its internal structure and maintainability. Refactoring breathes new life into aging systems, making them more adaptable to evolving business needs and regulatory requirements. It's a proactive measure, akin to regular maintenance of a complex machine, that prevents the accumulation of technical debt and its associated risks. By improving code readability, reducing complexity, and eliminating redundancies, refactoring enhances the efficiency and reliability of banking software.

    My belief and recommendation: A bank should spend 20% of its software development budget on refactoring and should prefer vendor software with a transparent and relevant refactoring approach. It's an investment in the future, ensuring that the technology infrastructure remains robust, secure, and capable of supporting innovation. In the long run, refactoring is not just a technical necessity but a strategic imperative for banks aiming to thrive in the digital age.

    #banking #IT #digital #technicaldebt #refactoring #SundayThoughts

  • View profile for Rahul Kaundal

    Head - Radio Access & Transport Network

    32,382 followers

    Software-defined networking (SDN)

    SDN is a disaggregated layer 2/3 architecture which is abstracted, controlled, and programmed using software applications. In a conventional network, components such as switches and routers have the control and data (forwarding) planes coupled together. The control plane makes routing decisions; the data plane forwards data (packets) through the router/switch. SDN separates the control plane from the data plane and moves towards open infrastructure by decoupling software and hardware.

    An open and well-defined interface between the control and user planes is a prerequisite of SDN and is defined by a protocol called OpenFlow, which introduced flow rules, a simple but powerful way to specify forwarding behavior. A flow rule is a Match-Action pair: any packet that Matches the first part of the rule should have the associated Action applied to it. A simple flow rule, for example, might specify that any packet with destination address X be forwarded on output port ge-0.

    OpenFlow controls the data plane and enables real-time decisions about how to respond to link and switch failures. If the data plane reports a failure, the control plane provides a remedy (e.g., a new Match/Action flow rule) within milliseconds. The control plane is logically centralized, fully independent of the data plane, and implemented off-switch, for example by running the controller in the cloud. If you need more capacity in the data plane, add a bare-metal switch; if you need more capacity in the control plane, add a compute server.

    The Network Operating System (NOS) in the controller is like a server operating system that provides a set of high-level abstractions. A virtualization layer (NFV) is added between the hardware layer and the control system, allowing generic networking hardware to support multiple configurations. With SDN, you can create one set of (forwarding) rules and applications for one group of users, and an entirely different set of rules and applications for another group of users.

    Use cases of SDN include traffic engineering for WANs, SD-WANs, access networks, network telemetry, and switching fabrics. In 5G NR, the Near-RT RIC (RAN Intelligent Controller) is implemented as an SDN controller to host a set of SDN control apps. These apps (Link Aggregation Control, Interference Management, Load Balancing, and Handover Control) cover functions currently implemented by individual base stations with only local visibility, even though they have global consequences. The SDN approach is to collect the available input data centrally, make a globally optimal decision, and then push the respective control parameters back to the base stations for execution in real time.

    Note: realizing the value of SDN in 5G NR is ongoing and emerging.

    References: ONF, ETSI

    To learn more, visit - https://lnkd.in/eSYuK9V7
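
    To make the Match-Action idea concrete, here is a toy Python sketch of a flow table lookup; the field names, ports, and priorities are illustrative only and do not model the actual OpenFlow wire protocol.

    ```python
    # Toy flow table: each rule is a (match, action) pair; the highest-priority
    # rule whose match fields all equal the packet's fields decides the action.

    from dataclasses import dataclass

    @dataclass
    class FlowRule:
        match: dict        # e.g. {"dst_ip": "10.0.0.5"}; empty dict matches all
        action: str        # e.g. "forward:ge-0", "drop"
        priority: int = 0

    def lookup(flow_table, packet):
        for rule in sorted(flow_table, key=lambda r: -r.priority):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send-to-controller"   # table miss: ask the control plane for a rule

    table = [
        FlowRule({"dst_ip": "10.0.0.5"}, "forward:ge-0", priority=10),
        FlowRule({}, "drop", priority=1),
    ]
    print(lookup(table, {"dst_ip": "10.0.0.5", "src_ip": "10.0.0.9"}))  # forward:ge-0
    print(lookup(table, {"dst_ip": "10.0.0.7"}))                        # drop
    ```

    The "send-to-controller" fallback mirrors how a table miss is escalated to the logically centralized control plane, which can then install a new rule within milliseconds, as described above.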
