Data Migration Strategies For Large Scale Systems
Summary
Any software system that survives long enough will require some form of migration or evolution. When that system is responsible for the data layer, the process becomes more challenging. Sriram Panyam has been involved in several projects that required migration of large volumes of data in high-traffic environments. In this episode he shares some of the valuable lessons that he learned about how to make those projects successful.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. It is trusted by teams of all sizes, including Comcast and DoorDash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
- This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
- Your host is Tobias Macey and today I'm interviewing Sriram Panyam about his experiences conducting large scale data migrations and the useful strategies that he learned in the process
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by sharing some of your experiences with data migration projects?
- As you have gone through successive migration projects, how has that influenced the ways that you think about architecting data systems?
- How would you categorize the different types and motivations of migrations?
- How does the motivation for a migration influence the ways that you plan for and execute that work?
- Can you talk us through one or two specific projects that you have taken part in?
- Part 1: The Triggers
- Section 1: Technical Limitations Triggering Data Migration
- Scaling bottlenecks: Performance issues with databases, storage, or network infrastructure
- Legacy compatibility: Difficulties integrating with modern tools and cloud platforms
- System upgrades: The need to migrate data during major software changes (e.g., SQL Server version upgrade)
- Section 2: Types of Infrastructure-Focused Migrations
- Storage migration: Moving data between systems (HDD to SSD, SAN to NAS, etc.)
- Data center migration: Physical relocation or consolidation of data centers
- Virtualization migration: Moving from physical servers to virtual machines (or vice versa)
- Section 3: Technical Decisions Driving Data Migrations
- End-of-life support: Forced migration when older software or hardware is sunsetted
- Security and compliance: Adopting new platforms with better security postures
- Cost optimization: Potential savings of cloud vs. on-premises data centers
- Part 2: Challenges (and Anxieties)
- Section 1: Technical Challenges
- Data transformation challenges: Schema changes, complex data mappings
- Network bandwidth and latency: Transferring large datasets efficiently
- Performance testing and load balancing: Ensuring new systems can handle the workload
- Live data consistency: Maintaining data integrity while updates occur in the source system
- Minimizing lag: Techniques to reduce delays in replicating changes to the new system
- Change data capture: Identifying and tracking changes to the source system during migration (see the CDC sketch after this list)
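As an illustration of the log-based CDC approach mentioned above, here is a minimal sketch (not from the episode) that polls a Postgres logical replication slot for changes while the source keeps taking writes. The database name, the slot name, and the apply_to_target() stub are hypothetical placeholders for a real pipeline.

```python
# Minimal sketch: log-based CDC by polling a Postgres logical replication slot.
# Assumptions: a reachable database "appdb"; the slot name "migration_slot" and
# apply_to_target() are hypothetical placeholders, not a production design.
import time
import psycopg2

def apply_to_target(change: str) -> None:
    # Placeholder: in a real migration this would replay the decoded change
    # (insert/update/delete) against the new system.
    print("replaying:", change)

conn = psycopg2.connect("dbname=appdb")
conn.autocommit = True

with conn.cursor() as cur:
    # Create the slot once; 'test_decoding' is Postgres's built-in demo output
    # plugin (production setups typically use wal2json or pgoutput instead).
    # Rerunning this script requires dropping or reusing the existing slot.
    cur.execute(
        "SELECT pg_create_logical_replication_slot(%s, 'test_decoding')",
        ("migration_slot",),
    )

    while True:
        # get_changes consumes the slot: each change is returned exactly once,
        # which keeps replication lag bounded as the source keeps taking writes.
        cur.execute(
            "SELECT lsn, xid, data FROM pg_logical_slot_get_changes(%s, NULL, %s)",
            ("migration_slot", 500),
        )
        for lsn, xid, data in cur.fetchall():
            apply_to_target(data)
        time.sleep(1)  # simple poll interval; tune for acceptable lag
```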
- Section 2: Operational Challenges
- Minimizing downtime: Strategies for service continuity during migration
- Change management and rollback plans: Dealing with unexpected issues
- Technical skills and resources: In-house expertise, dedicated data teams, or external help
- Section 3: Security & Compliance Challenges
- Data encryption and protection: Methods for both in-transit and at-rest data
- Meeting audit requirements: Documenting data lineage & the chain of custody
- Managing access controls: Adjusting identity and role-based access to the new systems
- Part 3: Patterns
- Section 1: Infrastructure Migration Strategies
- Lift and shift: Migrating as-is vs. modernization and re-architecting during the move
- Phased vs. big bang approaches: Tradeoffs in risk vs. disruption
- Tools and automation: Using specialized software to streamline the process
- Dual writes: Managing updates to both old and new systems for a time (see the dual-write sketch after this list)
- Change data capture (CDC) methods: Log-based vs. trigger-based approaches for tracking changes
- Data validation & reconciliation: Ensuring consistency between source and target
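To make the dual-write pattern concrete, here is a small self-contained sketch (not from the episode); in-memory dicts stand in for the old and new systems, and treating the new system as best-effort with a backfill queue is just one of several failure-handling choices.

```python
# Sketch of the dual-write pattern: the old store stays the source of truth,
# the new store is written best-effort, and failed writes are queued so a
# later reconciliation pass can backfill them. In-memory dicts stand in for
# the real systems; this is illustrative, not a production design.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dual-write")

class DualWriter:
    def __init__(self, old_store: dict, new_store: dict):
        self.old_store = old_store
        self.new_store = new_store
        self.pending_backfill: list[tuple[str, str]] = []

    def write(self, key: str, value: str) -> None:
        # 1. Old system first: it remains authoritative until cutover.
        self.old_store[key] = value
        # 2. New system best-effort: a failure must not fail the user request.
        try:
            self.new_store[key] = value
        except Exception:
            log.exception("new-store write failed; queued for backfill: %s", key)
            self.pending_backfill.append((key, value))

    def backfill(self) -> None:
        # Retry queued writes, e.g. from a periodic reconciliation job.
        remaining = []
        for key, value in self.pending_backfill:
            try:
                self.new_store[key] = value
            except Exception:
                remaining.append((key, value))
        self.pending_backfill = remaining

# Usage: during migration every write goes through the DualWriter; reads stay
# on the old store until validation shows the new store has caught up.
writer = DualWriter(old_store={}, new_store={})
writer.write("user:42", "sriram")
```

One design note: because the old system stays authoritative, a failed write to the new system degrades the migration's freshness rather than the user's request, and the backfill pass later closes the gap.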
- Section 2: Maintaining Performance and Reliability
- Disaster recovery planning: Failover mechanisms for the new environment
- Monitoring and alerting: Proactively identifying and addressing issues (see the lag-monitor sketch after this list)
- Capacity planning and forecasting growth to scale the new infrastructure
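As a sketch of what monitoring the sync pipeline can look like, the loop below (not from the episode) compares a last-applied-commit watermark against the clock and alerts when replication lag crosses a threshold. Both fetch_last_applied_commit_time() and alert() are hypothetical stubs for whatever the real pipeline and alerting system provide.

```python
# Sketch: a replication-lag monitor for the migration's sync pipeline. It
# compares the commit timestamp of the last change applied to the target with
# the current time and alerts when lag exceeds a threshold. The watermark
# source and alert() are hypothetical placeholder stubs.
import time
from datetime import datetime, timezone, timedelta

LAG_THRESHOLD = timedelta(seconds=30)

def fetch_last_applied_commit_time() -> datetime:
    # Placeholder: in practice this reads a watermark the apply job records,
    # e.g. the source commit timestamp of the last replayed change.
    return datetime.now(timezone.utc) - timedelta(seconds=5)

def alert(message: str) -> None:
    # Placeholder: page the on-call or post to the alerting system.
    print("ALERT:", message)

while True:
    lag = datetime.now(timezone.utc) - fetch_last_applied_commit_time()
    if lag > LAG_THRESHOLD:
        alert(f"replication lag {lag.total_seconds():.0f}s exceeds threshold")
    time.sleep(10)  # check interval; tune alongside the threshold
```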
- Section 3: Data Consistency and Replication
- Replication tools: strategies and specialized tooling
- Data synchronization techniques: pros and cons of different methods (incremental vs. full)
- Testing/verification strategies for validating data correctness in a live environment (see the reconciliation sketch after this list)
- Implications of large-scale systems/environments
- Comparison of interesting strategies:
- DBLog, Debezium, Databus, GoldenGate, etc.
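As one illustration of live verification, the sketch below (not from the episode) reconciles source and target by hashing sorted key/value chunks, so a mismatch localizes drift to a chunk instead of forcing a full re-compare. The dict-backed stores and the chunk size are placeholder assumptions.

```python
# Sketch: chunked checksum reconciliation between a source and target store.
# Rows are hashed in sorted-key chunks so a mismatch narrows the drift to a
# small range instead of requiring a full re-compare. Dicts stand in for tables.
import hashlib
from itertools import islice

def chunk_digests(store: dict, chunk_size: int = 1000) -> list[str]:
    """Return one SHA-256 digest per chunk of sorted (key, value) pairs."""
    items = iter(sorted(store.items()))
    digests = []
    while chunk := list(islice(items, chunk_size)):
        h = hashlib.sha256()
        for key, value in chunk:
            h.update(repr((key, value)).encode())
        digests.append(h.hexdigest())
    return digests

def find_drift(source: dict, target: dict, chunk_size: int = 1000) -> list[int]:
    """Indices of chunks whose checksums differ between source and target."""
    src = chunk_digests(source, chunk_size)
    dst = chunk_digests(target, chunk_size)
    width = max(len(src), len(dst))
    return [i for i in range(width)
            if i >= len(src) or i >= len(dst) or src[i] != dst[i]]

# Usage: an empty result means the copies agree chunk-for-chunk.
source = {f"id:{i}": i * 2 for i in range(5000)}
target = dict(source)
target["id:123"] = -1  # simulate drift introduced during migration
print(find_drift(source, target))  # -> [0], the chunk containing "id:123"
```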
- What are the most interesting, innovative, or unexpected approaches to data migrations that you have seen or participated in?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data migrations?
- When is a migration the wrong choice?
- What are the characteristics or features of data technologies and the overall ecosystem that can reduce the burden of data migration in the future?
Contact Info
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links
- DagKnows
- Google Cloud Dataflow
- Seinfeld Risk Management
- ACL == Access Control List
- LinkedIn Databus - Change Data Capture
- Espresso Storage
- HDFS
- Kafka
- Postgres Replication Slots
- Queueing Theory
- Apache Beam
- Debezium
- Airbyte
- [Fivetran](https://fivetran.com)
- Designing Data Intensive Applications by Martin Kleppmann (affiliate link)
- Vector Databases
- Pinecone
- Weaviate
- LAMP Stack
- Netflix DBLog
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
- Red Hat Code Comments Podcast: ![Code Comments Podcast Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/A-ygm_NM.jpg) Putting new technology to use is an exciting prospect. But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).
- Starburst: ![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and DoorDash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all of your data governance, allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)