Presentation on theme: "Sunilkumar Kakade – Director IT"— Presentation transcript:
1. Move to Hadoop, Go Faster and Save Millions - Mainframe Legacy Modernization
Sunilkumar Kakade, Director IT
Aashish Chandra, DVP, Legacy Modernization
Hadoop Summit, June 26, 2013
2. Legacy Rides the Elephant
Hadoop is disrupting enterprise IT processing.
3. Recognition - Contributors
Our Leaders: Ted Rudman, Aashish Chandra
Team: Simon Thomas, Sunil Kakade, Susan Hsu, Bob Pult, Kim Havens, Murali Nandula, Willa Tao, Arlene Pynadath, Nagamani Banda, Tushar Tanna, Kesavan Srinivasan
4. The Enterprise Challenge
- Growing data volumes
- Shortened processing windows
- Escalating costs
- Hitting scalability ceilings
- Demanding business requirements
- ETL complexity
- Latency in data
- Tight IT budgets
5. Mainframe Migration - Overview
- In spite of recent advances in computing, many core business processes are still batch-oriented and run on mainframes.
- Annual mainframe costs are counted in six or more figures per year, and grow with capacity needs.
- To tackle the cost challenge, many organizations have considered or attempted multi-year mainframe migration/re-hosting strategies.
- Batch workload can be migrated and run anytime in a fraction of the clock time by leveraging Hadoop.
6. Batch Processing Characteristics
- Large amounts of input data are processed and stored (perhaps terabytes or more).
- Large numbers of records are accessed, and a large volume of output is produced.
- Immediate response time is usually not a requirement; however, jobs must complete within a "batch window".
- Batch jobs are often designed to run concurrently with online transactions with minimal resource contention.
(Ref: IBM Redbooks)
7. Batch Processing Characteristics
Key infrastructure requirements:
- Sufficient data storage
- Available processor capacity, or cycles
- Job scheduling
- Programming utilities to process basic operations (Sort/Filter/Split/Copy/Unload, etc.)
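As a minimal Pig Latin sketch of how these basic utility operations map onto Hadoop (the input path, delimiter, and field names here are hypothetical, chosen only for illustration):

```pig
-- Hypothetical input: comma-delimited records with a key and an amount
recs = LOAD 'input/records' USING PigStorage(',')
       AS (acct_key:chararray, amount:int);

sorted = ORDER recs BY acct_key;                  -- Sort
active = FILTER recs BY amount > 0;               -- Filter
SPLIT recs INTO credits IF amount >= 0,
                debits  IF amount <  0;           -- Split
STORE sorted INTO 'output/sorted'
      USING PigStorage(',');                      -- Copy/Unload
```

Each statement is one relational operator; Hadoop supplies the storage, scheduling, and processor capacity listed above.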
8. Why Hadoop and Why Now?
THE CHALLENGE:
- ETL is too expensive and complex
- Mainframe and data warehouse processing is costly
- Traditional enterprises lack awareness of alternatives
THE SOLUTION:
- Hadoop: leverage its growing support system
- Make Hadoop the data hub in the enterprise
- Use Hadoop for processing batch and analytic jobs
THE ADVANTAGES:
- Cost reduction
- Alleviated performance bottlenecks
9. The Architecture
Enterprise solutions using Hadoop must be an ecosystem. Large companies have a complex environment:
- Transactional systems
- Services
- EDW and data marts
- Reporting tools and needs
We needed to build an entire solution.
13. MetaScale Batch Processing Architecture with Hadoop
14. Typical Batch Processing Units (JCL) on Mainframe
15. Batch Processing Migration with Hadoop
Seamless migration of high-MIPS processing jobs with no application alteration.
16. Mainframe to Hadoop-Pig Conversion Example

Mainframe JCL:

    //PZHDC110 EXEC PGM=SORT
    //SORTIN   DD DSN=PZ.THDC100.PLMP.PRC,
    //            DISP=(OLD,DELETE,KEEP)
    //SORTOUT  DD DSN=PZ.THDC110.PLMP.PRC.SRT,LABEL=EXPDT=99000,
    //            DISP=(,CATLG,DELETE),
    //            UNIT=CART,
    //            VOL=(,RETAIN),
    //            RECFM=FB,LRECL=40
    //SYSIN    DD DSN=KMC.PZ.PARMLIB(PZHDC11A),
    //            DISP=SHR
    //SYSOUT   DD SYSOUT=V
    //SYSUDUMP DD SYSOUT=D
    //*________________________________________
    //* SORT FIELDS=(1,9,CH,A)

- Sorting 500 million records took 45 minutes of clock time on the A168 mainframe.

Pig:

    a = LOAD 'data' AS (f1:chararray);
    b = ORDER a BY f1;

- Sorting 500 million records took less than 2 minutes.
- More benchmarking studies are in progress.
18. Mainframe Migration - Value Proposition
Benefits: cost savings, open source platform, simpler and easier code, business agility, business and IT transformation, modernized systems, IT efficiencies.
Companies can SAVE 60-80% of their mainframe costs with modernization.
Drivers: high TCO, inert business practices, resource crunch.
Three approaches:
- Optimize (Mainframe Optimization): 5-10% MIPS reduction; quick wins with low-hanging fruit
- Convert (Mainframe ONLINE): tool-based conversion; convert COBOL and JCL to Java
- Rewrite (Mainframe BATCH): ETL modernization; move batch processing to Hadoop with Pig/Hadoop rewrites
Typically 60-65% of mainframe MIPS are consumed by BATCH processing.
An estimated 45% of mainframe FUNCTIONALITY is never used.
19. Mainframe Migration - Traditional Approach
- Traditional approaches to mainframe elimination call for large initial investments and carry significant risk: it is hard to match mainframe performance and reliability.
- Many organizations still use mainframes for batch processing applications.
- Several solutions have been proposed to move expensive mainframe computing to other distributed, proprietary platforms; most of them rely on end-to-end migration of applications.
20. Mainframe Batch Processing - MetaScale Architecture
- Using Hadoop, Sears/MetaScale developed an innovative alternative that enables batch processing migration to the Hadoop ecosystem without the risk, time, and cost of other methods.
- The solution has been adopted in multiple businesses with excellent results and associated cost savings as mainframes are physically eliminated or downsized: millions of dollars in savings based on MIPS reductions have been seen.
21. MetaScale Mainframe Migration Methodology
Key to our approach:
- Allowing users to continue to use familiar consumption interfaces
- Providing inherent HA
- Enabling businesses to unlock previously unusable data
Methodology:
1. Implement a Hadoop-centric reference architecture
2. Move enterprise batch processing to Hadoop
3. Make Hadoop the single point of truth
4. Massively reduce ETL by transforming within Hadoop
5. Move results and aggregates back to legacy systems for consumption
6. Retain, within Hadoop, source files at the finest granularity for re-use
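Steps 4 and 5 above can be sketched in Pig Latin (the feed name, paths, delimiter, and field names are hypothetical): the transformation and aggregation happen inside Hadoop, and only the small aggregate is exported back to the legacy system.

```pig
-- Hypothetical daily sales feed landed in HDFS at source granularity
sales = LOAD 'landing/sales_daily' USING PigStorage('|')
        AS (store_id:chararray, sku:chararray, qty:int, amount:double);

-- Transform and aggregate within Hadoop instead of an external ETL tool
by_store = GROUP sales BY store_id;
totals   = FOREACH by_store GENERATE
             group AS store_id,
             SUM(sales.qty)    AS total_qty,
             SUM(sales.amount) AS total_amount;

-- Only the aggregate travels back to the legacy system for consumption;
-- the granular source feed stays in Hadoop for re-use (step 6)
STORE totals INTO 'export/store_totals' USING PigStorage('|');
```

The granular input never leaves HDFS, which is what lets later workloads re-use it without another extract.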
22. Mainframe Migration - Benefits
Cost savings:
- Significant reduction in ISV costs and mainframe software license fees
- Open source platform
- Saved ~$2MM annually within 13 weeks through MIPS optimization efforts
- Reduced MIPS by moving batch processing to Hadoop
Transform IT:
- Modernized COBOL, JCL, DB2, VSAM, IMS, and so on
- Reduced batch processing from over 6 hours in COBOL/JCL to less than 10 minutes in Pig Latin on Hadoop
- Simpler, easily maintainable code
- Massively parallel processing
Skills and resources:
- Readily available resources and commodity skills
- Access to the latest technologies
- IT operational efficiencies
- Moved 7,000 lines of COBOL code to under 50 lines of Pig
Business agility:
- Ancient systems no longer a bottleneck for the business
- Faster time to market
- Mission-critical "Item Master" application in COBOL/JCL being converted by our tool to Java (JOBOL)
"MetaScale is the market leader in moving mainframe batch processing to Hadoop"
23. Summary
Hadoop can revolutionize enterprise workloads and make the business agile:
- Can reduce strain on legacy platforms
- Can reduce cost
- Can bring new business opportunities
- Must be an ecosystem
- Must be part of an overall data strategy
- Not to be underestimated
24. The Learning
HADOOP:
- Over two years of experience using Hadoop for enterprise legacy workloads
- We can dramatically reduce batch processing times for mainframe and EDW
- We can retain and analyze data at a much more granular level, with longer history
- Hadoop must be part of an overall solution and ecosystem
IMPLEMENTATION:
- We can reliably meet our production deliverable time windows by using Hadoop
- We can largely eliminate the use of traditional ETL tools
- New tools allow an improved user experience on very large data sets
UNIQUE VALUE:
- We developed tools and skills; the learning curve is not to be underestimated
- We developed experience in moving workloads from expensive, proprietary mainframe and EDW platforms to Hadoop, with spectacular results
25. The Horizon - What Do We Need Next?
- Automation tools and techniques that ease the enterprise integration of Hadoop
- Educate traditional enterprise IT organizations about the possibilities and reasons to deploy Hadoop
- Continue development of a reusable framework for legacy workload migration
26. Legacy Modernization Service Offerings
Leveraging our patent-pending and award-winning niche products, we reduce mainframe MIPS, modernize ETL processing, and transform business and IT organizations onto open source, cloud-based, Big Data, and agile platforms.
MetaScale Legacy Modernization offers the following services:
- Legacy Modernization Assessment Services
- Mainframe Migration Services
- MIPS Reduction Services
- Mainframe Application Migration
- Legacy Distributed Modernization
- ETL Modernization Services
- Modernize Proprietary Systems and Databases
- Managed Applications Support
- Support Transition Services
27. Legacy Modernization Made Easy!
Follow us on:
Join us on LinkedIn:
For more information, visit: