
Shay Hassidim Deputy CTO Oct 2007


1 Shay Hassidim Deputy CTO Oct 2007
Data-Awareness and Low-Latency on the Enterprise Grid: Getting the Most out of Your Grid with Enterprise IMDG. Shay Hassidim, Deputy CTO, Oct 2007

2 Overall Presentation Goal
Understand the Space-Based Architecture model and its 4 verbs. Understand the data contention challenge and the latency challenge in Enterprise Grid based applications. Understand why a typical In-Memory Data Grid can't solve these problems and why the Enterprise IMDG can.

3 GigaSpaces in a Nutshell
Founded in 2000. Founder of the Israeli Association of Grid Technologies (IGT) – an OGF affiliate.
Provides infrastructure software for applications characterized by: high-volume transaction processing, very low latency requirements, real-time analytics.
Product: eXtreme Application Platform (XAP); 6.0 was released a few months ago. Enterprise In-Memory Data Grid (caching); Application Service Grid.
Customer base – about 2000 deployments around the world: financial services, telecom, defense and government.
Presence – US: NY (HQ), San Francisco, Atlanta; EMEA: UK, France, Germany, Israel (R&D); APAC: Japan, Singapore, Hong Kong.

4 About myself – Shay Hassidim
B.Sc. in Electrical, Computer & Telecommunications Engineering, with a focus on neural networks & artificial intelligence, Ben-Gurion University, graduated 1994. Object and multi-dimensional DBMS expert. Extensive knowledge of object-oriented & distributed systems. Consultant for telecom, healthcare, defense & finance projects. Technical skills: MATLAB, C, C++, .Net, PowerBuilder, Visual Basic, Java, XML, CORBA, J2EE, ODMG, JDO, Hibernate, SQL, JMS, JMX, IDE, GUI, Jini, ODBMS, RDBMS, JavaSpaces. In the past: Sirius Technologies Israel – VMDB Applications & Tools team leader; Versant Corp US – Tools lead architect, R&D. At GigaSpaces: VP Product Management (based in Israel); since 2007 – Deputy CTO (based in NY).

5 GigaSpaces – Technical overview

6 The Basics… Data Grid: Caching Topologies
Replicated Cache Master / Local Cache Partitioned Cache

7 So. . .What is Space-Based Architecture?
Utilizing a single logical/virtual resource to share: data, logic, events. Services interact with each other through the space, can be co-located with data/events for faster results, and are deployed and managed in an adaptive and fail-safe way. Objects! Data provisioning, logic processing, event propagation. Middleware is virtualized – multiple APIs over the same runtime, a common cluster. Utilizes patterns from distributed computing, namely master/worker, to achieve parallelization.

8 Space Based SOA using 4 Simple Verbs
Write, Read, Take, Notify.
Write + Read = IMDG (caching)
Write + Notify = Messaging
Write + Take = Parallel processing
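The four verbs can be illustrated with a toy, in-process "space" (a sketch only – this is not the GigaSpaces or JavaSpaces API; matching here uses plain predicates rather than template entries, and real products add leases, transactions and clustering on top of the same model):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Toy in-process "space" illustrating the four verbs of SBA.
public class ToySpace<T> {
    private final List<T> entries = new ArrayList<>();
    private final List<Consumer<T>> listeners = new ArrayList<>();

    // write: publish an entry into the space, notifying subscribers
    public synchronized void write(T entry) {
        entries.add(entry);
        for (Consumer<T> l : listeners) l.accept(entry);
    }

    // read: return a matching entry without removing it (null if none)
    public synchronized T read(Predicate<T> template) {
        for (T e : entries) if (template.test(e)) return e;
        return null;
    }

    // take: return AND remove a matching entry (destructive read)
    public synchronized T take(Predicate<T> template) {
        for (int i = 0; i < entries.size(); i++) {
            if (template.test(entries.get(i))) return entries.remove(i);
        }
        return null;
    }

    // notify: register a callback invoked on every future write
    public synchronized void notifyOn(Consumer<T> listener) {
        listeners.add(listener);
    }
}
```

Write + read gives caching, write + take gives work distribution (a taken entry is processed by exactly one worker), and write + notify gives publish/subscribe messaging – the three combinations named on the slide.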

9 IMDG Distributed In-Memory Query Support
Enables transparent aggregation of data. Supports SQL query semantics. Continuous query via notifications. Local view – a client-side cache, updated using a continuous query. Parallel query across the partitioned clustered space, accessed through a space proxy.
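The local-view idea can be sketched as a client-side cache fed by a continuous query (a toy illustration; in the real product the predicate would be an SQL query and the feed would be the space's notification channel – the LocalView class and its listener method are hypothetical names):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Sketch of a "local view": the client registers a predicate once and
// every matching entry written afterwards is pushed into its local list,
// so subsequent reads are served from client-side memory.
public class LocalView<T> {
    private final List<T> view = new ArrayList<>();
    private final Predicate<T> query;

    public LocalView(Predicate<T> query) { this.query = query; }

    // Listener to be attached to the (hypothetical) space's notify channel.
    public Consumer<T> listener() {
        return entry -> {
            if (query.test(entry)) view.add(entry);  // keep only matches
        };
    }

    public List<T> snapshot() { return new ArrayList<>(view); }
}
```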

10 Data virtualization – IMDG accessed by all popular APIs and programming languages
Provides a true data grid that supports a variety of standards-based data APIs. The API becomes just a view – the same data can be accessed via multiple APIs (JDBC, Map/JCache, C++/.Net, Java space API) against the clustered space. Combines the benefits of the relational model with the OO model.

11 Integration with External Database – 2 basic models
Write/read-through and write-behind enable lazy loading of data from the DB into the cache and asynchronous persistency. Complete mirroring of cache data into the DB (sync/async). Also supports black-box persistency into an RDBMS or an index file (a light embedded ODBMS). A Hibernate cache plug-in provides a 2nd-level cache for Hibernate-based applications.
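The write-behind model described above can be sketched in plain Java (an illustration only, not the GigaSpaces API – the Store interface and class names are hypothetical): puts complete against the in-memory map immediately, while a background thread drains dirty keys to the external store off the caller's path.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal write-behind sketch: in-memory writes are synchronous and fast,
// persistence to the backing store happens asynchronously.
public class WriteBehindCache {
    interface Store { void persist(String key, String value); }

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final BlockingQueue<String> dirty = new LinkedBlockingQueue<>();
    private final Store store;

    public WriteBehindCache(Store store) {
        this.store = store;
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    String key = dirty.take();          // block until a key is dirty
                    this.store.persist(key, cache.get(key));
                }
            } catch (InterruptedException ignored) { } // shut down quietly
        });
        writer.setDaemon(true);
        writer.start();
    }

    public void put(String key, String value) {
        cache.put(key, value);   // fast, in-memory
        dirty.offer(key);        // persisted later, off the caller's path
    }

    public String get(String key) { return cache.get(key); }
}
```

The product's mirror service plays the role of the background writer here, with reliability guarantees (no data loss while the data lives in an IMDG instance) that this toy omits.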

12 Seamless Integration with External Data Sources
Data is propagated seamlessly from the IMDG to the external data source and vice versa, through the CacheStore (store/load operations) and reliable async replication via the Mirror Service. The previous slide covers seamless integration on the business tier, mostly the developer API abstraction view. The mirror service covers seamless integration from the data-grid side: how can we take an existing database and plug it into our architecture (without changing the database), and how can we export the changes made to the IMDG to the external database so that existing applications (mostly reporting and legacy applications) can continue working without changing their interfaces or code? The main design objectives of this feature were: make it seamless, and reduce the performance overhead so that the benefit of using the IMDG is not negated by that overhead. Transparency is achieved through Hibernate, which provides a mapping layer between the data source and the object model in the space; in many cases the application already includes that mapping layer, so we simply reuse it. The overhead reduction is achieved through the following capabilities: 1. Reliable asynchronous communication – even with asynchronous updates to the DB, we ensure no data loss as long as the data is available in one of the IMDG instances. 2. Bulk operations – designed to reduce the network overhead associated with each call, and to leverage the bulk operations that a number of databases support. 3. Collocation of the mirror service with the database – the actual writes are done not on the IMDG machine but on the database machine, leveraging a fast communication channel to the database and reducing the overhead of that communication. The Mirror Service ensures reliable synchronization with minimal performance overhead.

13 SBA – Real-time SOA for Stateful Services
Shared state to enable stateful services. Services can be Java, C++, .Net. Content-based routing.

14 Enterprise Data Grid unique features
Feature: Extended and standard query based on SQL, with the ability to connect to the IMDG using a standard JDBC connector. Benefits: makes the IMDG accessible to standard reporting tools; makes accessing the IMDG just like accessing a JDBC-compatible database, reducing the learning curve.
Feature: SQL-based continuous query support. Benefit: brings relevant data close to the local memory of the relevant application instance.
Feature: Central management, monitoring and control. Benefit: allows the entire IMDG to be controlled and viewed from an administrator's console.
Feature: Mirror Service – transparent persistence of data from the entire IMDG to a legacy database or other data source. Benefit: allows seamless integration with existing reporting and back-office systems.
Feature: Real-time event notification – application instances can selectively subscribe to specific events. Benefit: provides capabilities usually offered by messaging systems, including slow-consumer support, FIFO, batching, pub/sub and content-based routing.

15 GigaSpaces solution for Enterprise Grid

16 What is the Enterprise Grid?
Improves utilization of hardware resources: multiple applications can share a pool of hardware resources; resources are allocated to each application as needed; applications can scale up very easily; the Grid provides parallelization for heavy computing jobs.

17 What about stateful applications?
Great, but… what about stateful applications? That is the data contention challenge. And how can I bring front-office applications to the grid? That is the latency challenge.

18 The Data Contention Challenge
Only stateless applications can scale up freely on the Grid. Any application that needs to share state between more than one instance (service/process), or to store state in a central database, cannot scale easily! This includes: checkpointing partial analysis results to enable recovery; managing a workflow involving more than one process; sharing common data between processes.

19 The Latency Challenge Enterprise Grid designed for batch applications
Each client request is submitted as a job: hardware resources are allocated, and the relevant software instances (service/process) are scheduled to run on those resources and perform the work. Impractical in low-latency environments! Why? An interactive application receives thousands of client requests per second, each of which must be fulfilled within milliseconds. It is impossible to respond fast enough with a "job" approach, and throughput would be severely limited by the need to schedule and launch large numbers of application instances.

20 Three Stages Approach to the Solution
In Memory Data Grid (IMDG) Data Aware Grid using SLA driven containers Adding front office application to the Grid using Declarative Space Based Architecture (SBA)

21 In Memory Data Grid (IMDG)
Stage 1. Data is stored in the memory of numerous physical machines instead of, or alongside, a database. This eliminates I/O, network and CPU bottlenecks, partitions the data, and moves it closer to the application. However, in an enterprise distributed environment, an IMDG is only a partial solution!

22 Data Aware Grid using SLA driven containers
Stage 2. Common wisdom holds that it is much easier to bring the business logic to the data than to bring the data to the business logic. But not all IMDGs support data & business logic co-locality! This results in: unnecessary overhead caused by remote calls from business logic to IMDG instances; data duplication, because business logic elements that use the same data are not necessarily concentrated around the relevant IMDG instance; and worst of all, data contention, because several business logic elements might access the same IMDG instance – leading to exactly the problem the IMDG was meant to solve! Requirements for a data-aware Grid: the Enterprise Grid must know which data is stored on which IMDG instances, and there must be a way to guarantee data affinity – tasks must always be executed with the relevant data coupled to them.

23 Enterprise IMDG Deployment requirements
Stage 2. Deploying a shared IMDG, rather than a specific IMDG per application, provides: Improved resource utilization – with the IMDG as a shared resource, the memory and CPUs available to the IMDG instances can be shared between different applications, depending on their current data loads; it is also much easier to scale the IMDG to respond to changing data needs. Lower total cost of ownership – installation, testing, configuration, maintenance and administration of the IMDG are performed centrally for all the applications on the Grid.

24 Enterprise IMDG requirements for grid environments
Stage 2. Sensitivity to demand for data vs. available resources – free (memory) resources when they are not needed. Multi-tenancy. Continuous high availability: hot fail-over; versioning – it should be possible to upgrade or update the IMDG instances without affecting the data or interrupting access; configuration changes – it should be possible to change configuration without affecting availability of the IMDG instances; schema evolution – changing the data structure (i.e. adding or modifying classes) should not affect the existing data and should not require downtime. Isolation (groups, instances, data). Content-based security. Explicit control over IMDG instance locations (manual relocation while the system is running). Integration with existing systems.

25 Strategies for adding data awareness to the grid
Stage 2.
Scenario: IMDG instances deployed directly by the Enterprise Grid (without SLA-driven containers). Method of providing data awareness: integration using affinity keys – the Enterprise Grid and the users submitting tasks share special keys that identify the data relevant to each task; in this way the Enterprise Grid can execute tasks on the same machine as the relevant data.
Scenario: SLA-driven containers launched by the Enterprise Grid (each container launches the relevant IMDG instances). Method of providing data awareness: provides data awareness implicitly – data-intensive procedures can run in the SLA-driven container, together (co-located) with the IMDG instances; because the container itself is data aware, data affinity can be guaranteed without making the Enterprise Grid itself data aware.

26 Adding front-office to the grid using Declarative SBA
Stage 3. We addressed data contention, but what about low latency? For that we need the co-location that SBA provides. Manage applications instead of instances: a processing unit in which all services are collocated on the same machine. Transparent data affinity via content-based routing (i.e. hash-based load balancing). Sharing can be done in local memory => the lowest possible latency.
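Content-based routing as mentioned above can be sketched generically (a toy illustration; the Router class is hypothetical, not a GigaSpaces API): hashing the entry's routing field deterministically selects a partition, so every client independently sends entries with the same key to the same node, where the co-located service can process them without remote calls.

```java
// Sketch of hash-based content routing: same routing value -> same
// partition, on every client, with no central coordinator.
public class Router {
    private final int partitions;

    public Router(int partitions) { this.partitions = partitions; }

    // floorMod keeps the result in [0, partitions) even for negative hashes.
    public int partitionOf(Object routingValue) {
        return Math.floorMod(routingValue.hashCode(), partitions);
    }
}
```

This is what the @SpaceRouting annotation shown later accomplishes declaratively: the annotated getter supplies the routing value.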

27 Declarative SBA (cont.)
Stage 3. So what is this "processing unit"? A mini-application which can perform the entire business process: it accepts a user request, performs all steps of the transaction on its own, and provides a result. This removes the need to share state and partial results between different components of the application running on different physical machines.

28 SLA Driven Application Service Container
Stage 3. Provides built-in support for deployment of Spring-based applications. Virtualizes the network and physical resources from the application. Handles fail-over, scaling and relocation policies using SLA-based definitions. Provides distributed dependency injection to handle partial failure and deployment dependencies. Provides a single point of access for monitoring and management.

29 SLA Driven Deployment
Stage 3. SLA: failover policy, scaling policy, system requirements, space cluster topology. PU: service bean definitions.

30 Continuous High Availability
Stage 3. On failure, the backup takes over (fail-over).

31 Dynamic Partitioning = Dynamic Capacity Growth
P – primary, B – backup; GSC = Grid Service Container. Initially three GSCs on VM 1, VM 2 and VM 3 (2G each) host Partitions 1–3 with their backups; later VM 4 and VM 5 (4G each) are added, raising the maximum capacity from 2G per VM toward 6G in total. At some point VM 1's free memory drops below 20% – it is about time to increase capacity – let's move Partition 1 to another GSC and recover its data from the running backup! Later, Partition 2 needs to move… After the move, the data is again recovered from the backup.

32 A closer look at OpenSpaces and Declarative SBA Development

33 Declarative Spring-SBA – How it works.
Step 1: Implement the POJO domain model. Step 2: Implement the POJO services. Step 3: Wire the services through Spring. Step 4: Package and deploy to the Grid (scale-out).

34 The POJO Based Data Domain Model
@SpaceClass
public class Data {
    private String id;
    private Long type;
    private boolean processed;

    @SpaceId(autoGenerate = true)
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    @SpaceRouting
    public Long getType() { return type; }
    public void setType(Long type) { this.type = type; }

    public boolean isProcessed() { return processed; }
    public void setProcessed(boolean processed) { this.processed = processed; }
}
@SpaceClass indicates that this is a space entry; it includes class-level attributes such as FIFO and Persistent. @SpaceId defines the key for the entry. @SpaceRouting sets the data affinity, i.e. defines the partition this entry will be routed to.

35 Order Processor Service Bean
The @SpaceDataEvent annotation marks processData as the method to be called when an event is triggered:
public class DataProcessor implements IDataProcessor {
    @SpaceDataEvent
    public Data processData(Data data) {
        data.setProcessed(true);
        data.setData("PROCESSED : " + data.getRawData());
        // reset the id as we use autoGenerate=true
        data.setId(null);
        System.out.println(" PROCESSED : " + data);
        return data;
    }
}

36 Wiring Order Processor Service Bean through Spring
<bean id="dataProcessor" class="com.gigaspaces.pu.example1.processor.DataProcessor" />
<os-events:polling-container id="dataProcessorPollingEventContainer" giga-space="gigaSpace">
    <os-events:tx-support tx-manager="transactionManager"/>
    <os-core:template>
        <bean class="">
            <property name="processed" value="false"/>
        </bean>
    </os-core:template>
    <os-events:listener>
        <os-events:annotation-adapter>
            <os-events:delegate ref="dataProcessor"/>
        </os-events:annotation-adapter>
    </os-events:listener>
</os-events:polling-container>
The polling event container implicitly calls take with the template defined in the template property and invokes the annotated method on the dataProcessor bean.

37 Direct Data Loader Client
Flow: the Data Loader client writes Data objects through the space proxy; content-based routing sends each write to the right partition of the Space bus; the Polling Event Container takes matching entries, the Order Processor Service Bean processes them and writes the processed orders back; clients receive notify events for processed orders.

38 SpaceServiceExporter SpaceServiceProxyFactoryBean
Space-based remoting flow: the Order Processor Client invokes the Order Proxy (a SpaceServiceProxyFactoryBean), which writes a SpaceInvokeData entry to the Space bus; the OrderProcessor Delegator (SpaceServiceExporter) takes the entry, invokes processData on the Order Processor Service Bean, and writes the result back to the Space.

39 Space Based Remoting – Inherent Scalability/Reliability
Same flow as the previous slide: because each invocation travels through the Space bus (write SpaceInvokeData, take, write result), remoting inherits the scalability and reliability of the underlying clustered space.

40 Looking into the Future… Many Enhancements!
Enhanced performance: built-in InfiniBand support – Voltaire, Cisco. Enhanced database integration: improved Space Mirror support (async persistency). Enhanced partnerships and integration with grid vendors: DataSynapse, Platform Computing, Sun Grid Engine, Microsoft Compute Cluster Server. Enhanced C++ and .Net support: performance optimization – first goal, parity with Java – and support for complex object mapping.

41 Conclusions and Summary
A typical IMDG won't help you – you need a data-aware Enterprise IMDG to solve the data contention and latency challenges. Data affinity needs its twin: data & business logic locality. The Enterprise IMDG co-locates the data with the business logic, using self-sufficient autonomic processing units deployed into SLA-based containers that scale via the Enterprise Grid. The Enterprise IMDG brings the front office into the grid and makes the grid a utility model for a wide spectrum of applications across the organization.

42 Case Studies

43 A Dynamically Scalable Architecture for Data Intensive Trading Analysis Applications
Most financial organizations today use Excel™ or reporting databases as their main trading analysis tools, and these are very difficult to scale. The solution is to create a shared In-Memory Data Grid (IMDG) which stores the trading data in a shared pool of machines; common data calculation and analysis run on that pool as well, leveraging the available memory and CPU resources. JavaSpaces is a powerful model for distributed persistence; GigaSpaces is a JavaSpaces vendor providing enterprise features. Spring hides the details of the JavaSpaces model and allows effort to be focused on requirements rather than frameworks. A shared data grid serves all users, and running analytics close to the data improves performance and leverages the available resources. This is based on work by the BofA MO tools group in London: they use GigaSpaces to move people off Excel as their analytics engine, so they can handle the increase in data volume and get consistent results.

44 Reconciliation Calculation
This is a reconciliation application we did for Goldman. Background: the GS application is a reconciliation application (with many similarities to NASD), which is basically a near-real-time analytics application. Reconciliation in general means that the application runs over the trades that have been executed and checks whether they were executed as expected (the balance); it needs to mark anomalies (trades that don't balance – called breaks). The purpose of this application is to run over a relatively large set of data and generate correlation output between different trades based on certain matching criteria; other applications and users use this data to generate reports. The current system is based on a DB and stored procedures, handling a couple of million trades per day, processed on a post-trade basis (end of day) as a batch operation. Motivation: Capacity – they expect the number of trades to grow to up to 80 million per day. Performance – they would like to run this analytics on an intra-day basis, i.e. within 2 hours (if I recall correctly). These two requirements force them to re-architect their existing system, since it is clear that the current DB approach will not scale and wouldn't meet the performance expectations; the expectation is to handle at least 2000 trades/sec, with trades loaded in bursts. GigaSpaces proposed architecture: the loaders are basically space clients that read data from files or a messaging queue and write it into the grid. The grid consists of a partitioned space topology that takes the role of the IMDG (In-Memory Data Grid). The application registers for notifications on events coming from the loaders. The business logic is collocated in each partition and handles only the data collocated with that partition.
They would like the entire application to be highly available and to maintain a hot fail-over policy. The business logic is currently implemented as an IWorker that runs collocated with each partition.

45 Questions?

46 Thank You!
