
Slide 1: Optimizing Membrane System Implementation with Multisets and Evolution Rules Compression
Abraham Gutiérrez Rodríguez, Natural Computing Group, Universidad Politécnica de Madrid
8th Workshop on Membrane Computing

Slide 2: Introduction
"the next generation of simulators may be oriented to solve (at least partially) the problems of information storage and massive parallelism by using parallel language programming or by using multiprocessor computers"
G. Ciobanu, Gh. Păun and M. Pérez-Jiménez

Slide 3: Goals
Present an algorithm for compressing the information of the multisets and evolution rules stored in membranes:
- without penalizing evolution rules application and communication times with complex processes;
- while keeping the same degree of parallelism obtained in P-system implementations over distributed architectures.

Slide 4: Distributed Architectures: Parallel Application / Parallel Communication
[Figure: each membrane, holding a multiset a^x b^y c^z and evolution rules such as a → c and bc^2 → (d, in_2), is mapped to its own processor; all n processors are connected by a data bus.]

Slide 5: Distributed Architectures: Parallel Application / Parallel Communication
Time of an evolution step: T = T_apl + T_com
- T_apl is the time used by the slowest membrane in applying its rules
- T_com is the time used by the slowest membrane for communication
UNFEASIBLE! Nowadays, no technology exists that permits M (→ ∞) communication lines per processor.

Slide 6: Distributed Architectures: Parallel Application / Sequential Communication
[Figure: each membrane is mapped to its own processor; the processors share the data bus through a communication interface, following the parent-child membrane relationship.]

Slide 7: Distributed Architectures: Parallel Application / Sequential Communication
Implementations with a cluster of PCs:
- Message Passing Interface (MPI), Ciobanu
- Java Remote Method Invocation (RMI), Syropoulos
Time of an evolution step: T = T_apl + 2·(M-1)·T_com
UNFEASIBLE! Network congestion. Ciobanu: "the response time of the program has been acceptable. There are however executions that could take a rather long time due to unexpected network congestion"
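The two time models above can be sketched as small functions. This is a minimal sketch, assuming per-membrane application and communication times as inputs; the numeric values below are illustrative assumptions, not measured data from the slides.

```python
# Sketch of the evolution-step time models from the two architectures.
# T_apl and T_com are determined by the slowest membrane in each phase.

def step_time_parallel_comm(t_apl, t_com):
    """Fully parallel application and communication: T = T_apl + T_com."""
    return max(t_apl) + max(t_com)

def step_time_sequential_comm(t_apl, t_com, m):
    """Parallel application, sequential communication over the shared bus:
    T = T_apl + 2*(M-1)*T_com, with M membranes."""
    return max(t_apl) + 2 * (m - 1) * max(t_com)

t_apl = [3.0, 5.0, 4.0]   # per-membrane rule-application times (assumed)
t_com = [1.0, 2.0, 1.5]   # per-membrane communication times (assumed)

print(step_time_parallel_comm(t_apl, t_com))        # 5.0 + 2.0 = 7.0
print(step_time_sequential_comm(t_apl, t_com, 3))   # 5.0 + 2*(3-1)*2.0 = 13.0
```

The sketch makes the congestion argument visible: the sequential-communication term grows linearly with the number of membranes M, while the parallel model does not.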

Slide 8: Distributed Architectures: Application and Communication Partially Parallel
[Figure: several membranes are placed on each processor; a proxy per processor handles external communication over the bus, while internal and virtual communication stay within the processor.]

Slide 9: Distributed Architectures: Application and Communication Partially Parallel
The minimum evolution time is obtained with the optimum number of membranes per processor.
FEASIBLE! Evolution step times are acceptable and costs are moderate. This scheme permits a certain level of parallelism both in the rule-application phase and in communication.

Slide 10: Distributed Architectures
To reach minimum times over distributed architectures, there should be a balance between the time dedicated to evolution rules application and the time used for communication among membranes.

Slide 11: Implementation Technologies Context
The distributed architectures of Tejedor and Bravo are independent of any specific technology. Thus, both in specific hardware implementations (FPGAs and microcontrollers) and in solutions based on clusters of microprocessors, the amount of information that has to be stored and transmitted is critical:
- in the first case, the main problem is the low storage capacity;
- in the second case, the main problem is the bottleneck in processor communication.

Slide 12: Compression Requirements
1. there should be no information loss;
2. it should use the lowest amount of space for storage and transmission;
3. it should not penalize the rule-application phase or communication among membranes while processing compressed information.

Slide 13: Compression Requirements
This means that the compression schema should:
a) encode information for direct manipulation in both phases, without having to use encoding/decoding processes;
b) perform the compression in a stage prior to P-system evolution;
c) therefore abandon the entropy limit, in order to maintain the parallelism level and evolution time reached in previous research works.

Slide 14: Compression Schema
The proposed compression schema is presented here in three consecutive steps, using the following P-system, which generates n², n > 1 [Gh. Păun, 2000].
[Figure: membrane structure with membranes M1, M2, M3 and M4.]

Slide 15: Compression Schema: Step 0, Parikh's Vector over the P-System Alphabet
[Figure: membrane M1 encoded over the full P-system alphabet, e.g. w1 = 0000 over a, b, c, f.]

Slide 16: Compression Schema: Step 0, Parikh's Vector over the P-System Alphabet
[Figure: membrane M2 encoded over the full alphabet: the multiset w2 and each evolution rule r1..r4 are stored as multiplicity vectors, e.g. r1 = 0000 → 0100, r2 = 0100 → 0100 (in_4), r3 = 0002 → 1001 (here), r4 = 0001 → 1000, δ (here), with priority r3 > r4.]

Slide 17: Compression Schema: Step 0, Parikh's Vector over the P-System Alphabet
[Figure: the same encoding applied to membrane M3.]

Slide 18: Compression Schema: Step 0, Parikh's Vector over the P-System Alphabet
[Figure: membrane M4 encoded over the full alphabet, w4 = 0000.]
This codification requires 95 storage units for the multiplicities present in the multisets and the evolution rules.
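The Step 0 encoding can be sketched as follows. This is a minimal illustration, assuming a multiset is represented as a string of object symbols; the four-symbol alphabet is an assumption for the example, not the deck's exact alphabet.

```python
from collections import Counter

# Illustrative alphabet (assumed); the real P-system's alphabet is fixed
# once for the whole system in Step 0.
ALPHABET = ('a', 'b', 'c', 'f')

def parikh_vector(multiset, alphabet=ALPHABET):
    """Encode a multiset as one multiplicity per alphabet symbol."""
    counts = Counter(multiset)
    return [counts[s] for s in alphabet]

print(parikh_vector('aff'))   # [1, 0, 0, 2]
print(parikh_vector('bcc'))   # [0, 1, 2, 0]
```

Each multiset, and each side of each rule, costs one storage unit per alphabet symbol, which is how the 95-unit total for the example arises.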

Slide 19: Compression Schema: Step 1, Parikh's Vector over Each Membrane Alphabet
Considers only the subset of the P-system alphabet that may occur in each region of the membrane system. This subset may be calculated by a static analysis, prior to P-system evolution.
[Figure: membrane M1 encoded over its own alphabet, e.g. w1 = 000 over a, b, f.]

Slide 20: Compression Schema: Step 1, Parikh's Vector over Each Membrane Alphabet
[Figure: the same encoding applied to membrane M2.]

Slide 21: Compression Schema: Step 1, Parikh's Vector over Each Membrane Alphabet
[Figure: the same encoding applied to membrane M3.]

Slide 22: Compression Schema: Step 1, Parikh's Vector over Each Membrane Alphabet
[Figure: membrane M4 encoded over its own alphabet, w4 = 0 over c.]
This codification requires 63 storage units for the multiplicities present in the multisets and the evolution rules.
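The static analysis behind Step 1 can be sketched as below. The representation of rules as (lhs, rhs) symbol strings is a hypothetical choice for illustration; the slides do not prescribe a data structure.

```python
def membrane_alphabet(rules, initial_multiset):
    """Static analysis (prior to evolution): collect only the symbols that
    can ever occur in a region, i.e. its initial multiset plus the left-
    and right-hand sides of its evolution rules."""
    symbols = set(initial_multiset)
    for lhs, rhs in rules:             # rules as (lhs, rhs) symbol strings
        symbols |= set(lhs) | set(rhs)
    return tuple(sorted(symbols))

# Hypothetical region: starts with 'ab', rules a -> cf and b -> a
print(membrane_alphabet([('a', 'cf'), ('b', 'a')], 'ab'))  # ('a', 'b', 'c', 'f')
# A region like M4 on the slide, holding only copies of 'c' and no rules:
print(membrane_alphabet([], 'c'))                          # ('c',)
```

Shorter per-membrane alphabets shrink every Parikh vector in that region, which is what takes the example from 95 down to 63 storage units.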

Slide 23: Compression Schema: Step 2, Parikh's Vector without Null Values
An alteration of the Run-Length Encoding (RLE) algorithm. The goal is to eliminate all the null values in Parikh's vector.
[Figure: membrane M1, w1 = 000 over a, b, f.]

Slide 24: Compression Schema: Step 2, Parikh's Vector without Null Values
[Figure: the null-free encoding applied to membrane M2.]

Slide 25: Compression Schema: Step 2, Parikh's Vector without Null Values
[Figure: membrane M3 over its own alphabet (a, f): w3 = 11 and rules such as r1 = 1 → 1 (here), r2 = 1 → δ, r3 = 1 → 2 (here), stored without null entries.]

Slide 26: Compression Schema: Step 2, Parikh's Vector without Null Values
[Figure: membrane M4, w4 = 0 over c.]
Requires 46 storage units for the multiplicities present in the multisets and the evolution rules. This codification reduces the information size to 51.1% of the initial Parikh's vector codification.
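One possible reading of the Step 2 encoding is to keep only the non-null entries of each Parikh vector as (index, multiplicity) pairs. The exact pair layout is an assumption, but the sketch shows the property that matters: the scheme is lossless, as requirement 1 demands.

```python
def compress_parikh(vector):
    """Step 2 sketch: drop all null values, keeping (symbol_index,
    multiplicity) pairs for the non-null entries only."""
    return [(i, m) for i, m in enumerate(vector) if m != 0]

def decompress_parikh(pairs, size):
    """Rebuild the full Parikh vector; no information is lost."""
    vector = [0] * size
    for i, m in pairs:
        vector[i] = m
    return vector

v = [0, 3, 0, 1]
pairs = compress_parikh(v)
print(pairs)                               # [(1, 3), (3, 1)]
print(decompress_parikh(pairs, len(v)))    # [0, 3, 0, 1]
```

Note that decompression is only needed here to show losslessness; per requirement a) on slide 13, the evolution phases operate on the compressed form directly.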

Slide 27: Compression Schema: Step 3, Storage Unit Compression
Depending on the storage unit size (measured in bits), we will be able to codify a greater or smaller range of values.
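A small sketch of the Step 3 trade-off, assuming each storage unit must be wide enough to hold the largest multiplicity that can appear in the system:

```python
def unit_bits(max_multiplicity):
    """Smallest storage-unit width (in bits) that can codify every
    multiplicity in the range 0..max_multiplicity."""
    return max(1, max_multiplicity.bit_length())

print(unit_bits(1))     # 1 bit covers multiplicities 0..1
print(unit_bits(255))   # 8 bits cover 0..255
print(unit_bits(1000))  # 10 bits cover 0..1023
```

A narrower unit codifies a smaller range of values but lowers the total cost of the fixed number of units counted in Steps 0-2, which is why the overall compression degree on slide 33 depends on this bit size.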

Slide 28: Analysis of Results: Compression Schema Analysis
- Attenuates the storage problem.
- Is not penalized by compression/decompression processes.

Slide 29: Analysis of Results: Impact on Evolution Rules Application Time
- Primitive operations will decrease their execution time to approximately 26.7% of the original.
- Evolution rules application time will be approximately 3.75 times faster.

Slide 30: Analysis of Results: Impact on Communication Time among Membranes
- A reduction to 55.6% of the information to transmit among membranes may be reached in the worst case.
- Communication time among membranes will be approximately 1.80 times faster.

Slide 31: Analysis of Results: Impact on Distributed Architecture Parameters
According to the previous empirical data, we get:
- T_apl 3.75 times faster: an increment of 93.5% in K_opt, a reduction to 51.6% of P_opt
- T_com 1.80 times faster: an increment of 32.4% in P_opt, a reduction to 74.5% of K_opt

Slide 32: Analysis of Results: Impact on Distributed Architecture Parameters
Taking into account the previous analysis:
- a reduction of 69.3% in P_opt
- an increment of 44.3% in K_opt
- a reduction to 38.5% of T_min

Slide 33: Conclusions
The compression schema presented:
- reaches degrees of compression varying from 51.1% to 18.1%, depending on the size in bits needed to store object multiplicities;
- does not penalize evolution rule application or communication times during P-system evolution;
- does not require compression/decompression processes during P-system evolution (static analysis).

Slide 34: Optimizing Membrane System Implementation with Multisets and Evolution Rules Compression
Abraham Gutiérrez Rodríguez, Natural Computing Group, Universidad Politécnica de Madrid