Presentation transcript: "Distributed Computing"

1 Distributed Computing Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime. In distributed computing, a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but "parallel computing" is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers.
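To make the splitting idea concrete, here is a minimal sketch in C of a program divided into parts that run on several machines, written against MPI (the message-passing library named on slide 8). Each process sums its own share of a range and the partial results are combined over the network; the range and process count are illustrative choices, not anything prescribed by the slides.

/* sum_parts.c: a minimal sketch of a distributed sum using MPI.
 * Each process computes a partial sum over its own share of the range,
 * then the parts are combined with MPI_Reduce across the network.
 * Typical build/run: mpicc sum_parts.c && mpirun -np 4 ./a.out
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    /* Split the range 1..1000000 into `size` contiguous chunks. */
    long n = 1000000, chunk = n / size;
    long lo = rank * chunk + 1;
    long hi = (rank == size - 1) ? n : lo + chunk - 1;

    long long part = 0, total = 0;
    for (long i = lo; i <= hi; i++) part += i;

    /* Combine the partial sums on process 0. */
    MPI_Reduce(&part, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %lld\n", total);

    MPI_Finalize();
    return 0;
}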

2 Distributed Computing: Goals and advantages There are many different types of distributed computing systems and many challenges to overcome in successfully designing one. The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally this arrangement is drastically more fault tolerant and more powerful than many combinations of stand-alone computer systems.

3 Form of Distributed Computing: Grid Computing Grid computing (or the use of a computational grid) is the application of several computers to a single problem at the same time, usually a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data. Grid computing depends on software to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing can also be thought of as distributed and large-scale cluster computing, as well as a form of network-distributed parallel processing. It can be small (confined to a network of computer workstations within a corporation, for example) or it can be a large, public collaboration across many companies or networks.

4 More on Grid Computing It is a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers, acting in concert to perform very large tasks. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and Web services. What distinguishes grid computing from conventional cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a computing grid may be dedicated to a specialized application, it is often constructed with the aid of general-purpose grid software libraries and middleware.

5 Parallel Computing Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing.

6 Parallel Computing Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks. Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically among the greatest barriers to good parallel program performance. The speed-up of a program as a result of parallelization is given by Amdahl's law, worked through below.
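Stated concretely, in LaTeX notation: if a fraction P of a program's running time can be parallelized and the rest is serial, then on N processors Amdahl's law bounds the speed-up at

S(N) = \frac{1}{(1 - P) + \frac{P}{N}}

For example, with P = 0.95 and N = 8 (illustrative figures), S = 1 / (0.05 + 0.95/8) ≈ 5.9; and no matter how many processors are added, the speed-up can never exceed 1 / (1 - P) = 20.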

7 Fine-grained, coarse-grained, and embarrassing parallelism Applications are often classified according to how often their subtasks need to synchronize or communicate with each other. An application exhibits fine-grained parallelism if its subtasks must communicate many times per second; it exhibits coarse-grained parallelism if they do not communicate many times per second; and it is embarrassingly parallel if they rarely or never have to communicate. Embarrassingly parallel applications are considered the easiest to parallelize.
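As a minimal sketch of an embarrassingly parallel job, the following C program (using POSIX threads; the array size and thread count are illustrative) squares the elements of an array. Each thread works on its own slice and never communicates with the others, which is exactly what makes the problem trivial to parallelize.

/* square_slices.c: a small illustration of embarrassing parallelism.
 * Build on POSIX systems: cc square_slices.c -lpthread
 */
#include <stdio.h>
#include <pthread.h>

#define N 8          /* array length (illustrative) */
#define THREADS 4    /* worker count (illustrative) */

static double data[N];

struct slice { int lo, hi; };

static void *square_slice(void *arg) {
    struct slice *s = arg;
    for (int i = s->lo; i < s->hi; i++)
        data[i] = data[i] * data[i];   /* independent of every other element */
    return NULL;
}

int main(void) {
    pthread_t tid[THREADS];
    struct slice slices[THREADS];
    for (int i = 0; i < N; i++) data[i] = i + 1;

    int chunk = N / THREADS;
    for (int t = 0; t < THREADS; t++) {
        slices[t].lo = t * chunk;
        slices[t].hi = (t == THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, square_slice, &slices[t]);
    }
    for (int t = 0; t < THREADS; t++) pthread_join(tid[t], NULL);

    for (int i = 0; i < N; i++) printf("%g ", data[i]);
    printf("\n");
    return 0;
}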

8 Hardware and Software for Distributed or Parallel Computing
Hardware:
- High-performance computers: Beowulf cluster computers, proprietary cluster computers
- Workstations with a LAN, personal computers
Software:
- Globus for Grid middleware
- MPI: a library for parallel programming on distributed memory
- Java, C language
- LINDA: a parallel processing model
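Since MPI appears in the list above as the library for distributed-memory programming, here is a minimal sketch of its point-to-point message-passing style in C: process 0 sends one integer to process 1 over the network. The payload value is illustrative.

/* ping.c: a minimal point-to-point MPI sketch.
 * Run with at least two processes: mpirun -np 2 ./a.out
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int msg = 42;                              /* illustrative payload */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", msg);
    }

    MPI_Finalize();
    return 0;
}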

9 Additional Notes

10 Pipeline In computing, a pipeline is a set of data processing elements connected in series, so that the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements. Computer-related pipelines include:
- Instruction pipelines, such as the classic RISC pipeline, used in processors to allow overlapping execution of multiple instructions with the same circuitry. The circuitry is usually divided into stages, including instruction decoding, arithmetic, and register fetching stages, wherein each stage processes one instruction at a time.
- Graphics pipelines, found in most graphics cards, which consist of multiple arithmetic units, or complete CPUs, that implement the various stages of common rendering operations (perspective projection, window clipping, color and light calculation, rendering, etc.).
- Software pipelines, consisting of multiple processes arranged so that the output stream of one process is automatically and promptly fed as the input stream of the next one. Unix pipelines are the classical implementation of this concept.
(Figure: instruction scheduling on the Intel Pentium 4.)
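As a small illustration of the software-pipeline idea, the following C sketch (assuming a POSIX system) builds a two-stage pipeline by hand with pipe() and fork(): one process produces numbers, the other consumes and transforms them, and the pipe itself acts as the buffer storage between the stages. Both stages run concurrently, just as the slide describes.

/* two_stage.c: a two-stage software pipeline using POSIX pipe()/fork().
 * Stage 1 (child) writes the numbers 1..5; stage 2 (parent) reads each
 * one and prints its double.
 * Build on POSIX systems: cc two_stage.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                 /* stage 1: producer */
        close(fd[0]);
        for (int i = 1; i <= 5; i++)
            write(fd[1], &i, sizeof i);
        close(fd[1]);
        exit(0);
    }

    /* stage 2: consumer, runs concurrently with the producer */
    close(fd[1]);
    int n;
    while (read(fd[0], &n, sizeof n) == sizeof n)
        printf("doubled: %d\n", 2 * n);
    close(fd[0]);
    return 0;
}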

