1 On the Criteria to Be Used in Decomposing Systems into Modules Group 1: Lisa Anthony, Erik Hayes, Luiza Helena da Silva, and Diana Tetelman

2 Summary
–Modular decomposition should focus more on information hiding than on related functionality
–This formulation should be an early design decision
–Module interfaces should provide only the means of accessing internal information
–Evaluate a decomposition by assessing how it accommodates change
–Hierarchical structure and modular breakdown are independent

3 Compiler Phases/Processes/Data Structures
–Symbol Table: data structure containing a record for each identifier, with fields for the attributes of the identifier.
–Lexical Analyzer: reads the characters in the source program and groups them into a stream of tokens, in which each token represents a logically cohesive sequence of characters, such as an identifier or a keyword. Uses the Symbol Table.
–Syntax Analyzer: imposes a hierarchical structure on the token stream, which can be portrayed with a syntax tree.
–Semantic Analyzer: checks the syntax tree for semantic errors and gathers type information for the subsequent code-generation phase.
–Intermediate Code Generator: generates an intermediate form of the program. This form is very simple but not as low level as assembly code.
–Code Optimizer: optimizes the intermediate form of the program.
–Code Generator: generates the actual target code, consisting of assembly code.
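As a rough illustration of the two central data structures named above, here is a minimal sketch in Java (not from the slides; the names Token, TokenType, and SymbolTable are assumed for illustration, and a real symbol table would store far richer attribute records):

```java
import java.util.HashMap;
import java.util.Map;

// A token produced by the Lexical Analyzer: one logically cohesive
// sequence of characters, e.g. an identifier or a keyword.
enum TokenType { IDENTIFIER, KEYWORD, NUMBER, OPERATOR }

record Token(TokenType type, String lexeme) { }

// The Symbol Table: one record per identifier, with fields for the
// identifier's attributes (reduced here to a single type name).
class SymbolTable {
    private final Map<String, String> attributes = new HashMap<>();

    public void insert(String identifier, String typeName) {
        attributes.put(identifier, typeName);
    }

    public String lookup(String identifier) {
        return attributes.get(identifier);
    }
}
```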

4 Decomposition Criterion: Data Flow (diagram)
–Pipeline of phases: Lexical Analyzer → Syntax Analyzer → Semantic Analyzer → Intermediate Code Generator → Code Optimizer → Code Generator
–Shared support modules: Symbol-table Manager, Error Handler
–Input: Source Program; output: Target Program

5 Decomposition Criterion: Data Hiding (diagram)
–Modules: Lexical Analyzer, Syntax Tree Generator, Semantic Analyzer, Intermediate Code Generator, Code Optimizer, Code Generator, Symbol-table Manager, I/O Manager
–Shared internal representations: Syntax Tree, Intermediate Code

6 Rationale for Data Flow Design
–In the data-flow design, the breakdown into modules is based simply on the flow of data.
–This decomposition criterion creates an architecture very similar to the contemporary pipe-and-filter design pattern.
–The Source Program file is fed directly into the Lexical Analyzer, and the Target Program file is created directly by the Code Generator.
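A minimal sketch of what this pipe-and-filter arrangement might look like, assuming placeholder types and method names that do not appear in the slides (Token, SyntaxTree, IntermediateCode, and trivial phase bodies):

```java
import java.util.List;

// Placeholder types for the data handed from one phase to the next.
record Token(String lexeme) { }
record SyntaxTree(List<Token> tokens) { }
record IntermediateCode(String text) { }

// In the data-flow decomposition each phase is a filter: it consumes the
// previous phase's output and produces the next phase's input, so every
// module depends directly on the representation chosen upstream.
class DataFlowCompiler {

    String compile(String sourceProgram) {
        List<Token> tokens      = lex(sourceProgram);
        SyntaxTree tree         = parse(tokens);
        SyntaxTree checked      = checkSemantics(tree);
        IntermediateCode ir     = generateIntermediate(checked);
        IntermediateCode better = optimize(ir);
        return generateCode(better);           // the target program
    }

    // Trivial stand-in bodies; a real compiler does the actual work here.
    List<Token> lex(String source)                       { return List.of(new Token(source)); }
    SyntaxTree parse(List<Token> tokens)                 { return new SyntaxTree(tokens); }
    SyntaxTree checkSemantics(SyntaxTree tree)           { return tree; }
    IntermediateCode generateIntermediate(SyntaxTree t)  { return new IntermediateCode("ir"); }
    IntermediateCode optimize(IntermediateCode code)     { return code; }
    String generateCode(IntermediateCode code)           { return "assembly for " + code.text(); }
}
```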

7 Rationale for Data Hiding Design
–In this design the criterion used was data hiding, much like what Parnas attempted to demonstrate in his second decomposition of the KWIC program.
–The objects are fairly independent, and each performs its own error handling.
–Functions that use the same data structure were combined under a common object to reinforce the data-hiding theme.
–I/O was combined into a single object (the I/O Manager).
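As a hedged sketch of that grouping rule (the class SyntaxTreeManager and its methods are assumed for illustration, not taken from the slides): all functions that touch the syntax tree live in one object, the tree's representation stays private, and the object does its own error handling.

```java
import java.util.ArrayList;
import java.util.List;

// All functions that operate on the syntax tree are grouped in one object,
// and the tree's internal representation is hidden behind its methods.
class SyntaxTreeManager {

    // The hidden secret: the tree is stored as a flat list of
    // (parent, label) pairs. It could just as well be a linked node
    // structure; callers never find out.
    private record Node(int parent, String label) { }
    private final List<Node> nodes = new ArrayList<>();

    /** Adds a node under the given parent (-1 for the root) and returns its handle. */
    int addNode(int parent, String label) {
        if (parent != -1 && (parent < 0 || parent >= nodes.size())) {
            // The object performs its own error handling.
            throw new IllegalArgumentException("unknown parent node: " + parent);
        }
        nodes.add(new Node(parent, label));
        return nodes.size() - 1;
    }

    String labelOf(int node) {
        return nodes.get(node).label();
    }
}
```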

8 How are they similar? Different?
Similarities:
–Achieve the same end result
–Order of execution of the processes is still predetermined
Differences:
–Tools operate on an internal representation rather than on the text of the program
–Hybrid = repository + pipeline (from lecture)
–The two versions may be identical in execution, but not in changing, documenting, understanding, and maintaining them
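One possible reading of "repository + pipeline" in code, under assumed names (SymbolRepository and HybridCompiler are illustrative, not from the slides or the lecture): the phases still run in a predetermined order, but they coordinate through a shared symbol-table repository rather than passing raw program text downstream.

```java
import java.util.HashMap;
import java.util.Map;

// The repository part: every phase talks to this one object instead of
// depending on the data format chosen by its upstream neighbor.
class SymbolRepository {
    private final Map<String, String> symbols = new HashMap<>();

    void record(String identifier, String attribute) { symbols.put(identifier, attribute); }
    String attributeOf(String identifier)            { return symbols.get(identifier); }
}

// The pipeline part: the order of execution is still predetermined,
// but each phase needs to know only the repository's interface.
class HybridCompiler {
    private final SymbolRepository repository = new SymbolRepository();

    void compile(String sourceProgram) {
        lexicalAnalysis(sourceProgram);   // fills the repository with identifiers
        syntaxAnalysis();                 // builds its own hidden syntax tree
        semanticAnalysis();               // reads attributes back from the repository
        codeGeneration();                 // emits the target program
    }

    // Stand-in bodies; a real compiler does the actual work here.
    private void lexicalAnalysis(String source) { repository.record("x", "int"); }
    private void syntaxAnalysis()               { }
    private void semanticAnalysis()             { repository.attributeOf("x"); }
    private void codeGeneration()               { }
}
```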

9 Design Differences
Pipeline:
–flowchart idea
–each module heavily dependent on the previous one
–outdated
Hybrid:
–information hiding
–each module has its own design decisions hidden from the others
–modularization lends itself to encapsulation/reuse

10 Why is the Hybrid Better? Changeability
Make design decisions based on what is likely to change. The hybrid allows changes to:
–Data representation/abstraction (symbol table mechanism, parse tree); see the sketch below
–Future flexibility in the order of execution?
–Reuse of modules for parsers of other languages
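To make the first point concrete, a sketch under assumed names (the SymbolTable interface and its two implementations are illustrative, not from the slides): because the parse modules see only the Symbol-table Manager's interface, its representation can be swapped without touching them.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// The narrow interface the parse modules program against.
interface SymbolTable {
    void insert(String identifier, String typeName);
    String lookup(String identifier);
}

// One representation choice: an unordered hash table.
class HashSymbolTable implements SymbolTable {
    private final Map<String, String> entries = new HashMap<>();
    public void insert(String identifier, String typeName) { entries.put(identifier, typeName); }
    public String lookup(String identifier)                { return entries.get(identifier); }
}

// A different choice: a sorted tree, perhaps for ordered listings.
// Swapping it in requires no change to any parse module.
class SortedSymbolTable implements SymbolTable {
    private final Map<String, String> entries = new TreeMap<>();
    public void insert(String identifier, String typeName) { entries.put(identifier, typeName); }
    public String lookup(String identifier)                { return entries.get(identifier); }
}
```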

11 Why is the Hybrid Better? Independent Development
Reduce the interfaces between modules to decrease the discussion among developers of different modules about required formatting and function-call conventions.
Hybrid:
–Each module depends only on the symbol table (pipeline: a filter does not know the identity of its upstream/downstream filters)
–The sequence of processing does not affect the development of successful modules

12 Why is the Hybrid Better? Comprehensibility
Modules must be able to stand alone: a new developer or maintainer should not have to learn the entire system to change one piece of code.
The hybrid allows this:
–Effective, minimal interfaces mean that learning the internal representation of the symbol table is not necessary for any parse module
–The symbol table implementation does not depend on the order of execution or on any other module's processing
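A small illustration of that comprehensibility claim, again under an assumed SymbolTable interface (not from the slides): a phase module can be read and maintained knowing only the interface, never the table's internal representation.

```java
// The same narrow interface as in the previous sketch (assumed name).
interface SymbolTable {
    void insert(String identifier, String typeName);
    String lookup(String identifier);
}

// A phase module written purely against that interface: a maintainer of
// this class never needs to learn how the table stores its records.
class SemanticAnalyzer {
    private final SymbolTable symbols;

    SemanticAnalyzer(SymbolTable symbols) {
        this.symbols = symbols;
    }

    /** True if the identifier was declared somewhere in the program. */
    boolean isDeclared(String identifier) {
        return symbols.lookup(identifier) != null;
    }
}
```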

13 Historical Content in Present Context
The paper is 30 years old, but only some details might make this fact apparent:
–Terminology
–Previous concerns
–Past design processes (flowcharts)
–Changing guidelines
–Code reuse (not a major point)

14 Terminology
Parnas uses some terms that are not used anymore, or are used nowadays with different meanings, such as:
–CORE. Then: main memory, general storage space. Now: internal functionality, internals.
–JOB. Then: implied batch processing. Now: ???
–Nowadays we speak of memory in a more abstract way (data structures, etc.). "Memory" was more often understood as referring to physical storage (addresses, records…)

15 Previous Concerns
Parnas mentions as "major advancements in the area of modular programming":
–The development of ASSEMBLERS
Nowadays, we could mention higher-level languages, mainly object-oriented languages, that better "(1) allow one module to be written with little knowledge of the code in another module, and (2) allow modules to be reassembled and replaced without reassembly of the whole system".

16 Past Design Processes
–Use of flowcharts: when the paper was written, drawing a flowchart before programming was almost mandatory. With a flowchart in hand, the programmer would "move from there to a detailed implementation." This caused modularizations like the first one to be created.
–Parnas could see the problem with this approach and condemned it: a flowchart works well enough for a small system, but not for a larger one.

17 Changing Guidelines
–"The sequencing of instructions necessary to call a given routine and the routine itself are part of the same module." This pertains to the worries of programmers at the time, who were writing in assembly and other low-level languages; concerns such as code optimization were very important and involved creating smaller sets of machine instructions for a given task.
–"The sequence in which certain items will be processed should be hidden within a single module." This has become irrelevant most of the time.

18 Code Reuse
–Parnas does not emphasize code reuse much in this paper. The reason might be the nature of programs written in assembly and other low-level languages (not very portable or reusable).
–If Parnas were to revise the paper today, reuse would certainly be a point he would emphasize more.
–It is important to note that these points do not diminish the relevance of Parnas' ideas for us nowadays.

19 Effects on Current Programming
"Fathered" key ideas of OOP:
–Information hiding
–Encapsulation before functional relations
–Easier understandability/maintainability
Design more important than implementation:
–Good design leads to good implementation
–Proper design allows for different implementations (easily modifiable)

20 …Effects Continued
Separation of hierarchy and modularization:
–Hierarchy allows functional layers
–Modules do not have to be layers in order to be placed in a hierarchy
Evolution of more complex and capable systems

