1
Chapter 1 Preliminaries
2
Chapter 1 Topics Reasons for Studying Concepts of Programming Languages Programming Domains Language Evaluation Criteria Influences on Language Design Language Categories Language Design Trade-Offs Implementation Methods Programming Environments Copyright © 2009 Addison-Wesley. All rights reserved.
3
Reasons for Studying Concepts of Programming Languages
Increased ability to express ideas Improved background for choosing appropriate languages Increased ability to learn new languages Better understanding of significance of implementation Better use of languages that are already known Overall advancement of computing Copyright © 2009 Addison-Wesley. All rights reserved.
4
Programming Domains Scientific applications Business applications
Large numbers of floating point computations; use of arrays Fortran Business applications Produce reports, use decimal numbers and characters COBOL Artificial intelligence Symbols rather than numbers manipulated; use of linked lists LISP Systems programming Need efficiency because of continuous use C Web Software Eclectic collection of languages: markup (e.g., XHTML), scripting (e.g., PHP), general-purpose (e.g., Java) Copyright © 2009 Addison-Wesley. All rights reserved.
5
Language Evaluation Criteria
Readability: the ease with which programs can be read and understood Writability: the ease with which a language can be used to create programs Reliability: conformance to specifications (i.e., performs to its specifications) Cost: the ultimate total cost Copyright © 2009 Addison-Wesley. All rights reserved.
6
Evaluation Criteria: Readability
Overall simplicity A manageable set of features and constructs Minimal feature multiplicity Minimal operator overloading Orthogonality A relatively small set of primitive constructs can be combined in a relatively small number of ways Every possible combination is legal Data types Adequate predefined data types Syntax considerations Identifier forms: flexible composition Special words and methods of forming compound statements Form and meaning: self-descriptive constructs, meaningful keywords Copyright © 2009 Addison-Wesley. All rights reserved.
7
Evaluation Criteria: Writability
Simplicity and orthogonality Few constructs, a small number of primitives, a small set of rules for combining them Support for abstraction The ability to define and use complex structures or operations in ways that allow details to be ignored Expressivity A set of relatively convenient ways of specifying operations Strength and number of operators and predefined functions Copyright © 2009 Addison-Wesley. All rights reserved.
8
Evaluation Criteria: Reliability
Type checking Testing for type errors Exception handling Intercept run-time errors and take corrective measures Aliasing Presence of two or more distinct referencing methods for the same memory location Readability and writability A language that does not support “natural” ways of expressing an algorithm will require the use of “unnatural” approaches, and hence reduced reliability Copyright © 2009 Addison-Wesley. All rights reserved.
9
Evaluation Criteria: Cost
Training programmers to use the language Writing programs (closeness to particular applications) Compiling programs Executing programs Language implementation system: availability of free compilers Reliability: poor reliability leads to high costs Maintaining programs Copyright © 2009 Addison-Wesley. All rights reserved.
10
Evaluation Criteria: Others
Portability The ease with which programs can be moved from one implementation to another Generality The applicability to a wide range of applications Well-definedness The completeness and precision of the language’s official definition Copyright © 2009 Addison-Wesley. All rights reserved.
11
Influences on Language Design
Computer Architecture Languages are developed around the prevalent computer architecture, known as the von Neumann architecture Programming Methodologies New software development methodologies (e.g., object-oriented software development) led to new programming paradigms and by extension, new programming languages Copyright © 2009 Addison-Wesley. All rights reserved.
12
Computer Architecture Influence
Well-known computer architecture: Von Neumann Imperative languages, most dominant, because of von Neumann computers Data and programs stored in memory Memory is separate from CPU Instructions and data are piped from memory to CPU Basis for imperative languages Variables model memory cells Assignment statements model piping Iteration is efficient Copyright © 2009 Addison-Wesley. All rights reserved.
13
The von Neumann Architecture
Copyright © 2009 Addison-Wesley. All rights reserved.
14
The von Neumann Architecture
Fetch-execute cycle (on a von Neumann architecture computer):
initialize the program counter
repeat forever
  fetch the instruction pointed to by the counter
  increment the counter
  decode the instruction
  execute the instruction
end repeat
Copyright © 2009 Addison-Wesley. All rights reserved.
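The cycle can be made concrete with a small simulation. Below is a minimal C sketch of a toy von Neumann machine; the one-address instruction format, the opcodes, and the fixed memory sizes are invented purely for illustration and are not part of the slides.

/* Toy von Neumann machine: program and data both live in memory. */
#include <stdio.h>

enum { HALT, LOAD, ADD, STORE };            /* hypothetical opcodes */
typedef struct { int op, addr; } Instr;     /* one-address instruction */

int main(void) {
    Instr program[] = { {LOAD, 0}, {ADD, 1}, {STORE, 2}, {HALT, 0} };
    int data[3] = { 2, 3, 0 };
    int acc = 0;                            /* accumulator register */
    int pc = 0;                             /* initialize the program counter */

    for (;;) {                              /* repeat forever */
        Instr ir = program[pc];             /* fetch the instruction pointed to by the counter */
        pc++;                               /* increment the counter */
        switch (ir.op) {                    /* decode and execute the instruction */
            case LOAD:  acc = data[ir.addr];  break;
            case ADD:   acc += data[ir.addr]; break;
            case STORE: data[ir.addr] = acc;  break;
            case HALT:  printf("result = %d\n", data[2]); return 0;
        }
    }
}

Running the sketch stores 2 + 3 into data[2] and prints "result = 5"; the point is only to show the fetch, increment, decode, execute steps as code.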
15
Programming Methodologies Influences
1950s and early 1960s: Simple applications; worry about machine efficiency Late 1960s: People efficiency became important; readability, better control structures structured programming top-down design and step-wise refinement Late 1970s: Process-oriented to data-oriented data abstraction Middle 1980s: Object-oriented programming Data abstraction + inheritance + polymorphism Copyright © 2009 Addison-Wesley. All rights reserved.
16
Language Categories Imperative Functional Logic
Central features are variables, assignment statements, and iteration Include languages that support object-oriented programming Include scripting languages Include the visual languages Examples: C, Java, Perl, JavaScript, Visual BASIC .NET, C++ Functional Main means of making computations is by applying functions to given parameters Examples: LISP, Scheme Logic Rule-based (rules are specified in no particular order) Example: Prolog Markup/programming hybrid Markup languages extended to support some programming Examples: JSTL, XSLT Copyright © 2009 Addison-Wesley. All rights reserved.
17
Language Design Trade-Offs
Reliability vs. cost of execution Example: Java demands all references to array elements be checked for proper indexing, which leads to increased execution costs Readability vs. writability Example: APL provides many powerful operators (and a large number of new symbols), allowing complex computations to be written in a compact program but at the cost of poor readability Writability (flexibility) vs. reliability Example: C++ pointers are powerful and very flexible but are unreliable Copyright © 2009 Addison-Wesley. All rights reserved.
18
Implementation Methods
Compilation Programs are translated into machine language Pure Interpretation Programs are interpreted by another program known as an interpreter Hybrid Implementation Systems A compromise between compilers and pure interpreters Copyright © 2009 Addison-Wesley. All rights reserved.
19
Layered View of Computer
The operating system and language implementation are layered over machine interface of a computer Copyright © 2009 Addison-Wesley. All rights reserved.
20
Compilation Translate high-level program (source language) into machine code (machine language) Slow translation, fast execution Compilation process has several phases: lexical analysis: converts characters in the source program into lexical units syntax analysis: transforms lexical units into parse trees which represent the syntactic structure of the program semantic analysis: generates intermediate code code generation: machine code is generated Copyright © 2009 Addison-Wesley. All rights reserved.
21
The Compilation Process
Copyright © 2009 Addison-Wesley. All rights reserved.
22
Additional Compilation Terminologies
Load module (executable image): the user and system code together Linking and loading: the process of collecting system program units and linking them to a user program Copyright © 2009 Addison-Wesley. All rights reserved.
23
Von Neumann Bottleneck
Connection speed between a computer’s memory and its processor determines the speed of a computer Program instructions often can be executed much faster than the speed of the connection; the connection speed thus results in a bottleneck Known as the von Neumann bottleneck; it is the primary limiting factor in the speed of computers Copyright © 2009 Addison-Wesley. All rights reserved.
24
Pure Interpretation No translation
Easier implementation of programs (run-time errors can easily and immediately be displayed) Slower execution (10 to 100 times slower than compiled programs) Often requires more space Now rare for traditional high-level languages Significant comeback with some Web scripting languages (e.g., JavaScript, PHP) Copyright © 2009 Addison-Wesley. All rights reserved.
25
Pure Interpretation Process
Copyright © 2009 Addison-Wesley. All rights reserved.
26
Hybrid Implementation Systems
A compromise between compilers and pure interpreters A high-level language program is translated to an intermediate language that allows easy interpretation Faster than pure interpretation Examples Perl programs are partially compiled to detect errors before interpretation Initial implementations of Java were hybrid; the intermediate form, byte code, provides portability to any machine that has a byte code interpreter and a run-time system (together, these are called Java Virtual Machine) Copyright © 2009 Addison-Wesley. All rights reserved.
27
Hybrid Implementation Process
Copyright © 2009 Addison-Wesley. All rights reserved.
28
Just-in-Time Implementation Systems
Initially translate programs to an intermediate language Then compile the intermediate language of the subprograms into machine code when they are called Machine code version is kept for subsequent calls JIT systems are widely used for Java programs .NET languages are implemented with a JIT system Copyright © 2009 Addison-Wesley. All rights reserved.
29
Preprocessors Preprocessor macros (instructions) are commonly used to specify that code from another file is to be included A preprocessor processes a program immediately before the program is compiled to expand embedded preprocessor macros A well-known example: C preprocessor expands #include, #define, and similar macros Copyright © 2009 Addison-Wesley. All rights reserved.
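For concreteness, here is a small, hypothetical C file showing the kinds of macros the C preprocessor expands; the file name, macro names, and values are made up for illustration.

/* example.c -- the macro names and values here are invented for illustration */
#include <stdio.h>              /* the preprocessor splices the header's text in here */

#define MAX_ITEMS 100           /* simple textual substitution */
#define SQUARE(x) ((x) * (x))   /* parameterized macro, expanded before compilation */

int main(void) {
    int n = 7;
    /* After preprocessing, the next line reads:
       printf("%d of %d\n", ((n) * (n)), 100);  */
    printf("%d of %d\n", SQUARE(n), MAX_ITEMS);
    return 0;
}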
30
Programming Environments
A collection of tools used in software development UNIX An older operating system and tool collection Nowadays often used through a GUI (e.g., CDE, KDE, or GNOME) that runs on top of UNIX Microsoft Visual Studio.NET A large, complex visual environment Used to build Web applications and non-Web applications in any .NET language NetBeans Similar to Visual Studio .NET, but for Web applications in Java Copyright © 2009 Addison-Wesley. All rights reserved.
31
Summary The study of programming languages is valuable for a number of reasons: Increase our capacity to use different constructs Enable us to choose languages more intelligently Make learning new languages easier Most important criteria for evaluating programming languages include: Readability, writability, reliability, cost Major influences on language design have been machine architecture and software development methodologies The major methods of implementing programming languages are: compilation, pure interpretation, and hybrid implementation Copyright © 2009 Addison-Wesley. All rights reserved.
32
Evolution of the Major Programming Languages
Chapter 2 Evolution of the Major Programming Languages
33
Chapter 2 Topics Zuse’s Plankalkül
Minimal Hardware Programming: Pseudocodes The IBM 704 and Fortran Functional Programming: LISP The First Step Toward Sophistication: ALGOL 60 Computerizing Business Records: COBOL The Beginnings of Timesharing: BASIC Copyright © 2009 Addison-Wesley. All rights reserved.
34
Chapter 2 Topics (continued)
Everything for Everybody: PL/I Two Early Dynamic Languages: APL and SNOBOL The Beginnings of Data Abstraction: SIMULA 67 Orthogonal Design: ALGOL 68 Some Early Descendants of the ALGOLs Programming Based on Logic: Prolog History's Largest Design Effort: Ada Copyright © 2009 Addison-Wesley. All rights reserved.
35
Chapter 2 Topics (continued)
Object-Oriented Programming: Smalltalk Combining Imperative and Object-Oriented Features: C++ An Imperative-Based Object-Oriented Language: Java Scripting Languages A C-Based Language for the New Millennium: C# Markup/Programming Hybrid Languages Copyright © 2009 Addison-Wesley. All rights reserved.
36
Genealogy of Common Languages
Copyright © 2009 Addison-Wesley. All rights reserved.
37
Zuse’s Plankalkül Designed in 1945, but not published until 1972
Never implemented Advanced data structures floating point, arrays, records Invariants Copyright © 2009 Addison-Wesley. All rights reserved.
38
Plankalkül Syntax An assignment statement to assign the expression A[4] + 1 to A[5]:
   | A + 1 => A
V  | 4       5        (subscripts)
S  | 1.n     1.n      (data types)
Copyright © 2009 Addison-Wesley. All rights reserved.
39
Minimal Hardware Programming: Pseudocodes
What was wrong with using machine code? Poor readability Poor modifiability Expression coding was tedious Machine deficiencies--no indexing or floating point Copyright © 2009 Addison-Wesley. All rights reserved.
40
Pseudocodes: Short Code
Short Code developed by Mauchly in 1949 for BINAC computers Expressions were coded, left to right Example of operations:
01 –   06 abs value   1n (n+2)nd power
02 )   07 +           2n (n+2)nd root
03 =   08 pause       4n if <= n
04 /   09 (           58 print and tab
Copyright © 2009 Addison-Wesley. All rights reserved.
41
Pseudocodes: Speedcoding
Speedcoding developed by Backus in 1954 for IBM 701 Pseudo ops for arithmetic and math functions Conditional and unconditional branching Auto-increment registers for array access Slow! Only 700 words left for user program Copyright © 2009 Addison-Wesley. All rights reserved.
42
Pseudocodes: Related Systems
The UNIVAC Compiling System Developed by a team led by Grace Hopper Pseudocode expanded into machine code David J. Wheeler (Cambridge University) developed a method of using blocks of re-locatable addresses to solve the problem of absolute addressing Copyright © 2009 Addison-Wesley. All rights reserved.
43
IBM 704 and Fortran Fortran 0: 1954 - not implemented Fortran I:1957
Designed for the new IBM 704, which had index registers and floating point hardware - This led to the idea of compiled programming languages, because there was no place to hide the cost of interpretation (no floating-point software) Environment of development Computers were small and unreliable Applications were scientific No programming methodology or tools Machine efficiency was the most important concern Copyright © 2009 Addison-Wesley. All rights reserved.
44
Design Process of Fortran
Impact of environment on design of Fortran I No need for dynamic storage Need good array handling and counting loops No string handling, decimal arithmetic, or powerful input/output (for business software) Copyright © 2009 Addison-Wesley. All rights reserved.
45
Fortran I Overview First implemented version of Fortran
Names could have up to six characters Post-test counting loop (DO) Formatted I/O User-defined subprograms Three-way selection statement (arithmetic IF) No data typing statements Copyright © 2009 Addison-Wesley. All rights reserved.
46
Fortran I Overview (continued)
First implemented version of FORTRAN No separate compilation Compiler released in April 1957, after 18 worker-years of effort Programs larger than 400 lines rarely compiled correctly, mainly due to poor reliability of 704 Code was very fast Quickly became widely used Copyright © 2009 Addison-Wesley. All rights reserved.
47
Fortran II Distributed in 1958 Independent compilation Fixed the bugs
Copyright © 2009 Addison-Wesley. All rights reserved.
48
Fortran IV Evolved during 1960-62 Explicit type declarations
Logical selection statement Subprogram names could be parameters ANSI standard in 1966 Copyright © 2009 Addison-Wesley. All rights reserved.
49
Fortran 77 Became the new standard in 1978 Character string handling
Logical loop control statement IF-THEN-ELSE statement Copyright © 2009 Addison-Wesley. All rights reserved.
50
Fortran 90 Most significant changes from Fortran 77 Modules
Dynamic arrays Pointers Recursion CASE statement Parameter type checking Copyright © 2009 Addison-Wesley. All rights reserved.
51
Latest versions of Fortran
Fortran 95 – relatively minor additions, plus some deletions Fortran 2003 – adds support for object-oriented programming, procedure pointers, and interoperability with C Copyright © 2009 Addison-Wesley. All rights reserved.
52
Fortran Evaluation Highly optimizing compilers (all versions before 90) Types and storage of all variables are fixed before run time Dramatically changed forever the way computers are used Characterized as the lingua franca of the computing world Copyright © 2009 Addison-Wesley. All rights reserved.
53
Functional Programming: LISP
LISt Processing language Designed at MIT by McCarthy AI research needed a language to Process data in lists (rather than arrays) Symbolic computation (rather than numeric) Only two data types: atoms and lists Syntax is based on lambda calculus Copyright © 2009 Addison-Wesley. All rights reserved.
54
Representation of Two LISP Lists
Representing the lists (A B C D) and (A (B C) D (E (F G))) Copyright © 2009 Addison-Wesley. All rights reserved.
55
LISP Evaluation Pioneered functional programming
No need for variables or assignment Control via recursion and conditional expressions Still the dominant language for AI COMMON LISP and Scheme are contemporary dialects of LISP ML, Miranda, and Haskell are related languages Copyright © 2009 Addison-Wesley. All rights reserved.
56
Scheme Developed at MIT in mid 1970s Small
Extensive use of static scoping Functions as first-class entities Simple syntax (and small size) make it ideal for educational applications Copyright © 2009 Addison-Wesley. All rights reserved.
57
COMMON LISP An effort to combine features of several dialects of LISP into a single language Large, complex Copyright © 2009 Addison-Wesley. All rights reserved.
58
The First Step Toward Sophistication: ALGOL 60
Environment of development FORTRAN had (barely) arrived for IBM 70x Many other languages were being developed, all for specific machines No portable language; all were machine dependent No universal language for communicating algorithms ALGOL 60 was the result of efforts to design a universal language Copyright © 2009 Addison-Wesley. All rights reserved.
59
Early Design Process ACM and GAMM met for four days for design (May 27 to June 1, 1958) Goals of the language Close to mathematical notation Good for describing algorithms Must be translatable to machine code Copyright © 2009 Addison-Wesley. All rights reserved.
60
ALGOL 58 Concept of type was formalized Names could be any length
Arrays could have any number of subscripts Parameters were separated by mode (in & out) Subscripts were placed in brackets Compound statements (begin ... end) Semicolon as a statement separator Assignment operator was := The if statement had an else-if clause No I/O - “would make it machine dependent” Copyright © 2009 Addison-Wesley. All rights reserved.
61
ALGOL 58 Implementation Not meant to be implemented, but variations of it were (MAD, JOVIAL) Although IBM was initially enthusiastic, all support was dropped by mid 1959 Copyright © 2009 Addison-Wesley. All rights reserved.
62
ALGOL 60 Overview Modified ALGOL 58 at 6-day meeting in Paris
New features Block structure (local scope) Two parameter passing methods Subprogram recursion Stack-dynamic arrays Still no I/O and no string handling Copyright © 2009 Addison-Wesley. All rights reserved.
63
ALGOL 60 Evaluation Successes
It was the standard way to publish algorithms for over 20 years All subsequent imperative languages are based on it First machine-independent language First language whose syntax was formally defined (BNF) Copyright © 2009 Addison-Wesley. All rights reserved.
64
ALGOL 60 Evaluation (continued)
Failure Never widely used, especially in U.S. Reasons Lack of I/O and the character set made programs non-portable Too flexible--hard to implement Entrenchment of Fortran Formal syntax description Lack of support from IBM Copyright © 2009 Addison-Wesley. All rights reserved.
65
Computerizing Business Records: COBOL
Environment of development UNIVAC was beginning to use FLOW-MATIC USAF was beginning to use AIMACO IBM was developing COMTRAN Copyright © 2009 Addison-Wesley. All rights reserved.
66
COBOL Historical Background
Based on FLOW-MATIC FLOW-MATIC features Names up to 12 characters, with embedded hyphens English names for arithmetic operators (no arithmetic expressions) Data and code were completely separate The first word in every statement was a verb Copyright © 2009 Addison-Wesley. All rights reserved.
67
COBOL Design Process First Design Meeting (Pentagon) - May 1959
Design goals Must look like simple English Must be easy to use, even if that means it will be less powerful Must broaden the base of computer users Must not be biased by current compiler problems Design committee members were all from computer manufacturers and DoD branches Design Problems: arithmetic expressions? subscripts? Fights among manufacturers Copyright © 2009 Addison-Wesley. All rights reserved.
68
COBOL Evaluation Contributions
First macro facility in a high-level language Hierarchical data structures (records) Nested selection statements Long names (up to 30 characters), with hyphens Separate data division Copyright © 2009 Addison-Wesley. All rights reserved.
69
COBOL: DoD Influence First language required by DoD
would have failed without DoD Still the most widely used business applications language Copyright © 2009 Addison-Wesley. All rights reserved.
70
The Beginning of Timesharing: BASIC
Designed by Kemeny & Kurtz at Dartmouth Design Goals: Easy to learn and use for non-science students Must be “pleasant and friendly” Fast turnaround for homework Free and private access User time is more important than computer time Current popular dialect: Visual BASIC First widely used language with time sharing Copyright © 2009 Addison-Wesley. All rights reserved.
71
2.8 Everything for Everybody: PL/I
Designed by IBM and SHARE Computing situation in 1964 (IBM's point of view) Scientific computing IBM 1620 and 7090 computers FORTRAN SHARE user group Business computing IBM 1401, 7080 computers COBOL GUIDE user group Copyright © 2009 Addison-Wesley. All rights reserved.
72
PL/I: Background By 1963 The obvious solution
Scientific users began to need more elaborate I/O, like COBOL had; business users began to need floating point and arrays for MIS It looked like many shops would begin to need two kinds of computers, languages, and support staff--too costly The obvious solution Build a new computer to do both kinds of applications Design a new language to do both kinds of applications Copyright © 2009 Addison-Wesley. All rights reserved.
73
PL/I: Design Process Designed in five months by the 3 X 3 Committee
Three members from IBM, three members from SHARE Initial concept An extension of Fortran IV Initially called NPL (New Programming Language) Name changed to PL/I in 1965 Copyright © 2009 Addison-Wesley. All rights reserved.
74
PL/I: Evaluation PL/I contributions Concerns
First unit-level concurrency First exception handling Switch-selectable recursion First pointer data type First array cross sections Concerns Many new features were poorly designed Too large and too complex Copyright © 2009 Addison-Wesley. All rights reserved.
75
Two Early Dynamic Languages: APL and SNOBOL
Characterized by dynamic typing and dynamic storage allocation Variables are untyped A variable acquires a type when it is assigned a value Storage is allocated to a variable when it is assigned a value Copyright © 2009 Addison-Wesley. All rights reserved.
76
APL: A Programming Language
Designed as a hardware description language at IBM by Ken Iverson around 1960 Highly expressive (many operators, for both scalars and arrays of various dimensions) Programs are very difficult to read Still in use; minimal changes Copyright © 2009 Addison-Wesley. All rights reserved.
77
SNOBOL Designed as a string manipulation language at Bell Labs by Farber, Griswold, and Polensky in 1964 Powerful operators for string pattern matching Slower than alternative languages (and thus no longer used for writing editors) Still used for certain text processing tasks Copyright © 2009 Addison-Wesley. All rights reserved.
78
The Beginning of Data Abstraction: SIMULA 67
Designed primarily for system simulation in Norway by Nygaard and Dahl Based on ALGOL 60 and SIMULA I Primary Contributions Coroutines - a kind of subprogram Classes, objects, and inheritance Copyright © 2009 Addison-Wesley. All rights reserved.
79
Orthogonal Design: ALGOL 68
From the continued development of ALGOL 60 but not a superset of that language Source of several new ideas (even though the language itself never achieved widespread use) Design is based on the concept of orthogonality A few basic concepts, plus a few combining mechanisms Copyright © 2009 Addison-Wesley. All rights reserved.
80
ALGOL 68 Evaluation Contributions Comments
User-defined data structures Reference types Dynamic arrays (called flex arrays) Comments Less usage than ALGOL 60 Had strong influence on subsequent languages, especially Pascal, C, and Ada Copyright © 2009 Addison-Wesley. All rights reserved.
81
Pascal Developed by Wirth (a former member of the ALGOL 68 committee) Designed for teaching structured programming Small, simple, nothing really new Largest impact was on teaching programming From mid-1970s until the late 1990s, it was the most widely used language for teaching programming Copyright © 2009 Addison-Wesley. All rights reserved.
82
C Designed for systems programming (at Bell Labs by Dennis Ritchie) Evolved primarily from BCPL and B, but also ALGOL 68 Powerful set of operators, but poor type checking Initially spread through UNIX Many areas of application Copyright © 2009 Addison-Wesley. All rights reserved.
83
Programming Based on Logic: Prolog
Developed by Colmerauer and Roussel (University of Aix-Marseille), with help from Kowalski (University of Edinburgh) Based on formal logic Non-procedural Can be summarized as being an intelligent database system that uses an inferencing process to infer the truth of given queries Highly inefficient, small application areas Copyright © 2009 Addison-Wesley. All rights reserved.
84
History’s Largest Design Effort: Ada
Huge design effort, involving hundreds of people, much money, and about eight years Strawman requirements (April 1975) Woodenman requirements (August 1975) Tinman requirements (1976) Ironman requirements (1977) Steelman requirements (1978) Named Ada after Augusta Ada Byron, the first programmer Copyright © 2009 Addison-Wesley. All rights reserved.
85
Ada Evaluation Contributions Comments
Packages - support for data abstraction Exception handling - elaborate Generic program units Concurrency - through the tasking model Comments Competitive design Included all that was then known about software engineering and language design First compilers were very difficult; the first really usable compiler came nearly five years after the language design was completed Copyright © 2009 Addison-Wesley. All rights reserved.
86
Ada 95 Ada 95 (began in 1988) Support for OOP through type derivation Better control mechanisms for shared data New concurrency features More flexible libraries Popularity suffered because the DoD no longer requires its use but also because of popularity of C++ Copyright © 2009 Addison-Wesley. All rights reserved.
87
Object-Oriented Programming: Smalltalk
Developed at Xerox PARC, initially by Alan Kay, later by Adele Goldberg First full implementation of an object-oriented language (data abstraction, inheritance, and dynamic binding) Pioneered the graphical user interface design Promoted OOP Copyright © 2009 Addison-Wesley. All rights reserved.
88
Combining Imperative and Object-Oriented Programming: C++
Developed at Bell Labs by Stroustrup in 1980 Evolved from C and SIMULA 67 Facilities for object-oriented programming, taken partially from SIMULA 67 Provides exception handling A large and complex language, in part because it supports both procedural and OO programming Rapidly grew in popularity, along with OOP ANSI standard approved in November 1997 Microsoft’s version (released with .NET in 2002): Managed C++ delegates, interfaces, no multiple inheritance Copyright © 2009 Addison-Wesley. All rights reserved.
89
Related OOP Languages Eiffel (designed by Bertrand Meyer - 1992)
Not directly derived from any other language Smaller and simpler than C++, but still has most of the power Lacked popularity of C++ because many C++ enthusiasts were already C programmers Delphi (Borland) Pascal plus features to support OOP More elegant and safer than C++ Copyright © 2009 Addison-Wesley. All rights reserved.
90
An Imperative-Based Object-Oriented Language: Java
Developed at Sun in the early 1990s C and C++ were not satisfactory for embedded electronic devices Based on C++ Significantly simplified (does not include struct, union, enum, pointer arithmetic, and half of the assignment coercions of C++) Supports only OOP Has references, but not pointers Includes support for applets and a form of concurrency Copyright © 2009 Addison-Wesley. All rights reserved.
91
Java Evaluation Eliminated many unsafe features of C++
Supports concurrency Libraries for applets, GUIs, database access Portable: Java Virtual Machine concept, JIT compilers Widely used for Web programming Use increased faster than any previous language Most recent version, 5.0, released in 2004 Copyright © 2009 Addison-Wesley. All rights reserved.
92
Scripting Languages for the Web
Perl Designed by Larry Wall - first released in 1987 Variables are statically typed but implicitly declared Three distinctive namespaces, denoted by the first character of a variable’s name Powerful, but somewhat dangerous Gained widespread use for CGI programming on the Web Also used as a replacement for the UNIX system administration language JavaScript Began at Netscape, but later became a joint venture of Netscape and Sun Microsystems A client-side HTML-embedded scripting language, often used to create dynamic HTML documents Purely interpreted Related to Java only through similar syntax PHP PHP: Hypertext Preprocessor, designed by Rasmus Lerdorf A server-side HTML-embedded scripting language, often used for form processing and database access through the Web Copyright © 2009 Addison-Wesley. All rights reserved.
93
Scripting Languages for the Web
Python An OO interpreted scripting language Type checked but dynamically typed Used for CGI programming and form processing Supports lists, tuples, and hashes Lua Supports lists, tuples, and hashes, all with its single data structure, the table Easily extendable Copyright © 2009 Addison-Wesley. All rights reserved.
94
Scripting Languages for the Web
Ruby Designed in Japan by Yukihiro Matsumoto (a.k.a. “Matz”) Began as a replacement for Perl and Python A pure object-oriented scripting language - All data are objects Most operators are implemented as methods, which can be redefined by user code Purely interpreted Copyright © 2009 Addison-Wesley. All rights reserved.
95
A C-Based Language for the New Millennium: C#
Part of the .NET development platform (2000) Based on C++ , Java, and Delphi Provides a language for component-based software development All .NET languages use Common Type System (CTS), which provides a common class library Copyright © 2009 Addison-Wesley. All rights reserved.
96
Markup/Programming Hybrid Languages
XSLT eXtensible Markup Language (XML): a metamarkup language eXtensible Stylesheet Language Transformation (XSLT) transforms XML documents for display Programming constructs (e.g., looping) JSP Java Server Pages: a collection of technologies to support dynamic Web documents servlet: a Java program that resides on a Web server and is executed when called by a requested HTML document; a servlet’s output is displayed by the browser JSTL includes programming constructs in the form of HTML elements Copyright © 2009 Addison-Wesley. All rights reserved.
97
Summary Development, development environment, and evaluation of a number of important programming languages Perspective into current issues in language design Copyright © 2009 Addison-Wesley. All rights reserved.
98
Describing Syntax and Semantics
Chapter 3 Describing Syntax and Semantics
99
Chapter 3 Topics Introduction The General Problem of Describing Syntax
Formal Methods of Describing Syntax Attribute Grammars Describing the Meanings of Programs: Dynamic Semantics Copyright © 2009 Addison-Wesley. All rights reserved.
100
Introduction Syntax: the form or structure of the expressions, statements, and program units Semantics: the meaning of the expressions, statements, and program units Syntax and semantics provide a language’s definition Users of a language definition Other language designers Implementers Programmers (the users of the language) Copyright © 2009 Addison-Wesley. All rights reserved.
101
The General Problem of Describing Syntax: Terminology
A sentence is a string of characters over some alphabet A language is a set of sentences A lexeme is the lowest level syntactic unit of a language (e.g., *, sum, begin) A token is a category of lexemes (e.g., identifier) Copyright © 2009 Addison-Wesley. All rights reserved.
102
Formal Definition of Languages
Recognizers A recognition device reads input strings over the alphabet of the language and decides whether the input strings belong to the language Example: syntax analysis part of a compiler - Detailed discussion of syntax analysis appears in Chapter 4 Generators A device that generates sentences of a language One can determine if the syntax of a particular sentence is syntactically correct by comparing it to the structure of the generator Copyright © 2009 Addison-Wesley. All rights reserved.
103
BNF and Context-Free Grammars
Developed by Noam Chomsky in the mid-1950s Language generators, meant to describe the syntax of natural languages Define a class of languages called context-free languages Backus-Naur Form (1959) Invented by John Backus to describe Algol 58 BNF is equivalent to context-free grammars Copyright © 2009 Addison-Wesley. All rights reserved.
104
BNF Fundamentals In BNF, abstractions are used to represent classes of syntactic structures--they act like syntactic variables (also called nonterminal symbols, or just nonterminals) Terminals are lexemes or tokens A rule has a left-hand side (LHS), which is a nonterminal, and a right-hand side (RHS), which is a string of terminals and/or nonterminals Nonterminals are often enclosed in angle brackets Examples of BNF rules: <ident_list> → identifier | identifier, <ident_list> <if_stmt> → if <logic_expr> then <stmt> Grammar: a finite non-empty set of rules A start symbol is a special element of the nonterminals of a grammar Copyright © 2009 Addison-Wesley. All rights reserved.
105
BNF Rules An abstraction (or nonterminal symbol) can have more than one RHS <stmt> → <single_stmt> | begin <stmt_list> end Copyright © 2009 Addison-Wesley. All rights reserved.
106
Describing Lists Syntactic lists are described using recursion
<ident_list> → ident | ident, <ident_list> A derivation is a repeated application of rules, starting with the start symbol and ending with a sentence (all terminal symbols) Copyright © 2009 Addison-Wesley. All rights reserved.
107
An Example Grammar
<program> → <stmts>
<stmts> → <stmt> | <stmt> ; <stmts>
<stmt> → <var> = <expr>
<var> → a | b | c | d
<expr> → <term> + <term> | <term> - <term>
<term> → <var> | const
Copyright © 2009 Addison-Wesley. All rights reserved.
108
An Example Derivation
<program> => <stmts> => <stmt>
          => <var> = <expr>
          => a = <expr>
          => a = <term> + <term>
          => a = <var> + <term>
          => a = b + <term>
          => a = b + const
Copyright © 2009 Addison-Wesley. All rights reserved.
109
Derivations Every string of symbols in a derivation is a sentential form A sentence is a sentential form that has only terminal symbols A leftmost derivation is one in which the leftmost nonterminal in each sentential form is the one that is expanded A derivation may be neither leftmost nor rightmost Copyright © 2009 Addison-Wesley. All rights reserved.
110
Parse Tree A hierarchical representation of a derivation
A parse tree for a = b + const:
<program>
 └─ <stmts>
     └─ <stmt>
         ├─ <var> ── a
         ├─ =
         └─ <expr>
             ├─ <term> ── <var> ── b
             ├─ +
             └─ <term> ── const
Copyright © 2009 Addison-Wesley. All rights reserved.
111
Ambiguity in Grammars A grammar is ambiguous if and only if it generates a sentential form that has two or more distinct parse trees Copyright © 2009 Addison-Wesley. All rights reserved.
112
An Ambiguous Expression Grammar
<expr> → <expr> <op> <expr> | const
<op> → / | -
[The slide shows two distinct parse trees for the sentence const - const / const, one grouping around - and one grouping around /, which demonstrates the ambiguity.]
Copyright © 2009 Addison-Wesley. All rights reserved.
113
An Unambiguous Expression Grammar
If we use the parse tree to indicate precedence levels of the operators, we cannot have ambiguity
<expr> → <expr> - <term> | <term>
<term> → <term> / const | const
[The slide shows the unique parse tree for const - const / const, with the division nested below the subtraction.]
Copyright © 2009 Addison-Wesley. All rights reserved.
114
Associativity of Operators
Operator associativity can also be indicated by a grammar
<expr> -> <expr> + <expr> | const (ambiguous)
<expr> -> <expr> + const | const (unambiguous)
[The slide shows the parse tree for the unambiguous, left-recursive rule: it grows down the left side, describing left associativity.]
Copyright © 2009 Addison-Wesley. All rights reserved.
115
Extended BNF Optional parts are placed in brackets [ ]
<proc_call> -> ident [(<expr_list>)] Alternative parts of RHSs are placed inside parentheses and separated via vertical bars <term> → <term> (+|-) const Repetitions (0 or more) are placed inside braces { } <ident> → letter {letter|digit} Copyright © 2009 Addison-Wesley. All rights reserved.
116
BNF and EBNF
BNF:
<expr> → <expr> + <term> | <expr> - <term> | <term>
<term> → <term> * <factor> | <term> / <factor> | <factor>
EBNF:
<expr> → <term> {(+ | -) <term>}
<term> → <factor> {(* | /) <factor>}
Copyright © 2009 Addison-Wesley. All rights reserved.
117
Recent Variations in EBNF
Alternative RHSs are put on separate lines Use of a colon instead of => Use of opt for optional parts Use of oneof for choices Copyright © 2009 Addison-Wesley. All rights reserved.
118
Static Semantics Nothing to do with meaning
Context-free grammars (CFGs) cannot describe all of the syntax of programming languages Categories of constructs that are trouble: - Context-free, but cumbersome (e.g., types of operands in expressions) - Non-context-free (e.g., variables must be declared before they are used) Copyright © 2009 Addison-Wesley. All rights reserved.
119
Attribute Grammars Attribute grammars (AGs) have additions to CFGs to carry some semantic info on parse tree nodes Primary value of AGs: Static semantics specification Compiler design (static semantics checking) Copyright © 2009 Addison-Wesley. All rights reserved.
120
Attribute Grammars : Definition
Def: An attribute grammar is a context-free grammar G = (S, N, T, P) with the following additions: For each grammar symbol x there is a set A(x) of attribute values Each rule has a set of functions that define certain attributes of the nonterminals in the rule Each rule has a (possibly empty) set of predicates to check for attribute consistency Copyright © 2009 Addison-Wesley. All rights reserved.
121
Attribute Grammars: Definition
Let X0 → X1 ... Xn be a rule Functions of the form S(X0) = f(A(X1), ... , A(Xn)) define synthesized attributes Functions of the form I(Xj) = f(A(X0), ... , A(Xn)), for 1 <= j <= n, define inherited attributes Initially, there are intrinsic attributes on the leaves Copyright © 2009 Addison-Wesley. All rights reserved.
122
Attribute Grammars: An Example
Syntax:
<assign> -> <var> = <expr>
<expr> -> <var> + <var> | <var>
<var> -> A | B | C
actual_type: synthesized for <var> and <expr>
expected_type: inherited for <expr>
Copyright © 2009 Addison-Wesley. All rights reserved.
123
Attribute Grammar (continued)
Syntax rule: <expr> -> <var>[1] + <var>[2]
Semantic rule: <expr>.actual_type ← <var>[1].actual_type
Predicate: <var>[1].actual_type == <var>[2].actual_type
           <expr>.expected_type == <expr>.actual_type
Syntax rule: <var> -> id
Semantic rule: <var>.actual_type ← lookup(<var>.string)
Copyright © 2009 Addison-Wesley. All rights reserved.
124
Attribute Grammars (continued)
How are attribute values computed? If all attributes were inherited, the tree could be decorated in top-down order. If all attributes were synthesized, the tree could be decorated in bottom-up order. In many cases, both kinds of attributes are used, and it is some combination of top-down and bottom-up that must be used. Copyright © 2009 Addison-Wesley. All rights reserved.
125
Attribute Grammars (continued)
<expr>.expected_type ← inherited from parent
<var>[1].actual_type ← lookup(A)
<var>[2].actual_type ← lookup(B)
<var>[1].actual_type =? <var>[2].actual_type
<expr>.actual_type ← <var>[1].actual_type
<expr>.actual_type =? <expr>.expected_type
Copyright © 2009 Addison-Wesley. All rights reserved.
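To make the decoration order concrete, the following small C sketch (not from the slides) walks the attributes for A = A + B by hand; the symbol table contents, type codes, and function names are assumptions made for illustration.

#include <stdio.h>
#include <string.h>

typedef enum { INT_TYPE, REAL_TYPE } Type;

/* hypothetical symbol table: C is declared real, everything else int */
Type lookup(const char *name) {
    return (strcmp(name, "C") == 0) ? REAL_TYPE : INT_TYPE;
}

int main(void) {
    /* <var>[1] is A and <var>[2] is B: their actual_type attributes are
       synthesized from the declarations via lookup() */
    Type var1_actual = lookup("A");
    Type var2_actual = lookup("B");

    /* predicate on <expr> -> <var>[1] + <var>[2] */
    if (var1_actual != var2_actual)
        puts("type error in the expression A + B");

    /* <expr>.actual_type <- <var>[1].actual_type (synthesized) */
    Type expr_actual = var1_actual;

    /* <expr>.expected_type is inherited from the target of the assignment */
    Type expr_expected = lookup("A");

    if (expr_actual != expr_expected)
        puts("type error in the assignment A = A + B");
    else
        puts("A = A + B is statically correct");
    return 0;
}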
126
Semantics There is no single widely acceptable notation or formalism for describing semantics Several needs for a methodology and notation for semantics: Programmers need to know what statements mean Compiler writers must know exactly what language constructs do Correctness proofs would be possible Compiler generators would be possible Designers could detect ambiguities and inconsistencies Copyright © 2009 Addison-Wesley. All rights reserved.
127
Operational Semantics
Describe the meaning of a program by executing its statements on a machine, either simulated or actual. The change in the state of the machine (memory, registers, etc.) defines the meaning of the statement To use operational semantics for a high-level language, a virtual machine is needed Copyright © 2009 Addison-Wesley. All rights reserved.
128
Operational Semantics
A hardware pure interpreter would be too expensive A software pure interpreter also has problems The detailed characteristics of the particular computer would make actions difficult to understand Such a semantic definition would be machine- dependent Copyright © 2009 Addison-Wesley. All rights reserved.
129
Operational Semantics (continued)
A better alternative: A complete computer simulation The process: Build a translator (translates source code to the machine code of an idealized computer) Build a simulator for the idealized computer Evaluation of operational semantics: Good if used informally (language manuals, etc.) Extremely complex if used formally (e.g., VDL, which was used for describing the semantics of PL/I) Copyright © 2009 Addison-Wesley. All rights reserved.
130
Operational Semantics (continued)
Uses of operational semantics: - Language manuals and textbooks - Teaching programming languages Two different levels of uses of operational semantics: - Natural operational semantics - Structural operational semantics Evaluation - Good if used informally (language manuals, etc.) - Extremely complex if used formally (e.g.,VDL) Copyright © 2009 Addison-Wesley. All rights reserved.
131
Denotational Semantics
Based on recursive function theory The most abstract semantics description method Originally developed by Scott and Strachey (1970) Copyright © 2009 Addison-Wesley. All rights reserved.
132
Denotational Semantics - continued
The process of building a denotational specification for a language: - Define a mathematical object for each language entity - Define a function that maps instances of the language entities onto instances of the corresponding mathematical objects The meaning of language constructs is defined by only the values of the program's variables Copyright © 2009 Addison-Wesley. All rights reserved.
133
Denotational Semantics: program state
The state of a program is the values of all its current variables s = {<i1, v1>, <i2, v2>, …, <in, vn>} Let VARMAP be a function that, when given a variable name and a state, returns the current value of the variable VARMAP(ij, s) = vj Copyright © 2009 Addison-Wesley. All rights reserved.
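As a concrete reading of this definition, the sketch below represents a state as a small array of <identifier, value> pairs and implements VARMAP as a lookup; the struct layout and the UNDEF marker are assumptions made only for illustration.

#include <string.h>

#define UNDEF -99999   /* stand-in for "undef" */

typedef struct { const char *id; int val; } Pair;   /* one <i, v> pair        */
typedef struct { Pair pairs[10]; int n; } State;    /* s = {<i1,v1>, ..., <in,vn>} */

/* VARMAP(ij, s) = vj : return the current value bound to a variable name */
int varmap(const char *id, const State *s) {
    for (int j = 0; j < s->n; j++)
        if (strcmp(s->pairs[j].id, id) == 0)
            return s->pairs[j].val;
    return UNDEF;
}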
134
Decimal Numbers
<dec_num> → '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
          | <dec_num> ('0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9')
Mdec('0') = 0, Mdec('1') = 1, …, Mdec('9') = 9
Mdec(<dec_num> '0') = 10 * Mdec(<dec_num>)
Mdec(<dec_num> '1') = 10 * Mdec(<dec_num>) + 1
…
Mdec(<dec_num> '9') = 10 * Mdec(<dec_num>) + 9
Copyright © 2009 Addison-Wesley. All rights reserved.
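Read operationally, these equations define the value of a numeral by recursion on its length. A brief C transcription of that idea (an illustration only, not the book's notation; the function name is hypothetical) is:

/* m_dec("307", 3) yields 307: the meaning of a longer numeral is defined
   in terms of the meaning of its prefix. */
int m_dec(const char *num, int len) {
    if (len == 1)
        return num[0] - '0';        /* Mdec('0') = 0, ..., Mdec('9') = 9 */
    /* Mdec(<dec_num> 'd') = 10 * Mdec(<dec_num>) + d */
    return 10 * m_dec(num, len - 1) + (num[len - 1] - '0');
}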
135
Expressions Map expressions onto Z ∪ {error}
We assume expressions are decimal numbers, variables, or binary expressions having one arithmetic operator and two operands, each of which can be an expression Copyright © 2009 Addison-Wesley. All rights reserved.
136
Expressions
Me(<expr>, s) = case <expr> of
  <dec_num> => Mdec(<dec_num>, s)
  <var> => if VARMAP(<var>, s) == undef
             then error
             else VARMAP(<var>, s)
  <binary_expr> =>
    if (Me(<binary_expr>.<left_expr>, s) == undef OR
        Me(<binary_expr>.<right_expr>, s) == undef)
      then error
    else if (<binary_expr>.<operator> == '+')
      then Me(<binary_expr>.<left_expr>, s) + Me(<binary_expr>.<right_expr>, s)
      else Me(<binary_expr>.<left_expr>, s) * ...
Copyright © 2009 Addison-Wesley. All rights reserved.
137
Assignment Statements
Maps state sets to state sets U {error} Ma(x := E, s) = if Me(E, s) == error then error else s’ = {<i1,v1’>,<i2,v2’>,...,<in,vn’>}, where for j = 1, 2, ..., n, if ij == x then vj’ = Me(E, s) else vj’ = VARMAP(ij, s) Copyright © 2009 Addison-Wesley. All rights reserved.
138
Logical Pretest Loops Maps state sets to state sets U {error}
Ml(while B do L, s) =
  if Mb(B, s) == undef
    then error
  else if Mb(B, s) == false
    then s
  else if Msl(L, s) == error
    then error
  else Ml(while B do L, Msl(L, s))
Copyright © 2009 Addison-Wesley. All rights reserved.
139
Loop Meaning The meaning of the loop is the value of the program variables after the statements in the loop have been executed the prescribed number of times, assuming there have been no errors In essence, the loop has been converted from iteration to recursion, where the recursive control is mathematically defined by other recursive state mapping functions - Recursion, when compared to iteration, is easier to describe with mathematical rigor Copyright © 2009 Addison-Wesley. All rights reserved.
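A small sketch of that iteration-to-recursion conversion, for the concrete loop while x > 0 do sum = sum + x; x = x - 1 end, is given below in C; the LoopState struct and helper names are assumptions for illustration, and the error case is omitted.

typedef struct { int x; int sum; } LoopState;

static int m_b(LoopState s) { return s.x > 0; }                           /* Mb(B, s)  */
static LoopState m_sl(LoopState s) { s.sum += s.x; s.x -= 1; return s; }  /* Msl(L, s) */

/* Ml(while B do L, s): if B is false the meaning is s itself; otherwise it is
   the meaning of the same loop applied to the state produced by one
   execution of the loop body. */
static LoopState m_loop(LoopState s) {
    return m_b(s) ? m_loop(m_sl(s)) : s;
}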
140
Evaluation of Denotational Semantics
Can be used to prove the correctness of programs Provides a rigorous way to think about programs Can be an aid to language design Has been used in compiler generation systems Because of its complexity, it is of little use to language users Copyright © 2009 Addison-Wesley. All rights reserved.
141
Axiomatic Semantics Based on formal logic (predicate calculus)
Original purpose: formal program verification Axioms or inference rules are defined for each statement type in the language (to allow transformations of logic expressions into more formal logic expressions) The logic expressions are called assertions Copyright © 2009 Addison-Wesley. All rights reserved.
142
Axiomatic Semantics (continued)
An assertion before a statement (a precondition) states the relationships and constraints among variables that are true at that point in execution An assertion following a statement is a postcondition A weakest precondition is the least restrictive precondition that will guarantee the postcondition Copyright © 2009 Addison-Wesley. All rights reserved.
143
Axiomatic Semantics Form
Pre-, post form: {P} statement {Q} An example a = b + 1 {a > 1} One possible precondition: {b > 10} Weakest precondition: {b > 0} Copyright © 2009 Addison-Wesley. All rights reserved.
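As a worked step (using the assignment axiom introduced on a later slide), the weakest precondition is obtained by substituting b + 1 for a in the postcondition:
{(b + 1) > 1}  =  {b > 0}
so any stronger assertion, such as {b > 10}, is also a valid (but not the weakest) precondition.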
144
Program Proof Process The postcondition for the entire program is the desired result Work back through the program to the first statement. If the precondition on the first statement is the same as the program specification, the program is correct. Copyright © 2009 Addison-Wesley. All rights reserved.
145
Axiomatic Semantics: Axioms
An axiom for assignment statements (x = E): {Qx->E} x = E {Q}
The Rule of Consequence:
  {P} S {Q},  P' => P,  Q => Q'
  -----------------------------
           {P'} S {Q'}
Copyright © 2009 Addison-Wesley. All rights reserved.
146
Axiomatic Semantics: Axioms
An inference rule for sequences of the form S1; S2:
  {P1} S1 {P2},  {P2} S2 {P3}
  ---------------------------
       {P1} S1; S2 {P3}
Copyright © 2009 Addison-Wesley. All rights reserved.
147
Axiomatic Semantics: Axioms
An inference rule for logical pretest loops {P} while B do S end {Q}:
          {I and B} S {I}
  ------------------------------------
  {I} while B do S end {I and (not B)}
where I is the loop invariant (the inductive hypothesis)
Copyright © 2009 Addison-Wesley. All rights reserved.
148
Axiomatic Semantics: Axioms
Characteristics of the loop invariant: I must meet the following conditions:
P => I -- the loop invariant must be true initially
{I} B {I} -- evaluation of the Boolean must not change the validity of I
{I and B} S {I} -- I is not changed by executing the body of the loop
(I and (not B)) => Q -- if I is true and B is false, Q is implied
That the loop terminates can be difficult to prove
Copyright © 2009 Addison-Wesley. All rights reserved.
149
Loop Invariant The loop invariant I is a weakened version of the loop postcondition, and it is also a precondition. I must be weak enough to be satisfied prior to the beginning of the loop, but when combined with the loop exit condition, it must be strong enough to force the truth of the postcondition Copyright © 2009 Addison-Wesley. All rights reserved.
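A small worked example (not on the slides, but standard) illustrates these requirements for the loop while y <> x do y = y + 1 end with postcondition {y = x} and invariant I = {y <= x}, assuming integer variables:
P => I               : choose the precondition P to be {y <= x}
{I} B {I}            : evaluating y <> x changes no variables
{I and B} S {I}      : y <= x and y <> x give y < x, so after y = y + 1 we still have y <= x
(I and (not B)) => Q : y <= x and y = x give y = x, which is Q
Termination          : y increases by 1 on each iteration and is bounded above by x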
150
Evaluation of Axiomatic Semantics
Developing axioms or inference rules for all of the statements in a language is difficult It is a good tool for correctness proofs, and an excellent framework for reasoning about programs, but it is not as useful for language users and compiler writers Its usefulness in describing the meaning of a programming language is limited for language users or compiler writers Copyright © 2009 Addison-Wesley. All rights reserved.
151
Denotational Semantics vs. Operational Semantics
In operational semantics, the state changes are defined by coded algorithms In denotational semantics, the state changes are defined by rigorous mathematical functions Copyright © 2009 Addison-Wesley. All rights reserved.
152
Summary BNF and context-free grammars are equivalent meta-languages
Well-suited for describing the syntax of programming languages An attribute grammar is a descriptive formalism that can describe both the syntax and the semantics of a language Three primary methods of semantics description Operational, axiomatic, denotational Copyright © 2009 Addison-Wesley. All rights reserved.
153
Lexical and Syntax Analysis
Chapter 4 Lexical and Syntax Analysis
154
Chapter 4 Topics Introduction Lexical Analysis The Parsing Problem
Recursive-Descent Parsing Bottom-Up Parsing Copyright © 2009 Addison-Wesley. All rights reserved.
155
Introduction Language implementation systems must analyze source code, regardless of the specific implementation approach Nearly all syntax analysis is based on a formal description of the syntax of the source language (BNF) Copyright © 2009 Addison-Wesley. All rights reserved.
156
Syntax Analysis The syntax analysis portion of a language processor nearly always consists of two parts: A low-level part called a lexical analyzer (mathematically, a finite automaton based on a regular grammar) A high-level part called a syntax analyzer, or parser (mathematically, a push-down automaton based on a context-free grammar, or BNF) Copyright © 2009 Addison-Wesley. All rights reserved.
157
Advantages of Using BNF to Describe Syntax
Provides a clear and concise syntax description The parser can be based directly on the BNF Parsers based on BNF are easy to maintain Copyright © 2009 Addison-Wesley. All rights reserved.
158
Reasons to Separate Lexical and Syntax Analysis
Simplicity - less complex approaches can be used for lexical analysis; separating them simplifies the parser Efficiency - separation allows optimization of the lexical analyzer Portability - parts of the lexical analyzer may not be portable, but the parser always is portable Copyright © 2009 Addison-Wesley. All rights reserved.
159
Lexical Analysis A lexical analyzer is a pattern matcher for character strings A lexical analyzer is a “front-end” for the parser Identifies substrings of the source program that belong together - lexemes Lexemes match a character pattern, which is associated with a lexical category called a token sum is a lexeme; its token may be IDENT Copyright © 2009 Addison-Wesley. All rights reserved.
160
Lexical Analysis (continued)
The lexical analyzer is usually a function that is called by the parser when it needs the next token Three approaches to building a lexical analyzer: Write a formal description of the tokens and use a software tool that constructs table-driven lexical analyzers given such a description Design a state diagram that describes the tokens and write a program that implements the state diagram Design a state diagram that describes the tokens and hand-construct a table-driven implementation of the state diagram Copyright © 2009 Addison-Wesley. All rights reserved.
161
State Diagram Design A naïve state diagram would have a transition from every state on every character in the source language - such a diagram would be very large! Copyright © 2009 Addison-Wesley. All rights reserved.
162
Lexical Analysis (cont.)
In many cases, transitions can be combined to simplify the state diagram When recognizing an identifier, all uppercase and lowercase letters are equivalent Use a character class that includes all letters When recognizing an integer literal, all digits are equivalent - use a digit class Copyright © 2009 Addison-Wesley. All rights reserved.
163
Lexical Analysis (cont.)
Reserved words and identifiers can be recognized together (rather than having a part of the diagram for each reserved word) Use a table lookup to determine whether a possible identifier is in fact a reserved word Copyright © 2009 Addison-Wesley. All rights reserved.
164
Lexical Analysis (cont.)
Convenient utility subprograms:
getChar - gets the next character of input, puts it in nextChar, determines its class and puts the class in charClass
addChar - puts the character from nextChar into the place the lexeme is being accumulated, lexeme
lookup - determines whether the string in lexeme is a reserved word (returns a code)
Copyright © 2009 Addison-Wesley. All rights reserved.
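A compressed C sketch of how these subprograms cooperate is shown below. It is loosely patterned on the front.c example the slides reference, but the token codes, global names, and control flow here are simplified assumptions, not the book's code.

#include <ctype.h>
#include <stdio.h>

enum { LETTER, DIGIT, OTHER, END };               /* character classes   */
enum { INT_LIT = 10, IDENT = 11, PUNCT = 20 };    /* assumed token codes */

static FILE *in;
static char nextChar, lexeme[64];
static int  charClass, lexLen;

/* getChar - read the next character, classify it */
static void getChar(void) {
    int c = fgetc(in);
    if (c == EOF) { charClass = END; return; }
    nextChar = (char)c;
    if (isalpha(c))       charClass = LETTER;
    else if (isdigit(c))  charClass = DIGIT;
    else                  charClass = OTHER;
}

/* addChar - append nextChar to the lexeme being accumulated */
static void addChar(void) {
    lexeme[lexLen++] = nextChar;
    lexeme[lexLen]   = '\0';
}

/* The lexer proper: skip white space, then recognize one lexeme */
static int getNextToken(void) {
    lexLen = 0;
    while (charClass == OTHER && isspace((unsigned char)nextChar))
        getChar();
    if (charClass == END) return -1;              /* end of file */
    if (charClass == LETTER) {                    /* identifier; a lookup() could
                                                     map reserved words here */
        while (charClass == LETTER || charClass == DIGIT) { addChar(); getChar(); }
        return IDENT;
    }
    if (charClass == DIGIT) {                     /* integer literal */
        while (charClass == DIGIT) { addChar(); getChar(); }
        return INT_LIT;
    }
    addChar(); getChar();                         /* one-character token */
    return PUNCT;
}

int main(void) {
    in = stdin;
    getChar();                                    /* prime the first character */
    for (int t = getNextToken(); t != -1; t = getNextToken())
        printf("Next token is: %d  Next lexeme is %s\n", t, lexeme);
    return 0;
}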
165
State Diagram Copyright © 2009 Addison-Wesley. All rights reserved.
166
Lexical Analyzer Implementation: SHOW front.c (pp. 176-181)
- Following is the output of the lexical analyzer of front.c when used on (sum + 47) / total
Next token is: 25 Next lexeme is (
Next token is: 11 Next lexeme is sum
Next token is: 21 Next lexeme is +
Next token is: 10 Next lexeme is 47
Next token is: 26 Next lexeme is )
Next token is: 24 Next lexeme is /
Next token is: 11 Next lexeme is total
Next token is: -1 Next lexeme is EOF
Copyright © 2009 Addison-Wesley. All rights reserved.
167
The Parsing Problem Goals of the parser, given an input program:
Find all syntax errors; for each, produce an appropriate diagnostic message and recover quickly Produce the parse tree, or at least a trace of the parse tree, for the program Copyright © 2009 Addison-Wesley. All rights reserved.
168
The Parsing Problem (cont.)
Two categories of parsers Top down - produce the parse tree, beginning at the root Order is that of a leftmost derivation Traces or builds the parse tree in preorder Bottom up - produce the parse tree, beginning at the leaves Order is that of the reverse of a rightmost derivation Useful parsers look only one token ahead in the input Copyright © 2009 Addison-Wesley. All rights reserved.
169
The Parsing Problem (cont.)
Top-down Parsers Given a sentential form, xAα, the parser must choose the correct A-rule to get the next sentential form in the leftmost derivation, using only the first token produced by A The most common top-down parsing algorithms: Recursive descent - a coded implementation LL parsers - table driven implementation Copyright © 2009 Addison-Wesley. All rights reserved.
170
The Parsing Problem (cont.)
Bottom-up parsers Given a right sentential form, α, determine what substring of α is the right-hand side of the rule in the grammar that must be reduced to produce the previous sentential form in the right derivation The most common bottom-up parsing algorithms are in the LR family Copyright © 2009 Addison-Wesley. All rights reserved.
171
The Parsing Problem (cont.)
The Complexity of Parsing Parsers that work for any unambiguous grammar are complex and inefficient ( O(n^3), where n is the length of the input ) Compilers use parsers that only work for a subset of all unambiguous grammars, but do it in linear time ( O(n), where n is the length of the input ) Copyright © 2009 Addison-Wesley. All rights reserved.
172
Recursive-Descent Parsing
There is a subprogram for each nonterminal in the grammar, which can parse sentences that can be generated by that nonterminal EBNF is ideally suited for being the basis for a recursive-descent parser, because EBNF minimizes the number of nonterminals Copyright © 2009 Addison-Wesley. All rights reserved.
173
Recursive-Descent Parsing (cont.)
A grammar for simple expressions:
<expr> → <term> {(+ | -) <term>}
<term> → <factor> {(* | /) <factor>}
<factor> → id | int_constant | ( <expr> )
Copyright © 2009 Addison-Wesley. All rights reserved.
174
Recursive-Descent Parsing (cont.)
Assume we have a lexical analyzer named lex, which puts the next token code in nextToken The coding process when there is only one RHS: For each terminal symbol in the RHS, compare it with the next input token; if they match, continue, else there is an error For each nonterminal symbol in the RHS, call its associated parsing subprogram Copyright © 2009 Addison-Wesley. All rights reserved.
175
Recursive-Descent Parsing (cont.)
/* Function expr
   Parses strings in the language generated by the rule:
   <expr> → <term> {(+ | -) <term>} */
void expr() {
  /* Parse the first term */
  term();
  /* As long as the next token is + or -, call lex to get
     the next token and parse the next term */
  while (nextToken == ADD_OP || nextToken == SUB_OP) {
    lex();
    term();
  }
}
Copyright © 2009 Addison-Wesley. All rights reserved.
176
Recursive-Descent Parsing (cont.)
This particular routine does not detect errors Convention: Every parsing routine leaves the next token in nextToken Copyright © 2009 Addison-Wesley. All rights reserved.
177
Recursive-Descent Parsing (cont.)
A nonterminal that has more than one RHS requires an initial process to determine which RHS it is to parse The correct RHS is chosen on the basis of the next token of input (the lookahead) The next token is compared with the first token that can be generated by each RHS until a match is found If no match is found, it is a syntax error Copyright © 2009 Addison-Wesley. All rights reserved.
178
Recursive-Descent Parsing (cont.)
/* term
   Parses strings in the language generated by the rule:
   <term> -> <factor> {(* | /) <factor>} */
void term() {
  printf("Enter <term>\n");
  /* Parse the first factor */
  factor();
  /* As long as the next token is * or /, call lex to get the
     next token and parse the next factor */
  while (nextToken == MULT_OP || nextToken == DIV_OP) {
    lex();
    factor();
  }
  printf("Exit <term>\n");
}  /* End of function term */
Copyright © 2009 Addison-Wesley. All rights reserved.
179
Recursive-Descent Parsing (cont.)
/* Function factor
   Parses strings in the language generated by the rule:
   <factor> -> id | int_constant | ( <expr> ) */
void factor() {
  /* Determine which RHS */
  if (nextToken == ID_CODE || nextToken == INT_CODE)
    /* For the RHS id or int_constant, just call lex */
    lex();
  /* If the RHS is ( <expr> ) – call lex to pass over the left
     parenthesis, call expr, and check for the right parenthesis */
  else if (nextToken == LP_CODE) {
    lex();
    expr();
    if (nextToken == RP_CODE)
      lex();
    else
      error();
  }  /* End of else if (nextToken == ... */
  else
    error();  /* Neither RHS matches */
}
Copyright © 2009 Addison-Wesley. All rights reserved.
180
Recursive-Descent Parsing (cont.)
- Trace of the lexical and syntax analyzers on (sum + 47) / total
Next token is: 25  Next lexeme is (
Enter <expr>
Enter <term>
Enter <factor>
Next token is: 11  Next lexeme is sum
Enter <expr>
Enter <term>
Enter <factor>
Next token is: 21  Next lexeme is +
Exit <factor>
Exit <term>
Next token is: 10  Next lexeme is 47
Enter <term>
Enter <factor>
Next token is: 26  Next lexeme is )
Exit <factor>
Exit <term>
Exit <expr>
Next token is: 24  Next lexeme is /
Exit <factor>
Next token is: 11  Next lexeme is total
Enter <factor>
Next token is: -1  Next lexeme is EOF
Exit <factor>
Exit <term>
Exit <expr>
Copyright © 2009 Addison-Wesley. All rights reserved.
181
Recursive-Descent Parsing (cont.)
The LL Grammar Class
The Left Recursion Problem
If a grammar has left recursion, either direct or indirect, it cannot be the basis for a top-down parser
A grammar can be modified to remove left recursion. For each nonterminal, A:
1. Group the A-rules as A → Aα1 | … | Aαm | β1 | β2 | … | βn, where none of the β's begins with A
2. Replace the original A-rules with
   A → β1A' | β2A' | … | βnA'
   A' → α1A' | α2A' | … | αmA' | ε
Copyright © 2009 Addison-Wesley. All rights reserved.
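As a worked example of the transformation (using the standard expression grammar rather than a grammar from the text), the directly left-recursive rules

E → E + T | T
T → T * F | F
F → ( E ) | id

become (with α1 = + T and β1 = T for E, and α1 = * F and β1 = F for T):

E  → T E'
E' → + T E' | ε
T  → F T'
T' → * F T' | ε
F  → ( E ) | id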
182
Recursive-Descent Parsing (cont.)
The other characteristic of grammars that disallows top-down parsing is the lack of pairwise disjointness
The inability to determine the correct RHS on the basis of one token of lookahead
Def: FIRST(α) = {a | α =>* aβ }  (If α =>* ε, ε is in FIRST(α))
Copyright © 2009 Addison-Wesley. All rights reserved.
183
Recursive-Descent Parsing (cont.)
Pairwise Disjointness Test: For each nonterminal, A, in the grammar that has more than one RHS, for each pair of rules, A → αi and A → αj, it must be true that FIRST(αi) ⋂ FIRST(αj) = ∅
Examples:
A → a | bB | cAb   (pairwise disjoint: the RHSs begin with a, b, and c)
A → a | aB         (not pairwise disjoint: both RHSs begin with a)
Copyright © 2009 Addison-Wesley. All rights reserved.
184
Recursive-Descent Parsing (cont.)
Left factoring can resolve the problem
Replace
<variable> → identifier | identifier [<expression>]
with
<variable> → identifier <new>
<new> → ε | [<expression>]
or
<variable> → identifier [[<expression>]]
(the outer brackets are metasymbols of EBNF)
Copyright © 2009 Addison-Wesley. All rights reserved.
185
Bottom-up Parsing The parsing problem is finding the correct RHS in a right-sentential form to reduce to get the previous right-sentential form in the derivation Copyright © 2009 Addison-Wesley. All rights reserved.
186
Bottom-up Parsing (cont.)
Intuition about handles:
Def: β is the handle of the right sentential form γ = αβw if and only if S =>*rm αAw =>rm αβw
Def: β is a phrase of the right sentential form γ if and only if S =>* γ = α1Aα2 =>+ α1βα2
Def: β is a simple phrase of the right sentential form γ if and only if S =>* γ = α1Aα2 => α1βα2
Copyright © 2009 Addison-Wesley. All rights reserved.
187
Bottom-up Parsing (cont.)
Intuition about handles (continued): The handle of a right sentential form is its leftmost simple phrase Given a parse tree, it is now easy to find the handle Parsing can be thought of as handle pruning Copyright © 2009 Addison-Wesley. All rights reserved.
188
Bottom-up Parsing (cont.)
Shift-Reduce Algorithms Reduce is the action of replacing the handle on the top of the parse stack with its corresponding LHS Shift is the action of moving the next token to the top of the parse stack Copyright © 2009 Addison-Wesley. All rights reserved.
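As a worked illustration (using the standard expression grammar E → E + T | T, T → T * F | F, F → ( E ) | id, not an example from the text), a shift-reduce parse of id + id * id proceeds as follows; each Reduce replaces the handle on top of the stack with its LHS:

Stack            Input             Action
$                id + id * id $    shift
$ id             + id * id $       reduce by F → id
$ F              + id * id $       reduce by T → F
$ T              + id * id $       reduce by E → T
$ E              + id * id $       shift
$ E +            id * id $         shift
$ E + id         * id $            reduce by F → id
$ E + F          * id $            reduce by T → F
$ E + T          * id $            shift
$ E + T *        id $              shift
$ E + T * id     $                 reduce by F → id
$ E + T * F      $                 reduce by T → T * F
$ E + T          $                 reduce by E → E + T
$ E              $                 accept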
189
Bottom-up Parsing (cont.)
Advantages of LR parsers: They will work for nearly all grammars that describe programming languages. They work on a larger class of grammars than other bottom-up algorithms, but are as efficient as any other bottom-up parser. They can detect syntax errors as soon as possible. The LR class of grammars is a superset of the class parsable by LL parsers. Copyright © 2009 Addison-Wesley. All rights reserved.
190
Bottom-up Parsing (cont.)
LR parsers must be constructed with a tool Knuth’s insight: A bottom-up parser could use the entire history of the parse, up to the current point, to make parsing decisions There were only a finite and relatively small number of different parse situations that could have occurred, so the history could be stored in a parser state, on the parse stack Copyright © 2009 Addison-Wesley. All rights reserved.
191
Bottom-up Parsing (cont.)
An LR configuration stores the state of an LR parser (S0X1S1X2S2…XmSm, aiai+1…an$) Copyright © 2009 Addison-Wesley. All rights reserved.
192
Bottom-up Parsing (cont.)
LR parsers are table driven, where the table has two components, an ACTION table and a GOTO table The ACTION table specifies the action of the parser, given the parser state and the next token Rows are state names; columns are terminals The GOTO table specifies which state to put on top of the parse stack after a reduction action is done Rows are state names; columns are nonterminals Copyright © 2009 Addison-Wesley. All rights reserved.
193
Structure of An LR Parser
Copyright © 2009 Addison-Wesley. All rights reserved.
194
Bottom-up Parsing (cont.)
Initial configuration: (S0, a1…an$)
Parser actions:
If ACTION[Sm, ai] = Shift S, the next configuration is: (S0X1S1X2S2…XmSmaiS, ai+1…an$)
If ACTION[Sm, ai] = Reduce A → β and S = GOTO[Sm-r, A], where r = the length of β, the next configuration is (S0X1S1X2S2…Xm-rSm-rAS, aiai+1…an$)
Copyright © 2009 Addison-Wesley. All rights reserved.
195
Bottom-up Parsing (cont.)
Parser actions (continued): If ACTION[Sm, ai] = Accept, the parse is complete and no errors were found. If ACTION[Sm, ai] = Error, the parser calls an error-handling routine. Copyright © 2009 Addison-Wesley. All rights reserved.
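A minimal C sketch of the table-driven parsing loop these actions describe; it stacks only states (a common simplification of the configuration shown earlier), and the table layout, token codes, and entry encodings are illustrative assumptions rather than the output of any particular tool:

#include <stdio.h>
#include <stdlib.h>

#define NUM_TERMINALS    32    /* sizes are placeholders */
#define NUM_NONTERMINALS 16

enum kind { SHIFT, REDUCE, ACCEPT, ERROR };
struct action { enum kind kind; int target; };   /* target: state or rule number */

/* Assumed to be generated elsewhere (e.g., by a tool such as yacc): */
extern struct action ACTION[][NUM_TERMINALS];
extern int  GOTO_TABLE[][NUM_NONTERMINALS];
extern int  rhs_length[];     /* length of each rule's RHS */
extern int  lhs_symbol[];     /* LHS nonterminal of each rule */
extern int  nextToken;        /* current input token code */
extern void lex(void);        /* advances nextToken */

void lr_parse(void) {
    int stack[1000], top = 0;
    stack[top] = 0;                              /* initial state S0 */
    lex();
    for (;;) {
        struct action a = ACTION[stack[top]][nextToken];
        if (a.kind == SHIFT) {                   /* push the new state, advance input */
            stack[++top] = a.target;
            lex();
        } else if (a.kind == REDUCE) {           /* pop |RHS| states, push GOTO state */
            top -= rhs_length[a.target];
            stack[top + 1] = GOTO_TABLE[stack[top]][lhs_symbol[a.target]];
            top++;
        } else if (a.kind == ACCEPT) {
            return;                              /* parse complete, no errors found */
        } else {
            fprintf(stderr, "syntax error\n");   /* error entry: call error handler */
            exit(1);
        }
    }
}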
196
LR Parsing Table Copyright © 2009 Addison-Wesley. All rights reserved.
197
Bottom-up Parsing (cont.)
A parser table can be generated from a given grammar with a tool, e.g., yacc Copyright © 2009 Addison-Wesley. All rights reserved.
198
Summary Syntax analysis is a common part of language implementation
A lexical analyzer is a pattern matcher that isolates the small-scale parts of a program
A parser detects syntax errors and produces a parse tree
A recursive-descent parser is an LL parser that is implemented by writing a subprogram for each nonterminal in an EBNF description of the grammar
The parsing problem for bottom-up parsers is to find the substring (the handle) of the current sentential form that must be reduced
The LR family of shift-reduce parsers is the most common bottom-up parsing approach
Copyright © 2009 Addison-Wesley. All rights reserved.
199
Names, Bindings, and Scopes
Chapter 5 Names, Bindings, and Scopes
200
Chapter 5 Topics Introduction Names Variables The Concept of Binding
Scope Scope and Lifetime Referencing Environments Named Constants Copyright © 2009 Addison-Wesley. All rights reserved.
201
Introduction Imperative languages are abstractions of von Neumann architecture Memory Processor Variables characterized by attributes To design a type, must consider scope, lifetime, type checking, initialization, and type compatibility Copyright © 2009 Addison-Wesley. All rights reserved.
202
Names Design issues for names: Are names case sensitive?
Are special words reserved words or keywords? Copyright © 2009 Addison-Wesley. All rights reserved.
203
Names (continued) Length If too short, they cannot be connotative
Language examples: FORTRAN 95: maximum of 31 C99: no limit but only the first 63 are significant; also, external names are limited to a maximum of 31 C#, Ada, and Java: no limit, and all are significant C++: no limit, but implementers often impose one Copyright © 2009 Addison-Wesley. All rights reserved.
204
Names (continued) Special characters
PHP: all variable names must begin with dollar signs Perl: all variable names begin with special characters, which specify the variable's type Ruby: variable names that begin with @ are instance variables; those that begin with @@ are class variables Copyright © 2009 Addison-Wesley. All rights reserved.
205
Names (continued) Case sensitivity
Disadvantage: readability (names that look alike are different) Names in the C-based languages are case sensitive Names in others are not Worse in C++, Java, and C# because predefined names are mixed case (e.g. IndexOutOfBoundsException) Copyright © 2009 Addison-Wesley. All rights reserved.
206
Names (continued) Special words
An aid to readability; used to delimit or separate statement clauses A keyword is a word that is special only in certain contexts, e.g., in Fortran Real VarName (Real is a data type followed by a name, therefore Real is a keyword) Real = 3.4 (Real is a variable) A reserved word is a special word that cannot be used as a user-defined name Potential problem with reserved words: If there are too many, many collisions occur (e.g., COBOL has 300 reserved words!) Copyright © 2009 Addison-Wesley. All rights reserved.
207
Variables A variable is an abstraction of a memory cell
Variables can be characterized as a sextuple of attributes: Name Address Value Type Lifetime Scope Copyright © 2009 Addison-Wesley. All rights reserved.
208
Variables Attributes Name - not all variables have them
Address - the memory address with which it is associated A variable may have different addresses at different times during execution A variable may have different addresses at different places in a program If two variable names can be used to access the same memory location, they are called aliases Aliases are created via pointers, reference variables, C and C++ unions Aliases are harmful to readability (program readers must remember all of them) Copyright © 2009 Addison-Wesley. All rights reserved.
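A small C illustration of aliasing (not an example from the text): after the assignment below, total and *p are two names for the same memory cell.

#include <stdio.h>

int main(void) {
    int total = 100;
    int *p = &total;          /* p now aliases total */

    *p = 50;                  /* changes total through the alias */
    printf("%d\n", total);    /* prints 50 */
    return 0;
}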
209
Variables Attributes (continued)
Type - determines the range of values of variables and the set of operations that are defined for values of that type; in the case of floating point, type also determines the precision Value - the contents of the location with which the variable is associated - The l-value of a variable is its address - The r-value of a variable is its value Abstract memory cell - the physical cell or collection of cells associated with a variable Copyright © 2009 Addison-Wesley. All rights reserved.
210
The Concept of Binding A binding is an association, such as between an attribute and an entity, or between an operation and a symbol Binding time is the time at which a binding takes place. Copyright © 2009 Addison-Wesley. All rights reserved.
211
Possible Binding Times
Language design time -- bind operator symbols to operations Language implementation time -- bind floating point type to a representation Compile time -- bind a variable to a type in C or Java Load time -- bind a C or C++ static variable to a memory cell Runtime -- bind a nonstatic local variable to a memory cell Copyright © 2009 Addison-Wesley. All rights reserved.
212
Static and Dynamic Binding
A binding is static if it first occurs before run time and remains unchanged throughout program execution. A binding is dynamic if it first occurs during execution or can change during execution of the program Copyright © 2009 Addison-Wesley. All rights reserved.
213
Type Binding How is a type specified?
When does the binding take place? If static, the type may be specified by either an explicit or an implicit declaration Copyright © 2009 Addison-Wesley. All rights reserved.
214
Explicit/Implicit Declaration
An explicit declaration is a program statement used for declaring the types of variables An implicit declaration is a default mechanism for specifying types of variables (the first appearance of the variable in the program) FORTRAN, BASIC, and Perl provide implicit declarations (Fortran has both explicit and implicit) Advantage: writability Disadvantage: reliability (less trouble with Perl) Copyright © 2009 Addison-Wesley. All rights reserved.
215
Dynamic Type Binding Dynamic Type Binding (JavaScript and PHP)
Specified through an assignment statement e.g., JavaScript list = [2, 4.33, 6, 8]; list = 17.3; Advantage: flexibility (generic program units) Disadvantages: High cost (dynamic type checking and interpretation) Type error detection by the compiler is difficult Copyright © 2009 Addison-Wesley. All rights reserved.
216
Variable Attributes (continued)
Type Inferencing (ML, Miranda, and Haskell) Rather than by assignment statement, types are determined (by the compiler) from the context of the reference Storage Bindings & Lifetime Allocation - getting a cell from some pool of available cells Deallocation - putting a cell back into the pool The lifetime of a variable is the time during which it is bound to a particular memory cell Copyright © 2009 Addison-Wesley. All rights reserved.
217
Categories of Variables by Lifetimes
Static--bound to memory cells before execution begins and remains bound to the same memory cell throughout execution, e.g., C and C++ static variables Advantages: efficiency (direct addressing), history-sensitive subprogram support Disadvantage: lack of flexibility (no recursion) Copyright © 2009 Addison-Wesley. All rights reserved.
218
Categories of Variables by Lifetimes
Stack-dynamic--Storage bindings are created for variables when their declaration statements are elaborated. (A declaration is elaborated when the executable code associated with it is executed) If scalar, all attributes except address are statically bound local variables in C subprograms and Java methods Advantage: allows recursion; conserves storage Disadvantages: Overhead of allocation and deallocation Subprograms cannot be history sensitive Inefficient references (indirect addressing) Copyright © 2009 Addison-Wesley. All rights reserved.
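A small C sketch (illustrative only) contrasting a static local variable, bound to one cell for the whole run, with a stack-dynamic local, bound anew at every call:

#include <stdio.h>

void counter(void) {
    static int calls = 0;   /* static: retains its value, supports history sensitivity */
    int temp = 0;           /* stack-dynamic: a new cell on every elaboration */

    calls++;
    temp++;
    printf("calls = %d, temp = %d\n", calls, temp);
}

int main(void) {
    counter();   /* prints calls = 1, temp = 1 */
    counter();   /* prints calls = 2, temp = 1 */
    return 0;
}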
219
Categories of Variables by Lifetimes
Explicit heap-dynamic -- Allocated and deallocated by explicit directives, specified by the programmer, which take effect during execution Referenced only through pointers or references, e.g. dynamic objects in C++ (via new and delete), all objects in Java Advantage: provides for dynamic storage management Disadvantage: inefficient and unreliable Copyright © 2009 Addison-Wesley. All rights reserved.
220
Categories of Variables by Lifetimes
Implicit heap-dynamic--Allocation and deallocation caused by assignment statements all variables in APL; all strings and arrays in Perl, JavaScript, and PHP Advantage: flexibility (generic code) Disadvantages: Inefficient, because all attributes are dynamic Loss of error detection Copyright © 2009 Addison-Wesley. All rights reserved.
221
Variable Attributes: Scope
The scope of a variable is the range of statements over which it is visible The nonlocal variables of a program unit are those that are visible but not declared there The scope rules of a language determine how references to names are associated with variables Copyright © 2009 Addison-Wesley. All rights reserved.
222
Static Scope Based on program text
To connect a name reference to a variable, you (or the compiler) must find the declaration Search process: search declarations, first locally, then in increasingly larger enclosing scopes, until one is found for the given name Enclosing static scopes (to a specific scope) are called its static ancestors; the nearest static ancestor is called a static parent Some languages allow nested subprogram definitions, which create nested static scopes (e.g., Ada, JavaScript, Fortran 2003, and PHP) Copyright © 2009 Addison-Wesley. All rights reserved.
223
Scope (continued) Variables can be hidden from a unit by having a "closer" variable with the same name Ada allows access to these "hidden" variables E.g., unit.name Copyright © 2009 Addison-Wesley. All rights reserved.
224
Blocks
A method of creating static scopes inside program units--from ALGOL 60
Example in C:
void sub() {
  int count;
  while (...) {
    int count;
    count++;
    ...
  }
  ...
}
- Note: redeclaring a name in a nested block, as above, is legal in C and C++, but not in Java and C# - too error-prone
Copyright © 2009 Addison-Wesley. All rights reserved.
225
Declaration Order C99, C++, Java, and C# allow variable declarations to appear anywhere a statement can appear In C99, C++, and Java, the scope of all local variables is from the declaration to the end of the block In C#, the scope of any variable declared in a block is the whole block, regardless of the position of the declaration in the block However, a variable still must be declared before it can be used Copyright © 2009 Addison-Wesley. All rights reserved.
226
Declaration Order (continued)
In C++, Java, and C#, variables can be declared in for statements The scope of such variables is restricted to the for construct Copyright © 2009 Addison-Wesley. All rights reserved.
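A small C99 illustration (not from the text) of this restricted scope:

#include <stdio.h>

int main(void) {
    for (int i = 0; i < 3; i++)   /* i is declared in the for statement */
        printf("%d\n", i);
    /* printf("%d\n", i);  would be a compile-time error: i is out of scope here */
    return 0;
}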
227
Global Scope C, C++, PHP, and Python support a program structure that consists of a sequence of function definitions in a file These languages allow variable declarations to appear outside function definitions C and C++ have both declarations (just attributes) and definitions (attributes and storage) A declaration outside a function definition specifies that it is defined in another file Copyright © 2009 Addison-Wesley. All rights reserved.
228
Global Scope (continued)
PHP Programs are embedded in XHTML markup documents, in any number of fragments, some statements and some function definitions The scope of a variable (implicitly) declared in a function is local to the function The scope of a variable implicitly declared outside functions is from the declaration to the end of the program, but skips over any intervening functions Global variables can be accessed in a function through the $GLOBALS array or by declaring it global Copyright © 2009 Addison-Wesley. All rights reserved.
229
Global Scope (continued)
Python A global variable can be referenced in functions, but can be assigned in a function only if it has been declared to be global in the function Copyright © 2009 Addison-Wesley. All rights reserved.
230
Evaluation of Static Scoping
Works well in many situations Problems: In most cases, too much access is possible As a program evolves, the initial structure is destroyed and local variables often become global; subprograms also gravitate toward becoming global, rather than nested Copyright © 2009 Addison-Wesley. All rights reserved.
231
Dynamic Scope Based on calling sequences of program units, not their textual layout (temporal versus spatial) References to variables are connected to declarations by searching back through the chain of subprogram calls that forced execution to this point Copyright © 2009 Addison-Wesley. All rights reserved.
232
Scope Example
Big
  - declaration of X
  Sub1
    - declaration of X
    ...
    call Sub2
    ...
  Sub2
    ...
    - reference to X
    ...
  ...
  call Sub1
  ...
Big calls Sub1
Sub1 calls Sub2
Sub2 uses X
Copyright © 2009 Addison-Wesley. All rights reserved.
233
Scope Example Static scoping Dynamic scoping
Static scoping: the reference to X is to Big's X
Dynamic scoping: the reference to X is to Sub1's X
Evaluation of Dynamic Scoping:
Advantage: convenience
Disadvantages:
1. While a subprogram is executing, its variables are visible to all subprograms it calls
2. Impossible to statically type check
3. Poor readability - it is not possible to statically determine the type of a variable
Copyright © 2009 Addison-Wesley. All rights reserved.
234
Scope and Lifetime Scope and lifetime are sometimes closely related, but are different concepts Consider a static variable in a C or C++ function Copyright © 2009 Addison-Wesley. All rights reserved.
235
Referencing Environments
The referencing environment of a statement is the collection of all names that are visible in the statement In a static-scoped language, it is the local variables plus all of the visible variables in all of the enclosing scopes A subprogram is active if its execution has begun but has not yet terminated In a dynamic-scoped language, the referencing environment is the local variables plus all visible variables in all active subprograms Copyright © 2009 Addison-Wesley. All rights reserved.
236
Named Constants A named constant is a variable that is bound to a value only when it is bound to storage Advantages: readability and modifiability Used to parameterize programs The binding of values to named constants can be either static (called manifest constants) or dynamic Languages: FORTRAN 95: constant-valued expressions Ada, C++, and Java: expressions of any kind C# has two kinds, readonly and const - the values of const named constants are bound at compile time - The values of readonly named constants are dynamically bound Copyright © 2009 Addison-Wesley. All rights reserved.
237
Summary Case sensitivity and the relationship of names to special words represent design issues of names Variables are characterized by the sextuples: name, address, value, type, lifetime, scope Binding is the association of attributes with program entities Scalar variables are categorized as: static, stack dynamic, explicit heap dynamic, implicit heap dynamic Strong typing means detecting all type errors Copyright © 2009 Addison-Wesley. All rights reserved.
238
Chapter 6 Data Types
239
Chapter 6 Topics Introduction Primitive Data Types
Character String Types User-Defined Ordinal Types Array Types Associative Arrays Record Types Union Types Pointer and Reference Types Copyright © 2009 Addison-Wesley. All rights reserved.
240
Introduction A data type defines a collection of data objects and a set of predefined operations on those objects A descriptor is the collection of the attributes of a variable An object represents an instance of a user-defined (abstract data) type One design issue for all data types: What operations are defined and how are they specified? Copyright © 2009 Addison-Wesley. All rights reserved.
241
Primitive Data Types Almost all programming languages provide a set of primitive data types Primitive data types: Those not defined in terms of other data types Some primitive data types are merely reflections of the hardware Others require only a little non-hardware support for their implementation Copyright © 2009 Addison-Wesley. All rights reserved.
242
Primitive Data Types: Integer
Almost always an exact reflection of the hardware so the mapping is trivial There may be as many as eight different integer types in a language Java’s signed integer sizes: byte, short, int, long Copyright © 2009 Addison-Wesley. All rights reserved.
243
Primitive Data Types: Floating Point
Model real numbers, but only as approximations Languages for scientific use support at least two floating-point types (e.g., float and double); sometimes more Usually exactly like the hardware, but not always IEEE Floating-Point Standard 754 Copyright © 2009 Addison-Wesley. All rights reserved.
244
Primitive Data Types: Complex
Some languages support a complex type, e.g., C99, Fortran, and Python Each value consists of two floats, the real part and the imaginary part Literal form (in Python): (7 + 3j), where 7 is the real part and 3 is the imaginary part Copyright © 2009 Addison-Wesley. All rights reserved.
245
Primitive Data Types: Decimal
For business applications (money) Essential to COBOL C# offers a decimal data type Store a fixed number of decimal digits, in coded form (BCD) Advantage: accuracy Disadvantages: limited range, wastes memory Copyright © 2009 Addison-Wesley. All rights reserved.
246
Primitive Data Types: Boolean
Simplest of all Range of values: two elements, one for “true” and one for “false” Could be implemented as bits, but often as bytes Advantage: readability Copyright © 2009 Addison-Wesley. All rights reserved.
247
Primitive Data Types: Character
Stored as numeric codings Most commonly used coding: ASCII An alternative, 16-bit coding: Unicode (UCS-2) Includes characters from most natural languages Originally used in Java C# and JavaScript also support Unicode 32-bit Unicode (UCS-4) Supported by Fortran, starting with 2003 Copyright © 2009 Addison-Wesley. All rights reserved.
248
Character String Types
Values are sequences of characters Design issues: Is it a primitive type or just a special kind of array? Should the length of strings be static or dynamic? Copyright © 2009 Addison-Wesley. All rights reserved.
249
Character String Types Operations
Typical operations: Assignment and copying Comparison (=, >, etc.) Catenation Substring reference Pattern matching Copyright © 2009 Addison-Wesley. All rights reserved.
250
Character String Type in Certain Languages
C and C++ Not primitive Use char arrays and a library of functions that provide operations SNOBOL4 (a string manipulation language) Primitive Many operations, including elaborate pattern matching Fortran and Python Primitive type with assignment and several operations Java Primitive via the String class Perl, JavaScript, Ruby, and PHP - Provide built-in pattern matching, using regular expressions Copyright © 2009 Addison-Wesley. All rights reserved.
251
Character String Length Options
Static: COBOL, Java’s String class Limited Dynamic Length: C and C++ In these languages, a special character is used to indicate the end of a string’s characters, rather than maintaining the length Dynamic (no maximum): SNOBOL4, Perl, JavaScript Ada supports all three string length options Copyright © 2009 Addison-Wesley. All rights reserved.
252
Character String Type Evaluation
Aid to writability As a primitive type with static length, they are inexpensive to provide--why not have them? Dynamic length is nice, but is it worth the expense? Copyright © 2009 Addison-Wesley. All rights reserved.
253
Character String Implementation
Static length: compile-time descriptor Limited dynamic length: may need a run-time descriptor for length (but not in C and C++) Dynamic length: need run-time descriptor; allocation/de-allocation is the biggest implementation problem Copyright © 2009 Addison-Wesley. All rights reserved.
254
Compile- and Run-Time Descriptors
Compile-time descriptor for static strings Run-time descriptor for limited dynamic strings Copyright © 2009 Addison-Wesley. All rights reserved.
255
User-Defined Ordinal Types
An ordinal type is one in which the range of possible values can be easily associated with the set of positive integers Examples of primitive ordinal types in Java integer char boolean Copyright © 2009 Addison-Wesley. All rights reserved.
256
Enumeration Types All possible values, which are named constants, are provided in the definition C# example enum days {mon, tue, wed, thu, fri, sat, sun}; Design issues Is an enumeration constant allowed to appear in more than one type definition, and if so, how is the type of an occurrence of that constant checked? Are enumeration values coerced to integer? Any other type coerced to an enumeration type? Copyright © 2009 Addison-Wesley. All rights reserved.
257
Evaluation of Enumerated Type
Aid to readability, e.g., no need to code a color as a number Aid to reliability, e.g., compiler can check: operations (don’t allow colors to be added) No enumeration variable can be assigned a value outside its defined range Ada, C#, and Java 5.0 provide better support for enumeration than C++ because enumeration type variables in these languages are not coerced into integer types Copyright © 2009 Addison-Wesley. All rights reserved.
258
Subrange Types An ordered contiguous subsequence of an ordinal type
Example: 12..18 is a subrange of the integer type
Ada's design:
type Days is (mon, tue, wed, thu, fri, sat, sun);
subtype Weekdays is Days range mon..fri;
subtype Index is Integer range 1..100;
Day1: Days;
Day2: Weekdays;
Day2 := Day1;
Copyright © 2009 Addison-Wesley. All rights reserved.
259
Subrange Evaluation Aid to readability Reliability
Make it clear to the readers that variables of subrange can store only certain range of values Reliability Assigning a value to a subrange variable that is outside the specified range is detected as an error Copyright © 2009 Addison-Wesley. All rights reserved.
260
Implementation of User-Defined Ordinal Types
Enumeration types are implemented as integers Subrange types are implemented like the parent types with code inserted (by the compiler) to restrict assignments to subrange variables Copyright © 2009 Addison-Wesley. All rights reserved.
261
Array Types An array is an aggregate of homogeneous data elements in which an individual element is identified by its position in the aggregate, relative to the first element. Copyright © 2009 Addison-Wesley. All rights reserved.
262
Array Design Issues What types are legal for subscripts?
Are subscripting expressions in element references range checked? When are subscript ranges bound? When does allocation take place? What is the maximum number of subscripts? Can array objects be initialized? Are any kind of slices supported? Copyright © 2009 Addison-Wesley. All rights reserved.
263
Array Indexing Indexing (or subscripting) is a mapping from indices to elements: array_name (index_value_list) → an element Index Syntax FORTRAN, PL/I, Ada use parentheses Ada explicitly uses parentheses to show uniformity between array references and function calls because both are mappings Most other languages use brackets Copyright © 2009 Addison-Wesley. All rights reserved.
264
Arrays Index (Subscript) Types
FORTRAN, C: integer only Ada: integer or enumeration (includes Boolean and char) Java: integer types only Index range checking - C, C++, Perl, and Fortran do not specify range checking - Java, ML, C# specify range checking - In Ada, the default is to require range checking, but it can be turned off Copyright © 2009 Addison-Wesley. All rights reserved.
265
Subscript Binding and Array Categories
Static: subscript ranges are statically bound and storage allocation is static (before run-time) Advantage: efficiency (no dynamic allocation) Fixed stack-dynamic: subscript ranges are statically bound, but the allocation is done at declaration time Advantage: space efficiency Copyright © 2009 Addison-Wesley. All rights reserved.
266
Subscript Binding and Array Categories (continued)
Stack-dynamic: subscript ranges are dynamically bound and the storage allocation is dynamic (done at run-time) Advantage: flexibility (the size of an array need not be known until the array is to be used) Fixed heap-dynamic: similar to fixed stack-dynamic: storage binding is dynamic but fixed after allocation (i.e., binding is done when requested and storage is allocated from heap, not stack) Copyright © 2009 Addison-Wesley. All rights reserved.
267
Subscript Binding and Array Categories (continued)
Heap-dynamic: binding of subscript ranges and storage allocation is dynamic and can change any number of times Advantage: flexibility (arrays can grow or shrink during program execution) Copyright © 2009 Addison-Wesley. All rights reserved.
268
Subscript Binding and Array Categories (continued)
C and C++ arrays that include static modifier are static C and C++ arrays without static modifier are fixed stack-dynamic C and C++ provide fixed heap-dynamic arrays C# includes a second array class ArrayList that provides fixed heap-dynamic Perl, JavaScript, Python, and Ruby support heap-dynamic arrays Copyright © 2009 Addison-Wesley. All rights reserved.
269
Array Initialization Some languages allow initialization at the time of storage allocation
C, C++, Java, C# example: int list [] = {4, 5, 7, 83};
Character strings in C and C++: char name [] = "freddie";
Arrays of strings in C and C++: char *names [] = {"Bob", "Jake", "Joe"};
Java initialization of String objects: String[] names = {"Bob", "Jake", "Joe"};
Copyright © 2009 Addison-Wesley. All rights reserved.
270
Heterogeneous Arrays A heterogeneous array is one in which the elements need not be of the same type Supported by Perl, Python, JavaScript, and Ruby Copyright © 2009 Addison-Wesley. All rights reserved.
271
Array Initialization C-based languages Ada Python List comprehensions
int list [] = {1, 3, 5, 7};
char *names [] = {"Mike", "Fred", "Mary Lou"};
Ada:
List : array (1..5) of Integer := (1 => 17, 3 => 34, others => 0);
Python list comprehensions:
list = [x ** 2 for x in range(12) if x % 3 == 0]
puts [0, 9, 36, 81] in list
Copyright © 2009 Addison-Wesley. All rights reserved.
272
Arrays Operations APL provides the most powerful array processing operations, for vectors and matrices, as well as unary operators (for example, to reverse column elements) Ada allows array assignment and also catenation Python supports array assignment, but it is only a reference change; Python also supports array catenation and element membership operations Ruby also provides array catenation Fortran provides elemental operations, which operate between pairs of array elements For example, the + operator between two arrays results in an array of the sums of the element pairs of the two arrays Copyright © 2009 Addison-Wesley. All rights reserved.
273
Rectangular and Jagged Arrays
A rectangular array is a multi-dimensioned array in which all of the rows have the same number of elements and all columns have the same number of elements A jagged matrix has rows with varying number of elements Possible when multi-dimensioned arrays actually appear as arrays of arrays C, C++, and Java support jagged arrays Fortran, Ada, and C# support rectangular arrays (C# also supports jagged arrays) Copyright © 2009 Addison-Wesley. All rights reserved.
274
Slices A slice is some substructure of an array; nothing more than a referencing mechanism Slices are only useful in languages that have array operations Copyright © 2009 Addison-Wesley. All rights reserved.
275
Slice Examples Fortran 95 Ruby supports slices with the slice method
Fortran 95:
Integer, Dimension (10) :: Vector
Integer, Dimension (3, 3) :: Mat
Integer, Dimension (3, 3) :: Cube
Vector (3:6) is a four element array
Ruby supports slices with the slice method: list.slice(2, 2) returns the third and fourth elements of list
Copyright © 2009 Addison-Wesley. All rights reserved.
276
Slices Examples in Fortran 95
Copyright © 2009 Addison-Wesley. All rights reserved.
277
Implementation of Arrays
Access function maps subscript expressions to an address in the array Access function for single-dimensioned arrays: address(list[k]) = address (list[lower_bound]) + ((k-lower_bound) * element_size) Copyright © 2009 Addison-Wesley. All rights reserved.
278
Accessing Multi-dimensioned Arrays
Two common ways: Row major order (by rows) – used in most languages column major order (by columns) – used in Fortran Copyright © 2009 Addison-Wesley. All rights reserved.
279
Locating an Element in a Multi-dimensioned Array
General format:
location(a[i, j]) = address of a[row_lb, col_lb] + (((i - row_lb) * n) + (j - col_lb)) * element_size
where n is the number of elements per row (row-major order)
Copyright © 2009 Addison-Wesley. All rights reserved.
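A small worked example (values chosen for illustration, not from the text): for an array a with row_lb = 1, col_lb = 1, four 4-byte elements per row (n = 4), stored in row-major order starting at address 1000, element a[2, 3] is located at 1000 + (((2 - 1) * 4) + (3 - 1)) * 4 = 1000 + 24 = 1024.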
280
Compile-Time Descriptors
Single-dimensioned array Multi-dimensional array Copyright © 2009 Addison-Wesley. All rights reserved.
281
Associative Arrays An associative array is an unordered collection of data elements that are indexed by an equal number of values called keys User-defined keys must be stored Design issues: - What is the form of references to elements? - Is the size static or dynamic? Built-in type in Perl, Python, Ruby, and Lua In Lua, they are supported by tables Copyright © 2009 Addison-Wesley. All rights reserved.
282
Associative Arrays in Perl
Names begin with %; literals are delimited by parentheses
%hi_temps = ("Mon" => 77, "Tue" => 79, "Wed" => 65, …);
Subscripting is done using braces and keys
$hi_temps{"Wed"} = 83;
Elements can be removed with delete
delete $hi_temps{"Tue"};
Copyright © 2009 Addison-Wesley. All rights reserved.
283
Record Types A record is a possibly heterogeneous aggregate of data elements in which the individual elements are identified by names Design issues: What is the syntactic form of references to the fields? Are elliptical references allowed? Copyright © 2009 Addison-Wesley. All rights reserved.
284
Definition of Records in COBOL
COBOL uses level numbers to show nested records; others use recursive definition
01 EMP-REC.
   02 EMP-NAME.
      05 FIRST PIC X(20).
      05 MID   PIC X(10).
      05 LAST  PIC X(20).
   02 HOURLY-RATE PIC 99V99.
Copyright © 2009 Addison-Wesley. All rights reserved.
285
Definition of Records in Ada
Record structures are indicated in an orthogonal way
type Emp_Rec_Type is record
   First: String (1..20);
   Mid:   String (1..10);
   Last:  String (1..20);
   Hourly_Rate: Float;
end record;
Emp_Rec: Emp_Rec_Type;
Copyright © 2009 Addison-Wesley. All rights reserved.
286
References to Records Record field references
1. COBOL field_name OF record_name_1 OF ... OF record_name_n
2. Others (dot notation) record_name_1.record_name_2. ... .record_name_n.field_name
Fully qualified references must include all record names
Elliptical references allow leaving out record names as long as the reference is unambiguous, for example in COBOL FIRST, FIRST OF EMP-NAME, and FIRST OF EMP-REC are elliptical references to the employee's first name
Copyright © 2009 Addison-Wesley. All rights reserved.
287
Operations on Records Assignment is very common if the types are identical Ada allows record comparison Ada records can be initialized with aggregate literals COBOL provides MOVE CORRESPONDING Copies a field of the source record to the corresponding field in the target record Copyright © 2009 Addison-Wesley. All rights reserved.
288
Evaluation and Comparison to Arrays
Records are used when collection of data values is heterogeneous Access to array elements is much slower than access to record fields, because subscripts are dynamic (field names are static) Dynamic subscripts could be used with record field access, but it would disallow type checking and it would be much slower Copyright © 2009 Addison-Wesley. All rights reserved.
289
Implementation of Record Type
Offset address relative to the beginning of the records is associated with each field Copyright © 2009 Addison-Wesley. All rights reserved.
290
Union Types A union is a type whose variables are allowed to store different type values at different times during execution Design issues Should type checking be required? Should unions be embedded in records? Copyright © 2009 Addison-Wesley. All rights reserved.
291
Discriminated vs. Free Unions
Fortran, C, and C++ provide union constructs in which there is no language support for type checking; the union in these languages is called free union Type checking of unions requires that each union include a type indicator called a discriminant Supported by Ada Copyright © 2009 Addison-Wesley. All rights reserved.
292
Ada Union Types type Shape is (Circle, Triangle, Rectangle);
type Colors is (Red, Green, Blue);
type Figure (Form: Shape) is record
   Filled: Boolean;
   Color: Colors;
   case Form is
      when Circle =>
         Diameter: Float;
      when Triangle =>
         Leftside, Rightside: Integer;
         Angle: Float;
      when Rectangle =>
         Side1, Side2: Integer;
   end case;
end record;
Copyright © 2009 Addison-Wesley. All rights reserved.
293
Ada Union Type Illustrated
A discriminated union of three shape variables Copyright © 2009 Addison-Wesley. All rights reserved.
294
Evaluation of Unions Free unions are unsafe
Do not allow type checking Java and C# do not support unions Reflective of growing concerns for safety in programming languages Ada's discriminated unions are safe Copyright © 2009 Addison-Wesley. All rights reserved.
295
Pointer and Reference Types
A pointer type variable has a range of values that consists of memory addresses and a special value, nil Provide the power of indirect addressing Provide a way to manage dynamic memory A pointer can be used to access a location in the area where storage is dynamically created (usually called a heap) Copyright © 2009 Addison-Wesley. All rights reserved.
296
Design Issues of Pointers
What are the scope and lifetime of a pointer variable? What is the lifetime of a heap-dynamic variable? Are pointers restricted as to the type of value to which they can point? Are pointers used for dynamic storage management, indirect addressing, or both? Should the language support pointer types, reference types, or both? Copyright © 2009 Addison-Wesley. All rights reserved.
297
Pointer Operations Two fundamental operations: assignment and dereferencing Assignment is used to set a pointer variable’s value to some useful address Dereferencing yields the value stored at the location represented by the pointer’s value Dereferencing can be explicit or implicit C++ uses an explicit operation via * j = *ptr sets j to the value located at ptr Copyright © 2009 Addison-Wesley. All rights reserved.
298
Pointer Assignment Illustrated
The assignment operation j = *ptr Copyright © 2009 Addison-Wesley. All rights reserved.
299
Problems with Pointers
Dangling pointers (dangerous) A pointer points to a heap-dynamic variable that has been deallocated Lost heap-dynamic variable An allocated heap-dynamic variable that is no longer accessible to the user program (often called garbage) Pointer p1 is set to point to a newly created heap-dynamic variable Pointer p1 is later set to point to another newly created heap-dynamic variable The process of losing heap-dynamic variables is called memory leakage Copyright © 2009 Addison-Wesley. All rights reserved.
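A small C illustration (not from the text) of both problems:

#include <stdlib.h>

int main(void) {
    /* Dangling pointer: q still holds the address of deallocated storage */
    int *p = malloc(sizeof(int));
    int *q = p;
    free(p);
    /* *q = 3;  would now use deallocated storage (undefined behavior) */

    /* Lost heap-dynamic variable (memory leakage): the first cell becomes
       unreachable when p1 is pointed at a second cell */
    int *p1 = malloc(sizeof(int));
    p1 = malloc(sizeof(int));     /* the original cell is now garbage */

    free(p1);
    return 0;
}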
300
Pointers in Ada Some dangling pointers are disallowed because dynamic objects can be automatically deallocated at the end of the scope of the pointer's type The lost heap-dynamic variable problem is not eliminated by Ada (possible with UNCHECKED_DEALLOCATION) Copyright © 2009 Addison-Wesley. All rights reserved.
301
Pointers in C and C++ Extremely flexible but must be used with care
Pointers can point at any variable regardless of when or where it was allocated Used for dynamic storage management and addressing Pointer arithmetic is possible Explicit dereferencing and address-of operators Domain type need not be fixed (void *) void * can point to any type and can be type checked (cannot be de-referenced) Copyright © 2009 Addison-Wesley. All rights reserved.
302
Pointer Arithmetic in C and C++
float stuff[100];
float *p;
p = stuff;
*(p+5) is equivalent to stuff[5] and p[5]
*(p+i) is equivalent to stuff[i] and p[i]
Copyright © 2009 Addison-Wesley. All rights reserved.
303
Reference Types C++ includes a special kind of pointer type called a reference type that is used primarily for formal parameters Advantages of both pass-by-reference and pass-by-value Java extends C++’s reference variables and allows them to replace pointers entirely References are references to objects, rather than being addresses C# includes both the references of Java and the pointers of C++ Copyright © 2009 Addison-Wesley. All rights reserved.
304
Evaluation of Pointers
Dangling pointers and dangling objects are problems as is heap management Pointers are like goto's--they widen the range of cells that can be accessed by a variable Pointers or references are necessary for dynamic data structures--so we can't design a language without them Copyright © 2009 Addison-Wesley. All rights reserved.
305
Representations of Pointers
Large computers use single values Intel microprocessors use segment and offset Copyright © 2009 Addison-Wesley. All rights reserved.
306
Dangling Pointer Problem
Tombstone: extra heap cell that is a pointer to the heap-dynamic variable
The actual pointer variable points only at tombstones
When a heap-dynamic variable is de-allocated, the tombstone remains but is set to nil
Costly in time and space
Locks-and-keys: pointer values are represented as (key, address) pairs
Heap-dynamic variables are represented as the variable plus a cell for an integer lock value
When a heap-dynamic variable is allocated, a lock value is created and placed both in the lock cell and in the key cell of the pointer
Copyright © 2009 Addison-Wesley. All rights reserved.
307
Heap Management A very complex run-time process
Single-size cells vs. variable-size cells Two approaches to reclaim garbage Reference counters (eager approach): reclamation is gradual Mark-sweep (lazy approach): reclamation occurs when the list of available space becomes empty Copyright © 2009 Addison-Wesley. All rights reserved.
308
Reference Counter Reference counters: maintain a counter in every cell that stores the number of pointers currently pointing at the cell Disadvantages: space required, execution time required, complications for cells connected circularly Advantage: it is intrinsically incremental, so significant delays in the application execution are avoided Copyright © 2009 Addison-Wesley. All rights reserved.
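A minimal sketch of reference counting in C, assuming single-size cells with one pointer field; the cell layout and function names are illustrative, not those of any particular run-time system:

#include <stdlib.h>

typedef struct cell {
    int refcount;            /* number of pointers currently pointing at the cell */
    struct cell *next;       /* the cell's single pointer field */
    /* ... data ... */
} Cell;

Cell *cell_new(void) {
    Cell *c = calloc(1, sizeof(Cell));
    c->refcount = 1;                     /* one reference: the creator's pointer */
    return c;
}

void cell_release(Cell *c) {
    if (c == NULL) return;
    if (--c->refcount == 0) {            /* last pointer removed: reclaim eagerly */
        cell_release(c->next);           /* drop this cell's outgoing reference */
        free(c);
    }
}

void cell_assign(Cell **dst, Cell *src) {
    if (src != NULL) src->refcount++;    /* the cell src points at gains a pointer */
    cell_release(*dst);                  /* the old target loses a pointer */
    *dst = src;
}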
309
Mark-Sweep The run-time system allocates storage cells as requested and disconnects pointers from cells as necessary; mark-sweep then begins Every heap cell has an extra bit used by collection algorithm All cells initially set to garbage All pointers traced into heap, and reachable cells marked as not garbage All garbage cells returned to list of available cells Disadvantages: in its original form, it was done too infrequently. When done, it caused significant delays in application execution. Contemporary mark-sweep algorithms avoid this by doing it more often—called incremental mark-sweep Copyright © 2009 Addison-Wesley. All rights reserved.
310
Marking Algorithm Copyright © 2009 Addison-Wesley. All rights reserved.
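The marking routine itself can be sketched in C as below; the cell layout (a mark bit plus two pointer fields) is an assumption for illustration, following the recursive idea described on the previous slide rather than the text's exact algorithm:

typedef struct cell {
    int marked;                 /* the extra bit used by the collection algorithm */
    struct cell *llink, *rlink; /* the cell's pointer fields */
    /* ... data ... */
} Cell;

void mark(Cell *c) {
    if (c != NULL && !c->marked) {
        c->marked = 1;          /* reachable, so not garbage */
        mark(c->llink);         /* trace every pointer in the cell */
        mark(c->rlink);
    }
}

/* The collector first clears every mark bit (all cells assumed garbage),
   then calls mark() on every pointer outside the heap, and finally sweeps
   the heap, returning unmarked cells to the list of available space. */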
311
Variable-Size Cells All the difficulties of single-size cells plus more Required by most programming languages If mark-sweep is used, additional problems occur The initial setting of the indicators of all cells in the heap is difficult The marking process is nontrivial Maintaining the list of available space is another source of overhead Copyright © 2009 Addison-Wesley. All rights reserved.
312
Type Checking Generalize the concept of operands and operators to include subprograms and assignments Type checking is the activity of ensuring that the operands of an operator are of compatible types A compatible type is one that is either legal for the operator, or is allowed under language rules to be implicitly converted, by compiler-generated code, to a legal type This automatic conversion is called a coercion. A type error is the application of an operator to an operand of an inappropriate type Copyright © 2009 Addison-Wesley. All rights reserved.
313
Type Checking (continued)
If all type bindings are static, nearly all type checking can be static If type bindings are dynamic, type checking must be dynamic A programming language is strongly typed if type errors are always detected Advantage of strong typing: allows the detection of the misuses of variables that result in type errors Copyright © 2009 Addison-Wesley. All rights reserved.
314
Strong Typing Language examples:
FORTRAN 95 is not: parameters, EQUIVALENCE C and C++ are not: parameter type checking can be avoided; unions are not type checked Ada is, almost (UNCHECKED_CONVERSION is a loophole) (Java and C# are similar to Ada) Copyright © 2009 Addison-Wesley. All rights reserved.
315
Strong Typing (continued)
Coercion rules strongly affect strong typing--they can weaken it considerably (C++ versus Ada) Although Java has just half the assignment coercions of C++, its strong typing is still far less effective than that of Ada Copyright © 2009 Addison-Wesley. All rights reserved.
316
Name Type Equivalence Name type equivalence means the two variables have equivalent types if they are in either the same declaration or in declarations that use the same type name Easy to implement but highly restrictive: Subranges of integer types are not equivalent with integer types Formal parameters must be the same type as their corresponding actual parameters Copyright © 2009 Addison-Wesley. All rights reserved.
317
Structure Type Equivalence
Structure type equivalence means that two variables have equivalent types if their types have identical structures More flexible, but harder to implement Copyright © 2009 Addison-Wesley. All rights reserved.
318
Type Equivalence (continued)
Consider the problem of two structured types: Are two record types equivalent if they are structurally the same but use different field names? Are two array types equivalent if they are the same except that the subscripts are different? (e.g. [1..10] and [0..9]) Are two enumeration types equivalent if their components are spelled differently? With structural type equivalence, you cannot differentiate between types of the same structure (e.g. different units of speed, both float) Copyright © 2009 Addison-Wesley. All rights reserved.
319
Theory and Data Types Type theory is a broad area of study in mathematics, logic, computer science, and philosophy Two branches of type theory in computer science: Practical – data types in commercial languages Abstract – typed lambda calculus A type system is a set of types and the rules that govern their use in programs Copyright © 2009 Addison-Wesley. All rights reserved.
320
Theory and Data Types (continued)
Formal model of a type system is a set of types and a collection of functions that define the type rules Either an attribute grammar or a type map could be used for the functions Finite mappings – model arrays and functions Cartesian products – model tuples and records Set unions – model union types Subsets – model subtypes Copyright © 2009 Addison-Wesley. All rights reserved.
321
Summary The data types of a language are a large part of what determines that language’s style and usefulness The primitive data types of most imperative languages include numeric, character, and Boolean types The user-defined enumeration and subrange types are convenient and add to the readability and reliability of programs Arrays and records are included in most languages Pointers are used for addressing flexibility and to control dynamic storage management Copyright © 2009 Addison-Wesley. All rights reserved.
322
Expressions and Assignment Statements
Chapter 7 Expressions and Assignment Statements
323
Chapter 7 Topics Introduction Arithmetic Expressions
Overloaded Operators Type Conversions Relational and Boolean Expressions Short-Circuit Evaluation Assignment Statements Mixed-Mode Assignment Copyright © 2009 Addison-Wesley. All rights reserved.
324
Introduction Expressions are the fundamental means of specifying computations in a programming language To understand expression evaluation, need to be familiar with the orders of operator and operand evaluation Essence of imperative languages is dominant role of assignment statements Copyright © 2009 Addison-Wesley. All rights reserved.
325
Arithmetic Expressions
Arithmetic evaluation was one of the motivations for the development of the first programming languages Arithmetic expressions consist of operators, operands, parentheses, and function calls Copyright © 2009 Addison-Wesley. All rights reserved.
326
Arithmetic Expressions: Design Issues
Design issues for arithmetic expressions Operator precedence rules? Operator associativity rules? Order of operand evaluation? Operand evaluation side effects? Operator overloading? Type mixing in expressions? Copyright © 2009 Addison-Wesley. All rights reserved.
327
Arithmetic Expressions: Operators
A unary operator has one operand A binary operator has two operands A ternary operator has three operands Copyright © 2009 Addison-Wesley. All rights reserved.
328
Arithmetic Expressions: Operator Precedence Rules
The operator precedence rules for expression evaluation define the order in which “adjacent” operators of different precedence levels are evaluated Typical precedence levels parentheses unary operators ** (if the language supports it) *, / +, - Copyright © 2009 Addison-Wesley. All rights reserved.
329
Arithmetic Expressions: Operator Associativity Rule
The operator associativity rules for expression evaluation define the order in which adjacent operators with the same precedence level are evaluated Typical associativity rules Left to right, except **, which is right to left Sometimes unary operators associate right to left (e.g., in FORTRAN) APL is different; all operators have equal precedence and all operators associate right to left Precedence and associativity rules can be overridden with parentheses Copyright © 2009 Addison-Wesley. All rights reserved.
330
Ruby Expressions All arithmetic, relational, and assignment operators, as well as array indexing, shifts, and bit-wise logic operators, are implemented as methods - One result of this is that these operators can all be overridden by application programs Copyright © 2009 Addison-Wesley. All rights reserved.
331
Arithmetic Expressions: Conditional Expressions
C-based languages (e.g., C, C++) An example: average = (count == 0)? 0 : sum / count Evaluates as if written like if (count == 0) average = 0 else average = sum /count Copyright © 2009 Addison-Wesley. All rights reserved.
332
Arithmetic Expressions: Operand Evaluation Order
Variables: fetch the value from memory Constants: sometimes a fetch from memory; sometimes the constant is in the machine language instruction Parenthesized expressions: evaluate all operands and operators first The most interesting case is when an operand is a function call Copyright © 2009 Addison-Wesley. All rights reserved.
333
Arithmetic Expressions: Potentials for Side Effects
Functional side effects: when a function changes a two-way parameter or a non-local variable Problem with functional side effects: When a function referenced in an expression alters another operand of the expression; e.g., for a parameter change: a = 10; /* assume that fun changes its parameter */ b = a + fun(&a); Copyright © 2009 Addison-Wesley. All rights reserved.
334
Functional Side Effects
Two possible solutions to the problem Write the language definition to disallow functional side effects No two-way parameters in functions No non-local references in functions Advantage: it works! Disadvantage: inflexibility of one-way parameters and lack of non-local references Write the language definition to demand that operand evaluation order be fixed Disadvantage: limits some compiler optimizations Java requires that operands appear to be evaluated in left-to-right order Copyright © 2009 Addison-Wesley. All rights reserved.
335
Overloaded Operators Use of an operator for more than one purpose is called operator overloading Some are common (e.g., + for int and float) Some are potential trouble (e.g., * in C and C++) Loss of compiler error detection (omission of an operand should be a detectable error) Some loss of readability Copyright © 2009 Addison-Wesley. All rights reserved.
336
Overloaded Operators (continued)
C++ and C# allow user-defined overloaded operators Potential problems: Users can define nonsense operations Readability may suffer, even when the operators make sense Copyright © 2009 Addison-Wesley. All rights reserved.
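A short C++ sketch of a user-defined overload that does make sense (the Vec2 type is invented for illustration): the operator reads naturally, but nothing stops a programmer from giving + a nonsensical meaning instead:

#include <iostream>

struct Vec2 {
    double x, y;
};

// Component-wise addition: a sensible user-defined overload
Vec2 operator+(const Vec2& a, const Vec2& b) {
    return {a.x + b.x, a.y + b.y};
}

int main() {
    Vec2 u{1.0, 2.0}, v{3.0, 4.0};
    Vec2 w = u + v;
    std::cout << w.x << " " << w.y << "\n";   // 4 6
    return 0;
}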
337
Type Conversions A narrowing conversion is one that converts an object to a type that cannot include all of the values of the original type e.g., float to int A widening conversion is one in which an object is converted to a type that can include at least approximations to all of the values of the original type e.g., int to float Copyright © 2009 Addison-Wesley. All rights reserved.
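A small C++ illustration of the two kinds of conversion (values chosen arbitrarily):

#include <iostream>

int main() {
    int i = 5;
    float f = i;      // widening: every int value is at least approximated
    float g = 5.75f;
    int j = g;        // narrowing: the fractional part is lost, j becomes 5
    std::cout << f << " " << j << "\n";   // 5 5
    return 0;
}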
338
Type Conversions: Mixed Mode
A mixed-mode expression is one that has operands of different types A coercion is an implicit type conversion Disadvantage of coercions: They decrease the type error detection ability of the compiler In most languages, all numeric types are coerced in expressions, using widening conversions In Ada, there are virtually no coercions in expressions Copyright © 2009 Addison-Wesley. All rights reserved.
339
Explicit Type Conversions
Called casting in C-based languages Examples C: (int)angle Ada: Float (Sum) Note that Ada’s syntax is similar to that of function calls Copyright © 2009 Addison-Wesley. All rights reserved.
340
Type Conversions: Errors in Expressions
Causes Inherent limitations of arithmetic e.g., division by zero Limitations of computer arithmetic e.g. overflow Often ignored by the run-time system Copyright © 2009 Addison-Wesley. All rights reserved.
341
Relational and Boolean Expressions
Relational Expressions Use relational operators and operands of various types Evaluate to some Boolean representation Operator symbols used vary somewhat among languages (!=, /=, ~=, .NE., <>, #) JavaScript and PHP have two additional relational operators, === and !== - Similar to their cousins, == and !=, except that they do not coerce their operands Copyright © 2009 Addison-Wesley. All rights reserved.
342
Relational and Boolean Expressions
Operands are Boolean and the result is Boolean Example operators:
      FORTRAN 77   FORTRAN 90   C    Ada
AND   .AND.        and          &&   and
OR    .OR.         or           ||   or
NOT   .NOT.        not          !    not
XOR                                  xor
Copyright © 2009 Addison-Wesley. All rights reserved.
343
Relational and Boolean Expressions: No Boolean Type in C
C89 has no Boolean type--it uses int type with 0 for false and nonzero for true One odd characteristic of C’s expressions: a < b < c is a legal expression, but the result is not what you might expect: Left operator is evaluated, producing 0 or 1 The evaluation result is then compared with the third operand (i.e., c) Copyright © 2009 Addison-Wesley. All rights reserved.
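The same oddity exists in C++; a sketch (with arbitrary values) showing the legal-but-surprising chained comparison next to the intended test:

#include <iostream>

int main() {
    int a = 3, b = 2, c = 1;
    // a < b is false (0), and 0 < c is true, so the whole expression is true
    std::cout << (a < b < c) << "\n";        // 1 -- legal but misleading
    std::cout << (a < b && b < c) << "\n";   // 0 -- the intended test
    return 0;
}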
344
Short Circuit Evaluation
An expression in which the result is determined without evaluating all of the operands and/or operators Example: (13 * a) * (b / 13 - 1) If a is zero, there is no need to evaluate (b / 13 - 1) Problem with non-short-circuit evaluation: index = 1; while (index <= length) && (LIST[index] != value) index++; When index = length, LIST[index] will cause an indexing problem (assuming LIST has length - 1 elements) Copyright © 2009 Addison-Wesley. All rights reserved.
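A hedged C++ version of the search loop above (0-based indexing, invented data): with the short-circuit &&, the subscript expression is never evaluated once the index test fails, so the out-of-range access cannot happen:

#include <iostream>

int main() {
    int list[] = {3, 7, 9, 5};              // illustrative data
    int length = 4, value = 8, index = 0;
    // When index == length the left operand is false and
    // list[index] is never evaluated.
    while (index < length && list[index] != value)
        index++;
    std::cout << (index < length ? "found" : "not found") << "\n";
    return 0;
}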
345
Short Circuit Evaluation (continued)
C, C++, and Java: use short-circuit evaluation for the usual Boolean operators (&& and ||), but also provide bitwise Boolean operators that are not short circuit (& and |) Ada: programmer can specify either (short-circuit is specified with and then and or else) Short-circuit evaluation exposes the potential problem of side effects in expressions e.g. (a > b) || (b++ / 3) Copyright © 2009 Addison-Wesley. All rights reserved.
346
Assignment Statements
The general syntax <target_var> <assign_operator> <expression> The assignment operator = FORTRAN, BASIC, the C-based languages := ALGOLs, Pascal, Ada = can be bad when it is overloaded for the relational operator for equality (that’s why the C-based languages use == as the relational operator) Copyright © 2009 Addison-Wesley. All rights reserved.
347
Assignment Statements: Conditional Targets
Conditional targets (Perl) ($flag ? $total : $subtotal) = 0 Which is equivalent to if ($flag){ $total = 0 } else { $subtotal = 0 } Copyright © 2009 Addison-Wesley. All rights reserved.
348
Assignment Statements: Compound Operators
A shorthand method of specifying a commonly needed form of assignment Introduced in ALGOL; adopted by C Example a = a + b is written as a += b Copyright © 2009 Addison-Wesley. All rights reserved.
349
Assignment Statements: Unary Assignment Operators
Unary assignment operators in C-based languages combine increment and decrement operations with assignment Examples sum = ++count (count incremented, then added to sum) sum = count++ (count added to sum, then incremented) count++ (count incremented) -count++ (count incremented then negated) Copyright © 2009 Addison-Wesley. All rights reserved.
350
Assignment as an Expression
In C, C++, and Java, the assignment statement produces a result and can be used as an operand An example: while ((ch = getchar()) != EOF){…} ch = getchar() is carried out; the result (assigned to ch) is used as a conditional value for the while statement Copyright © 2009 Addison-Wesley. All rights reserved.
351
List Assignments Perl and Ruby support list assignments e.g.,
($first, $second, $third) = (20, 30, 40); Copyright © 2009 Addison-Wesley. All rights reserved.
352
Mixed-Mode Assignment
Assignment statements can also be mixed-mode In Fortran, C, and C++, any numeric type value can be assigned to any numeric type variable In Java, only widening assignment coercions are done In Ada, there is no assignment coercion Copyright © 2009 Addison-Wesley. All rights reserved.
353
Summary Expressions Operator precedence and associativity
Operator overloading Mixed-type expressions Various forms of assignment Copyright © 2009 Addison-Wesley. All rights reserved.
354
Statement-Level Control Structures
Chapter 8 Statement-Level Control Structures
355
Chapter 8 Topics Introduction Selection Statements
Iterative Statements Unconditional Branching Guarded Commands Conclusions Copyright © 2009 Addison-Wesley. All rights reserved.
356
Levels of Control Flow Within expressions (Chapter 7)
Among program units (Chapter 9) Among program statements (this chapter) Copyright © 2009 Addison-Wesley. All rights reserved.
357
Control Statements: Evolution
FORTRAN I control statements were based directly on IBM 704 hardware Much research and argument in the 1960s about the issue One important result: It was proven that all algorithms represented by flowcharts can be coded with only two-way selection and pretest logical loops Copyright © 2009 Addison-Wesley. All rights reserved.
358
Control Structure A control structure is a control statement and the statements whose execution it controls Design question Should a control structure have multiple entries? Copyright © 2009 Addison-Wesley. All rights reserved.
359
Selection Statements A selection statement provides the means of choosing between two or more paths of execution Two general categories: Two-way selectors Multiple-way selectors Copyright © 2009 Addison-Wesley. All rights reserved.
360
Two-Way Selection Statements
General form: if control_expression then clause else clause Design Issues: What is the form and type of the control expression? How are the then and else clauses specified? How should the meaning of nested selectors be specified? Copyright © 2009 Addison-Wesley. All rights reserved.
361
The Control Expression
If the then reserved word or some other syntactic marker is not used to introduce the then clause, the control expression is placed in parentheses In C89, C99, Python, and C++, the control expression can be arithmetic In languages such as Ada, Java, Ruby, and C#, the control expression must be Boolean Copyright © 2009 Addison-Wesley. All rights reserved.
362
Clause Form In many contemporary languages, the then and else clauses can be single statements or compound statements In Perl, all clauses must be delimited by braces (they must be compound) In Fortran 95, Ada, and Ruby, clauses are statement sequences Python uses indentation to define clauses:
if x > y :
  x = y
  print "case 1"
Copyright © 2009 Addison-Wesley. All rights reserved.
363
Nesting Selectors Java example
if (sum == 0)
  if (count == 0)
    result = 0;
else
  result = 1;
Which if gets the else? Java's static semantics rule: else matches with the nearest if Copyright © 2009 Addison-Wesley. All rights reserved.
364
Nesting Selectors (continued)
To force an alternative semantics, compound statements may be used:
if (sum == 0) {
  if (count == 0)
    result = 0;
}
else
  result = 1;
The above solution is used in C, C++, and C# Perl requires all then and else clauses to be compound Copyright © 2009 Addison-Wesley. All rights reserved.
365
Nesting Selectors (continued)
Statement sequences as clauses: Ruby
if sum == 0 then
  if count == 0 then
    result = 0
  else
    result = 1
  end
end
Copyright © 2009 Addison-Wesley. All rights reserved.
366
Nesting Selectors (continued)
Python
if sum == 0 :
  if count == 0 :
    result = 0
  else :
    result = 1
Copyright © 2009 Addison-Wesley. All rights reserved.
367
Multiple-Way Selection Statements
Allow the selection of one of any number of statements or statement groups Design Issues: What is the form and type of the control expression? How are the selectable segments specified? Is execution flow through the structure restricted to include just a single selectable segment? How are case values specified? What is done about unrepresented expression values? Copyright © 2009 Addison-Wesley. All rights reserved.
368
Multiple-Way Selection: Examples
C, C++, and Java
switch (expression) {
  case const_expr_1: stmt_1;
  …
  case const_expr_n: stmt_n;
  [default: stmt_n+1]
}
Copyright © 2009 Addison-Wesley. All rights reserved.
369
Multiple-Way Selection: Examples
Design choices for C’s switch statement Control expression can be only an integer type Selectable segments can be statement sequences, blocks, or compound statements Any number of segments can be executed in one execution of the construct (there is no implicit branch at the end of selectable segments) default clause is for unrepresented values (if there is no default, the whole statement does nothing) Copyright © 2009 Addison-Wesley. All rights reserved.
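A small C++ sketch of the design choices just listed: segments fall through unless an explicit break (or other branch) ends them, and default catches unrepresented values:

#include <iostream>

int main() {
    int x = 2;                      // arbitrary test value
    switch (x) {
        case 1:                     // no break: falls through into case 2
        case 2:
            std::cout << "one or two\n";
            break;                  // without this, case 3 would also run
        case 3:
            std::cout << "three\n";
            break;
        default:
            std::cout << "something else\n";
    }
    return 0;
}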
370
Multiple-Way Selection: Examples
C# differs from C in that it has a static semantics rule that disallows the implicit execution of more than one segment Each selectable segment must end with an unconditional branch (goto or break) Also, in C# the control expression and the case constants can be strings Copyright © 2009 Addison-Wesley. All rights reserved.
371
Multiple-Way Selection: Examples
Ada
case expression is
  when choice list => stmt_sequence;
  …
  when others => stmt_sequence;
end case;
More reliable than C's switch (once a stmt_sequence execution is completed, control is passed to the first statement after the case statement) Copyright © 2009 Addison-Wesley. All rights reserved.
372
Multiple-Way Selection: Examples
Ada design choices: 1. Expression can be any ordinal type 2. Segments can be single or compound 3. Only one segment can be executed per execution of the construct 4. Unrepresented values are not allowed Constant List Forms: 1. A list of constants 2. Can include: - Subranges - Boolean OR operators (|) Copyright © 2009 Addison-Wesley. All rights reserved.
373
Multiple-Way Selection: Examples
Ruby has two forms of case statements 1. One form uses when conditions
leap = case
       when year % 400 == 0 then true
       when year % 100 == 0 then false
       else year % 4 == 0
       end
2. The other uses a case value and when values
case in_val
  when -1 then neg_count += 1
  when 0 then zero_count += 1
  when 1 then pos_count += 1
  else puts "Error – in_val is out of range"
end
Copyright © 2009 Addison-Wesley. All rights reserved.
374
Multiple-Way Selection Using if
Multiple Selectors can appear as direct extensions to two-way selectors, using else-if clauses, for example in Python:
if count < 10 :
  bag1 = True
elif count < 100 :
  bag2 = True
elif count < 1000 :
  bag3 = True
Copyright © 2009 Addison-Wesley. All rights reserved.
375
Multiple-Way Selection Using if
The Python example can be written as a Ruby case
case
  when count < 10 then bag1 = true
  when count < 100 then bag2 = true
  when count < 1000 then bag3 = true
end
Copyright © 2009 Addison-Wesley. All rights reserved.
376
Iterative Statements The repeated execution of a statement or compound statement is accomplished either by iteration or recursion General design issues for iteration control statements: 1. How is iteration controlled? 2. Where is the control mechanism in the loop? Copyright © 2009 Addison-Wesley. All rights reserved.
377
Counter-Controlled Loops
A counting iterative statement has a loop variable, and a means of specifying the initial, terminal, and stepsize values Design Issues: What are the type and scope of the loop variable? Should it be legal for the loop variable or loop parameters to be changed in the loop body, and if so, does the change affect loop control? Should the loop parameters be evaluated only once, or once for every iteration? Copyright © 2009 Addison-Wesley. All rights reserved.
378
Iterative Statements: Examples
FORTRAN 95 syntax DO label var = start, finish [, stepsize] Stepsize can be any value but zero Parameters can be expressions Design choices: 1. Loop variable must be INTEGER 2. The loop variable cannot be changed in the loop, but the parameters can; because they are evaluated only once, it does not affect loop control 3. Loop parameters are evaluated only once Copyright © 2009 Addison-Wesley. All rights reserved.
379
Iterative Statements: Examples
FORTRAN 95 : a second form: [name:] Do variable = initial, terminal [,stepsize] … End Do [name] - Cannot branch into either of Fortran’s Do statements Copyright © 2009 Addison-Wesley. All rights reserved.
380
Iterative Statements: Examples
Ada for var in [reverse] discrete_range loop end loop Design choices: - Type of the loop variable is that of the discrete range (A discrete range is a sub-range of an integer or enumeration type). - Loop variable does not exist outside the loop - The loop variable cannot be changed in the loop, but the discrete range can; it does not affect loop control - The discrete range is evaluated just once Cannot branch into the loop body Copyright © 2009 Addison-Wesley. All rights reserved.
381
Iterative Statements: Examples
C-based languages for ([expr_1] ; [expr_2] ; [expr_3]) statement - The expressions can be whole statements, or even statement sequences, with the statements separated by commas The value of a multiple-statement expression is the value of the last statement in the expression If the second expression is absent, it is an infinite loop Design choices: - There is no explicit loop variable - Everything can be changed in the loop - The first expression is evaluated once, but the other two are evaluated with each iteration Copyright © 2009 Addison-Wesley. All rights reserved.
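A brief C++ sketch of those choices: there is no special loop variable, the third position holds a comma-separated expression list, and both i and j can be (and are) changed on every iteration:

#include <iostream>

int main() {
    for (int i = 0, j = 10; i < j; i++, j--)   // two counters stepped together
        std::cout << i << " " << j << "\n";
    return 0;
}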
382
Iterative Statements: Examples
C++ differs from C in two ways: The control expression can also be Boolean The initial expression can include variable definitions (scope is from the definition to the end of the loop body) Java and C# Differs from C++ in that the control expression must be Boolean Copyright © 2009 Addison-Wesley. All rights reserved.
383
Iterative Statements: Examples
Python
for loop_variable in object:
  loop body
[else:
  else clause]
The object is often a range, which is either a list of values in brackets ([2, 4, 6]) or a call to the range function (range(5), which returns 0, 1, 2, 3, 4) The loop variable takes on the values specified in the given range, one for each iteration The else clause, which is optional, is executed if the loop terminates normally Copyright © 2009 Addison-Wesley. All rights reserved.
384
Iterative Statements: Logically-Controlled Loops
Repetition control is based on a Boolean expression Design issues: Pretest or posttest? Should the logically controlled loop be a special case of the counting loop statement or a separate statement? Copyright © 2009 Addison-Wesley. All rights reserved.
385
Iterative Statements: Logically-Controlled Loops: Examples
C and C++ have both pretest and posttest forms, in which the control expression can be arithmetic:
while (ctrl_expr)
  loop body
do
  loop body
while (ctrl_expr)
Java is like C and C++, except the control expression must be Boolean (and the body can only be entered at the beginning -- Java has no goto) Copyright © 2009 Addison-Wesley. All rights reserved.
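A compilable C++ illustration of the two forms (trivial bodies, arbitrary bounds): the pretest loop may run zero times, while the posttest loop always runs at least once:

#include <iostream>

int main() {
    int n = 0;
    while (n > 0)        // pretest: condition is false, body never runs
        n--;

    int m = 0;
    do                   // posttest: body runs before the first test
        m++;
    while (m < 3);

    std::cout << n << " " << m << "\n";   // 0 3
    return 0;
}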
386
Iterative Statements: Logically-Controlled Loops: Examples
Ada has a pretest version, but no posttest FORTRAN 95 has neither Perl and Ruby have two pretest logical loops, while and until. Perl also has two posttest loops Copyright © 2009 Addison-Wesley. All rights reserved.
387
Iterative Statements: User-Located Loop Control Mechanisms
Sometimes it is convenient for the programmers to decide a location for loop control (other than top or bottom of the loop) Simple design for single loops (e.g., break) Design issues for nested loops Should the conditional be part of the exit? Should control be transferable out of more than one loop? Copyright © 2009 Addison-Wesley. All rights reserved.
388
Iterative Statements: User-Located Loop Control Mechanisms break and continue
C, C++, Python, Ruby, and C# have unconditional unlabeled exits (break) Java and Perl have unconditional labeled exits (break in Java, last in Perl) C, C++, and Python have an unlabeled control statement, continue, that skips the remainder of the current iteration, but does not exit the loop Java and Perl have labeled versions of continue Copyright © 2009 Addison-Wesley. All rights reserved.
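A minimal C++ example of the unlabeled forms (C++ has no labeled break or continue; the loop and values are arbitrary):

#include <iostream>

int main() {
    for (int i = 0; i < 10; i++) {
        if (i % 2 == 0)
            continue;              // skip the rest of this iteration
        if (i > 7)
            break;                 // leave the innermost loop entirely
        std::cout << i << " ";     // prints 1 3 5 7
    }
    std::cout << "\n";
    return 0;
}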
389
Iterative Statements: Iteration Based on Data Structures
The number of elements in a data structure controls loop iteration Control mechanism is a call to an iterator function that returns the next element in some chosen order, if there is one; otherwise the loop is terminated C's for can be used to build a user-defined iterator:
for (p = root; p != NULL; p = traverse(p)) {
  …
}
Copyright © 2009 Addison-Wesley. All rights reserved.
390
Iterative Statements: Iteration Based on Data Structures (continued)
PHP - current points at one element of the array - next moves current to the next element - reset moves current to the first element Java - For any collection that implements the Iterator interface - next moves the pointer into the collection - hasNext is a predicate - remove deletes an element Perl has a built-in iterator for arrays and hashes, foreach Copyright © 2009 Addison-Wesley. All rights reserved.
391
Iterative Statements: Iteration Based on Data Structures (continued)
Java 5.0 (uses for, although it is called foreach) - For arrays and any other class that implements the Iterable interface, e.g., ArrayList
for (String myElement : myList) { … }
C#'s foreach statement iterates on the elements of arrays and other collections:
string[] strList = {"Bob", "Carol", "Ted"};
foreach (string name in strList)
  Console.WriteLine("Name: {0}", name);
- The notation {0} indicates the position in the string to be displayed Copyright © 2009 Addison-Wesley. All rights reserved.
392
Iterative Statements: Iteration Based on Data Structures (continued)
Lua Lua has two forms of its iterative statement, one like Fortran’s Do, and a more general form: for variable_1 [, variable_2] in iterator(table) do … end The most commonly used iterators are pairs and ipairs Copyright © 2009 Addison-Wesley. All rights reserved.
393
Unconditional Branching
Transfers execution control to a specified place in the program Represented one of the most heated debates of the 1960s and 1970s Major concern: Readability Some languages do not support the goto statement (e.g., Java) C# offers a goto statement (can be used in switch statements) Loop exit statements are restricted and somewhat camouflaged goto's Copyright © 2009 Addison-Wesley. All rights reserved.
394
Guarded Commands Designed by Dijkstra
Purpose: to support a new programming methodology that supported verification (correctness) during development Basis for two linguistic mechanisms for concurrent programming (in CSP and Ada) Basic Idea: if the order of evaluation is not important, the program should not specify one Copyright © 2009 Addison-Wesley. All rights reserved.
395
Selection Guarded Command
Form
if <Boolean exp> -> <statement>
[] <Boolean exp> -> <statement>
...
fi
Semantics: when construct is reached, Evaluate all Boolean expressions If more than one are true, choose one non-deterministically If none are true, it is a runtime error Copyright © 2009 Addison-Wesley. All rights reserved.
396
Loop Guarded Command Form Semantics: for each iteration
do <Boolean> -> <statement>
[] <Boolean> -> <statement>
...
od
Semantics: for each iteration Evaluate all Boolean expressions If more than one are true, choose one non-deterministically; then start loop again If none are true, exit loop Copyright © 2009 Addison-Wesley. All rights reserved.
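Guarded commands are not a C++ feature; the following is only a rough simulation of the do-od semantics just described, built from standard-library parts (the guarded_do name and the guard/statement pairing are invented for illustration): each pass evaluates every guard, runs one arbitrarily chosen true arm, and stops when no guard is true.

#include <cstddef>
#include <cstdlib>
#include <functional>
#include <iostream>
#include <utility>
#include <vector>

using Arm = std::pair<std::function<bool()>, std::function<void()>>;

// Rough model of Dijkstra's do-od loop: execute one arbitrarily chosen
// statement whose guard is true; exit when no guard is true.
void guarded_do(const std::vector<Arm>& arms) {
    for (;;) {
        std::vector<std::size_t> ready;
        for (std::size_t i = 0; i < arms.size(); i++)
            if (arms[i].first()) ready.push_back(i);
        if (ready.empty()) return;                    // all guards false: exit
        arms[ready[std::rand() % ready.size()]].second();
    }
}

int main() {
    int q1 = 5, q2 = 3;                               // values to be ordered
    guarded_do({
        { [&]{ return q1 > q2; }, [&]{ std::swap(q1, q2); } }
    });
    std::cout << q1 << " " << q2 << "\n";             // 3 5
    return 0;
}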
397
Guarded Commands: Rationale
Connection between control statements and program verification is intimate Verification is impossible with goto statements Verification is possible with only selection and logical pretest loops Verification is relatively simple with only guarded commands Copyright © 2009 Addison-Wesley. All rights reserved.
398
Conclusion Variety of statement-level structures
Choice of control statements beyond selection and logical pretest loops is a trade-off between language size and writability Functional and logic programming languages use quite different control structures Copyright © 2009 Addison-Wesley. All rights reserved.
399
Chapter 9 Subprograms
400
Chapter 9 Topics Introduction Fundamentals of Subprograms
Design Issues for Subprograms Local Referencing Environments Parameter-Passing Methods Parameters That Are Subprograms Overloaded Subprograms Generic Subprograms Design Issues for Functions User-Defined Overloaded Operators Coroutines Copyright © 2009 Addison-Wesley. All rights reserved.
401
Introduction Two fundamental abstraction facilities
Process abstraction Emphasized from early days Data abstraction Emphasized in the 1980s Copyright © 2009 Addison-Wesley. All rights reserved.
402
Fundamentals of Subprograms
Each subprogram has a single entry point The calling program is suspended during execution of the called subprogram Control always returns to the caller when the called subprogram’s execution terminates Copyright © 2009 Addison-Wesley. All rights reserved.
403
Basic Definitions A subprogram definition describes the interface to and the actions of the subprogram abstraction - In Python, function definitions are executable; in all other languages, they are non-executable A subprogram call is an explicit request that the subprogram be executed A subprogram header is the first part of the definition, including the name, the kind of subprogram, and the formal parameters The parameter profile (aka signature) of a subprogram is the number, order, and types of its parameters The protocol is a subprogram’s parameter profile and, if it is a function, its return type Copyright © 2009 Addison-Wesley. All rights reserved.
404
Basic Definitions (continued)
Function declarations in C and C++ are often called prototypes A subprogram declaration provides the protocol, but not the body, of the subprogram A formal parameter is a dummy variable listed in the subprogram header and used in the subprogram An actual parameter represents a value or address used in the subprogram call statement Copyright © 2009 Addison-Wesley. All rights reserved.
405
Actual/Formal Parameter Correspondence
Positional The binding of actual parameters to formal parameters is by position: the first actual parameter is bound to the first formal parameter and so forth Safe and effective Keyword The name of the formal parameter to which an actual parameter is to be bound is specified with the actual parameter Advantage: Parameters can appear in any order, thereby avoiding parameter correspondence errors Disadvantage: User must know the formal parameter’s names Copyright © 2009 Addison-Wesley. All rights reserved.
406
Formal Parameter Default Values
In certain languages (e.g., C++, Python, Ruby, Ada, PHP), formal parameters can have default values (if no actual parameter is passed) In C++, default parameters must appear last because parameters are positionally associated Variable numbers of parameters C# methods can accept a variable number of parameters as long as they are of the same type—the corresponding formal parameter is an array preceded by params In Ruby, the actual parameters are sent as elements of a hash literal and the corresponding formal parameter is preceded by an asterisk. In Python, the actual is a list of values and the corresponding formal parameter is a name with an asterisk In Lua, a variable number of parameters is represented as a formal parameter with three periods; they are accessed with a for statement or with a multiple assignment from the three periods Copyright © 2009 Addison-Wesley. All rights reserved.
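A C++ sketch of both ideas on this slide (C++ is one of the languages listed; the function names are invented): trailing formal parameters carry defaults, and a variable-length group of same-typed actuals can be gathered with std::initializer_list:

#include <initializer_list>
#include <iostream>

// Default values for trailing formal parameters
double scaled(double value, double factor = 1.0, double offset = 0.0) {
    return value * factor + offset;
}

// A variable number of actual parameters of one type
double sum(std::initializer_list<double> xs) {
    double s = 0.0;
    for (double x : xs)
        s += x;
    return s;
}

int main() {
    std::cout << scaled(2.0) << "\n";            // both defaults used: 2
    std::cout << scaled(2.0, 3.0) << "\n";       // offset defaults to 0: 6
    std::cout << sum({1.0, 2.0, 3.5}) << "\n";   // 6.5
    return 0;
}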
407
Ruby Blocks Ruby includes a number of iterator functions, which are often used to process the elements of arrays Iterators are implemented with blocks, which can also be defined by applications Blocks are attached to method calls; they can have parameters (in vertical bars); they are executed when the method executes a yield statement
def fibonacci(last)
  first, second = 1, 1
  while first <= last
    yield first
    first, second = second, first + second
  end
end
puts "Fibonacci numbers less than 100 are:"
fibonacci(100) {|num| print num, " "}
puts
Copyright © 2009 Addison-Wesley. All rights reserved.
408
Procedures and Functions
There are two categories of subprograms Procedures are collections of statements that define parameterized computations Functions structurally resemble procedures but are semantically modeled on mathematical functions They are expected to produce no side effects In practice, program functions have side effects Copyright © 2009 Addison-Wesley. All rights reserved.
409
Design Issues for Subprograms
Are local variables static or dynamic? Can subprogram definitions appear in other subprogram definitions? What parameter passing methods are provided? Are parameter types checked? If subprograms can be passed as parameters and subprograms can be nested, what is the referencing environment of a passed subprogram? Can subprograms be overloaded? Can subprogram be generic? Copyright © 2009 Addison-Wesley. All rights reserved.
410
Local Referencing Environments
Local variables can be stack-dynamic - Advantages Support for recursion Storage for locals is shared among some subprograms Disadvantages Allocation/de-allocation, initialization time Indirect addressing Subprograms cannot be history sensitive Local variables can be static Advantages and disadvantages are the opposite of those for stack-dynamic local variables Copyright © 2009 Addison-Wesley. All rights reserved.
411
Semantic Models of Parameter Passing
In mode Out mode Inout mode Copyright © 2009 Addison-Wesley. All rights reserved.
412
Models of Parameter Passing
Copyright © 2009 Addison-Wesley. All rights reserved.
413
Conceptual Models of Transfer
Physically move a value Move an access path Copyright © 2009 Addison-Wesley. All rights reserved.
414
Pass-by-Value (In Mode)
The value of the actual parameter is used to initialize the corresponding formal parameter Normally implemented by copying Can be implemented by transmitting an access path but not recommended (enforcing write protection is not easy) Disadvantages (if by physical move): additional storage is required (stored twice) and the actual move can be costly (for large parameters) Disadvantages (if by access path method): must write-protect in the called subprogram and accesses cost more (indirect addressing) Copyright © 2009 Addison-Wesley. All rights reserved.
415
Pass-by-Result (Out Mode)
When a parameter is passed by result, no value is transmitted to the subprogram; the corresponding formal parameter acts as a local variable; its value is transmitted to the caller's actual parameter when control is returned to the caller, by physical move Requires an extra storage location and a copy operation Potential problem: sub(p1, p1); whichever formal parameter is copied back last will represent the current value of p1 Copyright © 2009 Addison-Wesley. All rights reserved.
416
Pass-by-Value-Result (inout Mode)
A combination of pass-by-value and pass-by-result Sometimes called pass-by-copy Formal parameters have local storage Disadvantages: Those of pass-by-result Those of pass-by-value Copyright © 2009 Addison-Wesley. All rights reserved.
417
Pass-by-Reference (Inout Mode)
Pass an access path Also called pass-by-sharing Advantage: Passing process is efficient (no copying and no duplicated storage) Disadvantages Slower accesses (compared to pass-by-value) to formal parameters Potentials for unwanted side effects (collisions) Unwanted aliases (access broadened) Copyright © 2009 Addison-Wesley. All rights reserved.
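A compact C++ contrast of the two modes discussed on the last two slides (invented function names): the by-value call copies, so the caller's variable is untouched; the by-reference call passes an access path, so the assignment is visible to the caller:

#include <iostream>

void by_value(int x)      { x = 99; }   // changes only the local copy
void by_reference(int& x) { x = 99; }   // changes the caller's variable

int main() {
    int a = 1, b = 1;
    by_value(a);
    by_reference(b);
    std::cout << a << " " << b << "\n";  // 1 99
    return 0;
}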
418
Pass-by-Name (Inout Mode)
By textual substitution Formals are bound to an access method at the time of the call, but actual binding to a value or address takes place at the time of a reference or assignment Allows flexibility in late binding Copyright © 2009 Addison-Wesley. All rights reserved.
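Pass-by-name is not a C++ feature; the following is only an approximation using thunks (a wrapped actual parameter that is re-evaluated at every reference to the formal), with invented names, in the spirit of Jensen's device:

#include <functional>
#include <iostream>

// 'term' is re-evaluated at each reference, as pass-by-name would do
int sum_by_name(std::function<int()> term, int& i, int low, int high) {
    int s = 0;
    for (i = low; i <= high; i++)
        s += term();                      // each reference re-evaluates a[i]
    return s;
}

int main() {
    int i;
    int a[] = {10, 20, 30, 40};
    // The textual "actual parameter" a[i] sees the current value of i,
    // so the call sums a[0] through a[3].
    std::cout << sum_by_name([&]{ return a[i]; }, i, 0, 3) << "\n";   // 100
    return 0;
}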
419
Implementing Parameter-Passing Methods
In most languages, parameter communication takes place through the run-time stack Pass-by-reference is the simplest to implement; only an address is placed in the stack A subtle but fatal error can occur with pass-by-reference and pass-by-value-result: a formal parameter corresponding to a constant can mistakenly be changed Copyright © 2009 Addison-Wesley. All rights reserved.
420
Parameter Passing Methods of Major Languages
C Pass-by-value Pass-by-reference is achieved by using pointers as parameters C++ A special pointer type called reference type for pass-by-reference Java All parameters are passed by value Object parameters are in effect passed by reference (the object reference itself is passed by value) Ada Three semantics modes of parameter transmission: in, out, in out; in is the default mode Formal parameters declared out can be assigned but not referenced; those declared in can be referenced but not assigned; in out parameters can be referenced and assigned Copyright © 2009 Addison-Wesley. All rights reserved.
421
Parameter Passing Methods of Major Languages (continued)
Fortran 95 - Parameters can be declared to be in, out, or inout mode C# - Default method: pass-by-value Pass-by-reference is specified by preceding both a formal parameter and its actual parameter with ref PHP: very similar to C# Perl: all actual parameters are implicitly placed in a predefined array Python and Ruby use pass-by-assignment (all data values are objects) Copyright © 2009 Addison-Wesley. All rights reserved.
422
Type Checking Parameters
Considered very important for reliability FORTRAN 77 and original C: none Pascal, FORTRAN 90, Java, and Ada: it is always required ANSI C and C++: choice is made by the user Prototypes Relatively new languages Perl, JavaScript, and PHP do not require type checking In Python and Ruby, variables do not have types (objects do), so parameter type checking is not possible Copyright © 2009 Addison-Wesley. All rights reserved.
423
Multidimensional Arrays as Parameters
If a multidimensional array is passed to a subprogram and the subprogram is separately compiled, the compiler needs to know the declared size of that array to build the storage mapping function Copyright © 2009 Addison-Wesley. All rights reserved.
424
Multidimensional Arrays as Parameters: C and C++
Programmer is required to include the declared sizes of all but the first subscript in the actual parameter Disallows writing flexible subprograms Solution: pass a pointer to the array and the sizes of the dimensions as other parameters; the user must include the storage mapping function in terms of the size parameters Copyright © 2009 Addison-Wesley. All rights reserved.
425
Multidimensional Arrays as Parameters: Ada
Ada – not a problem Constrained arrays – size is part of the array’s type Unconstrained arrays - declared size is part of the object declaration Copyright © 2009 Addison-Wesley. All rights reserved.
426
Multidimensional Arrays as Parameters: Fortran
Formal parameters that are arrays have a declaration after the header For single-dimension arrays, the subscript is irrelevant For multidimensional arrays, the sizes are sent as parameters and used in the declaration of the formal parameter, so those variables are used in the storage mapping function Copyright © 2009 Addison-Wesley. All rights reserved.
427
Multidimensional Arrays as Parameters: Java and C#
Similar to Ada Arrays are objects; they are all single-dimensioned, but the elements can be arrays Each array inherits a named constant (length in Java, Length in C#) that is set to the length of the array when the array object is created Copyright © 2009 Addison-Wesley. All rights reserved.
428
Design Considerations for Parameter Passing
Two important considerations Efficiency One-way or two-way data transfer But the above considerations are in conflict Good programming suggests limited access to variables, which means one-way whenever possible But pass-by-reference is more efficient for passing structures of significant size Copyright © 2009 Addison-Wesley. All rights reserved.
429
Parameters that are Subprogram Names
It is sometimes convenient to pass subprogram names as parameters Issues: Are parameter types checked? What is the correct referencing environment for a subprogram that was sent as a parameter? Copyright © 2009 Addison-Wesley. All rights reserved.
430
Parameters that are Subprogram Names: Parameter Type Checking
C and C++: functions cannot be passed as parameters but pointers to functions can be passed and their types include the types of the parameters, so parameters can be type checked FORTRAN 95 type checks Ada does not allow subprogram parameters; an alternative is provided via Ada’s generic facility Java does not allow method names to be passed as parameters Copyright © 2009 Addison-Wesley. All rights reserved.
431
Parameters that are Subprogram Names: Referencing Environment
Shallow binding: The environment of the call statement that enacts the passed subprogram - Most natural for dynamic-scoped languages Deep binding: The environment of the definition of the passed subprogram - Most natural for static-scoped languages Ad hoc binding: The environment of the call statement that passed the subprogram Copyright © 2009 Addison-Wesley. All rights reserved.
432
Overloaded Subprograms
An overloaded subprogram is one that has the same name as another subprogram in the same referencing environment Every version of an overloaded subprogram has a unique protocol C++, Java, C#, and Ada include predefined overloaded subprograms In Ada, the return type of an overloaded function can be used to disambiguate calls (thus two overloaded functions can have the same parameters) Ada, Java, C++, and C# allow users to write multiple versions of subprograms with the same name Copyright © 2009 Addison-Wesley. All rights reserved.
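A minimal C++ illustration (invented functions): the two definitions share a name but have distinct protocols, and each call is disambiguated by its actual parameter types:

#include <iostream>
#include <string>

void describe(int x)                { std::cout << "int: " << x << "\n"; }
void describe(const std::string& s) { std::cout << "string: " << s << "\n"; }

int main() {
    describe(42);                      // selects the int version
    describe(std::string("hello"));    // selects the string version
    return 0;
}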
433
Generic Subprograms A generic or polymorphic subprogram takes parameters of different types on different activations Overloaded subprograms provide ad hoc polymorphism A subprogram that takes a generic parameter that is used in a type expression that describes the type of the parameters of the subprogram provides parametric polymorphism - A cheap compile-time substitute for dynamic binding Copyright © 2009 Addison-Wesley. All rights reserved.
434
Generic Subprograms (continued)
Ada Versions of a generic subprogram are created by the compiler when explicitly instantiated by a declaration statement Generic subprograms are preceded by a generic clause that lists the generic variables, which can be types or other subprograms Copyright © 2009 Addison-Wesley. All rights reserved.
435
Generic Subprograms (continued)
C++ Versions of a generic subprogram are created implicitly when the subprogram is named in a call or when its address is taken with the & operator Generic subprograms are preceded by a template clause that lists the generic variables, which can be type names or class names Copyright © 2009 Addison-Wesley. All rights reserved.
436
Generic Subprograms (continued)
Java Differences between generics in Java 5.0 and those of C++ and Ada: 1. Generic parameters in Java 5.0 must be classes 2. Java 5.0 generic methods are instantiated just once as truly generic methods 3. Restrictions can be specified on the range of classes that can be passed to the generic method as generic parameters 4. Wildcard types of generic parameters Copyright © 2009 Addison-Wesley. All rights reserved.
437
Generic Subprograms (continued)
C# Supports generic methods that are similar to those of Java One difference: actual type parameters in a call can be omitted if the compiler can infer the unspecified type Copyright © 2009 Addison-Wesley. All rights reserved.
438
Examples of parametric polymorphism: C++
template <class Type>
Type max(Type first, Type second) {
  return first > second ? first : second;
}
The above template can be instantiated for any type for which operator > is defined, e.g.,
int max(int first, int second) {
  return first > second ? first : second;
}
Copyright © 2009 Addison-Wesley. All rights reserved.
439
Design Issues for Functions
Are side effects allowed? Parameters should always be in-mode to reduce side effects (as in Ada) What types of return values are allowed? Most imperative languages restrict the return types C allows any type except arrays and functions C++ is like C but also allows user-defined types Ada subprograms can return any type (but Ada subprograms are not types, so they cannot be returned) Java and C# methods can return any type (but because methods are not types, they cannot be returned) Python and Ruby treat methods as first-class objects, so they can be returned, as can objects of any other class Lua allows functions to return multiple values Copyright © 2009 Addison-Wesley. All rights reserved.
440
User-Defined Overloaded Operators
Operators can be overloaded in Ada, C++, Python, and Ruby An Ada example
function "*"(A, B: in Vec_Type) return Integer is
  Sum: Integer := 0;
begin
  for Index in A'range loop
    Sum := Sum + A(Index) * B(Index);
  end loop;
  return Sum;
end "*";
…
c := a * b;  -- a, b, and c are of type Vec_Type
Copyright © 2009 Addison-Wesley. All rights reserved.
441
Coroutines A coroutine is a subprogram that has multiple entries and controls them itself – supported directly in Lua Also called symmetric control: caller and called coroutines are on a more equal basis A coroutine call is named a resume The first resume of a coroutine is to its beginning, but subsequent calls enter at the point just after the last executed statement in the coroutine Coroutines repeatedly resume each other, possibly forever Coroutines provide quasi-concurrent execution of program units (the coroutines); their execution is interleaved, but not overlapped Copyright © 2009 Addison-Wesley. All rights reserved.
442
Coroutines Illustrated: Possible Execution Controls
Copyright © 2009 Addison-Wesley. All rights reserved.
443
Coroutines Illustrated: Possible Execution Controls
Copyright © 2009 Addison-Wesley. All rights reserved.
444
Coroutines Illustrated: Possible Execution Controls with Loops
Copyright © 2009 Addison-Wesley. All rights reserved.
445
Summary A subprogram definition describes the actions represented by the subprogram Subprograms can be either functions or procedures Local variables in subprograms can be stack-dynamic or static Three models of parameter passing: in mode, out mode, and inout mode Some languages allow operator overloading Subprograms can be generic A coroutine is a special subprogram with multiple entries Copyright © 2009 Addison-Wesley. All rights reserved.
446
Implementing Subprograms
Chapter 10 Implementing Subprograms
447
Chapter 10 Topics The General Semantics of Calls and Returns
Implementing “Simple” Subprograms Implementing Subprograms with Stack-Dynamic Local Variables Nested Subprograms Blocks Implementing Dynamic Scoping Copyright © 2009 Addison-Wesley. All rights reserved.
448
The General Semantics of Calls and Returns
The subprogram call and return operations of a language are together called its subprogram linkage General semantics of subprogram calls Parameter passing methods Stack-dynamic allocation of local variables Save the execution status of calling program Transfer of control and arrange for the return If subprogram nesting is supported, access to nonlocal variables must be arranged Copyright © 2009 Addison-Wesley. All rights reserved.
449
The General Semantics of Calls and Returns
General semantics of subprogram returns: In mode and inout mode parameters must have their values returned Deallocation of stack-dynamic locals Restore the execution status Return control to the caller Copyright © 2009 Addison-Wesley. All rights reserved.
450
Implementing “Simple” Subprograms: Call Semantics
- Save the execution status of the caller - Pass the parameters - Pass the return address to the callee - Transfer control to the callee Copyright © 2009 Addison-Wesley. All rights reserved.
451
Implementing “Simple” Subprograms: Return Semantics
If pass-by-value-result or out mode parameters are used, move the current values of those parameters to their corresponding actual parameters If it is a function, move the functional value to a place the caller can get it Restore the execution status of the caller Transfer control back to the caller Required storage: Status information, parameters, return address, return value for functions Copyright © 2009 Addison-Wesley. All rights reserved.
452
Implementing “Simple” Subprograms: Parts
Two separate parts: the actual code and the non-code part (local variables and data that can change) The format, or layout, of the non-code part of an executing subprogram is called an activation record An activation record instance is a concrete example of an activation record (the collection of data for a particular subprogram activation) Copyright © 2009 Addison-Wesley. All rights reserved.
453
An Activation Record for “Simple” Subprograms
Copyright © 2009 Addison-Wesley. All rights reserved.
454
Code and Activation Records of a Program with “Simple” Subprograms
Copyright © 2009 Addison-Wesley. All rights reserved.
455
Implementing Subprograms with Stack-Dynamic Local Variables
More complex activation record The compiler must generate code to cause implicit allocation and deallocation of local variables Recursion must be supported (adds the possibility of multiple simultaneous activations of a subprogram) Copyright © 2009 Addison-Wesley. All rights reserved.
456
Typical Activation Record for a Language with Stack-Dynamic Local Variables
Copyright © 2009 Addison-Wesley. All rights reserved.
457
Implementing Subprograms with Stack-Dynamic Local Variables: Activation Record
The activation record format is static, but its size may be dynamic The dynamic link points to the top of an instance of the activation record of the caller An activation record instance is dynamically created when a subprogram is called Activation record instances reside on the run-time stack The Environment Pointer (EP) must be maintained by the run-time system. It always points at the base of the activation record instance of the currently executing program unit Copyright © 2009 Addison-Wesley. All rights reserved.
458
An Example: C Function
void sub(float total, int part) {
  int list[5];
  float sum;
  …
}
Copyright © 2009 Addison-Wesley. All rights reserved.
459
An Example Without Recursion
void A(int x) {
  int y;
  ...
  C(y);
  ...
}
void B(float r) {
  int s, t;
  ...
  A(s);
  ...
}
void C(int q) {
  ...
}
void main() {
  float p;
  ...
  B(p);
  ...
}
main calls B B calls A A calls C Copyright © 2009 Addison-Wesley. All rights reserved.
460
An Example Without Recursion
Copyright © 2009 Addison-Wesley. All rights reserved.
461
Dynamic Chain and Local Offset
The collection of dynamic links in the stack at a given time is called the dynamic chain, or call chain Local variables can be accessed by their offset from the beginning of the activation record, whose address is in the EP. This offset is called the local_offset The local_offset of a local variable can be determined by the compiler at compile time Copyright © 2009 Addison-Wesley. All rights reserved.
462
An Example With Recursion
The activation record used in the previous example supports recursion, e.g.
int factorial(int n) {
  <--------------------------
  if (n <= 1)
    return 1;
  else
    return (n * factorial(n - 1));
  <--------------------------
}
void main() {
  int value;
  value = factorial(3);
  <--------------------------
}
Copyright © 2009 Addison-Wesley. All rights reserved.
463
Activation Record for factorial
Copyright © 2009 Addison-Wesley. All rights reserved.
464
Nested Subprograms Some non-C-based static-scoped languages (e.g., Fortran 95, Ada, Python, JavaScript, Ruby, and Lua) use stack-dynamic local variables and allow subprograms to be nested All variables that can be non-locally accessed reside in some activation record instance in the stack The process of locating a non-local reference: Find the correct activation record instance Determine the correct offset within that activation record instance Copyright © 2009 Addison-Wesley. All rights reserved.
465
Locating a Non-local Reference
Finding the offset is easy Finding the correct activation record instance Static semantic rules guarantee that all non-local variables that can be referenced have been allocated in some activation record instance that is on the stack when the reference is made Copyright © 2009 Addison-Wesley. All rights reserved.
466
Static Scoping A static chain is a chain of static links that connects certain activation record instances The static link in an activation record instance for subprogram A points to one of the activation record instances of A's static parent The static chain from an activation record instance connects it to all of its static ancestors Static_depth is an integer associated with a static scope whose value is the depth of nesting of that scope Copyright © 2009 Addison-Wesley. All rights reserved.
467
Static Scoping (continued)
The chain_offset or nesting_depth of a nonlocal reference is the difference between the static_depth of the reference and that of the scope where it is declared A reference to a variable can be represented by the pair: (chain_offset, local_offset), where local_offset is the offset in the activation record of the variable being referenced Copyright © 2009 Addison-Wesley. All rights reserved.
468
Example Ada Program
procedure Main_2 is
  X : Integer;
  procedure Bigsub is
    A, B, C : Integer;
    procedure Sub1 is
      A, D : Integer;
    begin -- of Sub1
      A := B + C;      -- <
    end; -- of Sub1
    procedure Sub2(X : Integer) is
      B, E : Integer;
      procedure Sub3 is
        C, E : Integer;
      begin -- of Sub3
        Sub1;
        E := B + A;    -- <
      end; -- of Sub3
    begin -- of Sub2
      Sub3;
      A := D + E;      -- <
    end; -- of Sub2
  begin -- of Bigsub
    Sub2(7);
  end; -- of Bigsub
begin
  Bigsub;
end; -- of Main_2
Copyright © 2009 Addison-Wesley. All rights reserved.
469
Example Ada Program (continued)
Call sequence for Main_2 Main_2 calls Bigsub Bigsub calls Sub2 Sub2 calls Sub3 Sub3 calls Sub1 Copyright © 2009 Addison-Wesley. All rights reserved.
470
Stack Contents at Position 1
Copyright © 2009 Addison-Wesley. All rights reserved.
471
Static Chain Maintenance
At the call, - The activation record instance must be built - The dynamic link is just the old stack top pointer - The static link must point to the most recent activation record instance of the static parent - Two methods: Search the dynamic chain Treat subprogram calls and definitions like variable references and definitions Copyright © 2009 Addison-Wesley. All rights reserved.
472
Evaluation of Static Chains
Problems: 1. A nonlocal reference is slow if the nesting depth is large 2. Time-critical code is difficult: a. Costs of nonlocal references are difficult to determine b. Code changes can change the nesting depth, and therefore the cost Copyright © 2009 Addison-Wesley. All rights reserved.
473
Displays An alternative to static chains that solves the problems with that approach Static links are stored in a single array called a display The contents of the display at any given time is a list of addresses of the accessible activation record instances Copyright © 2009 Addison-Wesley. All rights reserved.
474
Blocks Blocks are user-specified local scopes for variables
An example in C
{
  int temp;
  temp = list[upper];
  list[upper] = list[lower];
  list[lower] = temp;
}
The lifetime of temp in the above example begins when control enters the block An advantage of using a local variable like temp is that it cannot interfere with any other variable with the same name Copyright © 2009 Addison-Wesley. All rights reserved.
475
Implementing Blocks Two Methods:
1. Treat blocks as parameter-less subprograms that are always called from the same location Every block has an activation record; an instance is created every time the block is executed 2. Since the maximum storage required for a block can be statically determined, this amount of space can be allocated after the local variables in the activation record Copyright © 2009 Addison-Wesley. All rights reserved.
476
Implementing Dynamic Scoping
Deep Access: non-local references are found by searching the activation record instances on the dynamic chain - Length of the chain cannot be statically determined - Every activation record instance must have variable names Shallow Access: put locals in a central place One stack for each variable name Central table with an entry for each variable name Copyright © 2009 Addison-Wesley. All rights reserved.
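A toy C++ model of shallow access (the enter/leave/lookup helpers are invented; a real implementation lives in the language's run-time system): each name has its own stack of cells, a subprogram pushes cells for its locals on entry and pops them on exit, and a reference reads the top cell for that name:

#include <iostream>
#include <map>
#include <stack>
#include <string>

std::map<std::string, std::stack<int>> env;   // one stack per variable name

void enter(const std::string& name, int value) { env[name].push(value); }
void leave(const std::string& name)            { env[name].pop(); }
int  lookup(const std::string& name)           { return env[name].top(); }

int main() {
    enter("v", 1);                        // main declares v
    enter("v", 2);                        // a called subprogram declares its own v
    std::cout << lookup("v") << "\n";     // 2: the most recent v is visible
    leave("v");                           // the subprogram returns
    std::cout << lookup("v") << "\n";     // 1: main's v is visible again
    return 0;
}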
477
Using Shallow Access to Implement Dynamic Scoping
void sub3() {
  int x, z;
  x = u + v;
  …
}
void sub2() {
  int w, x;
  …
}
void sub1() {
  int v, w;
  …
}
void main() {
  int v, u;
  …
}
Copyright © 2009 Addison-Wesley. All rights reserved.
478
Summary Subprogram linkage semantics requires many actions by the implementation Simple subprograms have relatively basic actions Stack-dynamic languages are more complex Subprograms with stack-dynamic local variables and nested subprograms have two components: the actual code and the activation record Copyright © 2009 Addison-Wesley. All rights reserved.
479
Summary (continued) Activation record instances contain formal parameters and local variables among other things Static chains are the primary method of implementing accesses to non-local variables in static-scoped languages with nested subprograms Access to non-local variables in dynamic-scoped languages can be implemented by use of the dynamic chain or through some central variable table method Copyright © 2009 Addison-Wesley. All rights reserved.
480
Abstract Data Types and Encapsulation Concepts
Chapter 11 Abstract Data Types and Encapsulation Concepts
481
Chapter 11 Topics The Concept of Abstraction
Introduction to Data Abstraction Design Issues for Abstract Data Types Language Examples Parameterized Abstract Data Types Encapsulation Constructs Naming Encapsulations Copyright © 2009 Addison-Wesley. All rights reserved.
482
The Concept of Abstraction
An abstraction is a view or representation of an entity that includes only the most significant attributes The concept of abstraction is fundamental in programming (and computer science) Nearly all programming languages support process abstraction with subprograms Nearly all programming languages designed since 1980 support data abstraction Copyright © 2009 Addison-Wesley. All rights reserved.
483
Introduction to Data Abstraction
An abstract data type is a user-defined data type that satisfies the following two conditions: The representation of, and operations on, objects of the type are defined in a single syntactic unit The representation of objects of the type is hidden from the program units that use these objects, so the only operations possible are those provided in the type's definition Copyright © 2009 Addison-Wesley. All rights reserved.
484
Advantages of Data Abstraction
Advantage of the first condition Program organization, modifiability (everything associated with a data structure is together), and separate compilation Advantage of the second condition Reliability--by hiding the data representations, user code cannot directly access objects of the type or depend on the representation, allowing the representation to be changed without affecting user code Copyright © 2009 Addison-Wesley. All rights reserved.
485
Language Requirements for ADTs
A syntactic unit in which to encapsulate the type definition A method of making type names and subprogram headers visible to clients, while hiding actual definitions Some primitive operations must be built into the language processor Copyright © 2009 Addison-Wesley. All rights reserved.
486
Design Issues What is the form of the container for the interface to the type? Can abstract types be parameterized? What access controls are provided? Copyright © 2009 Addison-Wesley. All rights reserved.
487
Language Examples: Ada
The encapsulation construct is called a package Specification package (the interface) Body package (implementation of the entities named in the specification) Information Hiding The spec package has two parts, public and private The name of the abstract type appears in the public part of the specification package. This part may also include representations of unhidden types The representation of the abstract type appears in a part of the specification called the private part More restricted form with limited private types Private types have built-in operations for assignment and comparison Limited private types have NO built-in operations Copyright © 2009 Addison-Wesley. All rights reserved.
488
Language Examples: Ada (continued)
Reasons for the public/private spec package: 1. The compiler must be able to see the representation after seeing only the spec package (it cannot see the private part) 2. Clients must see the type name, but not the representation (they also cannot see the private part) Copyright © 2009 Addison-Wesley. All rights reserved.
489
Language Examples: Ada (continued)
Having part of the implementation details (the representation) in the spec package and part (the method bodies) in the body package is not good One solution: make all ADTs pointers Problems with this: 1. Difficulties with pointers 2. Object comparisons 3. Control of object allocation is lost Copyright © 2009 Addison-Wesley. All rights reserved.
490
An Example in Ada
package Stack_Pack is
  type stack_type is limited private;
  max_size: constant := 100;
  function empty(stk: in stack_type) return Boolean;
  procedure push(stk: in out stack_type; elem: in Integer);
  procedure pop(stk: in out stack_type);
  function top(stk: in stack_type) return Integer;
private -- hidden from clients
  type list_type is array (1..max_size) of Integer;
  type stack_type is record
    list: list_type;
    topsub: Integer range 0..max_size := 0;
  end record;
end Stack_Pack;
Copyright © 2009 Addison-Wesley. All rights reserved.
491
Language Examples: C++
Based on C struct type and Simula 67 classes The class is the encapsulation device All of the class instances of a class share a single copy of the member functions Each instance of a class has its own copy of the class data members Instances can be static, stack dynamic, or heap dynamic Copyright © 2009 Addison-Wesley. All rights reserved.
492
Language Examples: C++ (continued)
Information Hiding Private clause for hidden entities Public clause for interface entities Protected clause for inheritance (Chapter 12) Copyright © 2009 Addison-Wesley. All rights reserved.
493
Language Examples: C++ (continued)
Constructors: Functions to initialize the data members of instances (they do not create the objects) May also allocate storage if part of the object is heap-dynamic Can include parameters to provide parameterization of the objects Implicitly called when an instance is created Can be explicitly called Name is the same as the class name Copyright © 2009 Addison-Wesley. All rights reserved.
494
Language Examples: C++ (continued)
Destructors Functions to cleanup after an instance is destroyed; usually just to reclaim heap storage Implicitly called when the object’s lifetime ends Can be explicitly called Name is the class name, preceded by a tilde (~) Copyright © 2009 Addison-Wesley. All rights reserved.
495
An Example in C++
class Stack {
  private:
    int *stackPtr, maxLen, topPtr;
  public:
    Stack() { // a constructor
      stackPtr = new int [100];
      maxLen = 99;
      topPtr = -1;
    };
    ~Stack() {delete [] stackPtr;};
    void push(int num) {…};
    void pop() {…};
    int top() {…};
    int empty() {…};
};
Copyright © 2009 Addison-Wesley. All rights reserved.
496
A Stack class header file
// Stack.h - the header file for the Stack class #include <iostream> class Stack { private: //** These members are visible only to other //** members and friends (see Section ) int *stackPtr; int maxLen; int topPtr; public: //** These members are visible to clients Stack(); //** A constructor ~Stack(); //** A destructor void push(int); void pop(); int top(); int empty(); }; Copyright © 2009 Addison-Wesley. All rights reserved.
497
The code file for Stack // Stack.cpp - the implementation file for the Stack class #include <iostream> #include "Stack.h" using std::cerr; Stack::Stack() { //** A constructor stackPtr = new int [100]; maxLen = 99; topPtr = -1; } Stack::~Stack() {delete [] stackPtr;} //** A destructor void Stack::push(int number) { if (topPtr == maxLen) cerr << "Error in push--stack is full\n"; else stackPtr[++topPtr] = number; ... Copyright © 2009 Addison-Wesley. All rights reserved.
498
Evaluation of ADTs in C++ and Ada
C++ support for ADTs is similar in expressive power to that of Ada Both provide effective mechanisms for encapsulation and information hiding Ada packages are more general encapsulations; classes are types Copyright © 2009 Addison-Wesley. All rights reserved.
499
Language Examples: C++ (continued)
Friend functions or classes - to provide access to private members to some unrelated units or functions Necessary in C++ Copyright © 2009 Addison-Wesley. All rights reserved.
500
Language Examples: Java
Similar to C++, except: All user-defined types are classes All objects are allocated from the heap and accessed through reference variables Individual entities in classes have access control modifiers (private or public), rather than clauses Java has a second scoping mechanism, package scope, which can be used in place of friends All entities in all classes in a package that do not have access control modifiers are visible throughout the package Copyright © 2009 Addison-Wesley. All rights reserved.
501
An Example in Java class StackClass {
private int [] stackRef; private int maxLen, topIndex; public StackClass() { // a constructor stackRef = new int [100]; maxLen = 99; topIndex = -1; } public void push (int num) {…} public void pop () {…} public int top () {…} public boolean empty () {…} } Copyright © 2009 Addison-Wesley. All rights reserved.
502
Language Examples: C# Based on C++ and Java
Adds two access modifiers, internal and protected internal All class instances are heap dynamic Default constructors are available for all classes Garbage collection is used for most heap objects, so destructors are rarely used structs are lightweight classes that do not support inheritance Copyright © 2009 Addison-Wesley. All rights reserved.
503
Language Examples: C# (continued)
Common solution to need for access to data members: accessor methods (getter and setter) C# provides properties as a way of implementing getters and setters without requiring explicit method calls Copyright © 2009 Addison-Wesley. All rights reserved.
504
C# Property Example public class Weather {
public int DegreeDays { //** DegreeDays is a property get {return degreeDays;} set { if(value < 0 || value > 30) Console.WriteLine( "Value is out of range: {0}", value); else degreeDays = value;} } private int degreeDays; ... Weather w = new Weather(); int degreeDaysToday, oldDegreeDays; w.DegreeDays = degreeDaysToday; oldDegreeDays = w.DegreeDays; Copyright © 2009 Addison-Wesley. All rights reserved.
505
Abstract Data Types in Ruby
Encapsulation construct is the class Local variables have “normal” names Instance variable names begin with “at” signs (@) Class variable names begin with two “at” signs (@@) Instance methods have the syntax of Ruby functions (def … end) Constructors are named initialize (only one per class)—implicitly called when new is called If more constructors are needed, they must have different names and they must explicitly call new Class members can be marked private or public, with public being the default Classes are dynamic Copyright © 2009 Addison-Wesley. All rights reserved.
506
Abstract Data Types in Ruby (continued)
class StackClass def initialize @stackRef = Array.new @maxLen = 100 @topIndex = -1 end def push(number) … end def pop … end def top … end def empty … end end Copyright © 2009 Addison-Wesley. All rights reserved.
507
Parameterized Abstract Data Types
Parameterized ADTs allow designing an ADT that can store elements of any type (among other things) – only an issue for statically typed languages Also known as generic classes C++, Ada, Java 5.0, and C# 2005 provide support for parameterized ADTs Copyright © 2009 Addison-Wesley. All rights reserved.
508
Parameterized ADTs in Ada
Ada Generic Packages Make the stack type more flexible by making the element type and the size of the stack generic generic Max_Size: Positive; type Elem_Type is private; package Generic_Stack is type Stack_Type is limited private; function Top(Stk: in Stack_Type) return Elem_Type; … end Generic_Stack; package Integer_Stack is new Generic_Stack(100, Integer); package Float_Stack is new Generic_Stack(100, Float); Copyright © 2009 Addison-Wesley. All rights reserved.
509
Parameterized ADTs in C++
Classes can be somewhat generic by writing parameterized constructor functions class Stack { … Stack (int size) { stk_ptr = new int [size]; max_len = size - 1; top = -1; } }; Stack stk(100); Copyright © 2009 Addison-Wesley. All rights reserved.
510
Parameterized ADTs in C++ (continued)
The stack element type can be parameterized by making the class a templated class template <class Type> class Stack { private: Type *stackPtr; const int maxLen; int topPtr; public: Stack() : maxLen(99) { stackPtr = new Type[100]; topPtr = -1; } … }; Copyright © 2009 Addison-Wesley. All rights reserved.
511
Parameterized Classes in Java 5.0
Generic parameters must be classes Most common generic types are the collection types, such as LinkedList and ArrayList Eliminate the need to cast objects that are removed Eliminate the problem of having multiple types in a structure Copyright © 2009 Addison-Wesley. All rights reserved.
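A minimal illustrative sketch (not from the text; the class name is hypothetical) of a user-defined generic class in Java 5.0, showing that no cast is needed when an element is removed:
import java.util.ArrayList;

class GenericStack<T> {
    private final ArrayList<T> elements = new ArrayList<T>();   // backing store

    public void push(T element) { elements.add(element); }

    public T pop() { return elements.remove(elements.size() - 1); }   // returns T, no cast required

    public boolean empty() { return elements.isEmpty(); }

    public static void main(String[] args) {
        GenericStack<String> names = new GenericStack<String>();
        names.push("Ada");
        String s = names.pop();      // the compiler knows this is a String
        System.out.println(s);
    }
}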
512
Parameterized Classes in C# 2005
Similar to those of Java 5.0 Elements of parameterized structures can be accessed through indexing Copyright © 2009 Addison-Wesley. All rights reserved.
513
Encapsulation Constructs
Large programs have two special needs: Some means of organization, other than simply division into subprograms Some means of partial compilation (compilation units that are smaller than the whole program) Obvious solution: a grouping of subprograms that are logically related into a unit that can be separately compiled (compilation units) Such a collection is called an encapsulation Copyright © 2009 Addison-Wesley. All rights reserved.
514
Nested Subprograms Organizing programs by nesting subprogram definitions inside the logically larger subprograms that use them Nested subprograms are supported in Ada, Fortran 95, Python, and Ruby Copyright © 2009 Addison-Wesley. All rights reserved.
515
Encapsulation in C Files containing one or more subprograms can be independently compiled The interface is placed in a header file Problem: the linker does not check types between a header and associated implementation #include preprocessor specification – used to include header files in applications Copyright © 2009 Addison-Wesley. All rights reserved.
516
Encapsulation in C++ Can define header and code files, similar to those of C Or, classes can be used for encapsulation The class is used as the interface (prototypes) The member definitions are defined in a separate file Friends provide a way to grant access to private members of a class Copyright © 2009 Addison-Wesley. All rights reserved.
517
Ada Packages Ada specification packages can include any number of data and subprogram declarations Ada packages can be compiled separately A package’s specification and body parts can be compiled separately Copyright © 2009 Addison-Wesley. All rights reserved.
518
C# Assemblies A collection of files that appear to be a single dynamic link library or executable Each file contains a module that can be separately compiled A DLL is a collection of classes and methods that are individually linked to an executing program C# has an access modifier called internal; an internal member of a class is visible to all classes in the assembly in which it appears Copyright © 2009 Addison-Wesley. All rights reserved.
519
Naming Encapsulations
Large programs define many global names; need a way to divide into logical groupings A naming encapsulation is used to create a new scope for names C++ Namespaces Can place each library in its own namespace and qualify names used outside with the namespace C# also includes namespaces Copyright © 2009 Addison-Wesley. All rights reserved.
520
Naming Encapsulations (continued)
Java Packages Packages can contain more than one class definition; classes in a package are partial friends Clients of a package can use fully qualified name or use the import declaration Ada Packages Packages are defined in hierarchies which correspond to file hierarchies Visibility from a program unit is gained with the with clause Copyright © 2009 Addison-Wesley. All rights reserved.
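As a brief sketch of Java package scope (package and class names here are hypothetical), a member with no access modifier is visible throughout its package, and clients either import the class or use its fully qualified name:
// File stacks/StackClass.java
package stacks;

public class StackClass {
    int topIndex;                       // no modifier: package scope within package stacks
    public void push(int value) { /* ... */ }
}

// File app/Client.java
package app;

import stacks.StackClass;               // or write stacks.StackClass at each use

public class Client {
    public static void main(String[] args) {
        StackClass s = new StackClass();
        s.push(42);
    }
}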
521
Naming Encapsulations (continued)
Ruby classes are name encapsulations, but Ruby also has modules Typically encapsulate collections of constants and methods Modules cannot be instantiated or subclassed, and they cannot define variables Methods defined in a module must include the module’s name Access to the contents of a module is requested with the require method Copyright © 2009 Addison-Wesley. All rights reserved.
522
Summary The concept of ADTs and their use in program design was a milestone in the development of languages Two primary features of ADTs are the packaging of data with their associated operations and information hiding Ada provides packages that simulate ADTs C++ data abstraction is provided by classes Java’s data abstraction is similar to C++ Ada, C++, Java 5.0, and C# 2005 support parameterized ADTs C++, C#, Java, Ada, and Ruby provide naming encapsulations Copyright © 2009 Addison-Wesley. All rights reserved.
523
Support for Object-Oriented Programming
Chapter 12 Support for Object-Oriented Programming
524
Chapter 12 Topics Introduction Object-Oriented Programming
Design Issues for Object-Oriented Languages Support for Object-Oriented Programming in Smalltalk Support for Object-Oriented Programming in C++ Support for Object-Oriented Programming in Java Support for Object-Oriented Programming in C# Support for Object-Oriented Programming in Ada 95 Support for Object-Oriented Programming in Ruby Implementation of Object-Oriented Constructs Copyright © 2009 Addison-Wesley. All rights reserved.
525
Introduction Many object-oriented programming (OOP) languages
Some support procedural and data-oriented programming (e.g., Ada 95 and C++) Some support functional programming (e.g., CLOS) Newer languages do not support other paradigms but use their imperative structures (e.g., Java and C#) Some are pure OOP languages (e.g., Smalltalk & Ruby) Copyright © 2009 Addison-Wesley. All rights reserved.
526
Object-Oriented Programming
Abstract data types Inheritance Inheritance is the central theme in OOP and languages that support it Polymorphism Copyright © 2009 Addison-Wesley. All rights reserved.
527
Inheritance Productivity increases can come from reuse
ADTs are difficult to reuse—always need changes All ADTs are independent and at the same level Inheritance allows new classes to be defined in terms of existing ones, i.e., by allowing them to inherit common parts Inheritance addresses both of the above concerns--reuse ADTs after minor changes and define classes in a hierarchy Copyright © 2009 Addison-Wesley. All rights reserved.
528
Object-Oriented Concepts
ADTs are usually called classes Class instances are called objects A class that inherits is a derived class or a subclass The class from which another class inherits is a parent class or superclass Subprograms that define operations on objects are called methods Copyright © 2009 Addison-Wesley. All rights reserved.
529
Object-Oriented Concepts (continued)
Calls to methods are called messages The entire collection of methods of an object is called its message protocol or message interface Messages have two parts--a method name and the destination object In the simplest case, a class inherits all of the entities of its parent Copyright © 2009 Addison-Wesley. All rights reserved.
530
Object-Oriented Concepts (continued)
Inheritance can be complicated by access controls to encapsulated entities A class can hide entities from its subclasses A class can hide entities from its clients A class can also hide entities from its clients while allowing its subclasses to see them Besides inheriting methods as is, a class can modify an inherited method The new one overrides the inherited one The method in the parent is overridden Copyright © 2009 Addison-Wesley. All rights reserved.
531
Object-Oriented Concepts (continued)
There are two kinds of variables in a class: Class variables - one/class Instance variables - one/object There are two kinds of methods in a class: Class methods – accept messages to the class Instance methods – accept messages to objects Single vs. Multiple Inheritance One disadvantage of inheritance for reuse: Creates interdependencies among classes that complicate maintenance Copyright © 2009 Addison-Wesley. All rights reserved.
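A short Java sketch (illustrative only, not from the text) of the distinction between class and instance members:
class Counter {
    static int totalCreated = 0;    // class variable: one copy shared by all objects
    int value = 0;                  // instance variable: one copy per object

    Counter() { totalCreated++; }

    static int getTotalCreated() { return totalCreated; }   // class method: message to the class
    int getValue() { return value; }                        // instance method: message to an object
}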
532
Dynamic Binding A polymorphic variable can be defined in a class that is able to reference (or point to) objects of the class and objects of any of its descendants When a class hierarchy includes classes that override methods and such methods are called through a polymorphic variable, the binding to the correct method will be dynamic Allows software systems to be more easily extended during both development and maintenance Copyright © 2009 Addison-Wesley. All rights reserved.
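A minimal Java sketch (class names are hypothetical) of dynamic binding through a polymorphic variable:
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    double radius = 1.0;
    @Override
    double area() { return Math.PI * radius * radius; }   // overrides the inherited method
}

class BindingDemo {
    public static void main(String[] args) {
        Shape s = new Circle();            // polymorphic variable of the parent class type
        System.out.println(s.area());      // bound to Circle's area at run time
    }
}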
533
Dynamic Binding Concepts
An abstract method is one that does not include a definition (it only defines a protocol) An abstract class is one that includes at least one abstract method An abstract class cannot be instantiated Copyright © 2009 Addison-Wesley. All rights reserved.
534
Design Issues for OOP Languages
The Exclusivity of Objects Are Subclasses Subtypes? Type Checking and Polymorphism Single and Multiple Inheritance Object Allocation and Deallocation Dynamic and Static Binding Nested Classes Initialization of Objects Copyright © 2009 Addison-Wesley. All rights reserved.
535
The Exclusivity of Objects
Everything is an object Advantage - elegance and purity Disadvantage - slow operations on simple objects Add objects to a complete typing system Advantage - fast operations on simple objects Disadvantage - results in a confusing type system (two kinds of entities) Include an imperative-style typing system for primitives but make everything else objects Advantage - fast operations on simple objects and a relatively small typing system Disadvantage - still some confusion because of the two type systems Copyright © 2009 Addison-Wesley. All rights reserved.
536
Are Subclasses Subtypes?
Does an “is-a” relationship hold between a parent class object and an object of the subclass? If a derived class is-a parent class, then objects of the derived class must behave the same as the parent class object A derived class is a subtype if it has an is-a relationship with its parent class Subclass can only add variables and methods and override inherited methods in “compatible” ways Copyright © 2009 Addison-Wesley. All rights reserved.
537
Type Checking and Polymorphism
Polymorphism may require dynamic type checking of parameters and the return value Dynamic type checking is costly and delays error detection If overriding methods are restricted to having the same parameter types and return type, the checking can be static Copyright © 2009 Addison-Wesley. All rights reserved.
538
Single and Multiple Inheritance
Multiple inheritance allows a new class to inherit from two or more classes Disadvantages of multiple inheritance: Language and implementation complexity (in part due to name collisions) Potential inefficiency - dynamic binding costs more with multiple inheritance (but not much) Advantage: Sometimes it is quite convenient and valuable Copyright © 2009 Addison-Wesley. All rights reserved.
539
Allocation and DeAllocation of Objects
From where are objects allocated? If they behave like ADTs, they can be allocated from anywhere Allocated from the run-time stack Explicitly created on the heap (via new) If they are all heap-dynamic, references can be uniform through a pointer or reference variable Simplifies assignment - dereferencing can be implicit If objects are stack dynamic, there is a problem with regard to subtypes Is deallocation explicit or implicit? Copyright © 2009 Addison-Wesley. All rights reserved.
540
Dynamic and Static Binding
Should all binding of messages to methods be dynamic? If none are, you lose the advantages of dynamic binding If all are, it is inefficient Allow the user to specify Copyright © 2009 Addison-Wesley. All rights reserved.
541
Nested Classes If a new class is needed by only one class, there is no reason to define it so it can be seen by other classes Can the new class be nested inside the class that uses it? In some cases, the new class is nested inside a subprogram rather than directly in another class Other issues: Which facilities of the nesting class should be visible to the nested class and vice versa Copyright © 2009 Addison-Wesley. All rights reserved.
542
Copyright © 2009 Addison-Wesley. All rights reserved.
543
Initialization of Objects
Are objects initialized to values when they are created? Implicit or explicit initialization How are parent class members initialized when a subclass object is created? Copyright © 2009 Addison-Wesley. All rights reserved.
544
Support for OOP in Smalltalk
Smalltalk is a pure OOP language Everything is an object All objects have local memory All computation is through objects sending messages to objects None of the appearances of imperative languages All objects are allocated from the heap All deallocation is implicit Copyright © 2009 Addison-Wesley. All rights reserved.
545
Support for OOP in Smalltalk (continued)
Type Checking and Polymorphism All binding of messages to methods is dynamic The process is to search the object to which the message is sent for the method; if not found, search the superclass, etc. up to the system class which has no superclass The only type checking in Smalltalk is dynamic and the only type error occurs when a message is sent to an object that has no matching method Copyright © 2009 Addison-Wesley. All rights reserved.
546
Support for OOP in Smalltalk (continued)
Inheritance A Smalltalk subclass inherits all of the instance variables, instance methods, and class methods of its superclass All subclasses are subtypes (nothing can be hidden) All inheritance is implementation inheritance No multiple inheritance Copyright © 2009 Addison-Wesley. All rights reserved.
547
Support for OOP in Smalltalk (continued)
Evaluation of Smalltalk The syntax of the language is simple and regular Good example of power provided by a small language Slow compared with conventional compiled imperative languages Dynamic binding allows type errors to go undetected until run time Introduced the graphical user interface Greatest impact: advancement of OOP Copyright © 2009 Addison-Wesley. All rights reserved.
548
Support for OOP in C++ General Characteristics:
Evolved from C and SIMULA 67 Among the most widely used OOP languages Mixed typing system Constructors and destructors Elaborate access controls to class entities Copyright © 2009 Addison-Wesley. All rights reserved.
549
Support for OOP in C++ (continued)
Inheritance A class need not be the subclass of any class Access controls for members are Private (visible only in the class and friends) (disallows subclasses from being subtypes) Public (visible in subclasses and clients) Protected (visible in the class and in subclasses, but not clients) Copyright © 2009 Addison-Wesley. All rights reserved.
550
Support for OOP in C++ (continued)
In addition, the subclassing process can be declared with access controls (private or public), which define potential changes in access by subclasses Private derivation - inherited public and protected members are private in the subclasses Public derivation - public and protected members are also public and protected in subclasses Copyright © 2009 Addison-Wesley. All rights reserved.
551
Inheritance Example in C++
class base_class { private: int a; float x; protected: int b; float y; public: int c; float z; }; class subclass_1 : public base_class { … }; // In this one, b and y are protected and // c and z are public class subclass_2 : private base_class { … }; // In this one, b, y, c, and z are private, // and no derived class has access to any // member of base_class Copyright © 2009 Addison-Wesley. All rights reserved.
552
Reexportation in C++ A member that is not accessible in a subclass (because of private derivation) can be declared to be visible there using the scope resolution operator (::), e.g., class subclass_3 : private base_class { base_class :: c; … }; Copyright © 2009 Addison-Wesley. All rights reserved.
553
Reexportation (continued)
One motivation for using private derivation A class provides members that must be visible, so they are defined to be public members; a derived class adds some new members, but does not want its clients to see the members of the parent class, even though they had to be public in the parent class definition Copyright © 2009 Addison-Wesley. All rights reserved.
554
Support for OOP in C++ (continued)
Multiple inheritance is supported If there are two inherited members with the same name, they can both be referenced using the scope resolution operator Copyright © 2009 Addison-Wesley. All rights reserved.
555
Support for OOP in C++ (continued)
Dynamic Binding A method can be defined to be virtual, which means that it can be called through polymorphic variables and dynamically bound to messages A pure virtual function has no definition at all A class that has at least one pure virtual function is an abstract class Copyright © 2009 Addison-Wesley. All rights reserved.
556
Support for OOP in C++ (continued)
Evaluation C++ provides extensive access controls (unlike Smalltalk) C++ provides multiple inheritance In C++, the programmer must decide at design time which methods will be statically bound and which must be dynamically bound Static binding is faster! Smalltalk type checking is dynamic (flexible, but somewhat unsafe) Because of interpretation and dynamic binding, Smalltalk is ~10 times slower than C++ Copyright © 2009 Addison-Wesley. All rights reserved.
557
Support for OOP in Java Because of its close relationship to C++, focus is on the differences from that language General Characteristics All data are objects except the primitive types All primitive types have wrapper classes that store one data value All objects are heap-dynamic, are referenced through reference variables, and most are allocated with new A finalize method is implicitly called when the garbage collector is about to reclaim the storage occupied by the object Copyright © 2009 Addison-Wesley. All rights reserved.
558
Support for OOP in Java (continued)
Inheritance Only single inheritance is supported, but there is an abstract class category that provides some of the benefits of multiple inheritance (interface) An interface can include only method declarations and named constants, e.g., public interface Comparable <T> { public int compareTo (T b); } Methods can be final (cannot be overridden) Copyright © 2009 Addison-Wesley. All rights reserved.
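As a sketch (the class is hypothetical), a class that implements the interface above supplies a body for the declared method; this is how Java approximates one benefit of multiple inheritance:
class Temperature implements Comparable<Temperature> {
    private final int degrees;

    Temperature(int degrees) { this.degrees = degrees; }

    // the single method promised by the interface
    public int compareTo(Temperature other) {
        return Integer.compare(this.degrees, other.degrees);
    }
}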
559
Support for OOP in Java (continued)
Dynamic Binding In Java, all messages are dynamically bound to methods, unless the method is final (i.e., it cannot be overridden, therefore dynamic binding serves no purpose) Static binding is also used if the method is static or private, both of which disallow overriding Copyright © 2009 Addison-Wesley. All rights reserved.
560
Support for OOP in Java (continued)
Several varieties of nested classes All are hidden from all classes in their package, except for the nesting class Nonstatic classes nested directly are called inner classes An inner class can access members of its nesting class A static nested class cannot access members of its nesting class Nested classes can be anonymous A local nested class is defined in a method of its nesting class No access specifier is used Copyright © 2009 Addison-Wesley. All rights reserved.
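A brief illustrative sketch (names are hypothetical) of an inner class versus a static nested class:
class Outer {
    private int data = 7;

    class Inner {                          // inner class: tied to an Outer instance
        int read() { return data; }        // may access members of the nesting class
    }

    static class Helper {                  // static nested class
        int twice(int n) { return 2 * n; } // cannot touch Outer's instance members
    }

    public static void main(String[] args) {
        Outer o = new Outer();
        Outer.Inner in = o.new Inner();    // inner instances are created through an Outer object
        System.out.println(in.read() + new Helper().twice(3));
    }
}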
561
Support for OOP in Java (continued)
Evaluation Design decisions to support OOP are similar to C++ No support for procedural programming No parentless classes Dynamic binding is used as “normal” way to bind method calls to method definitions Uses interfaces to provide a simple form of support for multiple inheritance Copyright © 2009 Addison-Wesley. All rights reserved.
562
Support for OOP in C# General characteristics
Support for OOP similar to Java Includes both classes and structs Classes are similar to Java’s classes structs are less powerful stack-dynamic constructs (e.g., no inheritance) Copyright © 2009 Addison-Wesley. All rights reserved.
563
Support for OOP in C# (continued)
Inheritance Uses the syntax of C++ for defining classes A method inherited from parent class can be replaced in the derived class by marking its definition with new The parent class version can still be called explicitly with the prefix base: base.Draw() Copyright © 2009 Addison-Wesley. All rights reserved.
564
Support for OOP in C# Dynamic binding
To allow dynamic binding of method calls to methods: The base class method is marked virtual The corresponding methods in derived classes are marked override Abstract methods are marked abstract and must be implemented in all subclasses All C# classes are ultimately derived from a single root class, Object Copyright © 2009 Addison-Wesley. All rights reserved.
565
Support for OOP in C# (continued)
Nested Classes A C# class that is directly nested in a nesting class behaves like a Java static nested class C# does not support nested classes that behave like the non-static classes of Java Copyright © 2009 Addison-Wesley. All rights reserved.
566
Support for OOP in C# Evaluation
C# is the most recently designed C-based OO language The differences between C#’s and Java’s support for OOP are relatively minor Copyright © 2009 Addison-Wesley. All rights reserved.
567
Support for OOP in Ada 95 General Characteristics
OOP was one of the most important extensions to Ada 83 Encapsulation container is a package that defines a tagged type A tagged type is one in which every object includes a tag to indicate during execution its type (the tags are internal) Tagged types can be either private types or records No constructors or destructors are implicitly called Copyright © 2009 Addison-Wesley. All rights reserved.
568
Support for OOP in Ada 95 (continued)
Inheritance Subclasses can be derived from tagged types New entities are added to the inherited entities by placing them in a record definition All subclasses are subtypes No support for multiple inheritance A comparable effect can be achieved using generic classes Copyright © 2009 Addison-Wesley. All rights reserved.
569
Example of a Tagged Type
package Person_Pkg is type Person is tagged private; procedure Display(P : in out Person); private type Person is tagged record Name : String(1..30); Address : String(1..30); Age : Integer; end record; end Person_Pkg; with Person_Pkg; use Person_Pkg; package Student_Pkg is type Student is new Person with record Grade_Point_Average : Float; Grade_Level : Integer; end record; procedure Display (St: in Student); end Student_Pkg; -- Note: Display is being overridden from Person_Pkg Copyright © 2009 Addison-Wesley. All rights reserved.
570
Support for OOP in Ada 95 (continued)
Dynamic Binding Dynamic binding is done using polymorphic variables called classwide types For the tagged type Person, the classwide type is Person'Class Other bindings are static Any method may be dynamically bound Purely abstract base types can be defined in Ada 95 by including the reserved word abstract Copyright © 2009 Addison-Wesley. All rights reserved.
571
Support for OOP in Ada 95 (continued)
Evaluation Ada offers complete support for OOP C++ offers a better form of inheritance than Ada Ada includes no initialization of objects (e.g., constructors) Dynamic binding in C-based OOP languages is restricted to pointers and/or references to objects; Ada has no such restriction and is thus more orthogonal Copyright © 2009 Addison-Wesley. All rights reserved.
572
Support for OOP in Ruby General Characteristics
Everything is an object All computation is through message passing Class definitions are executable, allowing secondary definitions to add members to existing definitions Method definitions are also executable All variables are type-less references to objects Access control is different for data and methods It is private for all data and cannot be changed Methods can be either public, private, or protected Method access is checked at runtime Getters and setters can be defined by shortcuts Copyright © 2009 Addison-Wesley. All rights reserved.
573
Support for OOP in Ruby (continued)
Inheritance Access control to inherited methods can be different than in the parent class Subclasses are not necessarily subtypes Mixins can be created with modules, providing a kind of multiple inheritance Dynamic Binding All variables are typeless and polymorphic Evaluation Does not support abstract classes Does not fully support multiple inheritance Access controls are weaker than those of other languages that support OOP Copyright © 2009 Addison-Wesley. All rights reserved.
574
Implementing OO Constructs
Two interesting and challenging parts Storage structures for instance variables Dynamic binding of messages to methods Copyright © 2009 Addison-Wesley. All rights reserved.
575
Instance Data Storage Class instance records (CIRs) store the state of an object Static (built at compile time) If a class has a parent, the subclass instance variables are added to the parent CIR Because CIR is static, access to all instance variables is done as it is in records Efficient Copyright © 2009 Addison-Wesley. All rights reserved.
576
Dynamic Binding of Methods Calls
Methods in a class that are statically bound need not be involved in the CIR; methods that will be dynamically bound must have entries in the CIR Calls to dynamically bound methods can be connected to the corresponding code through a pointer in the CIR The storage structure is sometimes called a virtual method table (vtable) Method calls can be represented as offsets from the beginning of the vtable Copyright © 2009 Addison-Wesley. All rights reserved.
577
Summary OO programming involves three fundamental concepts: ADTs, inheritance, dynamic binding Major design issues: exclusivity of objects, subclasses and subtypes, type checking and polymorphism, single and multiple inheritance, dynamic binding, explicit and implicit de-allocation of objects, and nested classes Smalltalk is a pure OOL C++ has two distinct type systems (hybrid) Java is not a hybrid language like C++; it supports only OO programming C# is based on C++ and Java Ruby is a new pure OOP language; provides some new ideas in support for OOP JavaScript is not an OOP language but provides interesting variations Implementing OOP involves some new data structures Copyright © 2009 Addison-Wesley. All rights reserved.
578
Chapter 13 Concurrency
579
Chapter 13 Topics Introduction
Introduction to Subprogram-Level Concurrency Semaphores Monitors Message Passing Ada Support for Concurrency Java Threads C# Threads Statement-Level Concurrency Copyright © 2009 Addison-Wesley. All rights reserved.
580
Introduction Concurrency can occur at four levels:
Machine instruction level High-level language statement level Unit level Program level Because there are no language issues in instruction- and program-level concurrency, they are not addressed here Copyright © 2009 Addison-Wesley. All rights reserved.
581
Multiprocessor Architectures
Late 1950s - one general-purpose processor and one or more special-purpose processors for input and output operations Early 1960s - multiple complete processors, used for program-level concurrency Mid-1960s - multiple partial processors, used for instruction-level concurrency Single-Instruction Multiple-Data (SIMD) machines Multiple-Instruction Multiple-Data (MIMD) machines Independent processors that can be synchronized (unit-level concurrency) Copyright © 2009 Addison-Wesley. All rights reserved.
582
Categories of Concurrency
A thread of control in a program is the sequence of program points reached as control flows through the program Categories of Concurrency: Physical concurrency - Multiple independent processors ( multiple threads of control) Logical concurrency - The appearance of physical concurrency is presented by time-sharing one processor (software can be designed as if there were multiple threads of control) Coroutines (quasi-concurrency) have a single thread of control Copyright © 2009 Addison-Wesley. All rights reserved.
583
Motivations for Studying Concurrency
Involves a different way of designing software that can be very useful—many real-world situations involve concurrency Multiprocessor computers capable of physical concurrency are now widely used Copyright © 2009 Addison-Wesley. All rights reserved.
584
Introduction to Subprogram-Level Concurrency
A task or process is a program unit that can be in concurrent execution with other program units Tasks differ from ordinary subprograms in that: A task may be implicitly started When a program unit starts the execution of a task, it is not necessarily suspended When a task’s execution is completed, control may not return to the caller Tasks usually work together Copyright © 2009 Addison-Wesley. All rights reserved.
585
Two General Categories of Tasks
Heavyweight tasks execute in their own address space Lightweight tasks all run in the same address space – more efficient A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way Copyright © 2009 Addison-Wesley. All rights reserved.
586
Task Synchronization A mechanism that controls the order in which tasks execute Two kinds of synchronization Cooperation synchronization Competition synchronization Task communication is necessary for synchronization, provided by: - Shared nonlocal variables - Parameters - Message passing Copyright © 2009 Addison-Wesley. All rights reserved.
587
Kinds of synchronization
Cooperation: Task A must wait for task B to complete some specific activity before task A can continue its execution, e.g., the producer-consumer problem Competition: Two or more tasks must use some resource that cannot be simultaneously used, e.g., a shared counter Competition is usually provided by mutually exclusive access (approaches are discussed later) Copyright © 2009 Addison-Wesley. All rights reserved.
588
Need for Competition Synchronization
Copyright © 2009 Addison-Wesley. All rights reserved.
589
Scheduler Providing synchronization requires a mechanism for delaying task execution Task execution control is maintained by a program called the scheduler, which maps task execution onto available processors Copyright © 2009 Addison-Wesley. All rights reserved.
590
Task Execution States New - created but not yet started
Ready - ready to run but not currently running (no available processor) Running Blocked - has been running, but cannot now continue (usually waiting for some event to occur) Dead - no longer active in any sense Copyright © 2009 Addison-Wesley. All rights reserved.
591
Liveness and Deadlock Liveness is a characteristic that a program unit may or may not have - In sequential code, it means the unit will eventually complete its execution In a concurrent environment, a task can easily lose its liveness If all tasks in a concurrent environment lose their liveness, it is called deadlock Copyright © 2009 Addison-Wesley. All rights reserved.
592
Design Issues for Concurrency
Competition and cooperation synchronization Controlling task scheduling How and when tasks start and end execution How and when are tasks created Copyright © 2009 Addison-Wesley. All rights reserved.
593
Methods of Providing Synchronization
Semaphores Monitors Message Passing Copyright © 2009 Addison-Wesley. All rights reserved.
594
Semaphores (Dijkstra, 1965) A semaphore is a data structure consisting of a counter and a queue for storing task descriptors Semaphores can be used to implement guards on the code that accesses shared data structures Semaphores have only two operations, wait and release (originally called P and V by Dijkstra) Semaphores can be used to provide both competition and cooperation synchronization Copyright © 2009 Addison-Wesley. All rights reserved.
595
Cooperation Synchronization with Semaphores
Example: A shared buffer The buffer is implemented as an ADT with the operations DEPOSIT and FETCH as the only ways to access the buffer Use two semaphores for cooperation: emptyspots and fullspots The semaphore counters are used to store the numbers of empty spots and full spots in the buffer Copyright © 2009 Addison-Wesley. All rights reserved.
596
Cooperation Synchronization with Semaphores (continued)
DEPOSIT must first check emptyspots to see if there is room in the buffer If there is room, the counter of emptyspots is decremented and the value is inserted If there is no room, the caller is stored in the queue of emptyspots When DEPOSIT is finished, it must increment the counter of fullspots Copyright © 2009 Addison-Wesley. All rights reserved.
597
Cooperation Synchronization with Semaphores (continued)
FETCH must first check fullspots to see if there is a value If there is a full spot, the counter of fullspots is decremented and the value is removed If there are no values in the buffer, the caller must be placed in the queue of fullspots When FETCH is finished, it increments the counter of emptyspots The operations of FETCH and DEPOSIT on the semaphores are accomplished through two semaphore operations named wait and release Copyright © 2009 Addison-Wesley. All rights reserved.
598
Semaphores: Wait Operation
wait(aSemaphore) if aSemaphore’s counter > 0 then decrement aSemaphore’s counter else put the caller in aSemaphore’s queue attempt to transfer control to a ready task -- if the task ready queue is empty, -- deadlock occurs end Copyright © 2009 Addison-Wesley. All rights reserved.
599
Semaphores: Release Operation
release(aSemaphore) if aSemaphore’s queue is empty then increment aSemaphore’s counter else put the calling task in the task ready queue transfer control to a task from aSemaphore’s queue end Copyright © 2009 Addison-Wesley. All rights reserved.
600
Producer Code semaphore fullspots, emptyspots; fullspots.count = 0;
emptyspots.count = BUFLEN; task producer; loop -- produce VALUE -- wait (emptyspots); {wait for space} DEPOSIT(VALUE); release(fullspots); {increase filled} end loop; end producer; Copyright © 2009 Addison-Wesley. All rights reserved.
601
Consumer Code task consumer; loop
wait (fullspots); {wait till not empty} FETCH(VALUE); release(emptyspots); {increase empty} -- consume VALUE -- end loop; end consumer; Copyright © 2009 Addison-Wesley. All rights reserved.
602
Competition Synchronization with Semaphores
A third semaphore, named access, is used to control access (competition synchronization) The counter of access will only have the values 0 and 1 Such a semaphore is called a binary semaphore Note that wait and release must be atomic! Copyright © 2009 Addison-Wesley. All rights reserved.
603
Producer Code semaphore access, fullspots, emptyspots;
access.count = 1; fullspots.count = 0; emptyspots.count = BUFLEN; task producer; loop -- produce VALUE -- wait(emptyspots); {wait for space} wait(access); {wait for access} DEPOSIT(VALUE); release(access); {relinquish access} release(fullspots); {increase filled} end loop; end producer; Copyright © 2009 Addison-Wesley. All rights reserved.
604
Consumer Code task consumer; loop
wait(fullspots); {wait till not empty} wait(access); {wait for access} FETCH(VALUE); release(access); {relinquish access} release(emptyspots); {increase empty} -- consume VALUE -- end loop; end consumer; Copyright © 2009 Addison-Wesley. All rights reserved.
605
Evaluation of Semaphores
Misuse of semaphores can cause failures in cooperation synchronization, e.g., the buffer will overflow if the wait of fullspots is left out Misuse of semaphores can cause failures in competition synchronization, e.g., the program will deadlock if the release of access is left out Copyright © 2009 Addison-Wesley. All rights reserved.
606
Monitors (Ada, Java, C#) The idea: encapsulate the shared data and its operations to restrict access A monitor is an abstract data type for shared data Copyright © 2009 Addison-Wesley. All rights reserved.
607
Competition Synchronization
Shared data is resident in the monitor (rather than in the client units) All access is resident in the monitor The monitor implementation guarantees synchronized access by allowing only one access at a time Calls to monitor procedures are implicitly queued if the monitor is busy at the time of the call Copyright © 2009 Addison-Wesley. All rights reserved.
608
Cooperation Synchronization
Cooperation between processes is still a programming task Programmer must guarantee that a shared buffer does not experience underflow or overflow Copyright © 2009 Addison-Wesley. All rights reserved.
609
Evaluation of Monitors
A better way to provide competition synchronization than semaphores Semaphores can be used to implement monitors Monitors can be used to implement semaphores Support for cooperation synchronization is very similar to that with semaphores, so it has the same problems Copyright © 2009 Addison-Wesley. All rights reserved.
610
Message Passing Message passing is a general model for concurrency
It can model both semaphores and monitors It is not just for competition synchronization Central idea: task communication is like seeing a doctor--most of the time she waits for you or you wait for her, but when you are both ready, you get together, or rendezvous Copyright © 2009 Addison-Wesley. All rights reserved.
611
Message Passing Rendezvous
To support concurrent tasks with message passing, a language needs: - A mechanism to allow a task to indicate when it is willing to accept messages - A way to remember who is waiting to have its message accepted and some “fair” way of choosing the next message When a sender task’s message is accepted by a receiver task, the actual message transmission is called a rendezvous Copyright © 2009 Addison-Wesley. All rights reserved.
612
Ada Support for Concurrency
The Ada 83 Message-Passing Model Ada tasks have specification and body parts, like packages; the spec has the interface, which is the collection of entry points: task Task_Example is entry ENTRY_1 (Item : in Integer); end Task_Example; Copyright © 2009 Addison-Wesley. All rights reserved.
613
Task Body The task body describes the actions that take place when a rendezvous occurs A task that sends a message is suspended while waiting for the message to be accepted and during the rendezvous Entry points in the spec are described with accept clauses in the body accept entry_name (formal parameters) do … end entry_name Copyright © 2009 Addison-Wesley. All rights reserved.
614
Example of a Task Body task body Task_Example is begin loop
accept Entry_1 (Item: in Integer) do ... end Entry_1; end loop; end Task_Example; Copyright © 2009 Addison-Wesley. All rights reserved.
615
Ada Message Passing Semantics
The task executes to the top of the accept clause and waits for a message During execution of the accept clause, the sender is suspended accept parameters can transmit information in either or both directions Every accept clause has an associated queue to store waiting messages Copyright © 2009 Addison-Wesley. All rights reserved.
616
Rendezvous Time Lines Copyright © 2009 Addison-Wesley. All rights reserved.
617
Message Passing: Server/Actor Tasks
A task that has accept clauses, but no other code is called a server task (the example above is a server task) A task without accept clauses is called an actor task An actor task can send messages to other tasks Note: A sender must know the entry name of the receiver, but not vice versa (asymmetric) Copyright © 2009 Addison-Wesley. All rights reserved.
618
Graphical Representation of a Rendezvous
Copyright © 2009 Addison-Wesley. All rights reserved.
619
Multiple Entry Points Tasks can have more than one entry point
The specification task has an entry clause for each The task body has an accept clause for each entry clause, placed in a select clause, which is in a loop Copyright © 2009 Addison-Wesley. All rights reserved.
620
A Task with Multiple Entries
task body Teller is begin loop select accept Drive_Up(formal params) do ... end Drive_Up; or accept Walk_Up(formal params) do ... end Walk_Up; end select; end loop; end Teller; Copyright © 2009 Addison-Wesley. All rights reserved.
621
Semantics of Tasks with Multiple accept Clauses
If exactly one entry queue is nonempty, choose a message from it If more than one entry queue is nonempty, choose one, nondeterministically, from which to accept a message If all are empty, wait The construct is often called a selective wait Extended accept clause - code following the clause, but before the next clause Executed concurrently with the caller Copyright © 2009 Addison-Wesley. All rights reserved.
622
Cooperation Synchronization with Message Passing
Provided by Guarded accept clauses when not Full(Buffer) => accept Deposit (New_Value) do ... end An accept clause with a when clause is either open or closed A clause whose guard is true is called open A clause whose guard is false is called closed A clause without a guard is always open Copyright © 2009 Addison-Wesley. All rights reserved.
623
Semantics of select with Guarded accept Clauses:
select first checks the guards on all clauses If exactly one is open, its queue is checked for messages If more than one are open, non-deterministically choose a queue among them to check for messages If all are closed, it is a runtime error A select clause can include an else clause to avoid the error When the else clause completes, the loop repeats Copyright © 2009 Addison-Wesley. All rights reserved.
624
Example of a Task with Guarded accept Clauses
Note: The station may be out of gas and there may or may not be a position available in the garage task Gas_Station_Attendant is entry Service_Island (Car : Car_Type); entry Garage (Car : Car_Type); end Gas_Station_Attendant; Copyright © 2009 Addison-Wesley. All rights reserved.
625
Example of a Task with Guarded accept Clauses
task body Gas_Station_Attendant is begin loop select when Gas_Available => accept Service_Island (Car : Car_Type) do Fill_With_Gas (Car); end Service_Island; or when Garage_Available => accept Garage (Car : Car_Type) do Fix (Car); end Garage; else Sleep; end select; end loop; end Gas_Station_Attendant; Copyright © 2009 Addison-Wesley. All rights reserved.
626
Competition Synchronization with Message Passing
Modeling mutually exclusive access to shared data Example--a shared buffer Encapsulate the buffer and its operations in a task Competition synchronization is implicit in the semantics of accept clauses Only one accept clause in a task can be active at any given time Copyright © 2009 Addison-Wesley. All rights reserved.
627
Task Termination The execution of a task is completed if control has reached the end of its code body If a task has created no dependent tasks and is completed, it is terminated If a task has created dependent tasks and is completed, it is not terminated until all its dependent tasks are terminated Copyright © 2009 Addison-Wesley. All rights reserved.
628
The terminate Clause A terminate clause in a select is just a terminate statement A terminate clause is selected when no accept clause is open When a terminate is selected in a task, the task is terminated only when its master and all of the dependents of its master are either completed or are waiting at a terminate A block or subprogram is not left until all of its dependent tasks are terminated Copyright © 2009 Addison-Wesley. All rights reserved.
629
Message Passing Priorities
The priority of any task can be set with the Priority pragma: pragma Priority (expression); The priority of a task applies to it only when it is in the task ready queue Copyright © 2009 Addison-Wesley. All rights reserved.
630
Binary Semaphores For situations where the data to which access is to be controlled is NOT encapsulated in a task task Binary_Semaphore is entry Wait; entry Release; end Binary_Semaphore; task body Binary_Semaphore is begin loop accept Wait; accept Release; end loop; end Binary_Semaphore; Copyright © 2009 Addison-Wesley. All rights reserved.
631
Concurrency in Ada 95 Ada 95 includes Ada 83 features for concurrency, plus two new features Protected objects: A more efficient way of implementing shared data to allow access to a shared data structure to be done without rendezvous Asynchronous communication Copyright © 2009 Addison-Wesley. All rights reserved.
632
Ada 95: Protected Objects
A protected object is similar to an abstract data type Access to a protected object is either through messages passed to entries, as with a task, or through protected subprograms A protected procedure provides mutually exclusive read-write access to protected objects A protected function provides concurrent read-only access to protected objects Copyright © 2009 Addison-Wesley. All rights reserved.
633
Asynchronous Communication
Provided through asynchronous select structures An asynchronous select has two triggering alternatives, an entry clause or a delay The entry clause is triggered when sent a message The delay clause is triggered when its time limit is reached Copyright © 2009 Addison-Wesley. All rights reserved.
634
Evaluation of the Ada Message passing model of concurrency is powerful and general Protected objects are a better way to provide synchronized shared data In the absence of distributed processors, the choice between monitors and tasks with message passing is somewhat a matter of taste For distributed systems, message passing is a better model for concurrency Copyright © 2009 Addison-Wesley. All rights reserved.
635
Java Threads The concurrent units in Java are methods named run
A run method code can be in concurrent execution with other such methods The process in which the run methods execute is called a thread class MyThread extends Thread { public void run () {…} } … Thread myTh = new MyThread(); myTh.start(); Copyright © 2009 Addison-Wesley. All rights reserved.
636
Controlling Thread Execution
The Thread class has several methods to control the execution of threads The yield is a request from the running thread to voluntarily surrender the processor The sleep method can be used by the caller of the method to block the thread The join method is used to force a method to delay its execution until the run method of another thread has completed its execution Copyright © 2009 Addison-Wesley. All rights reserved.
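A small sketch (the worker class is hypothetical) of sleep and join:
class Worker extends Thread {
    public void run() {
        try {
            Thread.sleep(100);             // block this thread for about 100 ms
        } catch (InterruptedException e) { }
        System.out.println("worker done");
    }

    public static void main(String[] args) throws InterruptedException {
        Worker w = new Worker();
        w.start();                         // begin concurrent execution of run
        w.join();                          // the caller is blocked until w's run method completes
        System.out.println("main continues after the worker");
    }
}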
637
Thread Priorities A thread’s default priority is the same as that of the thread that created it If main creates a thread, its default priority is NORM_PRIORITY The Thread class defines two other priority constants, MAX_PRIORITY and MIN_PRIORITY The priority of a thread can be changed with the setPriority method Copyright © 2009 Addison-Wesley. All rights reserved.
638
Competition Synchronization with Java Threads
A method that includes the synchronized modifier disallows any other method from running on the object while it is in execution … public synchronized void deposit( int i) {…} public synchronized int fetch() {…} The above two methods are synchronized, which prevents them from interfering with each other If only a part of a method must be run without interference, it can be synchronized through a synchronized statement synchronized (expression) statement Copyright © 2009 Addison-Wesley. All rights reserved.
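As an illustrative sketch (the shared field is hypothetical), a synchronized statement can guard just the critical part of a method:
import java.util.ArrayList;
import java.util.List;

class Logger {
    private final List<String> buffer = new ArrayList<String>();

    public void log(String message) {
        String stamped = System.currentTimeMillis() + " " + message;  // no interference possible here
        synchronized (buffer) {            // only this statement runs while holding the lock
            buffer.add(stamped);
        }
    }
}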
639
Cooperation Synchronization with Java Threads
Cooperation synchronization in Java is achieved via wait, notify, and notifyAll methods All methods are defined in Object, which is the root class in Java, so all objects inherit them The wait method must be called in a loop The notify method is called to tell one waiting thread that the event it was waiting for has happened The notifyAll method awakens all of the threads on the object’s wait list Copyright © 2009 Addison-Wesley. All rights reserved.
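A compact sketch (not the text's example) of cooperation synchronization with wait and notifyAll; note that wait is called in a loop, as required:
class SharedBuffer {
    private final int[] values = new int[100];
    private int count = 0;

    public synchronized void deposit(int v) throws InterruptedException {
        while (count == values.length)     // wait in a loop until there is room
            wait();
        values[count++] = v;
        notifyAll();                       // wake threads waiting to fetch
    }

    public synchronized int fetch() throws InterruptedException {
        while (count == 0)                 // wait in a loop until a value is present
            wait();
        int v = values[--count];
        notifyAll();                       // wake threads waiting to deposit
        return v;
    }
}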
640
Java’s Thread Evaluation
Java’s support for concurrency is relatively simple but effective Not as powerful as Ada’s tasks Copyright © 2009 Addison-Wesley. All rights reserved.
641
C# Threads Loosely based on Java but there are significant differences
Basic thread operations Any method can run in its own thread A thread is created by creating a Thread object Creating a thread does not start its concurrent execution; it must be requested through the Start method A thread can be made to wait for another thread to finish with Join A thread can be suspended with Sleep A thread can be terminated with Abort Copyright © 2009 Addison-Wesley. All rights reserved.
642
Synchronizing Threads
Three ways to synchronize C# threads The Interlocked class Used when the only operations that need to be synchronized are incrementing or decrementing of an integer The lock statement Used to mark a critical section of code in a thread lock (expression) {… } The Monitor class Provides four methods that can be used to provide more sophisticated synchronization Copyright © 2009 Addison-Wesley. All rights reserved.
643
C#’s Concurrency Evaluation
An advance over Java threads, e.g., any method can run in its own thread Thread termination is cleaner than in Java Synchronization is more sophisticated Copyright © 2009 Addison-Wesley. All rights reserved.
644
Statement-Level Concurrency
Objective: Provide a mechanism that the programmer can use to inform the compiler of ways it can map the program onto a multiprocessor architecture Minimize communication among processors and the memories of the other processors Copyright © 2009 Addison-Wesley. All rights reserved.
645
High-Performance Fortran
A collection of extensions that allow the programmer to provide information to the compiler to help it optimize code for multiprocessor computers Specify the number of processors, the distribution of data over the memories of those processors, and the alignment of data Copyright © 2009 Addison-Wesley. All rights reserved.
646
Primary HPF Specifications
Number of processors !HPF$ PROCESSORS procs (n) Distribution of data !HPF$ DISTRIBUTE (kind) ONTO procs :: identifier_list kind can be BLOCK (distribute data to processors in blocks) or CYCLIC (distribute data to processors one element at a time) Relate the distribution of one array with that of another ALIGN array1_element WITH array2_element Copyright © 2009 Addison-Wesley. All rights reserved.
647
Statement-Level Concurrency Example
REAL list_1(1000), list_2(1000) INTEGER list_3(500), list_4(501) !HPF$ PROCESSORS procs (10) !HPF$ DISTRIBUTE (BLOCK) ONTO procs :: list_1, list_2 !HPF$ ALIGN list_1(index) WITH list_4 (index+1) … list_1 (index) = list_2(index) list_3(index) = list_4(index+1) Copyright © 2009 Addison-Wesley. All rights reserved.
648
Statement-Level Concurrency (continued)
FORALL statement is used to specify a list of statements that may be executed concurrently FORALL (index = 1:1000) list_1(index) = list_2(index) Specifies that all 1,000 RHSs of the assignments can be evaluated before any assignment takes place Copyright © 2009 Addison-Wesley. All rights reserved.
649
Summary Concurrent execution can be at the instruction, statement, or subprogram level Physical concurrency: when multiple processors are used to execute concurrent units Logical concurrency: concurrent units are executed on a single processor Two primary facilities to support subprogram concurrency: competition synchronization and cooperation synchronization Mechanisms: semaphores, monitors, rendezvous, threads High-Performance Fortran provides statements for specifying how data is to be distributed over the memory units connected to multiple processors Copyright © 2009 Addison-Wesley. All rights reserved.
650
Exception Handling and Event Handling
Chapter 14 Exception Handling and Event Handling
651
Chapter 14 Topics Introduction to Exception Handling
Exception Handling in Ada Exception Handling in C++ Exception Handling in Java Introduction to Event Handling Event Handling with Java Copyright © 2009 Addison-Wesley. All rights reserved.
652
Introduction to Exception Handling
In a language without exception handling When an exception occurs, control goes to the operating system, where a message is displayed and the program is terminated In a language with exception handling Programs are allowed to trap some exceptions, thereby providing the possibility of fixing the problem and continuing Copyright © 2009 Addison-Wesley. All rights reserved.
653
Basic Concepts Many languages allow programs to trap input/output errors (including EOF) An exception is any unusual event, either erroneous or not, detectable by either hardware or software, that may require special processing The special processing that may be required after detection of an exception is called exception handling The exception handling code unit is called an exception handler Copyright © 2009 Addison-Wesley. All rights reserved.
654
Exception Handling Alternatives
An exception is raised when its associated event occurs A language that does not have exception handling capabilities can still define, detect, raise, and handle exceptions (user defined, software detected) Alternatives: Send an auxiliary parameter or use the return value to indicate the return status of a subprogram Pass a label parameter to all subprograms (error return is to the passed label) Pass an exception handling subprogram to all subprograms Copyright © 2009 Addison-Wesley. All rights reserved.
655
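To make the first alternative concrete, here is a minimal sketch (written in Java purely for illustration; all names are hypothetical) of using the return value as a status code instead of raising an exception:

// A sketch of the "return status" alternative: the return value signals
// failure instead of an exception being raised.
class StatusDemo {
    // Returns -1 on error, the non-negative parsed value otherwise.
    static int parseOrStatus(String s) {
        if (s == null || s.isEmpty()) return -1;      // error status
        int value = 0;
        for (char c : s.toCharArray()) {
            if (c < '0' || c > '9') return -1;        // error status
            value = value * 10 + (c - '0');
        }
        return value;
    }

    public static void main(String[] args) {
        int result = parseOrStatus("42x");
        if (result == -1) {                           // the caller must remember to check
            System.out.println("bad input");
        }
    }
}

Nothing forces the caller to test the status, and the error value (-1) steals part of the result's range, which is one motivation for the built-in exception handling discussed on the next slide.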
Advantages of Built-in Exception Handling
Error detection code is tedious to write and it clutters the program Exception handling encourages programmers to consider many different possible errors Exception propagation allows a high level of reuse of exception handling code Copyright © 2009 Addison-Wesley. All rights reserved.
656
Design Issues
How and where are exception handlers specified and what is their scope? How is an exception occurrence bound to an exception handler? Can information about the exception be passed to the handler? Where does execution continue, if at all, after an exception handler completes its execution? (continuation vs. resumption) Is some form of finalization provided? Copyright © 2009 Addison-Wesley. All rights reserved.
657
Design Issues (continued) How are user-defined exceptions specified?
Should there be default exception handlers for programs that do not provide their own? Can built-in exceptions be explicitly raised? Are hardware-detectable errors treated as exceptions that can be handled? Are there any built-in exceptions? How can exceptions be disabled, if at all? Copyright © 2009 Addison-Wesley. All rights reserved.
658
Exception Handling Control Flow
Copyright © 2009 Addison-Wesley. All rights reserved.
659
Exception Handling in Ada
The frame of an exception handler in Ada is either a subprogram body, a package body, a task, or a block Because exception handlers are usually local to the code in which the exception can be raised, they do not have parameters Copyright © 2009 Addison-Wesley. All rights reserved.
660
Ada Exception Handlers
Handler form: when exception_choice{|exception_choice} => statement_sequence ... [when others => statement_sequence] exception_choice form: exception_name | others Handlers are placed at the end of the block or unit in which they occur Copyright © 2009 Addison-Wesley. All rights reserved.
661
Binding Exceptions to Handlers
If the block or unit in which an exception is raised does not have a handler for that exception, the exception is propagated elsewhere to be handled Procedures - propagate it to the caller Blocks - propagate it to the scope in which it appears Package body - propagate it to the declaration part of the unit that declared the package (if it is a library unit, the program is terminated) Task - no propagation; if it has a handler, execute it; in either case, mark it "completed" Copyright © 2009 Addison-Wesley. All rights reserved.
662
Continuation The block or unit that raises an exception but does not handle it is always terminated (also any block or unit to which it is propagated that does not handle it) Copyright © 2009 Addison-Wesley. All rights reserved.
663
Other Design Choices User-defined Exceptions form:
exception_name_list : exception; Raising Exceptions form: raise [exception_name] (the exception name is not required if it is in a handler--in this case, it propagates the same exception) Exception conditions can be disabled with: pragma SUPPRESS(exception_list) Copyright © 2009 Addison-Wesley. All rights reserved.
664
Predefined Exceptions
CONSTRAINT_ERROR - index constraints, range constraints, etc. NUMERIC_ERROR - numeric operation cannot return a correct value (overflow, division by zero, etc.) PROGRAM_ERROR - call to a subprogram whose body has not been elaborated STORAGE_ERROR - system runs out of heap TASKING_ERROR - an error associated with tasks Copyright © 2009 Addison-Wesley. All rights reserved.
665
Evaluation The Ada design for exception handling embodies the state-of-the-art in language design in 1980 Ada was the only widely used language with exception handling until it was added to C++ Copyright © 2009 Addison-Wesley. All rights reserved.
666
Exception Handling in C++
Added to C++ in 1990 Design is based on that of CLU, Ada, and ML Copyright © 2009 Addison-Wesley. All rights reserved.
667
C++ Exception Handlers
Exception Handlers Form:
try {
-- code that is expected to raise an exception
}
catch (formal parameter) {
-- handler code
}
...
Copyright © 2009 Addison-Wesley. All rights reserved.
668
The catch Function catch is the name of all handlers--it is an overloaded name, so the formal parameter of each must be unique The formal parameter need not have a variable It can be simply a type name to distinguish the handler it is in from others The formal parameter can be used to transfer information to the handler The formal parameter can be an ellipsis, in which case it handles all exceptions not yet handled Copyright © 2009 Addison-Wesley. All rights reserved.
669
Throwing Exceptions Exceptions are all raised explicitly by the statement: throw [expression]; The brackets are metasymbols A throw without an operand can only appear in a handler; when it appears, it simply re-raises the exception, which is then handled elsewhere The type of the expression disambiguates the intended handler Copyright © 2009 Addison-Wesley. All rights reserved.
670
Unhandled Exceptions An unhandled exception is propagated to the caller of the function in which it is raised This propagation continues to the main function If no handler is found, the default handler is called Copyright © 2009 Addison-Wesley. All rights reserved.
671
Continuation After a handler completes its execution, control flows to the first statement after the last handler in the sequence of handlers of which it is an element Other design choices All exceptions are user-defined Exceptions are neither specified nor declared The default handler, unexpected, simply terminates the program; unexpected can be redefined by the user Functions can list the exceptions they may raise Without a specification, a function can raise any exception (the throw clause) Copyright © 2009 Addison-Wesley. All rights reserved.
672
Evaluation It is odd that exceptions are not named and that hardware- and system software-detectable exceptions cannot be handled Binding exceptions to handlers through the type of the parameter certainly does not promote readability Copyright © 2009 Addison-Wesley. All rights reserved.
673
Exception Handling in Java
Based on that of C++, but more in line with OOP philosophy All exceptions are objects of classes that are descendants of the Throwable class Copyright © 2009 Addison-Wesley. All rights reserved.
674
Classes of Exceptions The Java library includes two subclasses of Throwable: Error Thrown by the Java interpreter for events such as heap overflow Never handled by user programs Exception User-defined exceptions are usually subclasses of this Has two predefined subclasses, IOException and RuntimeException (e.g., ArrayIndexOutOfBoundsException and NullPointerException) Copyright © 2009 Addison-Wesley. All rights reserved.
675
Java Exception Handlers
Like those of C++, except every catch requires a named parameter and all parameters must be descendants of Throwable Syntax of try clause is exactly that of C++ Exceptions are thrown with throw, as in C++, but often the throw includes the new operator to create the object, as in: throw new MyException(); Copyright © 2009 Addison-Wesley. All rights reserved.
676
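A minimal, self-contained Java sketch of the form just described (the MyException class and the other identifiers are hypothetical, not taken from the text):

// Hypothetical user-defined exception class
class MyException extends Exception {
    MyException(String message) { super(message); }
}

class Demo {
    static void check(int value) throws MyException {
        if (value < 0) {
            // throw usually creates the exception object with new
            throw new MyException("negative value: " + value);
        }
    }

    public static void main(String[] args) {
        try {
            check(-1);
        } catch (MyException e) {      // named parameter, a descendant of Throwable
            System.out.println("caught: " + e.getMessage());
        }
    }
}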
Binding Exceptions to Handlers
Binding an exception to a handler is simpler in Java than it is in C++ An exception is bound to the first handler whose parameter is of the same class as the thrown object or an ancestor of it An exception can be handled and rethrown by including a throw in the handler (a handler could also throw a different exception) Copyright © 2009 Addison-Wesley. All rights reserved.
677
Continuation If no handler is found in the try construct, the search is continued in the nearest enclosing try construct, etc. If no handler is found in the method, the exception is propagated to the method’s caller If no handler is found (all the way to main), the program is terminated To ensure that all exceptions are caught, a handler can be included in any try construct that catches all exceptions Simply use an Exception class parameter Of course, it must be the last in the try construct Copyright © 2009 Addison-Wesley. All rights reserved.
678
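A short sketch of the two points above: the thrown object is bound to the first matching handler, and a catch-all handler with an Exception parameter is placed last (the class and messages are illustrative):

class OrderDemo {
    public static void main(String[] args) {
        int[] a = new int[3];
        try {
            a[5] = 1;                                  // raises ArrayIndexOutOfBoundsException
        } catch (ArrayIndexOutOfBoundsException e) {
            // bound here: the parameter's class matches the thrown object's class
            System.out.println("bad index: " + e.getMessage());
        } catch (Exception e) {
            // catch-all; must come after the more specific handlers
            System.out.println("some other exception");
        }
    }
}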
Checked and Unchecked Exceptions
The Java throws clause is quite different from the throw clause of C++ Exceptions of class Error and RuntimeException and all of their descendants are called unchecked exceptions; all other exceptions are called checked exceptions Checked exceptions that may be thrown by a method must be either: Listed in the throws clause, or Handled in the method Copyright © 2009 Addison-Wesley. All rights reserved.
679
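A brief sketch of the distinction, assuming a hypothetical data file name: IOException is checked and must appear in the throws clause, while a RuntimeException descendant needs no declaration.

import java.io.FileReader;
import java.io.IOException;

class ThrowsDemo {
    // Checked: IOException must be listed in throws (or handled here)
    static int firstChar(String path) throws IOException {
        try (FileReader r = new FileReader(path)) {
            return r.read();
        }
    }

    // Unchecked: ArithmeticException (a RuntimeException) needs no throws clause
    static int half(int n) {
        return 100 / n;
    }

    public static void main(String[] args) {
        try {
            System.out.println(firstChar("data.txt"));   // hypothetical file
        } catch (IOException e) {
            System.out.println("I/O problem: " + e.getMessage());
        }
        System.out.println(half(2));
    }
}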
Other Design Choices A method cannot declare more exceptions in its throws clause than the method it overrides A method that calls a method that lists a particular checked exception in its throws clause has three alternatives for dealing with that exception: Catch and handle the exception Catch the exception and throw an exception that is listed in its own throws clause Declare it in its throws clause and do not handle it Copyright © 2009 Addison-Wesley. All rights reserved.
680
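The second alternative (catch the exception and throw one listed in the caller's own throws clause) might look like this sketch; ConfigException and the other names are invented for illustration:

import java.io.IOException;

// Hypothetical wrapper exception
class ConfigException extends Exception {
    ConfigException(String msg, Throwable cause) { super(msg, cause); }
}

class Loader {
    // Stands in for real I/O; always fails so the wrapping path is exercised
    static String readConfig(String path) throws IOException {
        throw new IOException("cannot read " + path);
    }

    // Catches the checked IOException and throws an exception from its own throws clause
    static String load(String path) throws ConfigException {
        try {
            return readConfig(path);
        } catch (IOException e) {
            throw new ConfigException("configuration unavailable", e);
        }
    }

    public static void main(String[] args) {
        try {
            Loader.load("app.cfg");                       // hypothetical path
        } catch (ConfigException e) {
            System.out.println(e.getMessage() + " (cause: " + e.getCause().getMessage() + ")");
        }
    }
}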
The finally Clause Can appear at the end of a try construct Form:
try {
...
}
finally {
...
}
Purpose: To specify code that is to be executed, regardless of what happens in the try construct Copyright © 2009 Addison-Wesley. All rights reserved.
681
Example A try construct with a finally clause can be used outside exception handling
try {
  for (index = 0; index < 100; index++) {
    …
    if (…) {
      return;
    } //** end of if
  } //** end of for loop
} //** end of try clause
finally {
  …
} //** end of try construct
Copyright © 2009 Addison-Wesley. All rights reserved.
682
Assertions Statements in the program declaring a boolean expression regarding the current state of the computation When evaluated to true nothing happens When evaluated to false an AssertionError exception is thrown Can be disabled during runtime without program modification or recompilation Two forms assert condition; assert condition: expression; Copyright © 2009 Addison-Wesley. All rights reserved.
683
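Both assert forms in a short, runnable sketch (the method and values are illustrative):

class AssertDemo {
    static double average(int[] values) {
        assert values.length > 0;                         // first form
        int sum = 0;
        for (int v : values) sum += v;
        assert sum >= 0 : "sum overflowed: " + sum;       // second form, with a message expression
        return (double) sum / values.length;
    }

    public static void main(String[] args) {
        // Assertions are disabled by default; enable them with: java -ea AssertDemo
        System.out.println(average(new int[] {2, 4, 6}));
    }
}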
Evaluation The types of exceptions make more sense than in the case of C++ The throws clause is better than that of C++ (The throw clause in C++ says little to the programmer) The finally clause is often useful The Java interpreter throws a variety of exceptions that can be handled by user programs Copyright © 2009 Addison-Wesley. All rights reserved.
684
Introduction to Event Handling
An event is created by an external action such as a user interaction through a GUI The event handler is a segment of code that is called in response to an event Copyright © 2009 Addison-Wesley. All rights reserved.
685
Java Swing GUI Components
Text box is an object of class JTextField Radio button is an object of class JRadioButton Applet’s display is a frame, a multilayered structure Content pane is one layer, where applets put output GUI components can be placed in a frame Layout manager objects are used to control the placement of components Copyright © 2009 Addison-Wesley. All rights reserved.
686
The Java Event Model User interactions with GUI components create events that can be caught by event handlers, called event listeners An event generator tells a listener of an event by sending a message An interface is used to make event-handling methods conform to a standard protocol A class that implements a listener must implement an interface for the listener Copyright © 2009 Addison-Wesley. All rights reserved.
687
The Java Event Model (continued)
One class of events is ItemEvent, which is associated with the event of clicking a checkbox, a radio button, or a list item The ItemListener interface prescribes a method, itemStateChanged, which is a handler for ItemEvent events The listener is registered with the addItemListener method Copyright © 2009 Addison-Wesley. All rights reserved.
688
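A compact Swing sketch of the model described above: a class implements ItemListener, registers itself with addItemListener, and its itemStateChanged method handles the ItemEvent (the component names and labels are illustrative):

import javax.swing.JCheckBox;
import javax.swing.JFrame;
import java.awt.FlowLayout;
import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;

class CheckboxDemo implements ItemListener {
    private final JCheckBox box = new JCheckBox("Bold");

    CheckboxDemo(JFrame frame) {
        frame.setLayout(new FlowLayout());
        frame.add(box);
        box.addItemListener(this);                        // register this object as the listener
    }

    // Handler prescribed by the ItemListener interface
    public void itemStateChanged(ItemEvent e) {
        System.out.println("selected: " + (e.getStateChange() == ItemEvent.SELECTED));
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Item event demo");
        new CheckboxDemo(frame);
        frame.setSize(200, 100);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}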
Summary Ada provides extensive exception-handling facilities with a comprehensive set of built-in exceptions. C++ includes no predefined exceptions Exceptions are bound to handlers by connecting the type of the expression in the throw statement to that of the formal parameter of the catch function Java exceptions are similar to C++ exceptions except that a Java exception must be a descendant of the Throwable class. Additionally, Java includes a finally clause An event is a notification that something has occurred that requires handling by an event handler Copyright © 2009 Addison-Wesley. All rights reserved.
689
Functional Programming Languages
Chapter 15 Functional Programming Languages
690
Copyright © 2009 Addison-Wesley. All rights reserved.
691
Chapter 15 Topics Introduction Mathematical Functions
Fundamentals of Functional Programming Languages The First Functional Programming Language: LISP Introduction to Scheme COMMON LISP ML Haskell Applications of Functional Languages Comparison of Functional and Imperative Languages Copyright © 2009 Addison-Wesley. All rights reserved.
692
Introduction The design of the imperative languages is based directly on the von Neumann architecture Efficiency is the primary concern, rather than the suitability of the language for software development The design of the functional languages is based on mathematical functions A solid theoretical basis that is also closer to the user, but relatively unconcerned with the architecture of the machines on which programs will run Copyright © 2009 Addison-Wesley. All rights reserved.
693
Mathematical Functions
A mathematical function is a mapping of members of one set, called the domain set, to another set, called the range set A lambda expression specifies the parameter(s) and the mapping of a function in the following form: λ(x) x * x * x for the function cube(x) = x * x * x Copyright © 2009 Addison-Wesley. All rights reserved.
694
Lambda Expressions Lambda expressions describe nameless functions
Lambda expressions are applied to parameter(s) by placing the parameter(s) after the expression e.g., (λ(x) x * x * x)(2) which evaluates to 8 Copyright © 2009 Addison-Wesley. All rights reserved.
695
Functional Forms A higher-order function, or functional form, is one that either takes functions as parameters or yields a function as its result, or both Copyright © 2009 Addison-Wesley. All rights reserved.
696
Function Composition A functional form that takes two functions as parameters and yields a function whose value is the first actual parameter function applied to the application of the second Form: h ≡ f ∘ g which means h(x) ≡ f(g(x)) For f(x) ≡ x + 2 and g(x) ≡ 3 * x, h ≡ f ∘ g yields (3 * x) + 2 Copyright © 2009 Addison-Wesley. All rights reserved.
697
Apply-to-all A functional form that takes a single function as a parameter and yields a list of values obtained by applying the given function to each element of a list of parameters Form: α For h(x) ≡ x * x, α(h, (2, 3, 4)) yields (4, 9, 16) Copyright © 2009 Addison-Wesley. All rights reserved.
698
Fundamentals of Functional Programming Languages
The objective of the design of a FPL is to mimic mathematical functions to the greatest extent possible The basic process of computation is fundamentally different in a FPL than in an imperative language In an imperative language, operations are done and the results are stored in variables for later use Management of variables is a constant concern and source of complexity for imperative programming In an FPL, variables are not necessary, as is the case in mathematics Copyright © 2009 Addison-Wesley. All rights reserved.
699
Fundamentals of Functional Programming Languages - continued
Referential Transparency - In an FPL, the evaluation of a function always produces the same result given the same parameters Tail Recursion – Writing recursive functions that can be automatically converted to iteration Copyright © 2009 Addison-Wesley. All rights reserved.
700
LISP Data Types and Structures
Data object types: originally only atoms and lists List form: parenthesized collections of sublists and/or atoms e.g., (A B (C D) E) Originally, LISP was a typeless language LISP lists are stored internally as singly linked lists Copyright © 2009 Addison-Wesley. All rights reserved.
701
LISP Interpretation Lambda notation is used to specify functions and function definitions. Function applications and data have the same form. e.g., If the list (A B C) is interpreted as data it is a simple list of three atoms, A, B, and C If it is interpreted as a function application, it means that the function named A is applied to the two parameters, B and C The first LISP interpreter appeared only as a demonstration of the universality of the computational capabilities of the notation Copyright © 2009 Addison-Wesley. All rights reserved.
702
Origins of Scheme A mid-1970s dialect of LISP, designed to be a cleaner, more modern, and simpler version than the contemporary dialects of LISP Uses only static scoping Functions are first-class entities They can be the values of expressions and elements of lists They can be assigned to variables and passed as parameters Copyright © 2009 Addison-Wesley. All rights reserved.
703
Evaluation Parameters are evaluated, in no particular order
The values of the parameters are substituted into the function body The function body is evaluated The value of the last expression in the body is the value of the function Copyright © 2009 Addison-Wesley. All rights reserved.
704
Primitive Functions Arithmetic: +, -, *, /, ABS, SQRT, REMAINDER, MIN, MAX e.g., (+ 5 2) yields 7 QUOTE - takes one parameter; returns the parameter without evaluation QUOTE is required because the Scheme interpreter, named EVAL, always evaluates parameters to function applications before applying the function. QUOTE is used to avoid parameter evaluation when it is not appropriate QUOTE can be abbreviated with the apostrophe prefix operator '(A B) is equivalent to (QUOTE (A B)) Copyright © 2009 Addison-Wesley. All rights reserved.
705
Function Definition: LAMBDA
Lambda Expressions Form is based on λ notation e.g., (LAMBDA (x) (* x x)) x is called a bound variable Lambda expressions can be applied e.g., ((LAMBDA (x) (* x x)) 7) Copyright © 2009 Addison-Wesley. All rights reserved.
706
Special Form Function: DEFINE
A Function for Constructing Functions DEFINE - Two forms: To bind a symbol to an expression e.g., (DEFINE pi 3.14159) Example use: (DEFINE two_pi (* 2 pi)) To bind names to lambda expressions e.g., (DEFINE (square x) (* x x)) Example use: (square 5) - The evaluation process for DEFINE is different! The first parameter is never evaluated. The second parameter is evaluated and bound to the first parameter. Copyright © 2009 Addison-Wesley. All rights reserved.
707
Output Functions (DISPLAY expression) (NEWLINE)
Copyright © 2009 Addison-Wesley. All rights reserved.
708
Numeric Predicate Functions
#T is true and #F is false (sometimes () is used for false) =, <>, >, <, >=, <= EVEN?, ODD?, ZERO?, NEGATIVE? Copyright © 2009 Addison-Wesley. All rights reserved.
709
Control Flow: IF Selection- the special form, IF
(IF predicate then_exp else_exp) e.g., (IF (<> count 0) (/ sum count) 0) Copyright © 2009 Addison-Wesley. All rights reserved.
710
Control Flow: COND Multiple Selection - the special form, COND
General form: (COND (predicate_1 expr {expr}) ... (ELSE expr {expr})) Returns the value of the last expression in the first pair whose predicate evaluates to true Copyright © 2009 Addison-Wesley. All rights reserved.
711
Example of COND (DEFINE (compare x y) (COND
((> x y) "x is greater than y")
((< x y) "y is greater than x")
(ELSE "x and y are equal")
))
Copyright © 2009 Addison-Wesley. All rights reserved.
712
List Functions: CONS and LIST
CONS takes two parameters, the first of which can be either an atom or a list and the second of which is a list; returns a new list that includes the first parameter as its first element and the second parameter as the remainder of its result e.g., (CONS 'A '(B C)) returns (A B C) LIST takes any number of parameters; returns a list with the parameters as elements Copyright © 2009 Addison-Wesley. All rights reserved.
713
List Functions: CAR and CDR
CAR takes a list parameter; returns the first element of that list e.g., (CAR '(A B C)) yields A (CAR '((A B) C D)) yields (A B) CDR takes a list parameter; returns the list after removing its first element e.g., (CDR '(A B C)) yields (B C) (CDR '((A B) C D)) yields (C D) Copyright © 2009 Addison-Wesley. All rights reserved.
714
Predicate Function: EQ?
EQ? takes two symbolic parameters; it returns #T if both parameters are atoms and the two are the same; otherwise #F e.g., (EQ? 'A 'A) yields #T (EQ? 'A 'B) yields #F Note that if EQ? is called with list parameters, the result is not reliable Also EQ? does not work for numeric atoms Copyright © 2009 Addison-Wesley. All rights reserved.
715
Predicate Functions: LIST? and NULL?
LIST? takes one parameter; it returns #T if the parameter is a list; otherwise #F NULL? takes one parameter; it returns #T if the parameter is the empty list; otherwise #F Note that NULL? returns #T if the parameter is () Copyright © 2009 Addison-Wesley. All rights reserved.
716
Example Scheme Function: member
member takes an atom and a simple list; returns #T if the atom is in the list; #F otherwise
(DEFINE (member atm lis)
  (COND
    ((NULL? lis) #F)
    ((EQ? atm (CAR lis)) #T)
    (ELSE (member atm (CDR lis)))
))
Copyright © 2009 Addison-Wesley. All rights reserved.
717
Example Scheme Function: equalsimp
equalsimp takes two simple lists as parameters; returns #T if the two simple lists are equal; #F otherwise (DEFINE (equalsimp lis1 lis2) (COND ((NULL? lis1) (NULL? lis2)) ((NULL? lis2) #F) ((EQ? (CAR lis1) (CAR lis2)) (equalsimp(CDR lis1)(CDR lis2))) (ELSE #F) )) Copyright © 2009 Addison-Wesley. All rights reserved.
718
Example Scheme Function: equal
equal takes two general lists as parameters; returns #T if the two lists are equal; #F otherwise (DEFINE (equal lis1 lis2) (COND ((NOT (LIST? lis1))(EQ? lis1 lis2)) ((NOT (LIST? lis2)) #F) ((NULL? lis1) (NULL? lis2)) ((NULL? lis2) #F) ((equal (CAR lis1) (CAR lis2)) (equal (CDR lis1) (CDR lis2))) (ELSE #F) )) Copyright © 2009 Addison-Wesley. All rights reserved.
719
Example Scheme Function: append
append takes two lists as parameters; returns the first parameter list with the elements of the second parameter list appended at the end (DEFINE (append lis1 lis2) (COND ((NULL? lis1) lis2) (ELSE (CONS (CAR lis1) (append (CDR lis1) lis2))) )) Copyright © 2009 Addison-Wesley. All rights reserved.
720
Example Scheme Function: LET
General form: (LET ( (name_1 expression_1) (name_2 expression_2) ... (name_n expression_n)) body ) Evaluate all expressions, then bind the values to the names; evaluate the body Copyright © 2009 Addison-Wesley. All rights reserved.
721
LET Example
(DEFINE (quadratic_roots a b c)
  (LET (
    (root_part_over_2a
      (/ (SQRT (- (* b b) (* 4 a c))) (* 2 a)))
    (minus_b_over_2a (/ (- 0 b) (* 2 a))))
  (DISPLAY (+ minus_b_over_2a root_part_over_2a))
  (NEWLINE)
  (DISPLAY (- minus_b_over_2a root_part_over_2a))
))
Copyright © 2009 Addison-Wesley. All rights reserved.
722
Tail Recursion in Scheme
Definition: A function is tail recursive if its recursive call is the last operation in the function A tail recursive function can be automatically converted by a compiler to use iteration, making it faster Scheme language definition requires that Scheme language systems convert all tail recursive functions to use iteration Copyright © 2009 Addison-Wesley. All rights reserved.
723
Tail Recursion in Scheme - continued
Example of rewriting a function to make it tail recursive, using a helper function
Original:
(DEFINE (factorial n)
  (IF (= n 0)
    1
    (* n (factorial (- n 1)))
))
Tail recursive:
(DEFINE (facthelper n factpartial)
  (IF (= n 0)
    factpartial
    (facthelper (- n 1) (* n factpartial))
))
(DEFINE (factorial n) (facthelper n 1))
Copyright © 2009 Addison-Wesley. All rights reserved.
724
Scheme Functional Forms
Composition The previous examples have used it (CDR (CDR '(A B C))) returns (C) Apply to All - one form in Scheme is mapcar Applies the given function to all elements of the given list; (DEFINE (mapcar fun lis) (COND ((NULL? lis) ()) (ELSE (CONS (fun (CAR lis)) (mapcar fun (CDR lis)))) )) Copyright © 2009 Addison-Wesley. All rights reserved.
725
Functions That Build Code
It is possible in Scheme to define a function that builds Scheme code and requests its interpretation This is possible because the interpreter is a user-available function, EVAL Copyright © 2009 Addison-Wesley. All rights reserved.
726
Adding a List of Numbers
(DEFINE (adder lis)
  (COND
    ((NULL? lis) 0)
    (ELSE (EVAL (CONS '+ lis)))
))
The parameter is a list of numbers to be added; adder inserts a + operator and evaluates the resulting list Use CONS to insert the atom + into the list of numbers. Be sure that + is quoted to prevent evaluation Submit the new list to EVAL for evaluation Copyright © 2009 Addison-Wesley. All rights reserved.
727
COMMON LISP A combination of many of the features of the popular dialects of LISP around in the early 1980s A large and complex language--the opposite of Scheme Features include: records arrays complex numbers character strings powerful I/O capabilities packages with access control iterative control statements Copyright © 2009 Addison-Wesley. All rights reserved.
728
ML A static-scoped functional language with syntax that is closer to Pascal than to LISP Uses type declarations, but also does type inferencing to determine the types of undeclared variables It is strongly typed (whereas Scheme is essentially typeless) and has no type coercions Includes exception handling and a module facility for implementing abstract data types Includes lists and list operations Copyright © 2009 Addison-Wesley. All rights reserved.
729
ML Specifics Function declaration form: fun name (parameters) = body;
e.g., fun cube (x : int) = x * x * x; - The type could be attached to return value, as in fun cube (x) : int = x * x * x; - With no type specified, it would default to int (the default for numeric values) - User-defined overloaded functions are not allowed, so if we wanted a cube function for real parameters, it would need to have a different name - There are no type coercions in ML Copyright © 2009 Addison-Wesley. All rights reserved.
730
ML Specifics (continued)
ML selection if expression then then_expression else else_expression where the first expression must evaluate to a Boolean value Pattern matching is used to allow a function to operate on different parameter forms fun fact(0) = 1 | fact(n : int) : int = n * fact(n - 1) Copyright © 2009 Addison-Wesley. All rights reserved.
731
ML Specifics (continued)
Lists Literal lists are specified in brackets [3, 5, 7] [] is the empty list CONS is the binary infix operator, :: 4 :: [3, 5, 7], which evaluates to [4, 3, 5, 7] CAR is the unary operator hd CDR is the unary operator tl fun length([]) = 0 | length(h :: t) = 1 + length(t); fun append([], lis2) = lis2 | append(h :: t, lis2) = h :: append(t, lis2); Copyright © 2009 Addison-Wesley. All rights reserved.
732
ML Specifics (continued)
The val statement binds a name to a value (similar to DEFINE in Scheme) val distance = time * speed; As is the case with DEFINE, val is nothing like an assignment statement in an imperative language Copyright © 2009 Addison-Wesley. All rights reserved.
733
Haskell Similar to ML (syntax, static scoped, strongly typed, type inferencing, pattern matching) Different from ML (and most other functional languages) in that it is purely functional (e.g., no variables, no assignment statements, and no side effects of any kind) Syntax differences from ML
fact 0 = 1
fact n = n * fact (n - 1)
fib 0 = 1
fib 1 = 1
fib (n + 2) = fib (n + 1) + fib n
Copyright © 2009 Addison-Wesley. All rights reserved.
734
Function Definitions with Different Parameter Ranges
fact n
  | n == 0 = 1
  | n > 0  = n * fact (n - 1)
sub n
  | n < 10    = 0
  | n > 100   = 2
  | otherwise = 1
square x = x * x
- Works for any numeric type of x
Copyright © 2009 Addison-Wesley. All rights reserved.
735
Lists List notation: Put elements in brackets
e.g., directions = ["north", "south", "east", "west"] Length: the length function e.g., length directions is 4 Arithmetic series with the .. operator e.g., [2, 4..10] is [2, 4, 6, 8, 10] Catenation is with ++ e.g., [1, 3] ++ [5, 7] results in [1, 3, 5, 7] CONS, CAR, CDR via the colon operator (as in Prolog) e.g., 1:[3, 5, 7] results in [1, 3, 5, 7] Copyright © 2009 Addison-Wesley. All rights reserved.
736
Factorial Revisited product [] = 1 product (a:x) = a * product x
fact n = product [1..n] Copyright © 2009 Addison-Wesley. All rights reserved.
737
List Comprehension Set notation
List of the squares of the first 20 positive integers: [n * n | n ← [1..20]] All of the factors of its given parameter: factors n = [i | i ← [1..n `div` 2], n `mod` i == 0] Copyright © 2009 Addison-Wesley. All rights reserved.
738
Quicksort
sort [] = []
sort (a:x) = sort [b | b ← x; b <= a] ++ [a] ++ sort [b | b ← x; b > a]
Copyright © 2009 Addison-Wesley. All rights reserved.
739
Lazy Evaluation A language is strict if it requires all actual parameters to be fully evaluated A language is nonstrict if it does not have the strict requirement Nonstrict languages are more efficient and allow some interesting capabilities – infinite lists Lazy evaluation - Only compute those values that are necessary Positive numbers: positives = [0..] Determining if 16 is a square number:
member [] b = False
member (a:x) b = (a == b) || member x b
squares = [n * n | n ← [0..]]
member squares 16
Copyright © 2009 Addison-Wesley. All rights reserved.
740
Member Revisited The member function could be written as:
member [] b = False
member (a:x) b = (a == b) || member x b
However, this would only work if the value being searched for is a perfect square; if it is not, member will keep generating squares forever. The following version will always work:
member2 (m:x) n
  | m < n     = member2 x n
  | m == n    = True
  | otherwise = False
Copyright © 2009 Addison-Wesley. All rights reserved.
741
Applications of Functional Languages
APL is used for throw-away programs LISP is used for artificial intelligence Knowledge representation Machine learning Natural language processing Modeling of speech and vision Scheme is used to teach introductory programming at some universities Copyright © 2009 Addison-Wesley. All rights reserved.
742
Comparing Functional and Imperative Languages
Imperative Languages: Efficient execution Complex semantics Complex syntax Concurrency is programmer designed Functional Languages: Simple semantics Simple syntax Inefficient execution Programs can automatically be made concurrent Copyright © 2009 Addison-Wesley. All rights reserved.
743
Summary Functional programming languages use function application, conditional expressions, recursion, and functional forms to control program execution instead of imperative features such as variables and assignments LISP began as a purely functional language and later included imperative features Scheme is a relatively simple dialect of LISP that uses static scoping exclusively COMMON LISP is a large LISP-based language ML is a static-scoped and strongly typed functional language which includes type inference, exception handling, and a variety of data structures and abstract data types Haskell is a lazy functional language supporting infinite lists and set comprehension. Purely functional languages have advantages over imperative alternatives, but their lower efficiency on existing machine architectures has prevented them from enjoying widespread use Copyright © 2009 Addison-Wesley. All rights reserved.
744
Logic Programming Languages
Chapter 16 Logic Programming Languages
745
Chapter 16 Topics Introduction
A Brief Introduction to Predicate Calculus Predicate Calculus and Proving Theorems An Overview of Logic Programming The Origins of Prolog The Basic Elements of Prolog Deficiencies of Prolog Applications of Logic Programming Copyright © 2009 Addison-Wesley. All rights reserved.
746
Introduction Logic programming languages, sometimes called declarative programming languages Express programs in a form of symbolic logic Use a logical inferencing process to produce results Declarative rather than procedural: Only the specification of results is stated (not detailed procedures for producing them) Copyright © 2009 Addison-Wesley. All rights reserved.
747
Proposition A logical statement that may or may not be true
Consists of objects and relationships of objects to each other Copyright © 2009 Addison-Wesley. All rights reserved.
748
Symbolic Logic Logic which can be used for the basic needs of formal logic: Express propositions Express relationships between propositions Describe how new propositions can be inferred from other propositions Particular form of symbolic logic used for logic programming called predicate calculus Copyright © 2009 Addison-Wesley. All rights reserved.
749
Object Representation
Objects in propositions are represented by simple terms: either constants or variables Constant: a symbol that represents an object Variable: a symbol that can represent different objects at different times Different from variables in imperative languages Copyright © 2009 Addison-Wesley. All rights reserved.
750
Compound Terms Atomic propositions consist of compound terms
Compound term: one element of a mathematical relation, written like a mathematical function Mathematical function is a mapping Can be written as a table Copyright © 2009 Addison-Wesley. All rights reserved.
751
Parts of a Compound Term
Compound term composed of two parts Functor: function symbol that names the relationship Ordered list of parameters (tuple) Examples: student(jon) like(seth, OSX) like(nick, windows) like(jim, linux) Copyright © 2009 Addison-Wesley. All rights reserved.
752
Forms of a Proposition Propositions can be stated in two forms:
Fact: proposition is assumed to be true Query: truth of proposition is to be determined Compound proposition: Have two or more atomic propositions Propositions are connected by operators Copyright © 2009 Addison-Wesley. All rights reserved.
753
Logical Operators
negation: ¬ a (not a)
conjunction: a ∩ b (a and b)
disjunction: a ∪ b (a or b)
equivalence: a ≡ b (a is equivalent to b)
implication: a ⊃ b (a implies b), a ⊂ b (b implies a)
Copyright © 2009 Addison-Wesley. All rights reserved.
754
Quantifiers
universal: ∀X.P (For all X, P is true)
existential: ∃X.P (There exists a value of X such that P is true)
Copyright © 2009 Addison-Wesley. All rights reserved.
755
Clausal Form Too many ways to state the same thing
Use a standard form for propositions Clausal form: B1 ∪ B2 ∪ … ∪ Bn ⊂ A1 ∩ A2 ∩ … ∩ Am means if all the As are true, then at least one B is true Antecedent: right side Consequent: left side Copyright © 2009 Addison-Wesley. All rights reserved.
756
Predicate Calculus and Proving Theorems
A use of propositions is to discover new theorems that can be inferred from known axioms and theorems Resolution: an inference principle that allows inferred propositions to be computed from given propositions Copyright © 2009 Addison-Wesley. All rights reserved.
757
Resolution Unification: finding values for variables in propositions that allows matching process to succeed Instantiation: assigning temporary values to variables to allow unification to succeed After instantiating a variable with a value, if matching fails, may need to backtrack and instantiate with a different value Copyright © 2009 Addison-Wesley. All rights reserved.
758
Proof by Contradiction
Hypotheses: a set of pertinent propositions Goal: negation of theorem stated as a proposition Theorem is proved by finding an inconsistency Copyright © 2009 Addison-Wesley. All rights reserved.
759
Theorem Proving Basis for logic programming
When propositions used for resolution, only restricted form can be used Horn clause - can have only two forms Headed: single atomic proposition on left side Headless: empty left side (used to state facts) Most propositions can be stated as Horn clauses Copyright © 2009 Addison-Wesley. All rights reserved.
760
Overview of Logic Programming
Declarative semantics There is a simple way to determine the meaning of each statement Simpler than the semantics of imperative languages Programming is nonprocedural Programs do not state how a result is to be computed, but rather the form of the result Copyright © 2009 Addison-Wesley. All rights reserved.
761
Example: Sorting a List
Describe the characteristics of a sorted list, not the process of rearranging a list
sort(old_list, new_list) ⊂ permute(old_list, new_list) ∩ sorted(new_list)
sorted(list) ⊂ ∀j such that 1 ≤ j < n, list(j) ≤ list(j+1)
Copyright © 2009 Addison-Wesley. All rights reserved.
762
The Origins of Prolog University of Aix-Marseille
Natural language processing University of Edinburgh Automated theorem proving Copyright © 2009 Addison-Wesley. All rights reserved.
763
Terms Edinburgh Syntax Term: a constant, variable, or structure
Constant: an atom or an integer Atom: symbolic value of Prolog Atom consists of either: a string of letters, digits, and underscores beginning with a lowercase letter a string of printable ASCII characters delimited by apostrophes Copyright © 2009 Addison-Wesley. All rights reserved.
764
Terms: Variables and Structures
Variable: any string of letters, digits, and underscores beginning with an uppercase letter Instantiation: binding of a variable to a value Lasts only as long as it takes to satisfy one complete goal Structure: represents atomic proposition functor(parameter list) Copyright © 2009 Addison-Wesley. All rights reserved.
765
Fact Statements Used for the hypotheses Headless Horn clauses
female(shelley). male(bill). father(bill, jake). Copyright © 2009 Addison-Wesley. All rights reserved.
766
Rule Statements Used for the hypotheses Headed Horn clause
Right side: antecedent (if part) May be single term or conjunction Left side: consequent (then part) Must be single term Conjunction: multiple terms separated by logical AND operations (implied) Copyright © 2009 Addison-Wesley. All rights reserved.
767
Example Rules ancestor(mary,shelley):- mother(mary,shelley). Can use variables (universal objects) to generalize meaning: parent(X,Y):- mother(X,Y). parent(X,Y):- father(X,Y). grandparent(X,Z):- parent(X,Y), parent(Y,Z). sibling(X,Y):- mother(M,X), mother(M,Y), father(F,X), father(F,Y). Copyright © 2009 Addison-Wesley. All rights reserved.
768
Goal Statements For theorem proving, theorem is in form of proposition that we want system to prove or disprove – goal statement Same format as headless Horn man(fred) Conjunctive propositions and propositions with variables also legal goals father(X,mike) Copyright © 2009 Addison-Wesley. All rights reserved.
769
Inferencing Process of Prolog
Queries are called goals If a goal is a compound proposition, each of the facts is a subgoal To prove a goal is true, must find a chain of inference rules and/or facts. For goal Q:
B :- A
C :- B
…
Q :- P
Process of proving a subgoal called matching, satisfying, or resolution Copyright © 2009 Addison-Wesley. All rights reserved.
770
Approaches Bottom-up resolution, forward chaining
Begin with facts and rules of database and attempt to find sequence that leads to goal Works well with a large set of possibly correct answers Top-down resolution, backward chaining Begin with goal and attempt to find sequence that leads to set of facts in database Works well with a small set of possibly correct answers Prolog implementations use backward chaining Copyright © 2009 Addison-Wesley. All rights reserved.
771
Subgoal Strategies When goal has more than one subgoal, can use either
Depth-first search: find a complete proof for the first subgoal before working on others Breadth-first search: work on all subgoals in parallel Prolog uses depth-first search Can be done with fewer computer resources Copyright © 2009 Addison-Wesley. All rights reserved.
772
Backtracking With a goal with multiple subgoals, if fail to show truth of one of subgoals, reconsider previous subgoal to find an alternative solution: backtracking Begin search where previous search left off Can take lots of time and space because may find all possible proofs to every subgoal Copyright © 2009 Addison-Wesley. All rights reserved.
773
Simple Arithmetic Prolog supports integer variables and integer arithmetic The is operator takes an arithmetic expression as its right operand and a variable as its left operand A is B / 17 + C Not the same as an assignment statement! Copyright © 2009 Addison-Wesley. All rights reserved.
774
Example speed(ford,100). speed(chevy,105). speed(dodge,95).
speed(volvo,80). time(ford,20). time(chevy,21). time(dodge,24). time(volvo,24). distance(X,Y) :- speed(X,Speed), time(X,Time), Y is Speed * Time. Copyright © 2009 Addison-Wesley. All rights reserved.
775
Trace Built-in structure that displays instantiations at each step
Tracing model of execution - four events: Call (beginning of attempt to satisfy goal) Exit (when a goal has been satisfied) Redo (when backtrack occurs) Fail (when goal fails) Copyright © 2009 Addison-Wesley. All rights reserved.
776
Example likes(jake,chocolate). likes(jake,apricots).
likes(darcie,licorice). likes(darcie,apricots). trace. likes(jake,X), likes(darcie,X). Copyright © 2009 Addison-Wesley. All rights reserved.
777
List Structures Other basic data structure (besides atomic propositions we have already seen): list List is a sequence of any number of elements Elements can be atoms, atomic propositions, or other terms (including other lists) [apple, prune, grape, kumquat] [] (empty list) [X | Y] (head X and tail Y) Copyright © 2009 Addison-Wesley. All rights reserved.
778
Append Example append([], List, List).
append([Head | List_1], List_2, [Head | List_3]) :- append(List_1, List_2, List_3). Copyright © 2009 Addison-Wesley. All rights reserved.
779
Reverse Example reverse([], []). reverse([Head | Tail], List) :-
reverse(Tail, Result), append(Result, [Head], List). Copyright © 2009 Addison-Wesley. All rights reserved.
780
Deficiencies of Prolog
Resolution order control The closed-world assumption The negation problem Intrinsic limitations Copyright © 2009 Addison-Wesley. All rights reserved.
781
Applications of Logic Programming
Relational database management systems Expert systems Natural language processing Copyright © 2009 Addison-Wesley. All rights reserved.
782
Summary Symbolic logic provides the basis for logic programming Logic programs should be nonprocedural Prolog statements are facts, rules, or goals Resolution is the primary activity of a Prolog interpreter Although there are a number of drawbacks with the current state of logic programming, it has been used in a number of areas Copyright © 2009 Addison-Wesley. All rights reserved.
Logic programs should be nonprocedural Prolog statements are facts, rules, or goals Resolution is the primary activity of a Prolog interpreter Although there are a number of drawbacks with the current state of logic programming it has been used in a number of areas Copyright © 2009 Addison-Wesley. All rights reserved.