
The language Component Pascal is the culmination of several decades of research. It is the youngest member of the Algol family of languages. Algol, defined in 1960, was the first high-level language with a readable, structured, and systematically defined syntax. While successful as a notation for mathematical algorithms, it lacked important data types, such as pointers or characters.


In the late sixties, several proposals for an evolutionary successor to Algol were developed. The most successful one was Pascal, defined in 1970 by Prof. Niklaus Wirth at ETH Zürich, the Swiss Federal Institute of Technology. Besides cleaning up or leaving out some of Algol's more obscure features, Pascal added the capability to define new data types out of simpler existing ones. Pascal also supported dynamic data structures; i.e., data structures which can grow and shrink while a program is running.

Pascal received a big boost when ETH released a Pascal compiler that produced simple intermediate code for a virtual machine (P-code), instead of true native code for a particular machine. This simplified porting Pascal to other processor architectures considerably, because only a new P-code interpreter needed to be written for this purpose, not a whole new compiler. One of these porting projects was undertaken at the University of California, San Diego. Remarkably, this implementation (UCSD Pascal) didn't require a large and expensive mainframe computer; it ran on the then-new Apple II personal computers. This gave Pascal a second important boost. The third came when Borland released Turbo Pascal, a fast and inexpensive compiler and integrated development environment for the IBM PC. Later, Borland revived its version of Pascal when it introduced the rapid application development environment Delphi.

Pascal has greatly influenced the design and evolution of many other languages, from Ada to Visual Basic.


In the mid-seventies, inspired by a sabbatical at the Xerox Palo Alto Research Center (PARC), Wirth started a project to develop a new workstation computer. This workstation was to be completely programmable in a high-level language, so the language had to provide direct access to the underlying hardware. Furthermore, it had to support team programming and modern software engineering principles, such as abstract data types. These requirements led to the programming language Modula-2 (1979).

Modula-2 retained the successful features of Pascal, and added a module system as well as a controlled way to circumvent the language's type system when doing low-level programming; e.g., when implementing device drivers. Modules could be added to the operating system at run-time. In fact, the whole operating system consisted of a collection of modules, without a distinguished kernel or similar artefact. Modules could be compiled and loaded separately, with complete type and version checking of their interfaces.

Modula-2 has made inroads in particular into safety-critical areas, such as traffic control systems.

 Simula, Smalltalk, and Cedar

Wirth's interest remained with desktop computers, however, and again an important impulse came from Xerox PARC. PARC was the place where the workstation, the laser printer, the local area network, the bitmap display, and many other enabling technologies were invented. PARC also adopted and popularized several older and barely known technologies, like the mouse, interactive graphics, and object-oriented programming. The latter concept, if not the term, was first applied to a high-level language in Simula (1966), another member of the Algol language family. As its name suggests, Simula used object-orientation primarily for simulation purposes. Xerox PARC's Smalltalk language (1983), however, used it for just about anything. The Smalltalk project also broke new ground in user interface design: the graphical user interface (GUI) as we know it today was developed for the Smalltalk system.

At PARC, these ideas influenced other projects, e.g., the Cedar language, a Pascal-style language. Like Smalltalk and later Oberon, Cedar was not only the name of a language but also of an operating system. The Cedar operating system was impressive and powerful, but also complex and unstable.


The Oberon project was initiated in 1985 at ETH by Wirth and his colleague Jürg Gutknecht. It was an attempt to distill the essence of Cedar into a comprehensive, but still comprehensible, workstation operating system. The resulting system became very small and efficient, working well with only 2 MB of RAM and 10 MB of disk space. An important reason for the small size of the Oberon system was its component-oriented design: instead of integrating all desirable features into one monolithic software colossus, the less frequently used software components (modules) could be implemented as extensions of the core system. Such components were only loaded when they were actually needed, and they could be shared by all applications.

Wirth realized that component-oriented programming required some features of object-oriented programming, such as information hiding, late binding, and polymorphism.

Information hiding was the great strength of Modula-2. Late binding was supported by Modula-2 in the form of procedure variables. However, polymorphism was lacking. For this reason, Wirth added type extension: a record type could be declared as an extension of another record type. An extended type could be used wherever one of its base types might be used.
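In the notation later used by Oberon and Component Pascal, type extension can be sketched as follows (Figure and Circle are illustrative names, not taken from the text):

```
TYPE
   Figure = EXTENSIBLE RECORD     (* base type; marked as extensible *)
      x, y: INTEGER
   END;
   Circle = RECORD (Figure)       (* Circle extends Figure *)
      radius: INTEGER
   END;
```

A Circle has the fields of Figure plus its own, and may be used wherever a Figure is expected, e.g. as a VAR parameter of type Figure.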

But component-oriented programming is more than object-oriented programming. In a component-based system, a component may share its data structures with arbitrary other components, about which it knows nothing. These components usually don't know about each other's existence either. Such mutual ignorance makes the management of dynamic data structures, in particular the correct deallocation of unused memory, a fundamentally more difficult problem than in closed software systems. Consequently, it must be left to the language implementation to find out when memory is no longer used, in order to safely reclaim it for later use. A system service which performs such automatic storage reclamation is called a garbage collector. Garbage collection prevents two of the most elusive and downright dangerous programming errors: memory leaks (not freeing unused memory) and dangling pointers (releasing memory too early). Dangling pointers let one component destroy data structures that belong to other components. Such a violation of type safety must be prevented, because component-based systems may contain many third-party components of unknown quality (e.g., downloaded from the Internet).

While Algol-family languages always had a reputation of being safe, complete type safety (and thus garbage collection) was still a quantum leap forward. It was also the reason why complete compatibility with Modula-2 was not possible. The resulting revision of Modula-2 was named after the system: Oberon.

Oberon's module system, like that of Modula-2, provided information hiding for entire collections of types, not only for individual objects. This made it possible to define and guarantee invariants spanning several cooperating objects. In other words, it allowed developers to invent higher-level safety mechanisms by building on the basic module safety and type safety provided by a good Oberon implementation.

Orthodox object-oriented programming languages such as Smalltalk had neglected both typing (by not supporting static types) and information hiding (by restricting it to objects and classes), which was a major step backwards as far as software engineering is concerned. Oberon reconciled the worlds of object-oriented and modular programming.

As a final requirement of component-oriented programming, it had to be possible to dynamically load new components. In Oberon, the unit of loading was the same as the unit of compilation: a module.

 Component Pascal

In 1992, a cooperation with Prof. H. P. Mössenböck led to a few additions to the original Oberon language ("Oberon-2"), which became the de-facto standard of the language.

In 1997, the ETH spin-off Oberon microsystems, Inc. (with Wirth on its board of directors) made some small extensions to Oberon-2 and called it Component Pascal, to better express its focus (component-oriented programming) and its origin (Pascal). It is the industrial-strength version of Oberon, and heir to the original Pascal and Modula-2. A text describing the changes from Oberon-2 to Component Pascal can be found at www.oberon.ch/docu/component_pascal.html. The complete language report is available at www.oberon.ch/docu/language_report.html.

The main thrust of the enhancements compared to Oberon-2 was to give the designer of a framework (i.e., of module interfaces that define abstract classes for a particular problem domain) more control over the framework's custom safety properties. The benefit is that it becomes easier to ascertain the integrity of a large component-based system, which is particularly important during iterative design cycles when the framework is being developed, and later when the system architecture must be refactored to enable further evolution and maintenance.



Oberon microsystems developed the BlackBox Component Framework starting in 1992 (originally it was called Oberon/F). This component-oriented framework is written in Component Pascal, and simplifies the development of graphical user interface components. It comes bundled with several BlackBox extension components, including a word processor, a visual designer, an SQL database access facility, an integrated development environment, and the Component Pascal run-time system. The complete package is an advanced yet light-weight rapid application development (RAD) tool for components, called BlackBox Component Builder. It is light-weight because it is completely built out of Component Pascal modules - including the kernel with the garbage collector, and the Component Pascal compiler itself. This is a demonstration of the power of component software in general, and of the suitability of Component Pascal in particular.

Component Pascal Language Report

1. Introduction

Component Pascal is Oberon microsystems' refinement of the Oberon-2 language. Oberon microsystems thanks H. Mössenböck and N. Wirth for the friendly permission to use their Oberon-2 report as the basis for this document.

Component Pascal is a general-purpose language in the tradition of Pascal, Modula-2 and Oberon. Its most important features are block structure, modularity, separate compilation, static typing with strong type checking (also across module boundaries), type extension with methods, dynamic loading of modules, and garbage collection.

Type extension makes Component Pascal an object-oriented language. An object is a variable of an abstract data type consisting of private data (its state) and procedures that operate on this data. Abstract data types are declared as extensible records. Component Pascal covers most terms of object-oriented languages by the established vocabulary of imperative languages in order to minimize the number of notions for similar concepts.

Complete type safety and the requirement of a dynamic object model make Component Pascal a component-oriented language.

This report is not intended as a programmer's tutorial. It is intentionally kept concise. Its function is to serve as a reference for programmers. What remains unsaid is mostly left so intentionally, either because it can be derived from stated rules of the language, or because it would require committing to a definition where a general commitment appears unwise.

Appendix A defines some terms that are used to express the type checking rules of Component Pascal. Where they appear in the text, they are written in italics to indicate their special meaning (e.g. the same type).

It is recommended to minimize the use of procedure types and super calls, since they are considered obsolete. They are retained for the time being, in order to simplify the use of existing Oberon-2 code. Support for these features may be reduced in later product releases. In the following text, red stretches denote these obsolete features.

 2. Syntax

An extended Backus-Naur formalism (EBNF) is used to describe the syntax of Component Pascal: Alternatives are separated by |. Brackets [ and ] denote optionality of the enclosed expression, and braces { and } denote its repetition (possibly 0 times). Ordinary parentheses ( and ) are used to group symbols if necessary. Non-terminal symbols start with an upper-case letter (e.g. Statement). Terminal symbols either start with a lower-case letter (e.g. ident), or are written all in upper-case letters (e.g. BEGIN), or are denoted by strings (e.g. ":=").

 3. Vocabulary and Representation

The representation of (terminal) symbols in terms of characters is defined using ISO 8859-1, i.e. the Latin1 extension of the ASCII character set. Symbols are identifiers, numbers, strings, operators, and delimiters. The following lexical rules must be observed: Blanks and line breaks must not occur within symbols (except in comments, and blanks in strings). They are ignored unless they are essential to separate two consecutive symbols. Capital and lower-case letters are considered as distinct.

1. Identifiers are sequences of letters, digits, and underscores. The first character must not be a digit.



ident = (letter | "_") {letter | "_" | digit}.

letter = "A" .. "Z" | "a" .. "z" | "À".."Ö" | "Ø".."ö" | "ø".."ÿ".

digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9".

Examples: x Scan Oberon2 GetSymbol firstLetter

2. Numbers are (unsigned) integer or real constants. The type of an integer constant is INTEGER if the constant value belongs to INTEGER, or LONGINT otherwise (see 6.1). If the constant is specified with the suffix 'H' or 'L', the representation is hexadecimal; otherwise the representation is decimal. The suffix 'H' is used to specify 32-bit constants in the range -2147483648 .. 2147483647. At most 8 significant hex digits are allowed. The suffix 'L' is used to specify 64-bit constants.

A real number always contains a decimal point. Optionally it may also contain a decimal scale factor. The letter E means "times ten to the power of". A real number is always of type REAL.



number = integer | real.

integer = digit {digit} | digit {hexDigit} ( "H" | "L" ).

real = digit {digit} "." {digit} [ScaleFactor].

ScaleFactor = "E" ["+" | "-"] digit {digit}.

hexDigit = digit | "A" | "B" | "C" | "D" | "E" | "F".



Examples:

1234567       INTEGER   1234567
12.3          REAL      12.3
4.567E8       REAL      456700000
0FFFF0000H    INTEGER   -65536
0FFFF0000L    LONGINT   4294901760

3. Character constants are denoted by the ordinal number of the character in hexadecimal notation followed by the letter X.



character = digit {hexDigit} "X".

4. Strings are sequences of characters enclosed in single (') or double (") quote marks. The opening quote must be the same as the closing quote and must not occur within the string. The number of characters in a string is called its length. A string of length 1 can be used wherever a character constant is allowed and vice versa.



string = ' " ' {char} ' " ' | " ' " {char} " ' ".

Examples: "Component Pascal" "Don't worry!" "x"

5. Operators and delimiters are the special characters, character pairs, or reserved words listed below. The reserved words consist exclusively of capital letters and cannot be used as identifiers.

+    :=    ARRAY      IMPORT     RETURN

-    ^     BEGIN      IN         THEN

*    =     BY         IS         TO

/    #     CASE       LOOP       TYPE

~    <     CLOSE      MOD        UNTIL

&    >     CONST      MODULE     VAR

.    <=    DIV        NIL        WHILE

,    >=    DO         OF         WITH

;    ..    ELSE       OR

|    :     ELSIF      OUT

$          END        POINTER

(    )     EXIT       PROCEDURE

[    ]     FOR        RECORD

{    }     IF         REPEAT


6. Comments may be inserted between any two symbols in a program. They are arbitrary character sequences opened by the bracket (* and closed by *). Comments may be nested. They do not affect the meaning of a program.

 4. Declarations and Scope Rules

Every identifier occurring in a program must be introduced by a declaration, unless it is a predeclared identifier. Declarations also specify certain permanent properties of an object, such as whether it is a constant, a type, a variable, or a procedure. The identifier is then used to refer to the associated object.

The scope of an object x extends textually from the point of its declaration to the end of the block (module, procedure, or record) to which the declaration belongs and hence to which the object is local. It excludes the scopes of equally named objects which are declared in nested blocks. The scope rules are:

  1.  No identifier may denote more than one object within a given scope (i.e. no identifier may be declared twice in a block);
  2.  An object may only be referenced within its scope;
  3.  A declaration of a type T containing references to another type T1 may occur at a point where T1 is still unknown. The declaration of T1 must follow in the same block to which T is local;
  4.  Identifiers denoting record fields (see 6.3) or methods (see 10.2) are valid in record designators only.

An identifier declared in a module block may be followed by an export mark (" * " or " - ") in its declaration to indicate that it is exported. An identifier x exported by a module M may be used in other modules, if they import M (see Ch.11). The identifier is then denoted as M.x in these modules and is called a qualified identifier. Variables and record fields marked with " - " in their declaration are read-only (variables and fields) or implement-only (methods) in importing modules.
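As a sketch of how export marks appear in practice (the module and identifier names are illustrative, not from the report):

```
MODULE Counters;
   TYPE
      Counter* = RECORD            (* exported type *)
         value-: INTEGER           (* read-only in importing modules *)
      END;

   PROCEDURE Reset* (VAR c: Counter);   (* exported procedure *)
   BEGIN c.value := 0 END Reset;
END Counters.
```

A module importing Counters could refer to Counters.Counter and Counters.Reset, and could read, but not assign to, the field value.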



Qualident = [ident "."] ident.



IdentDef = ident [" * " | " - "].

The following identifiers are predeclared; their meaning is defined in the indicated sections:

ABS     (10.3)        FALSE       (6.1)

ANYPTR  (6.1)         INTEGER     (6.1)

ANYREC  (6.1)         LEN         (10.3)

ASH     (10.3)        LONG        (10.3)

ASSERT  (10.3)        LONGINT     (6.1)

BITS    (10.3)        MAX         (10.3)

BOOLEAN (6.1)         MIN         (10.3)

BYTE    (6.1)         NEW         (10.3)

CAP     (10.3)        ODD         (10.3)

CHAR    (6.1)         ORD         (10.3)

CHR     (10.3)        REAL        (6.1)

DEC     (10.3)        SET         (6.1)

ENTIER  (10.3)        SHORT       (10.3)

EXCL    (10.3)        SHORTCHAR   (6.1)

HALT    (10.3)        SHORTINT    (6.1)

INC     (10.3)        SHORTREAL   (6.1)

INCL    (10.3)        SIZE        (10.3)

INF     (6.1)         TRUE        (6.1)

5. Constant Declarations

A constant declaration associates an identifier with a constant value.



ConstDeclaration = IdentDef "=" ConstExpression.




A constant expression is an expression that can be evaluated by a mere textual scan without actually executing the program. Its operands are constants (Ch.8) or predeclared functions (Ch.10.3) that can be evaluated at compile time. Examples of constant declarations are:

N = 100
limit = 2*N - 1
fullSet = {MIN(SET) .. MAX(SET)}

 6. Type Declarations

A data type determines the set of values which variables of that type may assume, and the operators that are applicable. A type declaration associates an identifier with a type. In the case of structured types (arrays and records) it also defines the structure of variables of this type. A structured type cannot contain itself.



TypeDeclaration = IdentDef "=" Type.

Type = Qualident | ArrayType | RecordType | PointerType | ProcedureType.




Examples:

Tree = POINTER TO Node

Node = RECORD
   key: INTEGER;
   left, right: Tree
END

CenterTree = POINTER TO CenterNode

CenterNode = RECORD (Node)
   width: INTEGER;
   subnode: Tree
END



6.1 Basic Types

The basic types are denoted by predeclared identifiers. The associated operators are defined in 8.2 and the predeclared function procedures in 10.3. The values of the given basic types are the following:


1. BOOLEAN     the truth values TRUE and FALSE

2. SHORTCHAR   the characters of the Latin1 character set (0X .. 0FFX)

3. CHAR        the characters of the Unicode character set (0X .. 0FFFFX)

4. BYTE        the integers between MIN(BYTE) and MAX(BYTE)

5. SHORTINT    the integers between MIN(SHORTINT) and MAX(SHORTINT)

6. INTEGER     the integers between MIN(INTEGER) and MAX(INTEGER)

7. LONGINT     the integers between MIN(LONGINT) and MAX(LONGINT)

8. SHORTREAL   the real numbers between MIN(SHORTREAL) and MAX(SHORTREAL), the value INF

9. REAL        the real numbers between MIN(REAL) and MAX(REAL), the value INF

10. SET        the sets of integers between 0 and MAX(SET)

Types 4 to 7 are integer types, types 8 and 9 are real types, and together they are called numeric types. They form a hierarchy; the larger type includes (the values of) the smaller type:

BYTE ⊆ SHORTINT ⊆ INTEGER ⊆ LONGINT     SHORTREAL ⊆ REAL

Types 2 and 3 are character types with the type hierarchy:

SHORTCHAR ⊆ CHAR

6.2 Array Types

An array is a structure consisting of a number of elements which are all of the same type, called the element type. The number of elements of an array is called its length. The elements of the array are designated by indices, which are integers between 0 and the length minus 1.



ArrayType = ARRAY [Length {"," Length}] OF Type.

Length = ConstExpression.




A type of the form

ARRAY L0, L1, ..., Ln OF T

is understood as an abbreviation of

ARRAY L0 OF
   ARRAY L1 OF
      ...
         ARRAY Ln OF T

Arrays declared without length are called open arrays. They are restricted to pointer base types (see 6.4), element types of open array types, and formal parameter types (see 10.1). Examples:

ARRAY 10, N OF INTEGER
ARRAY OF CHAR
6.3 Record Types

A record type is a structure consisting of a fixed number of elements, called fields, with possibly different types. The record type declaration specifies the name and type of each field. The scope of the field identifiers extends from the point of their declaration to the end of the record type, but they are also visible within designators referring to elements of record variables (see 8.1). If a record type is exported, field identifiers that are to be visible outside the declaring module must be marked. They are called public fields; unmarked elements are called private fields.



RecordType = RecAttributes RECORD ["("BaseType")"] FieldList {";" FieldList} END.

RecAttributes = [ABSTRACT | EXTENSIBLE | LIMITED].

BaseType = Qualident.

FieldList = [IdentList ":" Type].

IdentList = IdentDef {"," IdentDef}.

The usage of a record type is restricted by the presence or absence of one of the following attributes: ABSTRACT, EXTENSIBLE, and LIMITED.

A record type marked as ABSTRACT cannot be instantiated. No variables or fields of such a type can ever exist. Abstract types are only used as base types for other record types (see below).

Variables of a LIMITED record type can only be allocated inside the module where the record type is defined. The restriction applies to static allocation by a variable declaration (Ch. 7) as well as to dynamic allocation by the standard procedure NEW (Ch. 10.3).

Record types marked as ABSTRACT or EXTENSIBLE are extensible, i.e. a record type can be declared as an extension of such a record type. In the example

T0 = EXTENSIBLE RECORD x: INTEGER END
T1 = RECORD (T0) y: REAL END

T1 is a (direct) extension of T0 and T0 is the (direct) base type of T1 (see App. A). An extended type T1 consists of the fields of its base type and of the fields which are declared in T1. All identifiers declared in the extended record must be different from the identifiers declared in its base type record(s). The base type of an abstract record must be abstract.

Alternatively, a pointer type can be specified as the base type. The record base type of the pointer is used as the base type of the declared record in this case.

Each record is implicitly an extension of the predeclared type ANYREC. ANYREC does not contain any fields and can only be used in pointer and variable parameter declarations.

Summary of attributes:

attribute    extension     allocate

none         no            yes

EXTENSIBLE   yes           yes

ABSTRACT     yes           no

LIMITED      no            in defining module only


Examples of record type declarations:

RECORD
   day, month, year: INTEGER
END

RECORD
   name, firstname: ARRAY 32 OF CHAR;
   age: INTEGER;
   salary: REAL
END

6.4 Pointer Types

Variables of a pointer type P assume as values pointers to variables of some type T. T is called the pointer base type of P and must be a record or array type. Pointer types adopt the extension relation of their pointer base types: if a type T1 is an extension of T, and P1 is of type POINTER TO T1, then P1 is also an extension of P.

PointerType = POINTER TO Type.

If p is a variable of type P = POINTER TO T, a call of the predeclared procedure NEW(p) (see 10.3) allocates a variable of type T in free storage. If T is a record type or an array type with fixed length, the allocation has to be done with NEW(p); if T is an n-dimensional open array type, the allocation has to be done with NEW(p, e0, ..., en-1), where T is allocated with lengths given by the expressions e0, ..., en-1. In either case a pointer to the allocated variable is assigned to p. p is of type P. The referenced variable p^ (pronounced as p-referenced) is of type T. Any pointer variable may assume the value NIL, which points to no variable at all.

All fields or elements of a newly allocated record or array are cleared, which implies that all embedded pointers and procedure variables are initialized to NIL.
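Using the Tree and Node declarations from the Ch. 6 examples, allocation might look like this (a sketch, not part of the report):

```
VAR root: Tree;
...
NEW(root);        (* allocates a Node; all of its fields are cleared *)
root.key := 7     (* root.left and root.right are NIL after NEW *)
```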

The predeclared type ANYPTR is defined as POINTER TO ANYREC. Any pointer to a record type is therefore an extension of ANYPTR. The procedure NEW cannot be used for variables of type ANYPTR.


6.5 Procedure Types

Variables of a procedure type T have a procedure (or NIL) as value. If a procedure P is assigned to a variable of type T, the formal parameter lists (see Ch. 10.1) of P and T must match (see App. A). P must not be a predeclared procedure or a method nor may it be local to another procedure.



ProcedureType = PROCEDURE [FormalParameters].
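A procedure type and a compatible procedure might be sketched as follows (the declaration of Function matches its use in the Ch. 7 examples; Double is an illustrative name):

```
TYPE
   Function = PROCEDURE (x: INTEGER): INTEGER;

PROCEDURE Double (x: INTEGER): INTEGER;
BEGIN
   RETURN 2 * x
END Double;
```

A variable F of type Function may then be assigned Double (F := Double) and called through the variable, e.g. F(3).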

6.6 String Types

Values of a string type are sequences of characters terminated by a null character (0X). The length of a string is the number of characters it contains excluding the null character.

Strings are either constants or stored in an array of character type. There are no predeclared identifiers for string types because there is no need to use them in a declaration.

Constant strings which consist solely of characters in the range 0X..0FFX and strings stored in an array of SHORTCHAR are of type Shortstring, all others are of type String.


7. Variable Declarations

Variable declarations introduce variables by defining an identifier and a data type for them.



VariableDeclaration = IdentList ":" Type.

Record and pointer variables have both a static type (the type with which they are declared - simply called their type) and a dynamic type (the type of their value at run-time). For pointers and variable parameters of record type the dynamic type may be an extension of their static type. The static type determines which fields of a record are accessible. The dynamic type is used to call methods (see 10.2).

Examples of variable declarations (refer to examples in Ch. 6):

i, j, k: INTEGER

x, y: REAL

s: SET

F: Function

a: ARRAY 100 OF REAL

w: ARRAY 16 OF RECORD
      name: ARRAY 32 OF CHAR;
      count: INTEGER
   END

t, c: Tree


8. Expressions

Expressions are constructs denoting rules of computation whereby constants and current values of variables are combined to compute other values by the application of operators and function procedures. Expressions consist of operands and operators. Parentheses may be used to express specific associations of operators and operands.

8.1 Operands

With the exception of set constructors and literal constants (numbers, character constants, or strings), operands are denoted by designators. A designator consists of an identifier referring to a constant, variable, or procedure. This identifier may possibly be qualified by a module identifier (see Ch. 4 and 11) and may be followed by selectors if the designated object is an element of a structure.



Designator = Qualident {"." ident | "[" ExpressionList "]" | "^" | "$" | "(" Qualident ")" | ActualParameters}.

ExpressionList = Expression {"," Expression}.

ActualParameters = "(" [ExpressionList] ")".

If a designates an array, then a[e] denotes that element of a whose index is the current value of the expression e. The type of e must be an integer type. A designator of the form a[e0, e1, ..., en] stands for a[e0][e1]...[en]. If r designates a record, then r.f denotes the field f of r or the method f of the dynamic type of r (Ch. 10.2). If a or r is read-only, then a[e] and r.f are also read-only.

If p designates a pointer, p^ denotes the variable which is referenced by p. The designators p^.f, p^[e], and p^$ may be abbreviated as p.f, p[e], and p$, i.e. record, array, and string selectors imply dereferencing. Dereferencing is also implied if a pointer is assigned to a variable of a record or array type (Ch. 9.1), if a pointer is used as actual parameter for a formal parameter of a record or array type (Ch. 10.1), or if a pointer is used as argument of the standard procedure LEN (Ch. 10.3).

A type guard v(T) asserts that the dynamic type of v is T (or an extension of T), i.e. program execution is aborted, if the dynamic type of v is not T (or an extension of T). Within the designator, v is then regarded as having the static type T. The guard is applicable, if

  1.  v is an IN or VAR parameter of record type or v is a pointer to a record type, and if
  2.  T is an extension of the static type of v.

If the designated object is a constant or a variable, then the designator refers to its current value. If it is a procedure, the designator refers to that procedure unless it is followed by a (possibly empty) parameter list in which case it implies an activation of that procedure and stands for the value resulting from its execution. The actual parameters must correspond to the formal parameters as in proper procedure calls (see 10.1).

If a designates an array of character type, then a$ denotes the null terminated string contained in a. It leads to a run-time error if a does not contain a 0X character. The $ selector is applied implicitly if a is used as an operand of the concatenation (Ch. 8.2.4) or a relational operator (Ch. 8.2.5) or the standard functions LONG and SHORT (Ch. 10.3).

Examples of designators (refer to examples in Ch.7):

i                       (INTEGER)

a[i]                    (REAL)

w[3].name[i]            (CHAR)

t.left.right            (Tree)

t(CenterTree).subnode   (Tree)

w[i].name$              (String)


8.2 Operators

Four classes of operators with different precedences (binding strengths) are syntactically distinguished in expressions. The operator ~ has the highest precedence, followed by multiplication operators, addition operators, and relations. Operators of the same precedence associate from left to right. For example, x-y-z stands for (x-y)-z.



Expression = SimpleExpression [Relation SimpleExpression].

SimpleExpression = ["+" | "-"] Term {AddOperator Term}.

Term = Factor {MulOperator Factor}.

Factor = Designator | number | character | string | NIL | Set | "(" Expression ")" | "~" Factor.

Set = "{" [Element {"," Element}] "}".

Element = Expression [".." Expression].

Relation = "=" | "#" | "<" | "<=" | ">" | ">=" | IN | IS.

AddOperator = "+" | "-" | OR.

MulOperator = "*" | "/" | DIV | MOD | "&".

The available operators are listed in the following tables. Some operators are applicable to operands of various types, denoting different operations. In these cases, the actual operation is identified by the type of the operands. The operands must be expression compatible with respect to the operator (see App. A).


8.2.1 Logical operators

OR   logical disjunction   p OR q   "if p then TRUE, else q"
&    logical conjunction   p & q    "if p then q, else FALSE"
~    negation              ~ p      "not p"

These operators apply to BOOLEAN operands and yield a BOOLEAN result. The second operand of a disjunction is only evaluated if the result of the first is FALSE. The second operand of a conjunction is only evaluated if the result of the first is TRUE.
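This short-circuit evaluation makes guarded access idioms safe. For example (a sketch using the Tree variable t from the Ch. 7 examples):

```
IF (t # NIL) & (t.key = i) THEN i := 0 END
```

Here t.key is only evaluated when t # NIL yields TRUE, so the dereference cannot fail.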


8.2.2 Arithmetic operators








+      sum
-      difference
*      product
/      real quotient
DIV    integer quotient
MOD    modulus



The operators +, -, *, and / apply to operands of numeric types. The type of the result is REAL if the operation is a division (/) or one of the operand types is a real type. Otherwise the result type is LONGINT if one of the operand types is LONGINT, or INTEGER in any other case. If the result of a real operation is too large to be represented as a real number, it is changed to the predeclared value INF with the same sign as the original result. Note that this also applies to 1.0/0.0, but not to 0.0/0.0 which has no defined result at all and leads to a run-time error. When used as monadic operators, - denotes sign inversion and + denotes the identity operation. The operators DIV and MOD apply to integer operands only. They are related by the following formulas:

x = (x DIV y) * y + (x MOD y)

0 <= (x MOD y) < y or 0 >= (x MOD y) > y

Note: x DIV y = ENTIER(x / y)


 x   y   x DIV y   x MOD y

 5   3       1         2
-5   3      -2         1
 5  -3      -2        -1
-5  -3       1        -2


Note that the monadic minus applies to the whole term: -5 DIV 3 is parsed as -(5 DIV 3) = -1, which differs from (-5) DIV 3 = -2.
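As an illustration outside the language itself, Python's floor division `//` and modulo `%` happen to have the same semantics as DIV and MOD (the remainder takes the sign of the divisor), so the table can be checked directly:

```python
# Illustration: Component Pascal's DIV and MOD agree with Python's
# floor division // and modulo %.

table = [
    ( 5,  3,  1,  2),
    (-5,  3, -2,  1),
    ( 5, -3, -2, -1),
    (-5, -3,  1, -2),
]
for x, y, d, m in table:
    assert x // y == d and x % y == m
    assert x == (x // y) * y + (x % y)   # the defining identity

# Unary minus binds to the whole term:
assert (-5) // 3 == -2      # (-5) DIV 3
assert -(5 // 3) == -1      # -5 DIV 3, parsed as -(5 DIV 3)
```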


8.2.3 Set operators





+    union
-    difference                  (x - y = x * (-y))
*    intersection
/    symmetric set difference    (x / y = (x-y) + (y-x))

Set operators apply to operands of type SET and yield a result of type SET. The monadic minus sign denotes the complement of x, i.e. -x denotes the set of integers between 0 and MAX(SET) which are not elements of x. Set operators are not associative ((a+b)-c # a+(b-c)).

A set constructor defines the value of a set by listing its elements between curly brackets. The elements must be integers in the range 0..MAX(SET). A range a..b denotes all integers i with i >= a and i <= b.


8.2.4 String operators


string concatenation

The concatenation operator applies to operands of string types. The resulting string consists of the characters of the first operand followed by the characters of the second operand. If both operands are of type Shortstring the result is of type Shortstring, otherwise the result is of type String.


8.2.5 Relations








=     equal
#     unequal
<     less
<=    less or equal
>     greater
>=    greater or equal
IN    set membership
IS    type test

Relations yield a BOOLEAN result. The relations =, #, <, <=, >, and >= apply to the numeric types, character types, and string types. The relations = and # also apply to BOOLEAN and SET, as well as to pointer and procedure types (including the value NIL). x IN s stands for "x is an element of s". x must be an integer in the range 0..MAX(SET), and s of type SET. v IS T stands for "the dynamic type of v is T (or an extension of T)" and is called a type test. It is applicable if

  1.  v is an IN or VAR parameter of record type or v is a pointer to a record type, and if
  2.  T is an extension of the static type of v.
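The type test resembles a subclass check in object-oriented languages. The Python sketch below is only an analogy (the classes `Tree` and `CenterTree` echo the report's running example; `isinstance` stands in for IS):

```python
# Illustration: v IS T ("the dynamic type of v is T or an extension of T")
# behaves like Python's isinstance against a subclass.

class Tree: pass                  # base record type, as in the examples
class CenterTree(Tree): pass      # extension of Tree

t = CenterTree()                  # dynamic type CenterTree
assert isinstance(t, CenterTree)  # t IS CenterTree yields TRUE
assert isinstance(t, Tree)        # a value of an extension is also a Tree

t = Tree()
assert not isinstance(t, CenterTree)
```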

Examples of expressions (refer to examples in Ch.7):



i DIV 3


~p OR q


(i+j) * (i-j)


s - {8, 9, 13}


i + x


a[i+j] * a[i-j]


(0<=i) & (i<100)


t.key = 0


k IN {i..j-1}


w[i].name$ <= "John"


t IS CenterTree



9. Statements

Statements denote actions. There are elementary and structured statements. Elementary statements are not composed of any parts that are themselves statements. They are the assignment, the procedure call, the return, and the exit statement. Structured statements are composed of parts that are themselves statements. They are used to express sequencing and conditional, selective, and repetitive execution. A statement may also be empty, in which case it denotes no action. The empty statement is included in order to relax punctuation rules in statement sequences.



Statement = [ Assignment | ProcedureCall | IfStatement | CaseStatement | WhileStatement | RepeatStatement | ForStatement | LoopStatement | WithStatement | EXIT | RETURN [Expression] ].

9.1 Assignments

Assignments replace the current value of a variable by a new value specified by an expression. The expression must be assignment compatible with the variable (see App. A). The assignment operator is written as ":=" and pronounced as becomes.



Designator ":=" Expression.

If an expression e of type Te is assigned to a variable v of type Tv, the following happens:

  1.  if Tv and Te are record types, the dynamic and static types of e and v are all equal and all fields of that type are assigned.
  2.  if Tv and Te are pointer types, the dynamic type of v becomes the dynamic type of e;
  3.  if Tv is an array of character type and e is a string of length m < LEN(v), v[i] becomes the i-th character of e for i = 0 .. m-1 and v[m] becomes 0X. It leads to a run-time error if m >= LEN(v). 

Examples of assignments (refer to examples in Ch.7):

i := 0

p := i = j

x := i + 1

k := log2(i+j)

F := log2 (* see 10.1 *)

s := {2, 3, 5, 7, 11, 13}

a[i] := (x+y) * (x-y)

t.key := i

w[i+1].name := "John"

t := c


9.2 Procedure Calls

A procedure call activates a procedure. It may contain a list of actual parameters which replace the corresponding formal parameters defined in the procedure declaration (see Ch. 10). The correspondence is established by the positions of the parameters in the actual and formal parameter lists. There are two kinds of parameters: variable and value parameters.

If a formal parameter is a variable parameter, the corresponding actual parameter must be a designator denoting a variable. If it denotes an element of a structured variable, the component selectors are evaluated when the formal/actual parameter substitution takes place, i.e. before the execution of the procedure. If a formal parameter is a value parameter, the corresponding actual parameter must be an expression. This expression is evaluated before the procedure activation, and the resulting value is assigned to the formal parameter (see also 10.1).



ProcedureCall = Designator [ActualParameters].

ActualParameters = "(" [ExpressionList] ")".


WriteInt(i*2+1) (* see 10.1 *)


t.Insert("John") (* see 11 *)


9.3 Statement Sequences

Statement sequences denote the sequence of actions specified by the component statements which are separated by semicolons.



Statement {";" Statement}.

9.4 If Statements



IfStatement = IF Expression THEN StatementSequence
{ELSIF Expression THEN StatementSequence}
[ELSE StatementSequence]
END.

If statements specify the conditional execution of guarded statement sequences. The Boolean expression preceding a statement sequence is called its guard. The guards are evaluated in sequence of occurrence, until one evaluates to TRUE, whereafter its associated statement sequence is executed. If no guard is satisfied, the statement sequence following the symbol ELSE is executed, if there is one.


IF (ch >= "A") & (ch <= "Z") THEN ReadIdentifier

ELSIF (ch >= "0") & (ch <= "9") THEN ReadNumber

ELSIF (ch = "'") OR (ch = '"') THEN ReadString

ELSE SpecialCharacter
END



9.5 Case Statements

Case statements specify the selection and execution of a statement sequence according to the value of an expression. First the case expression is evaluated, then that statement sequence is executed whose case label list contains the obtained value. The case expression must be of an integer or character type that includes the values of all case labels. Case labels are constants, and no value must occur more than once. If the value of the expression does not occur as a label of any case, the statement sequence following the symbol ELSE is selected, if there is one, otherwise the program is aborted.
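The selection rule, including label ranges and the abort on a missing label without ELSE, can be sketched in Python (the `classify` function and its return values are illustrative assumptions):

```python
# Illustration: a CASE over a character with label ranges and an optional
# ELSE part; without ELSE, a value matching no label aborts the program
# (modeled here by raising an exception).

def classify(ch, has_else=True):
    if "A" <= ch <= "Z":
        return "identifier"        # "A" .. "Z": ReadIdentifier
    elif "0" <= ch <= "9":
        return "number"            # "0" .. "9": ReadNumber
    elif ch in ("'", '"'):
        return "string"            # "'", '"': ReadString
    elif has_else:
        return "special"           # ELSE SpecialCharacter
    raise RuntimeError("no matching case label: program aborted")

assert classify("Q") == "identifier"
assert classify("7") == "number"
assert classify("'") == "string"
assert classify("%") == "special"
```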



CaseStatement = CASE Expression OF Case {"|" Case} [ELSE StatementSequence] END.



Case = [CaseLabelList ":" StatementSequence].



CaseLabels {"," CaseLabels}.



ConstExpression [".." ConstExpression].



"A" .. "Z": ReadIdentifier

| "0" .. "9": ReadNumber

| "'", '"': ReadString

ELSE SpecialCharacter



9.6 While Statements

While statements specify the repeated execution of a statement sequence while the Boolean expression (its guard) yields TRUE. The guard is checked before every execution of the statement sequence.



WhileStatement = WHILE Expression DO StatementSequence END.



WHILE i > 0 DO i := i DIV 2; k := k + 1 END

WHILE (t # NIL) & (t.key # i) DO t := t.left END


9.7 Repeat Statements

A repeat statement specifies the repeated execution of a statement sequence until a condition specified by a Boolean expression is satisfied. The statement sequence is executed at least once.
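The "at least once" property distinguishes REPEAT from WHILE. A minimal Python sketch (the helper `repeat_until` is an assumption for illustration):

```python
# Illustration: REPEAT S UNTIL e executes S at least once, because the
# condition is tested only after each iteration.

def repeat_until(body, until):
    while True:
        body()
        if until():
            break

runs = []
repeat_until(lambda: runs.append(1), lambda: True)  # condition true at once
assert len(runs) == 1                               # body still ran once
```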



RepeatStatement = REPEAT StatementSequence UNTIL Expression.

9.8 For Statements

A for statement specifies the repeated execution of a statement sequence while a progression of values is assigned to an integer variable called the control variable of the for statement.



FOR ident ":=" Expression TO Expression [BY ConstExpression]
DO StatementSequence END.

The statement

FOR v := beg TO end BY step DO statements END

is equivalent to

temp := end; v := beg;

IF step > 0 THEN

   WHILE v <= temp DO statements; v := v + step END

ELSE

   WHILE v >= temp DO statements; v := v + step END

END


temp has the same type as v. step must be a nonzero constant expression. If step is not specified, it is assumed to be 1.
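The stated expansion can be written out directly. In the Python sketch below, `for_statement` is an illustrative helper implementing exactly the two while loops above, checked against the examples that follow:

```python
# Illustration: the expansion of FOR v := beg TO end BY step DO ... END
# into the equivalent pre-tested while loops.

def for_statement(beg, end, step, statements):
    assert step != 0              # step must be a nonzero constant expression
    temp, v = end, beg            # temp := end; v := beg
    if step > 0:
        while v <= temp:
            statements(v); v = v + step
    else:
        while v >= temp:
            statements(v); v = v + step

seen = []
for_statement(79, 1, -1, seen.append)    # FOR i := 79 TO 1 BY -1
assert seen[0] == 79 and seen[-1] == 1 and len(seen) == 79

total = []
for_statement(0, 79, 1, total.append)    # FOR i := 0 TO 79
assert total == list(range(80))
```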


FOR i := 0 TO 79 DO k := k + a[i] END

FOR i := 79 TO 1 BY -1 DO a[i] := a[i-1] END


9.9 Loop Statements

A loop statement specifies the repeated execution of a statement sequence. It is terminated upon execution of an exit statement within that sequence (see 9.10).



LoopStatement = LOOP StatementSequence END.




LOOP
   ReadInt(i);
   IF i < 0 THEN EXIT END;
   WriteInt(i)
END



Loop statements are useful to express repetitions with several exit points or cases where the exit condition is in the middle of the repeated statement sequence.


9.10 Return and Exit Statements

A return statement indicates the termination of a procedure. It is denoted by the symbol RETURN, followed by an expression if the procedure is a function procedure. The type of the expression must be assignment compatible (see App. A) with the result type specified in the procedure heading (see Ch.10).

Function procedures require the presence of a return statement indicating the result value. In proper procedures, a return statement is implied by the end of the procedure body. Any explicit return statement therefore appears as an additional (probably exceptional) termination point.

An exit statement is denoted by the symbol EXIT. It specifies termination of the enclosing loop statement and continuation with the statement following that loop statement. Exit statements are contextually, although not syntactically, associated with the loop statement that contains them.


9.11 With Statements

With statements execute a statement sequence depending on the result of a type test and apply a type guard to every occurrence of the tested variable within this statement sequence.



WithStatement = WITH Guard DO StatementSequence {"|" Guard DO StatementSequence}
[ELSE StatementSequence] END.



Qualident ":" Qualident.

If v is a variable parameter of record type or a pointer variable, and if it is of a static type T0, the statement

WITH v: T1 DO S1 | v: T2 DO S2 ELSE S3 END

has the following meaning: if the dynamic type of v is T1, then the statement sequence S1 is executed where v is regarded as if it had the static type T1; else if the dynamic type of v is T2, then S2 is executed where v is regarded as if it had the static type T2; else S3 is executed. T1 and T2 must be extensions of T0. If no type test is satisfied and if an else clause is missing the program is aborted.
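This cascade of type tests can be sketched with an isinstance chain in Python (the classes `T0`, `T1`, `T2` and the return values "S1".."S3" are illustrative stand-ins for the statement sequences):

```python
# Illustration: WITH v: T1 DO S1 | v: T2 DO S2 ELSE S3 END as a chain of
# dynamic type tests; a missing ELSE aborts (modeled by raising).

class T0: pass
class T1(T0): pass          # T1 and T2 must be extensions of T0
class T2(T0): pass

def with_statement(v, has_else=True):
    if isinstance(v, T1):
        return "S1"         # v regarded as having static type T1
    elif isinstance(v, T2):
        return "S2"         # v regarded as having static type T2
    elif has_else:
        return "S3"
    raise RuntimeError("no type test satisfied: program aborted")

assert with_statement(T1()) == "S1"
assert with_statement(T2()) == "S2"
assert with_statement(T0()) == "S3"
```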


WITH t: CenterTree DO i := t.width; c := t.subnode END


10. Procedure Declarations

A procedure declaration consists of a procedure heading and a procedure body. The heading specifies the procedure identifier and the formal parameters. For methods it also specifies the receiver parameter and the attributes (see 10.2). The body contains declarations and statements. The procedure identifier is repeated at the end of the procedure declaration.

There are two kinds of procedures: proper procedures and function procedures. The latter are activated by a function designator as a constituent of an expression and yield a result that is an operand of the expression. Proper procedures are activated by a procedure call. A procedure is a function procedure if its formal parameters specify a result type. The body of a function procedure must contain a return statement which defines its result.

All constants, variables, types, and procedures declared within a procedure body are local to the procedure. Since procedures may be declared as local objects too, procedure declarations may be nested. The call of a procedure within its declaration implies recursive activation.

Local variables whose types are pointer types or procedure types are initialized to NIL before the body of the procedure is executed.

Objects declared in the environment of the procedure are also visible in those parts of the procedure in which they are not concealed by a locally declared object with the same name.



ProcedureHeading ";" [ ProcedureBody ident ].



ProcedureHeading = PROCEDURE [Receiver] IdentDef [FormalParameters] MethAttributes.



ProcedureBody = DeclarationSequence [BEGIN StatementSequence] END.



DeclarationSequence = {CONST {ConstantDeclaration ";"} |
TYPE {TypeDeclaration ";"} |
VAR {VariableDeclaration ";"} }
{ProcedureDeclaration ";" | ForwardDeclaration ";"}.



PROCEDURE " ^ " [Receiver] IdentDef [FormalParameters] MethAttributes.

If a procedure declaration specifies a receiver parameter, the procedure is considered to be a method of the type of the receiver (see 10.2). A forward declaration serves to allow forward references to a procedure whose actual declaration appears later in the text. The formal parameter lists of the forward declaration and the actual declaration must match (see App. A) and the names of corresponding parameters must be equal.

 10.1 Formal Parameters

Formal parameters are identifiers declared in the formal parameter list of a procedure. They correspond to actual parameters specified in the procedure call. The correspondence between formal and actual parameters is established when the procedure is called. There are two kinds of parameters, value and variable parameters, the latter indicated in the formal parameter list by the presence of one of the keywords VAR, IN, or OUT. Value parameters are local variables to which the value of the corresponding actual parameter is assigned as an initial value. Variable parameters correspond to actual parameters that are variables, and they stand for these variables. Variable parameters can be used for input only (keyword IN), output only (keyword OUT), or input and output (keyword VAR). Inside the procedure, input parameters are read-only. Like local variables, output parameters of pointer types and procedure types are initialized to NIL. Other output parameters must be considered as undefined prior to the first assignment in the procedure. The scope of a formal parameter extends from its declaration to the end of the procedure block in which it is declared. A function procedure without parameters must have an empty parameter list. It must be called by a function designator whose actual parameter list is empty too. The result type of a procedure can be neither a record nor an array.
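Python has no VAR parameters, but the distinction between copying a value and aliasing the caller's variable can be illustrated with a one-element list standing in for a variable cell (the helpers `double_value` and `double_var` are assumptions for demonstration):

```python
# Illustration: value parameters receive a copy of the actual parameter;
# VAR parameters stand for the caller's variable itself.

def double_value(x):       # value parameter: x is a local copy
    x = x * 2
    return x

def double_var(cell):      # VAR parameter, modeled as a mutable cell
    cell[0] = cell[0] * 2

i = 10
double_value(i)
assert i == 10             # caller's variable is unchanged

v = [10]
double_var(v)
assert v[0] == 20          # caller's variable is updated
```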



"(" [FPSection {";" FPSection}] ")" [":" Type].



FPSection = [VAR | IN | OUT] ident {"," ident} ":" Type.

Let f be the formal parameter and a the corresponding actual parameter. If f is an open array, then a must be array compatible to f and the lengths of f are taken from a. Otherwise a must be parameter compatible to f (see App. A).

Examples of procedure declarations:


PROCEDURE ReadInt (VAR x: INTEGER);

   VAR i: INTEGER; ch: CHAR;

BEGIN

   i := 0; Read(ch);

   WHILE ("0" <= ch) & (ch <= "9") DO

       i := 10*i + (ORD(ch)-ORD("0")); Read(ch)

   END;

   x := i

END ReadInt


PROCEDURE WriteInt (x: INTEGER); (*0 <= x < 100000*)

   VAR i: INTEGER; buf: ARRAY 5 OF INTEGER;

BEGIN

   i := 0;

   REPEAT buf[i] := x MOD 10;

       x := x DIV 10; INC(i)

   UNTIL x = 0;

   REPEAT DEC(i);

       Write(CHR(buf[i] + ORD("0")))

   UNTIL i = 0

END WriteInt





PROCEDURE WriteString (IN s: ARRAY OF CHAR);

   VAR i: INTEGER;

BEGIN

   i := 0;

   WHILE (i < LEN(s)) & (s[i] # 0X) DO

       Write(s[i]); INC(i)

   END

END WriteString



PROCEDURE log2 (x: INTEGER): INTEGER;

   VAR y: INTEGER; (*assume x > 0*)

BEGIN

   y := 0;

   WHILE x > 1 DO x := x DIV 2; INC(y) END;

   RETURN y

END log2


PROCEDURE Modify (VAR n: Node);



END Modify


10.2 Methods

Globally declared procedures may be associated with a record type declared in the same module. The procedures are said to be methods bound to the record type. The binding is expressed by the type of the receiver in the heading of a procedure declaration. The receiver may be either a VAR or IN parameter of record type T or a value parameter of type POINTER TO T (where T is a record type). The method is bound to the type T and is considered local to it.



ProcedureHeading = PROCEDURE [Receiver] IdentDef [FormalParameters] MethAttributes.



"(" [VAR | IN] ident ":" ident ")".




If a method M is bound to a type T0, it is implicitly also bound to any type T1 which is an extension of T0. However, if a method M' (with the same name as M) is declared to be bound to T1, this overrides the binding of M to T1. M' is considered a redefinition of M for T1. The formal parameters of M and M' must match, except if M is a function returning a pointer type. In the latter case, the function result type of M' must be an extension of the function result type of M (covariance) (see App. A). If M and T1 are exported (see Chapter 4) M' must be exported too.

The following attributes are used to restrict and document the desired usage of a method: NEW, ABSTRACT, EMPTY, and EXTENSIBLE.

NEW must be used on all newly introduced methods and must not be used on redefining methods. The attribute helps to detect inconsistencies between a record and its extension when one of the two is changed without updating the other.

Abstract and empty method declarations consist of a procedure header only. Abstract methods are never called. A record containing abstract methods must be abstract. A method redefined by an abstract method must be abstract. Calling an empty method has no effect. Empty methods may not return function results and may not have OUT parameters. A record containing new empty methods must be extensible or abstract. A method redefined by an empty method must be empty or abstract. Abstract or empty methods are usually redefined (implemented) in a record extension. They may not be called via super calls. A concrete (nonabstract) record extending an abstract record must implement all abstract methods bound to the base record.

Concrete methods (which contain a procedure body) are either extensible or final (no attribute). A final method cannot be redefined in a record extension. A record containing extensible methods must be extensible or abstract.

If v is a designator and M is a method, then v.M denotes that method M which is bound to the dynamic type of v. Note, that this may be a different method than the one bound to the static type of v. v is passed to M's receiver according to the parameter passing rules specified in Chapter 10.1.

If r is a receiver parameter declared with type T, r.M^ denotes the method M bound to the base type of T (super call). In a forward declaration of a method the receiver parameter must be of the same type as in the actual method declaration. The formal parameter lists of both declarations must match (App. A) and the names of corresponding parameters must be equal.
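Dynamic binding and the super call have close analogs in most object-oriented languages. The following Python sketch (illustrative; the class and method names echo the report's Tree example, with `super()` standing in for `t.Insert^`) shows both:

```python
# Illustration: v.M invokes the method bound to the dynamic type of v;
# a super call (r.M^ in Component Pascal) invokes the base type's method.

class Tree:
    def insert(self, node):
        return ["Tree.Insert"]

class CenterTree(Tree):
    def insert(self, node):             # redefinition of Insert
        done = super().insert(node)     # corresponds to t.Insert^ (node)
        return done + ["CenterTree.Insert"]

t = CenterTree()                        # dynamic type CenterTree
# The method bound to the dynamic type runs, then delegates upward:
assert t.insert(None) == ["Tree.Insert", "CenterTree.Insert"]
```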

Methods marked with " - " are "implement-only" exported. Such a method can be redefined in any importing module but can only be called within the module containing the method declaration.


PROCEDURE (t: Tree) Insert (node: Tree), NEW, EXTENSIBLE;

   VAR p, father: Tree;


BEGIN

   p := t;

   REPEAT father := p;

       IF node.key = p.key THEN RETURN END;

       IF node.key < p.key THEN p := p.left ELSE p := p.right END

   UNTIL p = NIL;

   IF node.key < father.key THEN father.left := node

   ELSE father.right := node

   END;

   node.left := NIL; node.right := NIL

END Insert


PROCEDURE (t: CenterTree) Insert (node: Tree); (*redefinition*)



BEGIN

   t.Insert^ (node) (* calls the Insert method of Tree *)

END Insert


PROCEDURE (obj: Object) Draw (w: Window), NEW, ABSTRACT;


PROCEDURE (obj: Object) Notify (e: Event), NEW, EMPTY;


10.3 Predeclared Procedures

The following table lists the predeclared procedures. Some are generic procedures, i.e. they apply to several types of operands. v stands for a variable, x and y for expressions, and T for a type.

Function procedures


Name         Argument type                      Result type                       Function

ABS(x)       numeric type                       type of x                         absolute value
ASH(x, y)    x, y: integer type                 LONGINT                           arithmetic shift (x * 2^y)
BITS(x)      integer type                       SET                               {i | ODD(x DIV 2^i)}
CAP(x)       character type                     type of x                         x is a Latin-1 letter: corresponding capital letter
CHR(x)       integer type                       CHAR                              character with ordinal number x
ENTIER(x)    real type                          LONGINT                           largest integer not greater than x
LEN(v, x)    v: array; x: integer constant      INTEGER                           length of v in dimension x (first dimension = 0)
LEN(v)       array type                         INTEGER                           equivalent to LEN(v, 0)
LEN(x)       string                             INTEGER                           length of string (not counting 0X)
LONG(x)      SHORTINT / INTEGER / SHORTREAL     INTEGER / LONGINT / REAL          identity
             / SHORTCHAR / Shortstring          / CHAR / String
MAX(T)       T = basic type                     T                                 maximum value of type T
             T = SET                            INTEGER                           maximum element of a set
MAX(x, y)    integer type                       integer type                      the larger of x and y
             real type                          real type                         the larger of x and y
MIN(T)       T = basic type                     T                                 minimum value of type T
             T = SET                            INTEGER                           minimum element of a set
MIN(x, y)    integer type                       integer type                      the smaller of x and y
             real type                          real type                         the smaller of x and y
ODD(x)       integer type                       BOOLEAN                           x MOD 2 = 1
ORD(x)       CHAR                               INTEGER                           ordinal number of x
             SHORTCHAR                          SHORTINT                          ordinal number of x
             SET                                INTEGER                           (SUM i: i IN x: 2^i)
SHORT(x)     INTEGER / LONGINT / REAL           SHORTINT / INTEGER / SHORTREAL    identity (truncation possible)
             / CHAR / String                    / SHORTCHAR / Shortstring
SIZE(T)      any type                           INTEGER                           number of bytes required by T

SIZE cannot be used in constant expressions because its value depends on the actual compiler implementation.


Proper procedures


Name                  Argument types                                Function

ASSERT(x)             x: Boolean expression                         terminate program execution if not x
ASSERT(x, n)          x: Boolean expression; n: integer constant    terminate program execution if not x
DEC(v)                integer type                                  v := v - 1
DEC(v, n)             v, n: integer type                            v := v - n
EXCL(v, x)            v: SET; x: integer type                       v := v - {x}
HALT(n)               integer constant                              terminate program execution
INC(v)                integer type                                  v := v + 1
INC(v, n)             v, n: integer type                            v := v + n
INCL(v, x)            v: SET; x: integer type                       v := v + {x}
NEW(v)                pointer to record or fixed array              allocate v^
NEW(v, x0, ..., xn)   v: pointer to open array; xi: integer type    allocate v^ with lengths x0 .. xn

In ASSERT(x, n) and HALT(n), the interpretation of n is left to the underlying system implementation.


10.4 Finalization

A predeclared method named FINALIZE is associated with each record type as if it were declared to be bound to the type ANYREC:
PROCEDURE (VAR rec: ANYREC) FINALIZE-, NEW, EMPTY;


The FINALIZE procedure can be implemented for any pointer type. The method is called at some unspecified time after an object of that type (or a base type of it) has become unreachable via other pointers (not globally anchored anymore) and before the object is deallocated.

It is not recommended to reanchor an object in its finalizer and the finalizer is not called again when the object repeatedly becomes unreachable. Multiple unreachable objects are finalized in an unspecified order.
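Finalization after an object becomes unreachable exists in many managed environments. As an illustration only, Python's standard `weakref.finalize` shows the same pattern; note that, as in Component Pascal, the exact moment of the call is implementation-dependent (the immediate call after `del` relies on CPython's reference counting):

```python
# Illustration: a finalizer that runs at some point after the object
# becomes unreachable, using the standard library's weakref.finalize.
import weakref

class Node:
    pass

finalized = []
n = Node()
weakref.finalize(n, finalized.append, "Node finalized")

assert finalized == []    # object still reachable: finalizer not yet called
del n                     # drop the last reference
# In CPython the object is collected immediately and the finalizer runs:
assert finalized == ["Node finalized"]
```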


11. Modules

A module is a collection of declarations of constants, types, variables, and procedures, together with a sequence of statements for the purpose of assigning initial values to the variables. A module constitutes a text that is compilable as a unit.



MODULE ident ";"
[ImportList] DeclarationSequence
[BEGIN StatementSequence]
[CLOSE StatementSequence]
END ident ".".



ImportList = IMPORT Import {"," Import} ";".



[ident ":="] ident.

The import list specifies the names of the imported modules. If a module A is imported by a module M and A exports an identifier x, then x is referred to as A.x within M. If A is imported as B := A, the object x must be referenced as B.x. This allows short alias names in qualified identifiers. A module must not import itself. Identifiers that are to be exported (i.e. that are to be visible in client modules) must be marked by an export mark in their declaration (see Chapter 4).

The statement sequence following the symbol BEGIN is executed when the module is added to a system (loaded), which is done after the imported modules have been loaded. It follows that cyclic import of modules is illegal. Individual exported procedures can be activated from the system, and these procedures serve as commands.

Variables declared in a module are cleared prior to the execution of the module body. This implies that all pointer or procedure typed variables are initialized to NIL.

The statement sequence following the symbol CLOSE is executed when the module is removed from the system.




MODULE Trees;   (* exports: Tree, Node, Insert, Search, Write, Init *)


  IMPORT StdLog;



   TYPE

      Tree* = POINTER TO Node;

      Node* = RECORD (* exports read-only: Node.name *)

         name-: POINTER TO ARRAY OF CHAR;

         left, right: Tree

      END;



  PROCEDURE (t: Tree) Insert* (name: ARRAY OF CHAR), NEW;

     VAR p, father: Tree;


  BEGIN

     p := t;

     REPEAT father := p;

        IF name = p.name THEN RETURN END;

        IF name < p.name THEN p := p.left ELSE p := p.right END

     UNTIL p = NIL;

     NEW(p); p.left := NIL; p.right := NIL;

     NEW(p.name, LEN(name$)+1); p.name^ := name$;

     IF name < father.name THEN father.left := p ELSE father.right := p END

  END Insert;


  PROCEDURE (t: Tree) Search* (name: ARRAY OF CHAR): Tree, NEW;

     VAR p: Tree;


  BEGIN

     p := t;

     WHILE (p # NIL) & (name # p.name) DO

        IF name < p.name THEN p := p.left ELSE p := p.right END


     END;

     RETURN p

  END Search;


  PROCEDURE (t: Tree) Write*, NEW;


  BEGIN

     IF t.left # NIL THEN t.left.Write END;

     StdLog.String(t.name); StdLog.Ln;

     IF t.right # NIL THEN t.right.Write END

  END Write;


  PROCEDURE Init* (t: Tree);


  BEGIN

     NEW(t.name, 1);

     t.name[0] := 0X; t.left := NIL; t.right := NIL

  END Init;



  StdLog.String("Trees loaded"); StdLog.Ln


  StdLog.String("Trees removed"); StdLog.Ln

END Trees.


Appendix A: Definition of Terms

Character types     SHORTCHAR, CHAR

Integer types       BYTE, SHORTINT, INTEGER, LONGINT

Real types          SHORTREAL, REAL

Numeric types       integer types, real types

String types        Shortstring, String

Basic types         BOOLEAN, SET, character types, numeric types

Same types

Two variables a and b with types Ta and Tb are of the same type if

  1.  Ta and Tb are both denoted by the same type identifier, or
  2.  Ta is declared in a type declaration of the form Ta = Tb, or
  3.  a and b appear in the same identifier list in a variable, record field, or formal parameter declaration.


Equal types

Two types Ta and Tb are equal if

  1.  Ta and Tb are the same type, or
  2.  Ta and Tb are open array types with equal element types, or
  3.  Ta and Tb are procedure types whose formal parameter lists match.
  4.  Ta and Tb are pointer types with equal base types.


Matching formal parameter lists

Two formal parameter lists match if

  1.  they have the same number of parameters, and
  2.  they have either equal function result types or none, and
  3.  parameters at corresponding positions have equal types, and
  4.  parameters at corresponding positions are both either value, IN, OUT, or VAR parameters.


Type inclusion

Numeric and character types include (the values of) smaller types of the same class according to the following hierarchies:
LONGINT >= INTEGER >= SHORTINT >= BYTE

REAL >= SHORTREAL

CHAR >= SHORTCHAR




Type extension (base type)

Given a type declaration Tb = RECORD (Ta) ... END, Tb is a direct extension of Ta, and Ta is a direct base type of Tb. A type Tb is an extension of a type Ta (Ta is a base type of Tb) if

  1.  Ta and Tb are the same types, or
  2.  Tb is a direct extension of an extension of Ta, or
  3.  Ta is of type ANYREC.

If Pa = POINTER TO Ta and Pb = POINTER TO Tb, Pb is an extension of Pa (Pa is a base type of Pb) if Tb is an extension of Ta.


Assignment compatible

An expression e of type Te is assignment compatible with a variable v of type Tv if one of the following conditions hold:

  1.  Te and Tv are equal types and neither abstract nor extensible record types nor open array types;
  2.  Te and Tv are numeric or character types and Tv includes Te;
  3.  Te and Tv are pointer types and Te is an extension of Tv;
  4.  Tv is a pointer or a procedure type and e is NIL;
  5.  Tv is an integer type and e is a constant expression whose value is contained in Tv;
  6.  Tv is an array of CHAR, Te is String or Shortstring, and LEN(e) < LEN(v);
  7.  Tv is an array of SHORTCHAR, Te is Shortstring, and LEN(e) < LEN(v);
  8.  Tv is a procedure type and e is the name of a procedure whose formal parameters match those of Tv.


Array compatible

An actual parameter a of type Ta is array compatible with a formal parameter f of type Tf if

  1.  Tf and Ta are equal types, or
  2.  Tf is an open array, Ta is any array, and their element types are array compatible, or
  3.  Tf is an open array of CHAR and Ta is String, or
  4.  Tf is an open array of SHORTCHAR and Ta is Shortstring.


Parameter compatible

An actual parameter a of type Ta is parameter compatible with a formal parameter f of type Tf if

  1.  Tf and Ta are equal types, or
  2.  f is a value parameter and Ta is assignment compatible with Tf, or
  3.  f is an IN or VAR parameter and Tf and Ta are record types and Ta is an extension of Tf, or
  4.  f is an OUT parameter and Tf and Ta are pointer types and Tf is an extension of Ta.


Expression compatible

For a given operator, the types of its operands are expression compatible if they conform to the following table. The first matching line gives the correct result type. Type T1 must be an extension of type T0:


Operator         First operand                 Second operand                Result type

+ - * DIV MOD    integer type                  integer type                  integer type
+ - * /          numeric type                  numeric type                  REAL
                 SET                           SET                           SET
+                string type                   string type                   String
OR & ~           BOOLEAN                       BOOLEAN                       BOOLEAN
= # < <= > >=    numeric type                  numeric type                  BOOLEAN
                 character type                character type                BOOLEAN
                 string type                   string type                   BOOLEAN
= #              BOOLEAN                       BOOLEAN                       BOOLEAN
                 SET                           SET                           BOOLEAN
                 NIL, pointer type T0 or T1    NIL, pointer type T0 or T1    BOOLEAN
                 procedure type T, NIL         procedure type T, NIL         BOOLEAN
IN               integer type                  SET                           BOOLEAN
IS               T0 type                       type T1                       BOOLEAN

Appendix B: Syntax of Component Pascal




MODULE ident ";" [ImportList] DeclSeq [BEGIN StatementSeq] [CLOSE StatementSequence] END ident ".".



ImportList = IMPORT [ident ":="] ident {"," [ident ":="] ident} ";".



DeclSeq = { CONST {ConstDecl ";" } | TYPE {TypeDecl ";"} | VAR {VarDecl ";"}}
{ProcDecl ";" | ForwardDecl ";"}.



IdentDef "=" ConstExpr.



IdentDef "=" Type.



IdentList ":" Type.



ProcDecl = PROCEDURE [Receiver] IdentDef [FormalPars] MethAttributes
[";" DeclSeq [BEGIN StatementSeq] END ident].



PROCEDURE "^" [Receiver] IdentDef [FormalPars].



"(" [FPSection {";" FPSection}] ")" [":" Type].



FPSection = [VAR | IN | OUT] ident {"," ident} ":" Type.



"(" [VAR | IN] ident ":" ident ")".



Type = Qualident
| ARRAY [ConstExpr {"," ConstExpr}] OF Type
| [ABSTRACT | EXTENSIBLE | LIMITED] RECORD ["("Qualident")"] FieldList {";" FieldList} END
| POINTER TO Type
| PROCEDURE [FormalPars].



[IdentList ":" Type].



Statement {";" Statement}.



[ Designator ":=" Expr
| Designator ["(" [ExprList] ")"]
| IF Expr THEN StatementSeq {ELSIF Expr THEN StatementSeq} [ELSE StatementSeq] END
| CASE Expr OF Case {"|" Case} [ELSE StatementSeq] END
| WHILE Expr DO StatementSeq END
| REPEAT StatementSeq UNTIL Expr
| FOR ident ":=" Expr TO Expr [BY ConstExpr] DO StatementSeq END
| LOOP StatementSeq END
| WITH Guard DO StatementSeq {"|" Guard DO StatementSeq} [ELSE StatementSeq] END
| EXIT | RETURN [Expr] ].



[CaseLabels {"," CaseLabels} ":" StatementSeq].



ConstExpr [".." ConstExpr].



Qualident ":" Qualident.






ConstExpr = Expr.

Expr = SimpleExpr [Relation SimpleExpr].



["+" | "-"] Term {AddOp Term}.



Term = Factor {MulOp Factor}.



Factor = Designator | number | character | string | NIL | Set | "(" Expr ")" | "~" Factor.



"{" [Element {"," Element}] "}".



Expr [".." Expr].



"=" | "#" | "<" | "<=" | ">" | ">=" | IN | IS.



"+" | "-" | OR.



" * " | "/" | DIV | MOD | "&".



Qualident {"." ident | "[" ExprList "]" | " ^ " | "$" | "(" Qualident ")" | "(" [ExprList] ")"}.



Expr {"," Expr}.



IdentDef {"," IdentDef}.



[ident "."] ident.



ident [" * " | "-"].


Appendix C: Domains of Basic Types






Type        Domain

SHORTCHAR   0X .. 0FFX

CHAR        0X .. 0FFFFX

BYTE        -128 .. 127

SHORTINT    -32768 .. 32767

INTEGER     -2147483648 .. 2147483647

LONGINT     -9223372036854775808 .. 9223372036854775807

SHORTREAL   -3.4E38 .. 3.4E38, INF (32-bit IEEE format)

REAL        -1.8E308 .. 1.8E308, INF (64-bit IEEE format)

SET         set of 0 .. 31


Appendix D: Mandatory Requirements for Environment

The Component Pascal definition implicitly relies on three fundamental assumptions.

  1.  There exists some run-time type information that makes it possible to check the dynamic type of an object. This is necessary to implement type tests and type guards.
  2.  There is no DISPOSE procedure. Memory cannot be deallocated manually, since this would introduce the safety problem of dangling pointers, i.e. premature deallocation. Except for those embedded systems where no dynamic memory is used or where it can be allocated once and never needs to be released, an automatic garbage collector is required.
  3.  Modules and at least their exported procedures (commands) and exported types must be retrievable dynamically. If necessary, this may cause modules to be loaded. The programming interface used to load modules or to access the mentioned meta information is not defined by the language, but the language compiler needs to preserve this information when generating code.

Except for fully linked applications where no modules will ever be added at run-time, a linking loader for modules is required. Embedded systems are important examples of applications that can be fully linked.

An implementation that doesn't fulfill these compiler and environment requirements is not compliant with Component Pascal.

Object Lesson 1 - objects and inheritance


Welcome to the world of object-oriented programming. You are about to embark on learning a method which is fun, elegant, easy to understand, powerful and comprehensive. The intention of this course is to emphasise the fun as much as, if not more than, the other characteristics. After all, what is the point of learning something that makes life duller? So if we sometimes see examples which appear off the wall, do not worry. There is a serious side to object-oriented systems too.

Object modelling is useful for designing computer systems, whether those systems are to be implemented in object-oriented languages or not. Most designs are likely to need more than an object-oriented language; for example, they may also need a database. Therefore, do not think that this is a wasted exercise if you cannot convince your manager to let you use C++, Smalltalk or whatever flavour of the month language is out there.

Object modelling also has a use outside of the design of computer systems. It is an excellent analysis method, and it can be used in business process reengineering, in social science research, or in any complex environment where it is important to capture the structure and functionality of some world.

Course Aims - Creating Beautiful Systems

This course aims to provide you with

  1.  A simple, clear, analysis and design notation.
  2.  A good basic understanding of the concepts of object-oriented systems.
  3.  A method for construction of analyses and designs.
  4.  Some discussion of the implementation of designs.

Why Design?

Even the most professional programmers feel the temptation to sit down and produce code at the earliest possible moment. Therein lie many of the ills of the software engineering industry. Design is a process which involves

  •  communication
  •  creativity
  •  negotiation
  •  agreement

It is a human process, to produce products for human consumption. Too often the communication, negotiation and agreement aspects are left out.

Object modelling provides a notation which is clear, consistent, and which can be used to communicate within a software development team, and with the clients and other third parties with which the team needs to deal.

Design methods

Traditionally systems were constructed using the waterfall method. This was based on the idea that clients would formally agree a requirements document. A design would then be put together, which would be further agreed. Then the system would be implemented, and then there would be an endless process of maintenance.

Modern ideas move away from this. Iterative methods are considered more appropriate for many system development approaches. This still follows the notion of analysis, design and implementation, but on a cyclical basis, where subsequent cycles build on earlier cycles. There are variants on this, and we will discuss later how to drive and control this process.


We begin at the beginning. The world is made of objects. Just open your eyes and ears. They are out there. Bank customers, students, cats, elephants, cars, balls of string, atoms, molecules, tubs of ice cream, Madonna, stars, bureaucrats, Robin Hood. The world is built of objects. Objects are built of smaller objects, and so ad infinitum. Objects combine to make bigger objects. We already live in an object-oriented world.

The first thing an object analyst must do is to remove the scales from his or her eyes. Object modelling consists of looking for objects. Of course, there has to be some boundary. Even sitting at my desk I can see more objects than I could reasonably list. But that is where the beauty of object modelling comes in. It uses observation.

Objects can be described by their attributes and operations. Attributes are the changeable characteristics of an object. Cats have colour, size, weight and a preference for either Kit-E-Kat or Whiskers. Operations are the things an object does or can have done to it. Cats can catch mice, eat, miaow, worm up to owners, and be stroked. In our notation we draw an object such as a cat like this.

The name is shown at the top. The attributes are listed underneath. The operations are listed below that. Actually, strictly speaking, this is a class diagram. But we will explain that later.

In an object model, all data is stored as attributes of some object. The attributes of an object are manipulated by the operations. The only way of getting at the attributes is through an operation. Attributes may sometimes be objects in their own right (more of that later).

In an object model, all functionality is defined by operations. Objects may use each other's operations, but the only legal way one object can manipulate another object is through an operation. An operation may inform, say "mass of ball", or change the state of an object, say "throw ball".
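The rule that attributes are reached only through operations can be sketched in code. This Python fragment is illustrative only: the cat's attributes and operations come from the text above, while the method bodies are invented. It shows both kinds of operation, one that informs and one that changes state:

```python
class Cat:
    """Attributes are kept private; the only access is via operations."""

    def __init__(self, colour, weight):
        self._colour = colour      # attribute
        self._weight = weight      # attribute

    # An informing operation: reports state without changing it.
    def get_weight(self):
        return self._weight

    # A state-changing operation: eating increases weight.
    def eat(self, grams):
        self._weight += grams / 1000.0

    def miaow(self):
        return "miaow"
```

A client never touches _weight directly: Cat("tabby", 4.0).eat(500) changes the weight only through the eat operation.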

Object modelling is about finding objects, their attributes and their operations, and tying them together in an object model. Nothing more. Here are some more objects:

Some of these objects may seem jokey, but they could reasonably be part of a system, be it a computer game or a multimedia story. Do not be constrained to build those dull systems that most software engineers drag out. Object modelling can be used to design lots of things.

By now you should be getting the idea that object modelling is, at its simplest level, very straightforward. The trick comes in knowing what objects are appropriate, and what their appropriate attributes and operations are. It is a question of focus. We will consider some ways of controlling focus later in the course.

How do objects relate to other concepts in design methods?

Remember entity-relationship models? SSADM, JSD and so on have a notion of entity. These are really objects. All we are doing in object modelling is relabelling entity modelling. However, we put the emphasis on capturing and grouping together both functions and data (operations and attributes in our terminology). That is the elegant simplicity of object modelling. We will look at object models later which look remarkably like entity-relationship models (because they are).

We will now look at one powerful way of arranging objects - inheritance hierarchies.


Often we will find that there are objects which have something in common. It is then useful to create an abstract object which groups together the common features, and to use inheritance to define the original objects. For example, consider our two fairy story creatures:

Now we can see that they both have the same operations "arrive" and "meet". We can therefore create an abstract creature:

which has the common operations. We can then draw the original objects grouped under the abstract object as follows

Inheritance means that all the attributes and operations of an abstract object are available in the specialised object below. The triangle in the diagram indicates inheritance. The point of the triangle indicates where operations and attributes are inherited from.

Now we can enter a debate about whether nose, teeth and appetite are characteristics of all creatures or not. If they are, we can revise the diagram above as:

By this device Red Riding Hood also has appetite, nose and teeth as operations. The latter two may be of relevance when she goes to charm the woodcutter, provided they are petite and pearly bright respectively.
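One way to render the fairy-story hierarchy in code is sketched below. Python is used for illustration; the class names and the arrive/meet operations follow the text, but the method bodies and constructor arguments are invented:

```python
class Creature:
    """Abstract creature: operations common to all fairy-story creatures."""

    def __init__(self, nose, teeth, appetite):
        self.nose = nose           # shared attributes, per the revised diagram
        self.teeth = teeth
        self.appetite = appetite

    def arrive(self, place):
        return f"{type(self).__name__} arrives at {place}"

    def meet(self, other):
        return f"{type(self).__name__} meets {type(other).__name__}"


class Wolf(Creature):
    def eat(self, victim):         # specific to the wolf
        return f"Wolf eats {victim}"


class RedRidingHood(Creature):
    def leave_home(self):          # specific to Red Riding Hood
        return "Red Riding Hood leaves home"
```

Neither subclass redefines arrive or meet; both inherit them from the abstract creature.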

Okay, so let's have some more practical examples for those of you who have to do real work. Firstly, the frighteningly dull student-lecturer example. You can do the same with the equally dull employee-customer example.

Or more interesting (only a bit):

Inheritance can pass down an arbitrarily deep hierarchy. A slightly more complicated hierarchy is given below to describe objects which can be juggled.

Now we see that a tennis ball has attributes diameter and weight, and operations catch, pick up, drop, throw and bounce. Objects accumulate all the attributes and operations of the objects above them in an inheritance hierarchy.

The big deal about inheritance

Inheritance is considered good for software re-use and for clarity of description.


When new objects are created which are similar to other objects, they can have many of their attributes and operations ready defined. Let us suppose we now introduce Grandma into our fairy story hierarchy.

Here we get a Grandma who already has an appetite, nose and teeth and who can arrive and meet. Actually these are not in the normal scope of the fairy story, but the principle should be clear.

We might be writing a simple geometry program:

Now to add circles we simply put in another object under 2D shape.

So the circle object need not define the area and position attributes or the get area operation.
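A minimal sketch of this arrangement follows. Python is used for illustration; the class and member names come from the diagram's wording, while the numeric details are invented:

```python
class Shape2D:
    """Common attributes and operations for all 2-D shapes."""

    def __init__(self, area, position):
        self.area = area            # attribute shared by every 2-D shape
        self.position = position

    def get_area(self):             # operation inherited by all subclasses
        return self.area


class Circle(Shape2D):
    """Adds only what is specific to circles; area, position and
    get_area come ready-defined from Shape2D."""

    def __init__(self, radius, position):
        super().__init__(3.14159 * radius ** 2, position)
        self.radius = radius
```

Circle(2.0, (0, 0)).get_area() works without Circle defining any of the inherited members, which is the re-use the lesson describes.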

It is now possible to buy or obtain ready-built class hierarchies written in object-oriented languages which can be extended in this way to produce a new application.

Designing complex class hierarchies takes time, and good design requires experience. But the basic principles outlined above, with some intuitive guidelines, are the basis for good, re-usable designs.

Re-use can be viewed from two directions. Components can be re-used, which is a sort of bottom-up approach to re-use. Or designs can be re-used. Both elements are important in the production of re-usable software.


The notion of breaking the world down into hierarchies of types is not as old as the hills, but it goes back at least as far as the Ancient Greeks: all men are mortal, Socrates is a man, so Socrates is mortal. The reductionist, classification approach is embedded in Western thought, and should be natural for most people to follow.

Experience shows that describing things using hierarchies is an easy and comprehensive way of communicating both structure and functionality.

Summary of this lesson

  •  Objects are a natural way of representing things.
  •  Objects are described by their attributes and their operations.
  •  Objects can be organised in inheritance hierarchies.


  1.  Describe two of the following objects in terms of their attributes and operations: motor car, sheep, kite, hospital, elephant, garden, school, bacon, teddy bear, bank customer, bus.
  2.  Devise a simple inheritance hierarchy for two of the following: geometry, sports teams, politicians, rodents, viral infections, restaurants.

Object Lesson 2 - Relationships and Object Models


We can make up objects out of other objects. This is known as aggregation. The behaviour of the bigger object is defined by the behaviour of its component parts, separately and in conjunction with each other. You will have met a similar idea in program decomposition. Here is a simple example of a juggler:

A juggler has two hands and two feet. He or she uses hands to catch, drop, pick up and throw a ball, and perhaps from time to time scratch his or her head. He or she may also kick a ball with his or her foot.

By analysing our juggler object and breaking it down into component objects, we now have a better understanding of our object. Also, we have discovered two new operations, scratch head and kick.

The diamond in the diagram indicates that one object is made up of another object. The numbers are indicative of how many - more will be said about this shortly. Now the behaviour of a juggler is entirely defined by the behaviour of his or her hands and feet. Of course, real jugglers are made of quite a bit more, but for the purposes of considering their juggling skills we can focus on just the bits of them that are involved in juggling.

Hands and feet could be broken down into their constituent parts, say palms and fingers, soles and toes. However, that does not seem to help us to understand juggling, so the decomposition above is probably enough.
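The juggler aggregate might be sketched as follows. This is an illustrative Python fragment, not part of the text; the operations are the ones listed above, and the juggle method body is invented:

```python
class Hand:
    def catch(self, thing):
        return f"catches {thing}"

    def throw(self, thing):
        return f"throws {thing}"

    def scratch_head(self):
        return "scratches head"


class Foot:
    def kick(self, thing):
        return f"kicks {thing}"


class Juggler:
    """Aggregate: a juggler is made up of two hands and two feet."""

    def __init__(self):
        self.hands = [Hand(), Hand()]
        self.feet = [Foot(), Foot()]

    # The juggler's behaviour is defined by the behaviour of its parts.
    def juggle(self, ball):
        return [self.hands[0].throw(ball), self.hands[1].catch(ball)]
```

The aggregate exposes juggling as its own behaviour while the work is actually done by the component hands, just as the diamond notation indicates.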

Let us look at another example.

A car engine lubricant is made up of a number of base oils blended with an additive package. The additive package consists of one or more detergents to keep engine surfaces clean, one or more dispersants to suspend particles in the oil to be carried to the filter, one or more anti-oxidants to slow up the thermal decay of the oil, and a viscosity improver to control the viscosity of the oil at different temperatures. A research scientist will experiment with different oils by running engines containing the oils and analysing the effect of the engines on the oils.

Let us look at another example.

For the purposes of assessing the care provision for someone, it is necessary to know their assets, any disabilities they have, any disease they have, any carers they have. Assets are made up of liquid assets, such as cash in the bank, and non-liquid assets such as their home. Carers may be family carers or care packages provided by various services. A care package may be made up of a number of social care packages, such as home helps, and health care packages such as GP service and hospital services.

One of the important analysis and design tools we have is the break-down of complex objects into their constituent parts. It provides a meaningful and sensible decomposition, and it provides scope for re-use of components. Further, the constituent components are often easier to design than large, complex components - this is the thesis on which the early ideas of structured programming were based.


The behaviour of an object which is made up of other objects is usually defined by the behaviour of the smaller objects. For example

To start a car, you start the engine. Thus the start operation on the car involves calling the start operation on the engine. This is known as delegation. The engine then will switch on the ignition system, switch on the starter motor then switch off the starter motor. This is further delegation. To stop the car, there will be a call to stop the engine, which in turn will make a further call to switch off the ignition.

You may read elsewhere about the benefits of multiple inheritance. Most of the features of multiple inheritance can be simulated using delegation, with safer consequences. However, the arguments for and against multiple inheritance (inheriting from more than one parent) are lengthy and can be side-stepped for now.


Objects can be related in other ways than by inheritance and aggregation. Any relationship between real world objects can be modelled: cats eat canaries, dogs bite postmen, the woodcutter murders the wolf, cars run over little old ladies, employees work for organisations, patients visit hospitals, patients stay in hospitals.

One to one relationships

In a one-to-one relationship, one object is associated with exactly one of its related objects. This is modelled by a straight line drawn between the objects. If the relationship is one-way, then an arrow is used to indicate the direction. The line can be labelled.

Thus a man marries one woman (at a time) and a woman marries one man (at a time). A cat eats one canary (before being battered to death by the little old lady who owned the canary). Canaries do not (in general) eat cats, so the eats relationship is one way.

One to many relationships

Sometimes one object can be related to many objects. This is indicated by different marks at the end of the line.

A player plays for one football team. There are at least 11 players for a given football team. Football teams do not play for players.

Zero or more suitors court the Princess. The black dot at the end of the line indicates zero or more.

Before an adequate suitor comes along, as is well known, a princess will kiss at least one frog, and possibly many more if she gets a taste for them. Frogs, being well mannered creatures and not wishing to appear in the gossip columns, never let more than one Princess kiss them.

Many to many relationships

Sometimes objects at either end of a relationship may be related to many objects at the other end.

A dog may bite zero or more postmen. A postman may be bitten by zero or more dogs.

A lubricant is recommended for at least one engine. An engine has at least one lubricant recommended for it.
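In code, these multiplicities usually become references and collections. Here is an illustrative Python sketch of the one-to-many player/team association from above (the sign operation is an invented name):

```python
class FootballTeam:
    def __init__(self, name):
        self.name = name
        self.players = []          # "many" end: a collection of players


class Player:
    def __init__(self, name):
        self.name = name
        self.team = None           # "one" end: a single reference


def sign(team, player):
    """Maintain both ends of the association together."""
    player.team = team             # a player plays for exactly one team
    team.players.append(player)    # a team has many players
```

A many-to-many association, such as dogs biting postmen, would keep a collection at both ends instead of a single reference on one side.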

Object Models

We now have the notation to describe quite complicated systems. The process of object-oriented analysis and design is one of elaborating an object model, increasing its detail and scope until enough is known to construct a computer system (if indeed that is what is wanted).

Here is a simple model for the Red Riding Hood story

Another example is one to describe patient referrals by GPs to specialists.

Another example is for a lift system:

The object model is the principal output of an analysis and design process. The distinction between analysis and design is much greyer in object-oriented development. Essentially it is one of detail. Analysis usually omits concerns about how a system is to be developed, and some of the objects may not be fully decomposed.

The object model is the central pillar of an analysis or design. It defines the computable elements. The dynamic and functional models discussed later all must result in objects, attributes and operations being defined in the object model. It is comprehensive and complete in terms of defining functionality, specifying not only the data but also the operations on the data.


Construct an object model for one of the following: the football league, star wars, a personnel system, Cinderella, an automatic washing machine, a video hire shop, a kitchen.

Object Lesson 3

Analysis - the rudiments of an approach

The first reaction to any project with any complexity is "how do we get started?". Where do you begin? The starting point for object-oriented analysis is to identify candidate objects and their relationships. In fact, this is the be-all and end-all of object-oriented analysis. All we are going to do in the rest of the course is give you some tools to look for objects. But for the present, we will rely on native wit. Begin by looking at the real world. Open your eyes and your ears.

The first stage can be a simple brain-storming of the possible objects. One method is to go through all the nouns in any documentation about the world you are analysing, and consider these as candidate objects. Let us suppose that you are analysing a system for a motor museum. The system is to produce a multi-media guide to the museum.

A preliminary list of objects might be:

car, bus, vehicle, number plate, exhibit, manufacturer, date of manufacture, value, position, weight, size, photograph, tools, shop, garage, ticket, owner, history 

No doubt we could go on. However, the above list is probably enough to illustrate our needs.

Removing synonyms and/or generalising

The first stage is to try and group together items which are either just different names for the same thing or which share a common generalisation. The obvious thing above is to examine the following group:

car, bus, vehicle, number plate, exhibit, photograph, tools, garage 

It would seem that these could all be grouped together under the object exhibit. The question is, can these all be considered to be one object? In this case, the answer is probably yes. In the museum the behaviour of cars, photographs, and garages is not radically different, so we can group them together. We can differentiate between the individual objects on the basis of an attribute, say type.

Our first object is therefore:

We are then left with the list of candidate objects:

exhibit, manufacturer, date of manufacture, value, position, weight, size, shop, ticket, owner, history 

Look for attributes

Attributes are themselves objects, but with trivial behaviour - essentially all you do is record their value. Considering the above list, the following are probably attributes of exhibit:

date of manufacture, value, position, weight, size, owner 

So we have the exhibit object as:

And we are left with the candidate objects:

exhibit, manufacturer, shop, ticket, history 

We might have considered manufacturer as an attribute, but it may be useful to separate out manufacturer information later in the design. So we keep it for the time being. The time to fold it in as an attribute is when we find it has no discernible behaviour of its own.

Irrelevant objects

Sometimes an object is irrelevant because of what you are designing the system for. Here the ticket object may be very relevant for a sales desk system, but not for a multi-media system. Likewise, the shop is irrelevant. So we strike these out and we are left with the following objects.

exhibit, manufacturer, history 

We have these objects potentially linked in the following way.

Both exhibits and manufacturers may have histories. By factoring out manufacturers, the system may be able to trace from an exhibit not only its history, but also that of its manufacturer, and maybe allow someone to search for all the objects in the museum held by a given manufacturer.

The fleshing out of the objects with attributes and operations can be developed hereon in. A first cut of this might be:

The process of development

The approach to development recommended here is an iterative one. It involves repeated refinement of the object model. The process needs to be controlled by an appropriate project management process, involving reviews and checkpointing.

There are a variety of techniques for driving this cycle.


The design of the system progresses by the construction of a series of prototypes. The prototypes are constructed using the most productive tools in terms of speed of development. The final system, when constructed, is functionally equivalent to the final prototype, but it is implemented in the most suitable technology for use of the system.

For example, a patient monitoring system, requiring lots of graphical displays from a digital feed, might be prototyped in a spreadsheet because the spreadsheet has very powerful graphics. However, when the system is built it would have to be implemented to run on a specific computer with a limited display and limited amount of memory, without a spreadsheet license. Using iterative design methods, a series of prototypes could be developed using a spreadsheet until functionality is finally agreed. The final prototype would then be used as an active specification for the final system which might be written in C.

Prototyping is useful in the following contexts

  •  at the beginning of a project
  •  in research-oriented projects where there needs to be some exploration of different ways of solving problems
  •  where there is a need to clearly define interfaces
  •  where the client needs reassurance that progress is being made

The weaknesses of prototyping are:

  •  the prototype is often taken on as a final system
  •  there can be difficulty explaining a long lag between first prototype and final system
  •  inevitably there will be some differences between the final system and the final prototype because of different interface mechanisms

Evolutionary development

This process involves a staged implementation of the system. Analysis and design of the next stage is after successful implementation of the previous stage. Evolutionary development is useful if:

  •  a partial implementation of the proposed system is of some use
  •  it is not replacing another system which cannot be integrated appropriately at each stage of development

The weaknesses of evolutionary development are:

  •  disruption caused by each stage of implementation
  •  difficulty controlling the end-point of development


Computer systems development is constrained by money, time and personnel available to develop, implement and install. The idea of boxing is to drive the iterative process of development by checkpointing at key stages according to one of the three constraints.

Time-boxing involves setting a time limit on each iteration of the cycle. As the time limit nears, the system is implemented, irrespective of whether the planned functionality is fully implemented. The system is reviewed, and the plans for the next stage take into account what has been achieved and what has been learnt.

Money-boxing is similar to time-boxing, except that the reviews are triggered by expenditure limits. Personnel-boxing involves controlling the process using the time of certain personnel.

Boxing has an advantage of driving the system, and forcing changes of plan as the system development progresses.


Rather than produce a contract defining all functionality to be delivered in a cycle, it is possible instead to construct a charter. A charter prioritises the functionality which should be delivered in the cycle. The charter can over-specify what is expected in the cycle, in case progress is faster than anticipated.

Why object modelling lends itself to iterative methods

The problem with traditional systems developments has been that notations differ in each of the phases of development. There is then a translation problem and consistency problem at each phase transition. Consequently changes upstream have always been resisted.

Object modelling uses the same notation for both analysis and design, and if object-oriented languages and databases are used, the notation persists through to implementation. Design is essentially a fleshing out of the analysis, by further refining and adding detail to the objects produced by the analysis, and by adding new objects, say for the purposes of providing interfaces.

Object modelling, by using a consistent notation in each phase, results in more productive CASE tools which can generate the code outlines for the implementation.

Lesson 4 - Dynamic Modelling - Event traces

Dynamic Modelling

Dynamic modelling tries to capture how objects behave and how they interact. In this way, we can find new operations, attributes and relationships for the object model. Dynamic models are perhaps the most effective way of uncovering the behaviour of systems. We shall meet functional modelling, later, which is another method of uncovering operations and attributes.

We shall look at dynamic modelling as a form of story telling. Do not think this trivialises it. The plot of a good novel is at least as complicated as the most intricate of computer programs, though usually less baroque (at least since Dickens). The approach we are going to adopt is to look at the stories in the world we are analysing, call them scenarios (just so we have some jargon), and then see how these stories affect each of the individual objects (which for the novelist would be characters and artefacts).

Jacobson talks of "Use Cases". They are essentially the same. Unfortunately the name implies too much system design at the initial stages of analysis.

Event traces - building scenarios

As we said that we were going to treat dynamic modelling as a form of plot construction for stories, let us begin by looking at a trace of events for Red Riding Hood - we will do some serious work later. If we strip down the things that happen to Red Riding Hood in the story, we find that she does the following:

  1.  leaves home
  2.  meets the Wolf
  3.  arrives at Grandma's
  4.  is threatened by the Wolf
  5.  leaves Grandma's
  6.  meets the Woodcutter
  7.  arrives at Grandma's

And no doubt lives happily ever after, wrapped in the arms of the woodcutter. We have focussed purely on the things that Red Riding Hood does, not the things she witnesses. Thus all the stuff about "what big teeth" can be omitted. The facts (as Mr Gradgrind of Hard Times, Dickens' prototype logical positivist, would have said) are all that are to be considered.

This is known as a scenario. We can construct other scenarios. For Grandma it would be:

  1.  Wolf arrives at Grandma's house
  2.  Wolf eats Grandma
  3.  Woodcutter cuts open wolf and Grandma escapes

Personally I have always thought this a little suspect.

Now we can open up the objects by examining the events in relation to the objects using an event-trace diagram. Here is an event trace diagram from Red Riding Hood's perspective.

The object interaction diagram here is the formal notation. Down the left hand side we can list the actions or events in a scenario. The thick vertical bar to the right of the events is the system boundary. Hence the trigger to Red Riding Hood leaving home is outside of the story - perhaps her mother nagged her into it. The vertical lines indicate objects. The arrows represent the interactions between the objects - Red Riding Hood is the protagonist in all this. The labels on the arrows are the operations.

Of course, we do not need an elaborate CASE tool to represent this. A simple table would do:

Event                        Objects involved

RRH leaves home              Red Riding Hood
RRH meets wolf in wood       Red Riding Hood, Wolf
RRH arrives at Grandma's     Red Riding Hood, Grandma's House
RRH runs away from wolf      Red Riding Hood, Wolf
RRH meets Woodcutter         Red Riding Hood, Woodcutter
RRH returns to Grandma's     Red Riding Hood, Grandma's House
Well, let us look at a more traditional example.

Let us consider a patient being admitted to hospital for treatment of a simple hernia.

  •  Patient arrives at reception
  •  Patient gives personal details and is given a form
  •  Patient goes to ward and hands in form
  •  Patient is interviewed by nurse and gives medical details
  •  Patient is interviewed by anaesthetist and gives the same medical details
  •  Patient is examined by consultant
  •  Patient is given a pre-med injection
  •  Patient is taken to surgery
  •  Patient is given anaesthetic
  •  Patient is operated on
  •  Patient is returned to ward
  •  Patient is examined by duty doctor
  •  Patient is examined by nurse
  •  Patient is examined by consultant
  •  Patient is discharged

A simple event trace might be as follows. To fit on the diagram, anaesthetists and consultants have been lumped together as doctors.

The anaesthetist's story is a bit different:

  •  Anaesthetist interviews patient
  •  Anaesthetist arrives at operating theatre
  •  Anaesthetist anaesthetises patient
  •  Anaesthetist monitors patient
  •  Anaesthetist releases patient to ward

Medical practitioners reading this, forgive any errors and omissions. We could go on and look at the story of the consultant, a nurse, or even the hospital bed.

We are trying to pick up the bare bones of stories. Stories tell us what things there are in the world, and how they interact. It is the basis of a good analysis. The simple list considerably simplifies interviewing and recording. We do not consider all cases, alternatives, and complex repetitions. We work through individual cases. Complexity can be introduced later.

Of course an individual may have many stories. The patient story varies with each disease and with any complications. The nurse story of caring for an individual patient depends on the patient's problems. And indeed an individual may be in the middle of a number of stories all at the same time, say a nurse caring for a number of patients and running the staff coffee club. We look for the simple sequences. It is the merging of those sequences which makes the world look a messy and complicated place.

Analysing the world is like unravelling a number of threads which have become wound together. The picking out of individual threads stops the feeling that the world is impossible to comprehend. Taking each thread at a time, the overall pattern begins to make sense. And we do not get buried in confusing knots and misleading patterns.

Here is a story of an initial analysis session:

  •  Analyst contacts manager
  •  Analyst arranges interview
  •  Analyst meets manager
  •  Analyst asks manager the first thing she does in a morning
  •  Analyst asks manager the next thing she does
  •  ..... and so on
  •  Analyst ends interview
  •  Analyst looks for ends of threads
  •  Analyst records possible threads

Of course, this could be quite time-consuming. Visits to the coffee machine, chats with colleagues and so on would be left out. Only the gross detail of the manager's activities would be recorded, such as:

  •  Check answerphone messages
  •  Check diary notes
  •  Check mail
  •  Attend safety review meeting
  •  Carry out staff appraisal interview
  •  Lunch with salesperson
  •  Organise workshop
  •  Write report
  •  Review bids for a tender

And some of these actions are part of bigger, longer stories. The focus of the analysis will determine which of these threads needs to be examined in detail at a later stage.

Does all this lead to computer programs? The answer is, it can do, but not necessarily. However, to reassure those with aspirations to encode the whole world on their PC, here is an example as part of the design for a washing machine controller. A boil-wash may operate like this:

  •  Lock washing machine door
  •  Open hot water valve (Note: start of wash program)
  •  Switch on drum tumble
  •  Close hot water valve
  •  Set thermostat cut-out to 105 degrees
  •  Set thermostat cut-in to 100 degrees
  •  Switch on heater
  •  Switch off heater
  •  Switch off drum tumble
  •  Open drain valve
  •  Turn on pump
  •  Start drum spin
  •  Stop drum spin
  •  Turn off pump
  •  Close drain valve
  •  Open cold water valve (Note: start of first rinse)
  •  Switch on drum tumble
  •  Close cold water valve
  •  Switch off drum tumble
  •  Open drain valve
  •  Turn on pump
  •  Start drum spin
  •  Stop drum spin
  •  Turn off pump
  •  Close drain valve
  •  Open cold water valve (Note: start of second rinse)
  •  Switch on drum tumble
  •  Close cold water valve
  •  Switch off drum tumble
  •  Open drain valve
  •  Turn on pump
  •  Start drum spin
  •  Stop drum spin
  •  Turn off pump
  •  Close drain valve
  •  Unlock door

Pretty tedious, I know, but a comprehensive explanation of how a washing machine operates on a particular wash. It leaves out a lot of things, such as timing information, but that can be added in later if necessary. And, of course, the hot wash, warm wash and cold wash cycles will be similar, but equally long-winded. Note also that there are some stories hidden behind this. One even duller story is that of the drum. A single tumble story is:
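As an illustrative sketch (in Python, which is not part of the original text), the boil-wash story could be held as plain data, with the repeated rinse written out only once. The step names are taken from the list above; the function names and structure are my assumptions:

```python
# Hypothetical sketch: a wash programme as a flat list of controller steps.

def rinse_phase():
    """One cold rinse: fill while tumbling, drain, spin."""
    return [
        "open cold water valve", "switch on drum tumble",
        "close cold water valve", "switch off drum tumble",
        "open drain valve", "turn on pump",
        "start drum spin", "stop drum spin",
        "turn off pump", "close drain valve",
    ]

def boil_wash_programme():
    steps = ["lock door"]
    steps += [
        "open hot water valve", "switch on drum tumble",
        "close hot water valve",
        "set thermostat cut-out 105", "set thermostat cut-in 100",
        "switch on heater", "switch off heater",
        "switch off drum tumble",
        "open drain valve", "turn on pump",
        "start drum spin", "stop drum spin",
        "turn off pump", "close drain valve",
    ]
    steps += rinse_phase()   # first rinse
    steps += rinse_phase()   # second rinse
    steps.append("unlock door")
    return steps
```

Holding the programme as data keeps the controller logic (lock, step through, unlock) separate from the recipe for a particular wash, which is what makes the hot, warm and cold cycles easy to add.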

  •  Set motor direction clockwise
  •  Turn on motor
  •  Turn off motor
  •  Set motor direction anticlockwise
  •  Turn on motor
  •  Turn off motor

And drums, being the goldfish of the washing machine world, will repeat this story endlessly until told to stop.
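The drum's endless story maps naturally onto a generator that cycles through one tumble until told to stop. This is a hypothetical sketch, not part of the original design:

```python
import itertools

def tumble_steps():
    """The drum's story: one tumble, repeated endlessly until told to stop."""
    single_tumble = [
        "set motor direction clockwise", "turn on motor", "turn off motor",
        "set motor direction anticlockwise", "turn on motor", "turn off motor",
    ]
    # itertools.cycle repeats the six-step story forever; the consumer
    # decides when to stop asking for steps.
    for step in itertools.cycle(single_tumble):
        yield step
```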

Progressing the analysis with event traces

At first it can be a little disconcerting to approach problem-solving in this way. Where is the flow of control? This seems too simple. The answer is that we eat a banquet one mouthful at a time. We climb mountains one step at a time. Great floods arrive in raindrops. There is no wisdom in trying to do everything at once.

Stories written down like this are a wonderful basis for discussion. People quickly relate to them and feel they can make changes. Their very simplicity is their power. Once the idea is grasped, it works very well. And above all, we have got away from the jargon and mysticism of computer system design-speak. Theology is fine for theologians, but not for the layman. Talk the jargon in private, in the cloisters of the system development team.

So, the first step of analysis is to go out and collect stories. Lots of them. Call them scenarios to please your IT manager (or whatever manager has commissioned you). Collect them until you have the complete recipe book for an organisation, or the part of an organisation you are interested in. Then, when you have the recipes, you can cook the dishes and prepare the banquet.

Work like a detective. Ask what happens first. Ask what happens next. Sit patiently through the arm waving and the elaborate descriptions - some of them might be worth recording for later use, so use your judgement. Remove conjecture. Keep out the emotive elements. And then, when you have a simple list of instructions, ask your interviewee if he/she agrees with the simple list. Explain that you are after the bare bones, and that all the wonderful detail you have collected will be used later.

What do you do with the event traces?

As you develop an event trace you will find the following:

  •  There are objects missing from your object model - so add them.
  •  There are operations missing from your object model - so add them.
  •  There may be attributes you have overlooked - so add them.
  •  Perhaps there are objects and operations which you thought at first were appropriate, but now look unused - so remove them.

This follows the philosophy that everything in object modelling is to do with expanding the object model.

Dynamic modelling - state diagrams

Computer systems are built from objects which respond to events. External events arrive at the boundary of the system, say when a mouse button is clicked. Some object picks up the event, responds to it, and perhaps generates some internal event by calling an operation on another object.

In this lesson we examine a way of describing how an individual object responds to events, either internal events triggered by other objects or external events triggered by the outside world.

State diagrams allow you to further explore the operations and attributes that need to be defined for an object. They consist of sets of states which an object is in, and events which take the object from one state to another. Let us go back to our juggling example. A juggling ball starts on the floor, and alternates between being held and being in the air, until it is dropped and lands on the floor. This can be drawn like this:

Initial states are represented by a black blob. Intermediate states are represented by round-cornered boxes. Final states are represented by bulls-eyes. Events are represented by arrows with labels giving a name. An event usually corresponds to an operation in the object model.
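The juggling-ball diagram can be written down as a transition table, one entry per arrow. This is a sketch only; the state and event names are read off the story above and are otherwise assumed:

```python
# Hypothetical transition table for the juggling-ball state diagram.
# Each (state, event) pair maps to the next state.
TRANSITIONS = {
    ("on floor", "pick up"): "held",
    ("held", "throw"): "in air",
    ("in air", "catch"): "held",
    ("in air", "drop"): "on floor",
}

def next_state(state, event):
    """Apply an event to a state; reject events not allowed in that state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in state {state!r}")
```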

Basically any system which you can reasonably model on a computer jumps from state to state. Even those apparently continuous games are made up of objects which change their state very quickly.

The level of state decomposition must be decided by judgement. Too fine-grained a model is inappropriate: say, modelling all the possible ages of an employee as individual states. Too gross a decomposition is useless: say, modelling an employee simply as employed or not.

Let us look at the state diagram for Red Riding Hood. One might be:

Where events come from and go to tells us something about the behaviour of the object. The above diagram implies that Red Riding Hood cannot proceed to living happily ever after while she is in the woods. Of course, if she had fallen madly in love with the woodcutter and did not care for the well-being of her grandmother, then the arrow to lives happily ever after might well have originated at a different place.

Of course, not all objects have behaviour worth modelling. Consider our example from the field of object oriented nursery rhymes, the sad story of Humpty Dumpty. Clearly one would not want to model such trivial examples.

A more complicated example from the world of object-oriented aviation is:

Careful consideration of a state can often uncover more complicated behaviour. Consequently we have a notation to describe substates. The aggregation notation can be used to illustrate this.

What do we learn from this modelling? Firstly, the states need to be recorded in the attributes of an object. For example, we may need an attribute ("is flying", say) which is true or false to indicate whether an aeroplane is flying or not. Also, we recognise the need for operations to cause the state transitions. For example, we may need a "take off" operation to enable the object to move from "at airport" to "flying". The transition diagram also indicates some of the things the operation must do, such as change the attributes which determine the state.

Basically you need to construct a state transition diagram for every object with significant behaviour. You need not construct one for anything with trivial behaviour. The reason for doing this is to identify further operations and attributes, to check the logical consistency of the object, and to more clearly specify its behaviour.

All the best notations can be used to describe the process they facilitate. Now we have the basic ideas of object models and dynamic models, our approach to analysis and design (so far) can be summarised as:

We shall shortly be adding functional modelling to our portfolio of tools, which is another way of discovering attributes, operations and relations for our object model.

An example of an object model for a simple computer

Let us look at the example of designing a computer. Of course, one is unlikely to be asked to design a computer this way, but it is an interesting way of exploring the method so far using a relatively complicated application that computer scientists are familiar with. Our first notions of a computer might be:

The CPU (central processing unit) reads and writes words to and from memory. A peripheral (such as a keyboard or screen) has information transferred to or from memory. A peripheral can also interrupt the CPU. (For the non-computer scientist venturing this far, an interrupt is an electronic signal which forces the CPU to jump to a special instruction stored in memory). There needs to be at least one peripheral to make the computer useful (introverted computers are of little use). However, not all peripherals use interrupts, so it is possible to have a CPU without interrupts (unlikely, but some simple computers operate this way).

Some careful thinking might open up the operations and attributes thus:

Let us now look at an event trace for an instruction to add two integers (stored in memory) and to put the result back into memory.

This does not tell us about any new operations, but careful consideration of the above indicates that the CPU needs somewhere to store the first integer (at least). Internal storage in a CPU is normally called a register. So our object model for a CPU grows just a tiny amount.

Exploring more instructions is not likely to yield too much more. However, let's look at the state diagram for the CPU.

The CPU loops endlessly, fetching instructions and executing them. We can see from this state diagram that the CPU needs somewhere to store the address of an instruction. It is also clear that there is a need to store an instruction retrieved from memory. Our object model grows a little more.

We have added two more attributes - an instruction register to store the instruction, and a program counter to store the address of the next instruction to be fetched. We have two more operations, fetch instruction and execute instruction.
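A minimal sketch of the object model so far, with the register, instruction register and program counter named as in the text. The instruction format ("ADD", "HALT") is invented purely for illustration:

```python
class CPU:
    """Sketch of the CPU object: a register for operands, an instruction
    register, a program counter, and the fetch/execute operations."""

    def __init__(self, memory):
        self.memory = memory            # shared words: instructions and data
        self.register = 0               # internal storage for the first operand
        self.program_counter = 0        # address of the next instruction
        self.instruction_register = None

    def fetch_instruction(self):
        self.instruction_register = self.memory[self.program_counter]
        self.program_counter += 1

    def execute_instruction(self):
        op, *args = self.instruction_register
        if op == "ADD":                 # add two memory words, store the result
            a, b, dest = args
            self.register = self.memory[a]
            self.memory[dest] = self.register + self.memory[b]
        elif op == "HALT":
            return False
        return True

    def run(self):
        # The endless fetch-execute loop from the state diagram.
        while True:
            self.fetch_instruction()
            if not self.execute_instruction():
                break
```

For example, a two-instruction programme adding the integers at addresses 3 and 4 into address 5 would run the loop twice and then halt.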

Let us now look at an interrupt.

An interrupt causes the CPU to start executing a routine stored in a pre-defined location in memory. Before it does that, however, it needs to remember where it is in the program it is executing. The normal way of doing this is to keep a stack, which contains a sequence of addresses of partially completed routines. This implies the need for a stack register, to store the address in memory of the top of the stack. We can model the response to the interrupt by the CPU by extending the state diagram for the CPU.

Basically, the interrupt changes the address of the next instruction which the CPU is to execute. The response to the interrupt must also store the address of the next instruction on the stack.

We now can modify the object model further:

We need to add a stack register to the CPU to store the address of the top of the stack in memory. Also, we need to know the address of the interrupt routine which the CPU must go to when an interrupt takes place. This can be stored as an attribute of the Peripheral, as there is likely to be a different interrupt address for each peripheral. Finally, we can record the goto operation as a special operation for the CPU, which simply changes the address in the program counter.
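The interrupt response can be sketched as follows; the stack register, goto operation and per-peripheral interrupt address mirror the additions to the object model, while everything else is assumed for illustration:

```python
class Peripheral:
    """Each peripheral carries its own interrupt routine address."""
    def __init__(self, interrupt_address):
        self.interrupt_address = interrupt_address

class CPU:
    def __init__(self, memory, stack_base):
        self.memory = memory
        self.program_counter = 0
        self.stack_register = stack_base   # address of top of stack in memory

    def goto(self, address):
        """Special operation: simply changes the program counter."""
        self.program_counter = address

    def interrupt(self, peripheral):
        # Remember where we are: push the address of the next instruction
        # onto the stack kept in memory, then jump to the interrupt routine.
        self.memory[self.stack_register] = self.program_counter
        self.stack_register += 1
        self.goto(peripheral.interrupt_address)

    def return_from_interrupt(self):
        # Pop the saved address and resume the interrupted program.
        self.stack_register -= 1
        self.goto(self.memory[self.stack_register])
```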

Now let us be a little adventurous and add a peripheral. Let us put in a mouse.

A mouse, if you take it apart, has two little rollers recording movement in two directions (vertical and horizontal, say). Every tiny movement of the mouse causes one or both rollers to register a movement and interrupt the CPU. The CPU therefore gets a continual stream of interrupts as the mouse is moved. It then needs to decide what to do (move the cursor on the screen, or whatever). A move which is not vertical or horizontal will send a mixture of vertical and horizontal messages to the CPU.

The mouse inherits the transfer in and transfer out operations for passing information to the memory. One way the mouse might use these is to have two locations in memory to record how far the mouse has moved in the vertical and horizontal directions.
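One hypothetical way the interrupt stream might be handled: each roller tick arrives as a single vertical or horizontal event, and the routine accumulates distances in two assumed memory locations:

```python
# Hypothetical sketch: a diagonal mouse move arrives as a mixture of
# single-step vertical and horizontal interrupts, one per roller tick.
V_LOC, H_LOC = 0, 1          # assumed memory addresses for the two counters

def mouse_move(memory, dv, dh):
    """Deliver a (dv, dh) move as individual roller ticks, each tick
    adjusting the appropriate counter in memory by one."""
    ticks = ["v"] * abs(dv) + ["h"] * abs(dh)
    for axis in ticks:
        if axis == "v":
            memory[V_LOC] += 1 if dv > 0 else -1
        else:
            memory[H_LOC] += 1 if dh > 0 else -1
```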

An object model for genetic algorithms

Let us now consider another example, this time from the realm of evolutionary computing. A genetic algorithm uses simulations of genes to solve problems. To begin with, we have the idea of a chromosome which is made up of one or more genes. The more genes you have, the more complicated the problem you can try to solve.

The genes themselves can be represented as integers, bit strings, characters, or whatever. Normally they are described as bit strings. But basically they represent some value. Suppose we want to find high values of an equation such as


Now we could represent this with a sequence of three genes, each gene being an integer value for x, y and z. We end up with an object diagram:

The chromosome represents (in a sense) the equation. Each gene defines the value of one of the variables. Now the equation knows how the variables are combined, and so can calculate its value using the integer values defined by the gene.

Of course, this is all very uninteresting, until you start putting lots of chromosomes together in a gene pool. The chromosomes can start with randomly chosen values, and you end up with a large number of equations with different values.

Now, by randomly creating a set of chromosomes in a gene pool, we have effectively generated lots of values of the equation. Some will be greater than others. The genes which represent high values are considered to be "fitter" than those representing low values of the equation. This is where Darwin steps in. Let us look at the operation of the gene pool.

From a randomly generated set of chromosomes, firstly all the chromosomes are evaluated for their fitness, which is done by evaluating the equation which the chromosome represents. Then the chromosomes are selected; strong ones (ie those with a high fitness) have multiple copies made. Weak ones are destroyed. Then the selected chromosomes are interbred, by mixing their genes up in some way. Then the new set of chromosomes are evaluated, and the process goes on indefinitely. Let us examine the state transition of a chromosome.
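The evaluate, select, interbreed cycle can be sketched as below. Since the equation itself is not reproduced here, the fitness function is a stand-in; the selection and breeding policies are simple illustrative choices, not the only possible ones:

```python
import random

def fitness(chromosome):
    # Stand-in for evaluating the equation the chromosome represents;
    # the real fitness function depends on the problem being solved.
    x, y, z = chromosome
    return x + 2 * y - z

def next_generation(pool, rng=random):
    """One cycle of the story above: evaluate, select, interbreed."""
    # Evaluate: rank the chromosomes so the fittest come first.
    ranked = sorted(pool, key=fitness, reverse=True)
    # Select: make copies of the strong half, destroy the weak half.
    survivors = ranked[: len(pool) // 2] * 2
    # Interbreed: mix genes by single-point crossover between pairs.
    children = []
    for a, b in zip(survivors[0::2], survivors[1::2]):
        point = rng.randrange(1, len(a))
        children.append(a[:point] + b[point:])
        children.append(b[:point] + a[point:])
    return children
```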

A chromosome comes into being either by being generated randomly, by being created as a copy of a strong chromosome, or as the result of inter-breeding. It is destroyed either because it is unfit, or because it breeds with another chromosome.

Let us look at one way of breeding. An object interaction diagram is a good way of describing this.

Firstly the chromosomes are paired up. Then a crossover point is chosen. Finally, the chromosomes swap tails.
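The three steps above (pair up, choose a crossover point, swap tails) can be sketched as:

```python
# Sketch of single-point crossover: the two chromosomes (lists of genes)
# exchange their tails after the chosen point.
def crossover(parent_a, parent_b, point):
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```

For instance, crossing `[1, 2, 3, 4]` with `[5, 6, 7, 8]` at point 2 yields `[1, 2, 7, 8]` and `[5, 6, 3, 4]`.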

Now we can go back to our object model and add a few attributes and operations.

Having done all this work to define the genetic algorithm, we can now try and reuse it on another problem. Let us suppose that we want to find a good schedule for a truck delivering Christmas puddings to supermarkets in different town centres. The truck must deliver as much product as it can in the day. However, it has to stop after doing three hundred miles. We can use our genetic algorithm to describe the problem thus:

This time the gene represents a particular supermarket rather than the value of a variable. The fitness function would return zero if the schedule broke the mileage limit, or the number of Christmas puddings delivered if the schedule is within the mileage limit.

One of the great benefits of a good design method is the capacity to re-use components. Hopefully you can see from the above example how the simple notion of a genetic algorithm could be re-applied to lots of different problems.


  1.  One feature of genetic algorithms is the notion of mutation. Periodically a gene mutates, say with one chance in a thousand during breeding. Add mutation to the genetic algorithm design above.
  2.  Construct a design for a genetic algorithm system to find solutions to the travelling salesman problem. Starting from home a salesman must visit each of ten (say) towns, exactly once and return home at the end. The objective is to minimise the distance travelled.

Business Activity Modelling - A Starting Point for Software Development

Applied to The Functional Specification of a System for Planning Elderly Care in the European Community (PLANEC)

Dr K. Lunn

Meniscus IT Ltd.

email: kenlunn@firstnet.co.uk


One of the most fundamental problems of software engineering is the construction of suitable abstractions. Another is the adequate communication of functionality between all parties involved in the construction and implementation of IT solutions, from end-users, through business managers, to software developers. The approach described here uses the notion of business activity (or business process) as a form of abstraction which can be used in a variety of ways to develop specifications and designs for software systems. The method avoids complex notations, at least at the specification stage. The origins of the ideas began with a multi-national company which had a very diverse set of software solutions implemented in similar businesses world-wide. The ideas have been refined and adapted in the functional specification of a system to support elderly care planning in the European Community, and some aspects of that project are reported in brief in this paper.

The business activity perspective has a number of potential uses. At a strategic level it is a good framework for comparing total IT solutions in similar business environments, and consequently can be used as a support for strategic planning of systems. In the procurement process, it provides a good framework for scoping and defining functionality of a required system, and for comparing proposed solutions. In the business reengineering phase, it can be used as the basis for the terminology in describing actual processes and quantifying aspects of changes to business operations. In the software specification and design process it can be used in an object modelling process as a framework for driving the dynamic modelling in a way similar to the event-trace methods of OMT.

The author has used the techniques briefly described here in a variety of consultancy projects. The common characteristics of these projects have been:

  •  A number of business environments with similar but non-identical needs wishing to share software solutions.
  •  A need to communicate to senior management, often with no computing background, the scope and functionality of computer systems.
  •  A need to compare at a gross and at a detailed level the functionality of different computer solutions.
  •  A need to relate computer systems to business operations.
  •  A need to quantify the effects of system changes (of both computer and business operations) for the purpose of assessing financial returns and quality improvements.
  •  A need to involve non-IT staff in the analysis and design of systems.

Although no panacea, the business activity approach has proved to be very effective in communication, obtaining consensus, and co-ordinating group activities. By removing temporal and information flow considerations, which are often a source of confusion and disagreement, it enables a comprehensive map of a business to be constructed; it can be used later to introduce information flow and time dependencies for the purposes of deeper analysis and design.

The approach described here has been pragmatic and driven by the needs of applications. However, a common theme is emerging, and hopefully through the broader application of the techniques they can be related to and/or integrated with established methodologies. The use of the approach in PLANEC has begun to open up the integration into object modelling in a way which appears consistent with and complementary to established approaches.

The PLANEC application

The PLANEC consortium has undertaken the ambitious task of producing a demonstrator computer system which can be used in all member states of the EC; it will support the strategic planning of elderly care provision by a broad range of organisations from municipalities to hospitals and other service providers. The consortium is led by the Stakes Research Institute in Finland, and has partners in Holland, Spain, Germany and the UK, and consists of research social scientists and practitioners in the field of elderly care; a software house has been contracted to construct the demonstrator system, but the responsibility for specifying functionality rests with the social scientists. The author has been facilitating the specification process, firstly by teaching the rudiments of object modelling, and then by introducing and developing the notion of activity modelling.

Although the basic needs of the elderly may be similar in each locality, the provision of services to meet those needs varies widely in type and organisational structure. Each country has different legislation, different infrastructure, and different policies. The greatest challenge, from a software design perspective, has been to devise a suitable abstract description of the problem in a way that can be understood by the social scientists who are steering the development, the software house which will be producing the demonstrator, the potential users of the system, and other interested third-parties.

Object modelling was chosen as a design method for the following reasons:

  •  it is a good basis for prototype-driven, iterative methods of development;
  •  there is a good correspondence between some aspects of the modelling and social science methods;
  •  the methodology is easy to teach at an abstract level;
  •  the resulting designs are clear and easy to communicate to non-computing personnel.

The project found early on that the greatest challenge was providing a common framework within which to describe the differing approaches to elderly-care provision. The solution adopted was to introduce ideas from Business Process Re-engineering, using an activity (or process) model. This model was then mapped back into the object-modelling. This method is a novel extension of object modelling, and serves as a useful contribution to computer science in its own right.

A Three-Level Business Activity Model

A business activity is a logical grouping of events which can be agreed as a fundamental element of a business. Although some implicit ordering may emerge, the intention is to describe a whole business as a set of about 200 to 300 activities without concern about the order or the interactions of the individual activities. These activities are grouped on three levels. The top level should be no more than about ten activities. For a manufacturing company, these activities may be:

  •  PLAN - develop the tactical and strategic plans of the company
  •  BUY - all purchasing activities and the recording of these activities, including payment
  •  SELL - all selling activities and the recording of these activities, including revenue
  •  MANUFACTURE - all manufacturing activities and the recording of these activities
  •  DEVELOP / DECOMMISSION - all construction and decommissioning of plant
  •  FINANCE - monetary flow and control activities, including investments, etc.
  •  RESEARCH - all research activities

It can be seen that at the abstract level these are very gross divisions of the company behaviour, which may be considered too obvious and simplistic. However, experience shows that it is easy to obtain broad agreement on these. Using a group of experienced researchers and practitioners in the field of elderly care, a workshop produced the following breakdown of elderly care provision into:

  •  POLICY - Setting of national policy, regional policy, local policy. Setting of voluntary organisation policy. Identification of cultural norms. Commercial organisation policy (e.g. health care, residential homes, sheltered houses). Negotiation with providers, planners, financiers, and with different policy makers.
  •  PROVIDE / CONSUME - Activities involved in the provision and consumption of care at the organisational and individual level. Organisational planning. Purchasing, provision. Developing of the service. Personnel and day-to-day management. Negotiation with planners and financiers. Business plan and implementation. Cash management and financial management. Assessment of needs. Allocation of care.
  •  MONITOR / EVALUATE - Estimation of demand. Creation of information services. Identification of goals. Setting evaluation criteria. Collecting information. Analysing information. Identifying and reporting successes / problems / failures.
  •  FINANCE - Raise funds (e.g. taxation, fees, statutory and private insurance). Budget setting. Control of spending. Forward projection. Allocation of funds. Release of funds.
  •  PLAN - Strategic and tactical planning activities. National, regional and organisation levels. Estimation. Negotiation. Goal setting. Implementation planning. Realisation of goals. Purchase, sell provide. Identification and realisation of policy. Developing of provision.

Agreement on these took less than a morning, and the breakdown has survived six months of scrutiny.

The activity model is intended to hide organisational structure, and simply to describe the functionality of the organisation. Activities may be implemented in many different ways, sometimes even within the same company; for example there may be many types of manufacturing plant within a company, producing different goods, or producing the same goods in different ways. Having obtained the common abstraction it is possible then to map it back onto organisational structure.

The importance of the omission of information flow and temporal considerations cannot be over-emphasised. There is rarely any disagreement on what an organisation does, but there is always disagreement on how it does it. Flows can be added later, once functionality has been agreed.

The second level breaks down into finer detail each of the first level activities. Again, this is a logical grouping, and any temporal or informational links are implicit and/or coincidental. A second level breakdown for the SELL activity of a manufacturing business might be:

  •  NEGOTIATE CONTRACT - all activities involving negotiations and setting up of contracts with a customer.
  •  TAKE ORDER - all activities relating to order taking
  •  DELIVER GOODS - all activities relating to delivery
  •  ACCEPT RETURNS - all activities relating to return of unwanted or unsatisfactory goods
  •  INVOICE CUSTOMER - all activities relating to invoicing
  •  ACCEPT PAYMENT - all activities relating to acceptance of payment

In practice there are likely to be at least double this number of activities for a particular business type. A second level breakdown for the PLAN activity for elderly care provision was derived in a workshop as:

  •  Order to start planning
  •  Collect information
  •  Analyse information
  •  Set up strategic goals
  •  Set operational goals
  •  Arrange Resources
  •  Negotiate with actors
  •  Make proposal
  •  Consult / liaise
  •  Approve plan
  •  Implement plan

Again, this took about two hours to agree and has survived six months of scrutiny.

Finally, the second level activities can be broken down into third-level groupings. For the elderly care planning system, this was only done for the PLAN and MONITOR/EVALUATE activities, as these are the principal focus of the system being developed. A third level grouping for the INVOICE activity in a manufacturing business might be:

  •  Identify goods delivered and accepted
  •  Price goods
  •  Apply discounts
  •  Print invoice
  •  Issue invoice

Again, this is a little simplistic, and it is likely to involve at least twice as many activities in a given business. The third level activity for Analyse Information in PLAN was:

  •  Analyse need/demand
  •  Analyse qualitative output
  •  Analyse quantitative output
  •  Analyse material input
  •  Analyse immaterial input
  •  Analyse effectiveness
  •  Analyse productivity
  •  Analyse economy
  •  Analyse efficiency
  •  Analyse equity
  •  Identify problems and explanations
  •  Identify successes and explanations
  •  Produce projections of future demand
  •  Evaluate against state of the art

This was extended towards the end of the project from a much shorter list. The lesson to be learnt is that the third level is more open to changes. Practical experience of trying to break down to four levels shows that it is very difficult to gain agreement, and becomes much more related to how a business operates rather than what it does.

Having obtained agreement on the third level of activity, it is possible to describe each activity in some detail. This has not yet been done for the PLANEC application. For a manufacturing company, such a description can take up many hundreds of pages. However, the principal value of the activity modelling approach is its conciseness. Detailed descriptions are at times either too vague or too prescriptive. Nevertheless, for some purposes, such as software tendering, it is vital to elaborate on activities for a third party to understand.

The initial value of constructing a business activity model

It can be seen that the above breakdown is in some sense simplistic, and it leaves out a lot of detail. However, experience shows that there is considerable value in the process of constructing such a model. The following benefits have emerged:

  •  It is possible to construct the model with business personnel, without consideration of the operation of computer systems, or indeed any consideration of software design issues.
  •  Agreement is easy to obtain up to three levels.
  •  Summarising a business in such a way, which can usually be printed on a single sheet of A4 paper, provides a framework for discussions which is manageable.
  •  The process clarifies terminology.
  •  The model can be constructed in two or three days using personnel with a high-level view of business operations.
  •  Although it is a systemic view of a business, it can be understood easily by anyone with very little introduction.
  •  It builds consensus, and emphasises areas of commonality.
  •  By focusing on function rather than organisational structure, it can be used as a basis for discussion in business process re-engineering, and it can build a bridge between software functionality descriptions and business organisation descriptions.

Strategic uses of a business activity model

Perhaps the greatest value of an activity model at this level is its ability to provide a concise, easily explained terminology, organised in a simple framework, which uses the phraseology of the domain. Some of the uses to which such models have been applied successfully are listed below.

Mapping current IT solutions

Using the activity model as a checklist, it is possible to identify all IT systems in use within a business. This is a coarse, high-level map; for example, a manufacturing business may have different types of manufacturing plant using different software systems. It may be desirable to produce a number of maps for the business, depending on organisational structure, say one for each production stream.

The result of doing this for one organisation highlighted the radical difference in IT support between similar organisations in different countries, from one using over a hundred systems to one using thirty to achieve the same purpose in similar markets. The experience of using this framework was very positive, as it was possible to identify possible changes in IT usage within different businesses, and to relate them to possible business process changes.

Scoping of a package

It is possible to rate a package against each activity. The author has worked with various numeric ratings. The most effective scheme, however, has been to rate the package against each third-level activity according to the scheme:

  •  FULL - the IT package supports all aspects of the activity where IT support is reasonably expected
  •  PARTIAL - the IT package supports some aspects of the activity, but not all, where IT support is reasonably expected
  •  NONE - the IT package supports none of the IT requirements of the activity
  •  NOT APPLICABLE - the activity requires no IT support

In the process of rating the package, the partial support ratings should be accompanied by a description of the deficiencies of the package.
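The rating scheme can be sketched in code. The following is a minimal, hypothetical illustration; the activity names, ratings and coverage measure are invented for the example and do not come from any real assessment.

```python
# Hypothetical sketch: rating a package against third-level activities.
# Activity names and ratings are invented for illustration.

FULL, PARTIAL, NONE, NOT_APPLICABLE = "FULL", "PARTIAL", "NONE", "N/A"

ratings = {
    # activity -> (rating, note describing any deficiency)
    "Take customer orders": (FULL, ""),
    "Schedule production":  (PARTIAL, "No support for multi-site scheduling"),
    "Record returns":       (NONE, ""),
    "Negotiate pay rates":  (NOT_APPLICABLE, ""),
}

def coverage(ratings):
    """Fraction of activities needing IT support that are fully covered."""
    applicable = {a: r for a, (r, _) in ratings.items() if r != NOT_APPLICABLE}
    full = sum(1 for r in applicable.values() if r == FULL)
    return full / len(applicable) if applicable else 1.0

print(f"Full coverage: {coverage(ratings):.0%} of applicable activities")
# Partial ratings should carry a note of the deficiency:
for activity, (rating, note) in ratings.items():
    if rating == PARTIAL:
        print(f"{activity}: {note}")
```

A tabulation of this kind also supports the package comparison described next, since two packages rated against the same activity list can be compared line by line.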

Comparison of packages

Using a rating scheme as above, it is possible to provide a very descriptive and concise overview of the similarities and differences of two or more packages.

Request for tender

The activity model can be used as a checklist for a package supplier or systems developer. By highlighting the activities which need IT support, the supplier can be asked to address the needs of the business. This requires a full description of the activities.

Identifying strengths and weaknesses of IT provision

This was done by the PLANEC group, to establish market needs for the proposed PLANEC system. The activity model was used to construct a questionnaire to identify strengths and weaknesses of the IT provision in different organisations which need to support the planning of elderly care provision. By collecting information in this form, it is possible to relate the current market to the planned functionality of PLANEC.

Identifying actors

By cross-referencing interested parties with activities, it is possible to gain an understanding of how a system may be used. This was done by the PLANEC partners in their own countries, and it opened up a number of organisational differences between the countries. In each case, there are different actors operating elderly care provision.

Analytical uses of a business activity model

So far, temporal and information flow aspects have been omitted from the model. The reason for this is that at the functional level similar businesses rarely differ. However, at the operational level they do. By introducing temporal and information flows, analysis of different aspects of a business can be incorporated. Two simplified examples drawn from actual projects are given below. They use activities defined in the activity model to indicate information flows, and the effect of delays on information flows.

A problem of invoice errors

Chinese whispers within a group of companies indicated that one company had cut its invoice error rate from 5% of all invoices having an error to almost zero by changing from one computer package to another. This was being used as an argument for a wider utilisation of the given package. To substantiate the claim, an analysis was done of the business process before and after the package installation. Prior to the installation, the information flow relating to orders was:

Changes to standing orders were taken by the salesman visiting the customer. These were taken on paper, and by the time the salesman had returned to the office and the changes had been recorded on the computer system, an average of ten days had elapsed. The consequence was that a high number of deliveries, approximately 5%, were not as the customer now wanted them. Therefore there were returns, which the depot recorded on paper, and sometimes extra emergency deliveries. Returns took five days to record on the computer system. However, the practice was to issue invoices the day after delivery, based on the standing orders. The customer would then query the invoice, usually at the end of the 30-day payment period, and would then pay the re-issued and corrected invoice at the end of a further 30-day payment period.

The net effect of 5% of invoices being paid 30 days late is that the company needs approximately half a percent of its annual turnover needlessly tied up in cash, which for a multi-million pound turnover is a substantial cost to the business. The system looked ridiculously inefficient, but it had been set up fifteen years earlier in a more stable marketplace with fewer products, and when batch processing was the most cost-effective means of providing IT support.
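The "half a percent" figure can be checked with simple arithmetic: 5% of invoices each paid 30 days late ties up 5% x 30/365 of a year's turnover as extra working capital. The turnover figure below is invented purely to show the scale.

```python
# Back-of-envelope check of the "approximately half a percent" claim.
error_rate = 0.05      # fraction of invoices queried and paid late
delay_days = 30        # extra payment delay per errored invoice
extra_working_capital = error_rate * delay_days / 365
print(f"{extra_working_capital:.2%} of annual turnover tied up")  # → 0.41%

turnover = 50_000_000  # hypothetical multi-million pound turnover
print(f"£{extra_working_capital * turnover:,.0f} needlessly tied up in cash")
```

At a hypothetical £50m turnover this is roughly £200,000 of cash permanently locked up by invoice errors, which makes the cost of the process failure concrete for business managers.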

A new system was introduced which allowed telephone ordering, and allowed the depot to record returns directly onto the computer system. A burden was taken off the sales force, who could then concentrate more on selling to new customers rather than servicing existing ones. Customers were sold the new process on the basis that they could adjust their stock levels more easily, and therefore reduce their own working capital demands. The result was that hardly any deliveries failed to match customers' expectations. There were some added benefits of increased customer satisfaction and a reduction in extra deliveries to meet the needs of customers. The change in the flow of information was as follows.

Explained in this way, it was clear that the new software package was not of a higher quality, thereby reducing invoice errors, but that it facilitated a change in business operation which allowed the errors to be removed. The decision to adopt a change in IT to tackle similar problems could then be taken in a more informed way; arguably changes could have been made to existing systems to achieve the same effect.

Stock reduction

Another Chinese whisper about a computer system in another business area was that it reduced stocks in the warehouse and reduced the number of stock-outs. Again, analysis of the process using information flows between activities indicated the full reason for the improvement in stock management. The flows before the new system were:

Customers would typically order about 15 days in advance of delivery, to fit in with their own production schedules. Orders were taken, irrespective of stock levels in the warehouse. Production would then schedule its activities to maintain stock levels in the warehouse. The new system added an information flow that allowed the production activity to see the orders. This meant that production could respond to changes in order levels rather than changes in stock levels. The result was that stocks could be reduced.
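The difference between the two flows can be sketched as two scheduling rules. This is a hypothetical illustration, with invented quantities, of the change in behaviour rather than a model of the actual system.

```python
# Hypothetical sketch: production scheduling driven by stock levels
# (old flow) versus visible forward orders (new flow).

def schedule_from_stock(stock, target_stock):
    """Old flow: produce whenever warehouse stock drops below a fixed target."""
    return max(0, target_stock - stock)

def schedule_from_orders(stock, orders_next_15_days):
    """New flow: produce just enough to cover known forward orders."""
    return max(0, orders_next_15_days - stock)

# In a quiet fortnight, order-driven scheduling needs far less production,
# while the stock-driven rule would still rebuild to target.
print(schedule_from_stock(20, target_stock=100))         # 80 units
print(schedule_from_orders(20, orders_next_15_days=30))  # 10 units
```

The extra information flow (orders visible to production) is what permits the lower production figure, and hence the lower average stock level.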


The above examples could have been described in many ways. The use of the activities from the business activity model meant that the problem could be explained in terminology which could be understood by different businesses. The use of information flows, with delays, proved very effective at describing the benefits of changes in IT to business managers in ways that could be quantified.

Object Modelling with Activity Models

Activities are logically arranged groups of events. Dynamic modelling, through event traces, is one way of linking events to objects, thereby defining object behaviour and identifying new objects. Activities can be used in a similar way to identify operations on objects. This can be done by simply producing a matrix; a simplified example from the PLANEC application is illustrated below. This is for the ARRANGE RESOURCES activity. PROVIDER, FUNDER and RESOURCE are three of the candidate objects for the PLANEC system.

A similar cross-referencing process can be used to identify attributes.
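The cross-referencing matrix can be represented very simply. The operation names below are invented for illustration; the real matrix for ARRANGE RESOURCES was agreed in the project workshops and is not reproduced here.

```python
# Hypothetical cross-reference of the ARRANGE RESOURCES activity against
# three of the candidate PLANEC objects. Operation names are illustrative.

matrix = {
    # (activity, object) -> operations suggested by the cross-reference
    ("ARRANGE RESOURCES", "PROVIDER"): ["contract_provider", "list_services"],
    ("ARRANGE RESOURCES", "FUNDER"):   ["allocate_budget"],
    ("ARRANGE RESOURCES", "RESOURCE"): ["reserve", "release"],
}

def operations_for(obj):
    """Collect every operation an object acquires across all activities."""
    return sorted(op for (_, o), ops in matrix.items() if o == obj
                  for op in ops)

print(operations_for("RESOURCE"))  # ['release', 'reserve']
```

As more activities are cross-referenced, each object accumulates its full set of candidate operations, and the same matrix structure can be reused with attributes in place of operations.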

This process has proved remarkably productive and effective in defining functionality. Over a two day workshop involving the representatives of the consortium, approximately 250 operations and attributes were identified and agreed. The list for one of the objects is given in appendix 2.

The high-level specification ended up with nine core objects:

  •  CARE
  •  COSTS
  •  PLAN

The objects are related according to the following diagram

The PLAN object interacts with all other objects, and so has been drawn as above for clarity.

These objects are really large groupings which will need further refinement with specialisations and component objects to be devised in the design phase. The objective of the functional specification has been to obtain a logical and complete grouping of all the data and functionality requirements for the first prototype.

The activity model was used extensively to drive the process of defining these objects and their relationships. The process of cross-referencing objects with activities clarified a number of issues and reduced the set of candidate objects considerably.

The relationship of activity modelling to object modelling techniques

An object model, consisting of objects, their attributes, their operations, and their relationships, can be viewed as a computable representation of the real world. Dynamic modelling, through event traces and state diagrams, and functional modelling, through data flow diagrams, can be viewed as a means of refining the object model. Activity models, as used above, can be thought of in the same light. Thus they extend the repertoire of available tools to analyse a business environment.

The difference between activity models and event traces (or use-cases) is their omission of temporal considerations. In this sense they are weaker than dynamic models. On the other hand, they prove easier to deal with in the earlier stages of analysis. In particular, the lack of temporal and information flow considerations allows specification of system functionality without overly constraining implementation.

Planned research

The above methods and techniques have emerged initially as ad hoc approaches to solving high-level analysis and design problems, and communicating IT issues to senior management and to non-IT researchers. They have not, therefore, been related to the literature. Two avenues of research are opening up. One is to extend existing methods, using the activity modelling approach. Another is to investigate the use of activity models as a bridge between business process re-engineering and software engineering.


Activity models are proving a useful means of reasoning about systems at an abstract level. By omitting temporal and information flow considerations, they allow high level reasoning about systems and the environment in which the systems operate. They are largely independent of organisational considerations. They can be used as a framework for strategic planning and analysis, as a basis for a common terminology in BPR, and as an entry point for specification and initial design of systems in an object modelling approach. They offer a possible, useful extension to existing methods, and a possible bridge between BPR and software engineering. They have proved their worth in a number of practical projects involving differing organisations which provide similar products or services.

Functional modelling - data flows

Data flow modelling is a common component of many software engineering methodologies. It is exceptionally useful for analysing and describing systems where there is a large amount of calculation involved. However, within the object modelling community it seems to be losing ground in favour of other techniques: object interaction diagrams can cover most of the needs met by data flow diagrams, which tend to be less effective and more arbitrary in producing designs. They are included here for completeness.

Data flow models consist of a number of processes which exchange information. A process transforms information it receives and passes the transformed information on to other processes or to objects in the system. Data flow models can be used to uncover new operations and new attributes for the object model. Sometimes new objects can be discovered too.

Processes are drawn as bubbles, with the name of the process written inside. Arrowhead lines are used to connect processes to each other and to objects in the system. The lines are labelled with the information that is being passed. Objects are drawn as rectangular boxes, just as in the object model, but usually with just the name of these objects and not the attributes and operations.

Let us look at a simple example.

The next stage is to devise operations and attributes to support the calculations. From the above we can probably deduce the following attributes and operations for the Customer, Invoice, Sales and Product objects - of course there will be more attributes and operations derived from other techniques or other data flow diagrams.

Each process needs to be implemented as an operation in one or more of the objects. Each data item arising from an object must have a corresponding attribute or set of attributes in the source object.
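This mapping can be illustrated with a small sketch. Assuming, hypothetically, a "calculate invoice total" process in the data flow diagram, the process becomes an operation on Invoice and the data items it consumes become attributes of Product; all names here are invented for the example.

```python
# A minimal sketch: a DFD process realised as an operation, and its
# input data items realised as attributes. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    unit_price: float  # attribute supplying the 'price' data flow

@dataclass
class Invoice:
    lines: list = field(default_factory=list)  # (product, quantity) pairs

    def add_line(self, product: Product, quantity: int):
        self.lines.append((product, quantity))

    def total(self) -> float:
        # The DFD process 'calculate total' becomes an operation on Invoice.
        return sum(p.unit_price * q for p, q in self.lines)

invoice = Invoice()
invoice.add_line(Product("Widget", 2.50), 4)
invoice.add_line(Product("Gadget", 10.00), 1)
print(invoice.total())  # 20.0
```

Had the process needed data that no object could supply, that would have signalled either a missing attribute or a missing object in the object model.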

Data flow diagrams are useful if:

  •  you have lots of calculations to carry out
  •  you are familiar with data flow techniques in a method you have used repeatedly before.

The approach to data flow diagramming should be as follows:

  •  create a data flow diagram for each of the major outputs of the system
  •  work back from the outputs to the inputs to construct the diagram
  •  add new objects where necessary to the object model as you discover the need for them in the data flow modelling
  •  add new operations and attributes to the object model as you discover the need for them in the data flow modelling

The difference between data flow use in object methods and other methods (e.g. SASD) is that the data flow diagram is not used as a basis for devising the structure of the system.

Use Cases

One of the most fundamental problems in software engineering is determining the requirements of a system. The notion of use-cases, introduced by Jacobson, is an excellent approach. It complements the activity modelling approach developed by Lunn described earlier in the course, and has wider application.

The use-case approach requires the analyst to determine all the potential actors involved in a system. Actors are external to the system and make use of it. An actor is typically a person, but may be a third-party organisation or another computer system. One person may in fact be multiple actors, say a shop assistant may be a customer of the same shop at another time. We model actors, not individuals.

An actor makes use of a system in different ways. Each of these ways is known as a use-case. A use-case may involve a number of actors, just as an individual actor may make use of several use-cases. We draw little stick men to represent actors and ovals to represent use-cases. In a banking system where a customer can withdraw money, we would draw:

Of course, to withdraw money, a customer must have put money in, so there is at least one more use-case.

Now it might be that the system which is being implemented in the bank needs to involve a cashier for depositing, but that to withdraw money the customer has to use the cash machine. The cashier is then an actor.

And so we build our model of the ways the system is used. We might add two more use-cases of the bank system.

Examining this, we see that there is probably a need for another actor. (I never borrowed money without having to grovel to the bank manager, and the bank manager always looked carefully at my pitiful balance before grudgingly saying yes).

Use cases may in fact use other use-cases. The withdraw cash use-case would make use of the account balance use-case before issuing the money.
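The bank model so far, including the relationship between use-cases, can be captured in a simple data structure. This is a hypothetical sketch; the actor and use-case names follow the discussion but any structure would do.

```python
# Hypothetical sketch of the bank use-case model: actors, use-cases,
# and the 'uses' relationship between use-cases.

use_cases = {
    # use-case -> (actors involved, use-cases it makes use of)
    "deposit money":  ({"customer", "cashier"}, set()),
    "check balance":  ({"bank manager"}, set()),
    "withdraw money": ({"customer"}, {"check balance"}),
}

def actors_for(use_case):
    """Actors directly involved in a use-case."""
    return use_cases[use_case][0]

def all_used(use_case):
    """Every use-case transitively relied on by this one."""
    used = set(use_cases[use_case][1])
    for u in list(used):
        used |= all_used(u)
    return used

print(all_used("withdraw money"))  # {'check balance'}
```

Note that the model says nothing about how any use-case works internally; it only records who uses what, which is exactly the level of abstraction the approach aims for.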

A use-case is a very abstract view of what the system provides to the various actors who use it. It is not intended to give a detailed, blow-by-blow account of how the services are provided. Therein lies the beauty of the approach. By keeping out detail, it is possible to reason at a higher level. Later the use-cases can be opened out using object-interaction diagrams to begin to define their precise operation.

So what about our object oriented fairy tales? Where are our use-cases for those? We might model it in this way.

So what about Red Riding Hood and the Wolf and all that? What the example points out is that use-cases are very abstract. They do not define the architecture of the system (the objects and how they interact). They merely list what the system does. Almost all fairy stories would fit into the above use-case.

The first stage of any project should be an analysis of the use of the system being devised. The requirements stage in object modelling, as described in this course, involves constructing use-case models of the system. The next stage is to open up the architectural issues by elaborating the use-cases as scenarios, described earlier.

Here is a simple use-case model for a computer:

Programmers are distinguished from ordinary users of the computer by the fact that they compile and edit programs as well as running them (for testing) and printing files. Edit and compile use-cases make use of the run-program use case (editors and compilers being just programs to be run).

Use-cases are simple to describe to potential users of the system, and therefore should be used in the dialogue as part of requirements analysis.

Opening up the use-case requirements model

Use-cases allow us to describe at a high level the basic functionality of a system in terms of the various actors who need to interact with the system. We now consider how to flesh out those use-cases, and to link them into the object modelling method we considered earlier.

From the use-cases, which describe broad functionality, we begin to analyse the details of the functionality of the systems we want to construct. Use-cases are broken down into sequences of actions and events. These actions and events are mapped onto objects through object interaction diagrams.

Let us look at the bank example:

The deposit cash involves the following sequence of actions

  •  Complete pay in slip
  •  Hand cash to cashier
  •  Increase cash balance of account
  •  Increase cashier's balance
  •  Increase bank's balance
  •  Issue receipt to customer

It could be more complicated if we included cheques, but you can do that as an exercise. The link between the requirements model and the object model is built through object interaction diagrams. We can now construct an object interaction diagram something like:

Now we have to consider what objects we might need for interaction. The first debate is whether we want to model the pay-in slip. In the final analysis, the decision to model something as an object depends on how important the behaviour is to the system we are developing. It might be that we only need to record the existence of the pay-in slip, and it can be referenced by an attribute. But to be safe, at first we shall include it as an object.

Next we have to decide if we wish to model the cashier. Again, at first it is better to model something as an object and reduce its status to an attribute (say) or remove it from the system at a later stage. So we include a second object.

Now we have to worry about the customer and his/her account(s). A customer may normally have more than one account, so it seems sensible to model both. Here, however, we merely need to know about the account; in fact the person depositing money may not be the owner of the account.

Now we are left with trying to understand where the cashier balance might be stored. We might want to create an object for this, or store this as an attribute of the cashier object. There seems no need to keep a cashier balance separate, so we shall record the cashier balance in the cashier object.

Now we are left with a decision about whether to keep a bank object. It seems reasonable, even though there may only be one. (Alternatively, we might think of the bank having many branches, and have to increment the branch balance. Let's keep it simple though.)

Now we have to decide whether to include the customer object. At this stage, the thing we are talking about is a real, physical customer. Hence we might leave a customer object out of the interaction diagram.
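The decisions above might be rendered as a sketch in code: the pay-in slip, cashier and bank become objects, the cashier balance is held as an attribute of the cashier, and no customer object is kept. The class and attribute names are illustrative.

```python
# Sketch of the deposit-cash interaction, following the decisions above.

class Account:
    def __init__(self, balance=0.0):
        self.balance = balance

class PayInSlip:
    def __init__(self, account: Account, amount: float):
        self.account, self.amount = account, amount

class Bank:
    def __init__(self):
        self.balance = 0.0

class Cashier:
    def __init__(self, bank: Bank):
        self.bank = bank
        self.balance = 0.0  # cashier balance kept as an attribute

    def deposit(self, slip: PayInSlip) -> str:
        self.balance += slip.amount          # increase cashier's balance
        slip.account.balance += slip.amount  # increase account balance
        self.bank.balance += slip.amount     # increase bank's balance
        return f"Receipt: {slip.amount:.2f} deposited"  # issue receipt

bank = Bank()
cashier = Cashier(bank)
account = Account()
print(cashier.deposit(PayInSlip(account, 50.0)))
print(account.balance, cashier.balance, bank.balance)  # 50.0 50.0 50.0
```

Each step of the action sequence listed earlier appears as a line of the deposit operation, which is precisely the link the object interaction diagram is meant to establish.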

And so we progress. Now it is almost certain that two analysts working on the system would come up with different decisions at each stage. There is no single, definitive answer. Hopefully the set of objects derived by independent analysts would be similar, but any expectation that analysis is a deterministic activity with right or wrong answers is bound to be frustrated. Of course, there are good and bad analyses and designs. A large part of what makes for good design comes from experience.

In a nutshell, the first step from requirements analysis to detailed analysis is through the use of object-interaction diagrams to define objects, their behaviour and their attributes. These objects will be found in many interaction diagrams, and their overall behaviour is a composite of the behaviours from the individual interaction diagrams.

Having found some of the behaviours of objects in this way, we can then produce state-transition diagrams for the objects to more tightly define their behaviour. Our initial analysis can progress beyond functionality towards some architecture to support that functionality.

There are many techniques for driving the analysis. An analysis should involve the appropriate parties in the development of the system. There is not enough scope in this course to discuss methods of interviewing. One useful technique, however, is to provide some discussion around the interface of the use-cases with the actors. This has the advantage that it begins to open up the way the system may be used, and it defines both attributes and operations the use-case needs to take into consideration.

For example, suppose we wished to discuss with a bank manager what happened when he/she checked an account balance. We might draw up a possible screen and describe it to the bank manager, asking for feedback on the suitability of the screen, its contents, and what the bank manager would expect to be able to do through the screen. It can be related to the use case diagram as below.

As the analysis progresses through to design, interface objects will have to be designed to drive the interfaces with the various actors, and effectively implement the use-cases as objects in their own rights. Use-cases can be thought of as objects whose behaviour is defined in terms of other objects in the system.

Abstract analysis and design can progress a long way before taking into consideration the hardware and software requirements for implementation. Arguably it is better to defer the decisions about implementation as far down the line as possible, though in practice it is often important to have some idea of the basic hardware and software configurations which will be available; in many practical projects these will be given as part of the existing infrastructure or policy of the client.


We have seen how use-case, through object-interaction diagrams, can be linked into object modelling. We have, in this way, provided a complete framework for requirements analysis, through to design of the basic structure to provide the required functionality. Of course, this framework can be elaborated in different ways, for example to incorporate special requirements of real-time systems, or to incorporate formal methods in areas where more rigour is required.

We have seen that analysis and design can be viewed as an iterative process, with a consistent notation throughout. It can proceed as a successive refinement and extension of requirements models, defined in terms of use-cases, dynamic models defined in terms of object-interaction diagrams and state diagrams, functional models defined in terms of data-flow diagrams, and an object-model defined in terms of objects, attributes, operations and relationships. The elegance of object-oriented methods lies in the coherence between the models, and the comprehensive sufficiency of objects to link and implement the various views of the system functionality and behaviour.

A further stage is required to map the design onto software and hardware architectures. A brief discussion of this is given in the rest of the course.

Business Process Reengineering

Business Process Reengineering (BPR) has become a buzz-phrase in the business world. What does it mean?

Businesses largely evolve over a period of time. There are well-established methods which look at optimising parts of a business, say maximising the flow through a warehouse. These methods use a variety of mathematical techniques to improve on a given situation. However, methods of revising whole businesses have been largely ad hoc, and often based on accountancy models.

BPR takes the view that an organisation can be viewed as a system. Then systems methods can be used to analyse what is going on, and to redesign the system to improve some aspects of it. Object-oriented methods are one way of looking at businesses.

The value of object methods is that they can clearly separate out architectural and functionality issues, and then link them back together. The problem businesses often face is that their architecture can no longer sustain the functionality they wish to provide. This is often expressed in terms of issues such as productivity.

Consider a car-producing company in the late 1970s and early 1980s which used traditional assembly line methods. According to the Henry Ford model, cars were built by hand in a series of steps, with one employee repetitively doing the same task over and over again. This had two major drawbacks. Firstly, the speed of producing cars was limited by the total time taken to do each task, and by problems if any one of the tasks was done badly or slowed down for some reason. Secondly, the tasks were tedious, which contributed to poor production quality, absenteeism and industrial unrest.

Car companies took two routes. One was to increase the level of automation, reducing the manual tasks. The other was to shift to small teams building cars. These are essentially different business architectures, though the functionality is the same (that is, producing cars).

Functionality does not determine architecture: two systems (or businesses) with similar functionality can be implemented with radically different architectures. However, different architectures can have very different performance characteristics, which in the business community are often discussed in terms of productivity, cost and quality.

Using object-oriented methods we can analyse a whole business in terms of its customers, suppliers and the functions that they expect. We can then consider the existing architecture, considering the components of the business as objects, such as employees, machines, warehouses, and so on. We can then redefine the objects, either adding in new objects, taking out old ones, or changing the behaviour and interaction of objects.

Consider a small company which makes a variety of metal screw fixings. The production line can only make one fixing at a time, and retooling to make another fixing takes half a day. The sales team has been successful in increasing orders, but that has introduced more types of fixing. The warehouse now keeps running out of stock of some of the fixings. Conflict is arising with production, which cannot keep pace with orders - either it produces too much of each line, which the warehouse cannot store, or it spends a great deal of time retooling. An initial analysis of the problem is as follows. Firstly, a use-case analysis of the various actors involved in the process is:

And if we open up the two use cases using object interaction diagrams we get:

After some consideration of this, it might be seen that if orders were to be considered as part of the production scheduling, then the number of stock-outs might be reduced, without over-stocking. A survey of customers indicates that they can often cope with a lead time of up to two weeks before delivery, giving a wider window for the production line to respond to demand. The two object-interaction diagrams are revised to:

By changing the behaviour of the individual objects, and how they interact, the production line now has a chance to optimise production. It can schedule production as late as it needs to and still meet orders. It can also skip restocking the warehouse if the orders are not high enough to justify it. The warehouse may have to take extra stock of certain lines at times to meet bulges in orders of that line.
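The decision to skip a restocking run can be sketched as a simple rule. The threshold and figures below are entirely invented; the point is only that order visibility lets production weigh the half-day retooling cost against actual demand.

```python
# Hypothetical sketch: with orders visible, the production line can decide
# whether restocking a fixing justifies half a day's retooling.

RETOOL_HOURS = 4  # roughly half a working day lost per product change

def worth_retooling(orders_units, margin_per_unit, hourly_line_cost):
    """Skip a production run if the margin on known orders cannot
    cover the cost of the retooling time."""
    return orders_units * margin_per_unit > RETOOL_HOURS * hourly_line_cost

print(worth_retooling(orders_units=500, margin_per_unit=0.40,
                      hourly_line_cost=30))  # True: run justified
print(worth_retooling(orders_units=100, margin_per_unit=0.40,
                      hourly_line_cost=30))  # False: skip this line
```

Under the old flow, production could not make this comparison at all, because orders were invisible to it; stock levels were its only signal.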

This does not change radically the basic architecture of the business. In a different type of business, say bulk chemicals production, it might be possible to do away with the warehouse altogether by producing goods just in time for delivery. One change we might consider making to the above architecture is to source some items from another supplier to make up orders. Then the order process might look like:

What we are doing here is exactly the same as we would be doing if we were analysing and designing a computer system. The only difference is that the implementation of the objects is in physical things, not software.

We could go on to do state diagrams for the sales department, the warehouse, the production line and so on. We could construct an object model for the business. These, in different circumstances, might be useful for the analysis and redesign.


Business Process Reengineering views a business as a system. Object oriented methods can be used to analyse the current architecture of the business, and to devise new architectures. The differences appear at the implementation stage, where the objects are realised as physical components, not software components.

