1825Course150-notesindex.xml15-150: Principles of Functional Programming (Summer 2024)2024Harrison Grodinhttps://www.cs.cmu.edu/~15150/These lecture notes were prepared using the Forester software by Harrison Grodin based partially on lectures by Stephen Brookes, Michael Erdmann, Dilsun Kaynar, Jacob Neumann, and Brandon Wu.1690150-0001150-0001.xmlCourse policiesCourse policies can be found on the course website.
1818150-concepts150-concepts.xmlConcepts and definitions1691Concept150-000M150-000M.xmlval declarationsA val declaration gives a variable name to the result of an expression evaluation.val x : t = e
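For instance (an illustrative sketch, not from the notes), a declaration that names the result of an arithmetic expression:

```sml
(* Evaluate 2 * 75 to the value 150, then bind that value to the name x. *)
val x : int = 2 * 75
```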
1692Concept150-0015150-0015.xmlfun declarationsfun f (x : t1) : t2 = e
The value assigned to f is fn (x : t1) => e.1693Concept150-001J150-001J.xmlcase expressionscase e of
pat1 => e1
| pat2 => e2
...
| patn => en
To evaluate a case expression:Evaluate e to a value.
Then, evaluate the first branch matching the value.1694Concept150-001P150-001P.xmlif expressionsSML has shorthand notation ("syntactic sugar") for casing on a boolean.case e of
true => e1
| false => e0
if e then e1 else e0
In other languages:Python: e1 if e else e0
C: e ? e1 : e0Not to be confused with if "statements"!The following further syntactic sugars are available:e1 andalso e2 is sugar for if e1 then e2 else false
e1 orelse e2 is sugar for if e1 then true else e21695Concept150-004E150-004E.xmlorder datatypeThe following datatype is built into the standard library of Standard ML:datatype order = LESS | EQUAL | GREATER
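For instance (an illustrative sketch, with an assumed function name), we can case on the result of Int.compare:

```sml
(* max : int * int -> int
 * Returns the larger argument, by casing on the order from Int.compare. *)
fun max (a : int, b : int) : int =
  case Int.compare (a, b) of
    LESS => b
  | _ => a
```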
As their names suggest, these constructors represent the result of comparing two elements in a trichotomous relation.1696Definition150-007K150-007K.xmlregexp datatypedatatype regexp
= Char of char
| Zero
| One
| Plus of regexp * regexp
| Times of regexp * regexp
| Star of regexp
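As an illustrative example (the value name is assumed, not from the notes), the regular expression (a + b)* is represented by:

```sml
(* (a + b)*: zero or more repetitions of the character a or the character b. *)
val abStar : regexp = Star (Plus (Char #"a", Char #"b"))
```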
1697Concept150-00AZ150-00AZ.xmlraise expressionThe expression raise Fail "TODO" has most general type 'a, filling in for any type we wish. More generally, raise e has most general type 'a, for any exception e.Unlike other expressions, it does not evaluate to any value.1698Concept150-00B0150-00B0.xmlexn typeAn exception, like Fail "TODO" or Div, has type exn. So, note that Fail : string -> exn. We can write raise e for any e : exn.The type exn can be thought of as a datatype with infinitely many constructors:datatype exn = Fail of string | Div | ...
1699Concept150-00B5150-00B5.xmlhandle expressionA handle expression has the following structure:e handle pat1 => e1
| pat2 => e2
| ...
| patn => en
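A small sketch of this structure (the function name is hypothetical): integer division that recovers from the Div exception.

```sml
(* Evaluate x div y; if that raises Div, produce 0 instead. *)
fun safeDiv (x : int, y : int) : int =
  (x div y) handle Div => 0
```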
Note the similarity to a case expression. For this to typecheck, we must have that e : t for some type t, and each ei : t, and each pati is a pattern matching the exn type. A handle expression will first evaluate e. If it evaluates to a value, that value is provided immediately; or, if it raises an exception, the corresponding handler is evaluated. If no patterns match, the exception is simply re-raised.1700Definition150-0080150-0080.xmlAccepted language of a machineWe say that a string \texttt {s} is accepted by a machine \texttt {m} when \texttt {run m s} \cong \texttt {true}. We write \mathcal {A}(\tt m) = \{\texttt {s : char list} \mid \texttt {run m s} \cong \texttt {true}\} for the set of all strings accepted by machine \texttt {m}.1701Definition150-00A8150-00A8.xmlAssociative functionLet g : t * t -> t. We say that g is associative when for all a, b, c: \texttt {g (g (a, b), c)} \cong \texttt {g (a, g (b, c))}.1702Concept150-0007150-0007.xmlBase types
Type | Values
int | 0, 1, 150, ~12, ...
real | 1.5, 3.14, 0.0001, ...
bool | false, true
char | #"a", #"b", #"7", ...
string | "", "hello world", ...
1703Concept150-0038150-0038.xmlBig-\mathcal {O}Sometimes, we wish to simplify exact bounds, ignoring linear factors. To do this, we use big-\mathcal {O} notation.Let X be a set and let f, g : X \to \mathbb {N}. We say that f \in \mathcal {O}(g) when there exist constants a, b : \mathbb {N} such that f \le ag + b, i.e. \forall x : X, f(x) \le ag(x) + b.We write \mathcal {O}(g) for the set of all functions f bounded by g, i.e. \mathcal {O}(g) = \{f : X \to \mathbb {N} \mid f \in \mathcal {O}(g) \}.Traditionally, X = \mathbb {N} and function inputs are assumed to be named n: for example, \mathcal {O}(n^2) is syntactic sugar for \mathcal {O}(n \mapsto n^2).1704Concept150-002N150-002N.xmlBinary tree with ints at the nodesWe define the following datatype declaration to represent binary trees:datatype tree
= Empty
| Node of tree * int * tree
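For example (a sketch with assumed names), a small tree value and a function computing its size by structural recursion:

```sml
(* A two-node tree: 2 at the root, 1 at its left child. *)
val t : tree = Node (Node (Empty, 1, Empty), 2, Empty)

(* size : tree -> int, counting the Node constructors. *)
fun size Empty = 0
  | size (Node (l, _, r)) = size l + 1 + size r
```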
Note that tree is used recursively.1705Concept150-006R150-006R.xmlBind abstractionWe previously saw bind, which takes a function f : 'a -> 'b list and a list 'a list and applies the function on each 'a to get a resulting flattened 'b list.This specification can be generalized beyond 'a list to arbitrary types 'a t:(* bind : ('a -> 'b t) -> 'a t -> 'b t
* REQUIRES: true
* ENSURES: ...
*)
The ENSURES should contain some conditions similar to those given for the map abstraction, but we elide them in this class.We can always implement the infix >>= using a bind implementation:fun (x : 'a t) >>= (f : 'a -> 'b t) : 'b t = bind f x
1706Concept150-0059150-0059.xmlCircularity errorWith recursion, sometimes there is no valid type for an expression because the output type contains itself as a component. For example:fun f 0 = 0
| f n = (f (n - 1), 0)
This function is not well typed.
653Proof#188unstable-188.xml150-0059
Assume f : int -> t, for some t.
Then, in the second clause, (f (n - 1), 0) : t * int.
However, since this is returned by f itself, this would mean that t = t * int, leading to a contradiction.
1707Concept150-003A150-003A.xmlCommon big-\mathcal {O} classesThe following classes are distinct, ordered by inclusion from top to bottom:
Class | Common Name
\mathcal {O}(1) | constant
\mathcal {O}(\log n) | logarithmic
\mathcal {O}(n) | linear
\mathcal {O}(n \log n) | quasilinear/log-linear
\mathcal {O}(n^2) | quadratic
\mathcal {O}(n^3) | cubic
\mathcal {O}(2^n) | exponential
1708Concept150-005I150-005I.xmlComparison functionIn the implementation of the insert auxiliary function, we used Int.compare : int * int -> order. To sort a list of 'as, we need a function of type 'a * 'a -> order.1709Concept150-001D150-001D.xmlConstant patternConstants, such as int, string, and bool values, are patterns. Make sure to match them all!1710Concept150-004V150-004V.xmlContradiction in type inferenceIf a variable is used in such a way that it has two incompatible types, a type error will be produced.1711Concept150-0077150-0077.xmlCorecursionSuch definitions do not go by recursion on an input; nothing ever needs to shrink. Instead, they go by corecursion, producing a finite amount of data but offering to produce more if desired.1712Concept150-0031150-0031.xmlCost analysisGoal: understand the cost of programs. Some choices:Time each execution. However, this is machine-dependent.
Count a given metric (recursive calls; additions; evaluation steps; etc.). This is abstract enough to be proved, and it corresponds to real time.First, we choose a cost metric and size metrics for inputs. Then, we:Write a recurrence following the structure of the code, computing cost from input sizes.
Solve for a closed form.
Give a simple asymptotic (big-\mathcal {O}) solution.1713Definition150-009S150-009S.xmlCost graphA cost graph is a visualization technique for parallel processes consisting of a directed acyclic graph with designated start and end nodes. They are defined inductively as follows, where we implicitly treat all edges as top-to-bottom:
Atomic units are variables representing cost of an abstract operation, drawn using a hexagon:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] {\texttt {f}};
\end {tikzpicture}
There is an empty cost graph 0:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node {$\bullet $};
\end {tikzpicture}
Two cost graphs G_1 and G_2 can be composed in sequence, written G_1 \triangleright G_2, representing data dependency:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (G1) at (0,1) {$G_1$};
\node (G2) at (0,0) {$G_2$};
\path (G1) edge (G2);
\end {tikzpicture}
Two cost graphs G_1 and G_2 can be composed in parallel, written G_1 \otimes G_2, representing data independence:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (G1) at (-1,0) {$G_1$};
\node (G2) at (1,0) {$G_2$};
\node (start) at (0,1) {$\bullet $};
\node (end) at (0,-1) {$\bullet $};
\path (start) edge (G1);
\path (start) edge (G2);
\path (G1) edge (end);
\path (G2) edge (end);
\end {tikzpicture}
1714Concept150-0068150-0068.xmlCurryingWe say that a function is curried, named for mathematician Haskell Curry, when it takes in multiple arguments one at a time, producing a function accepting the rest of the arguments.For example, the type t1 -> t2 -> t3 is curried, but the type t1 * t2 -> t3 is not (sometimes called "uncurried").1715Concept150-002H150-002H.xmlDatatype declarationA datatype declaration lets us define a new type that can be pattern-matched on.datatype newTypeName
= Constructor1 of dataToContain1
| Constructor2 of dataToContain2
| Constructor3 (* does not contain any data *)
| ...
| ConstructorN of dataToContainN
1716Definition150-00BB150-00BB.xmlEffectAn effect is something the evaluation of a program can do aside from returning a value.1717Definition150-00AH150-00AH.xmlEmpty sequenceUsing sequence tabulate, we can define a function to create an empty sequence:(* empty : unit -> 'a Seq.t
* REQUIRES: true
* ENSURES: empty () ~= <>
*)
fun empty () = Seq.tabulate (fn _ => raise Fail "impossible") 0
This function has constant work and span.1718Concept150-00BT150-00BT.xmlEquality of reference cellsReference cells can be compared for equality using op = : 'a ref * 'a ref -> bool. This compares the "addresses", not the contained data. Every reference cell created (using ref) is fresh and not equal to any previously-defined reference cells.1719Concept150-00B1150-00B1.xmlException declarationAn exception can be declared as follows:exception Constructor1
exception Constructor2 of dataToContain2
Notice the similarity to datatype declaration. However, here, we only give one constructor per declaration: since the exn type has infinitely many constructors, we only provide one more.Like a datatype declaration, an exception declaration can also go in a signature, requiring that the structure provide a matching exception declaration.1720Concept150-0005150-0005.xmlExpressionAn expression e is a program that can be evaluated.Every value is also an expression.
Until the end of the course, we make the blanket assumption that all expressions e evaluate to some value v.1721Concept150-007D150-007D.xmlExtensional equivalence at stream type: coinductionLet t be an arbitrary type, and let s0 and s0' be of type t stream. To show that \texttt {s0} \cong \texttt {s0'}:Choose a relation R(-, -) on pairs of t streams that relates pairs of streams that you expect to be equivalent.
Start State: Show that R(\texttt {s0}, \texttt {s0'}), guaranteeing that the streams you care about are related.
Preservation: Then, show that for all s and s', if R(\texttt {s}, \texttt {s'}), then:
the heads are the same, \texttt {head s} \cong \texttt {head s'} (the "co-base case", since no more stream data comes after the head); and
the tails stay related, R(\texttt {tail s}, \texttt {tail s'}) (the "coinductive conclusion", dual to the inductive hypothesis).This proof technique is called coinduction.Notice that this definition has some similarities with extensional equivalence at function types: both check that you see equivalent results when you use the expressions in equivalent ways.1722Definition150-000Q150-000Q.xmlExtensional equivalence at base typesTwo expressions e and e' (that evaluate to values) are extensionally equivalent, written e \cong e', when they evaluate to the same value.1723Definition150-000T150-000T.xmlExtensional equivalence at function typesSuppose f and f' are both of type t1 -> t2. Then, \texttt {f} \cong \texttt {f'} when for all values x and x' of type t1, \texttt {x} \cong \texttt {x'} implies \texttt {f x} \cong \texttt {f' x'}.When t1 is a base type, this is equivalent to: for all values x : t1, \texttt {f x} \cong \texttt {f' x}.1724Definition150-000S150-000S.xmlExtensional equivalence at product typesIt is the case that (e_1, e_2) \cong (e_1', e_2') when e_1 \cong e_1' and e_2 \cong e_2'.1725Concept150-008A150-008A.xmlExtensional equivalence of lazy state machinesLet m0 and m0' be of type machine. To show that \texttt {m0} \cong \texttt {m0'}:Choose a relation R(-, -) on pairs of machines that relates pairs of machines that you expect to be equivalent.
Start State: Show that R(\texttt {m0}, \texttt {m0'}), guaranteeing that the machines you care about are related.
Preservation: Then, show that for all m and m', if R(\texttt {m}, \texttt {m'}), then:
the statuses are the same, \texttt {status m} \cong \texttt {status m'} (the "co-base case", since no more characters are read after the status is checked); and
for all c : char, feeding the machines c keeps them related, R(\texttt {feed m c}, \texttt {feed m' c}) (the "coinductive conclusion", dual to the inductive hypothesis).This proof technique is called coinduction.This definition is analogous to extensional equivalence of streams.1726Definition150-00B3150-00B3.xmlExtensional equivalence with effectsWhen considering exceptions, we say that e_1 \cong e_2 when both:e_1 and e_2 perform indistinguishable effects; for example, they raise the same exceptions, loop infinitely, or print the same string.
If e_1 \hookrightarrow v_1 and e_2 \hookrightarrow v_2, then v_1 \cong v_2 as pure expressions (i.e., as described before).1727Concept150-006H150-006H.xmlFold abstractionWe previously saw foldr. Crucially, it sent [x1, x2, ..., xn], i.e., op:: (x1, op:: (x2, ..., op:: (xn, nil))) to f (x1, f (x2, ..., f (xn, init))) by replacing op:: with f and nil with init.If we rewrite the list datatype as follows:datatype 'a list = Cons of 'a * 'a list | Nil
We might as well write foldr as:(* foldr : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b *)
fun foldr (cons : 'a * 'b -> 'b) (init : 'b) (l : 'a list) : 'b =
case l of
Cons (x, xs) => cons (x, foldr cons init xs)
| Nil => init
The type of each argument matches the type of the constructor, swapping 'a list for 'b. Here, cons is just a function (not a constructor!) to replace every Cons with, and init is just a value to replace every Nil with. (We call it init rather than nil: nil is a constructor of the built-in list type, so it cannot be rebound as an ordinary variable in a pattern.)The general recipe is as follows:For each constructor, replace the name of the type with 'b, including recursive uses.
Take in each of these functions/values meant to replace the constructor as arguments.
In the implementation, replace each constructor with its function, performing recursive calls on substructures if there are any.For example:We have Cons : 'a * 'a list -> 'a list and Nil : 'a list, so we get cons : 'a * 'b -> 'b and init : 'b.
We take in cons and init as arguments.
The implementation is as above.This perspective justifies the universality of list foldr.1728Concept150-009A150-009A.xmlFreshness of typesIf a functor is called multiple times, the opaquely-ascribed types created at each call will be fresh. For example:structure D1 = TreeDict (StringOrdered)
structure D2 = TreeDict (StringOrdered)
val d : 'a D1.dict = D2.empty (* type error: the abstract types D1.dict and D2.dict are distinct *)
1729Concept150-000Z150-000Z.xmlFunction applicationFunction application is written using a space. If e : t1 -> t2 and e1 : t1, then e e1 : t2.When evaluating e e1, SML does the following:e is evaluated to fn (x : t1) => e'.
e1 is evaluated to v1 (of type t1).
e' is evaluated, where v1 is now bound to x.1730Concept150-005Z150-005Z.xmlFunction compositionTo compose two functions f : 'a -> 'b and g : 'b -> 'c, we can define (g o f) : 'a -> 'c:fun (op o) (g : 'b -> 'c, f : 'a -> 'b) : 'a -> 'c = fn (x : 'a) => g (f x)
We can equivalently define composition in the following ways:fun g o f = fn x => g (f x)
fun (g o f) x = g (f x)
1731Concept150-0013150-0013.xmlFunction specifications(* f : t1 -> t2
* REQUIRES: ...some assumptions about x...
* ENSURES: ...some guarantees about (f x)...
*)
fun f (x : t1) : t2 = e
1732Concept150-000L150-000L.xmlFunction typesIn math, we talk about functions f : X \to Y between sets X and Y. In SML, we do the same, but where X and Y are types.If t1 and t2 are types, then t1 -> t2 is the type of functions that take a value of type t1 as input and produce a value of type t2 as an output.
Type | Values
t1 -> t2 | fn (x : t1) => e
If assuming that x : t1 makes e : t2, then (fn (x : t1) => e) : t1 -> t2.1733Principle150-009P150-009P.xmlFunctional parallelismParallelism and functional programming go hand-in-hand.At a low level, parallelism involves scheduling work to processors;
but at a high level, parallelism involves indicating which expressions can be evaluated in parallel, without baking in a schedule.Functional programming helps:Since there are no effects (like memory updates) available, evaluation order doesn't matter, and race conditions are impossible to even describe in code.
Higher-order functions and abstract types allow complex parallelism techniques to be implemented under the hood but retain a simple interface.
Work and span analysis lets us predict the parallel speedup without fixing the number of processors in advance.1734Concept150-001L150-001L.xmlFunctions are valuesFunctions are values: they do not evaluate further.1735Concept150-0098150-0098.xmlFunctorA functor is a function that takes in a structure and produces another structure. The analogy is:
Expression Level | Module Level
type | signature
expression | structure
function | functor
(Unfortunately, ideas such as "functors are values", "higher-order functors", and "functor signatures" are not present in Standard ML itself.)1736Concept150-009C150-009C.xmlFunctor argument syntactic sugarWhen structures take multiple arguments, it is cumbersome to write Arg. before every sub-component. So, Standard ML provides syntactic sugar where the Arg : sig and end can be left off of inputs:functor PairOrdered
( structure X : ORDERED
structure Y : ORDERED
) : ORDERED =
struct
type t = X.t * Y.t
fun compare ((x1, y1), (x2, y2)) =
case X.compare (x1, x2) of
EQUAL => Y.compare (y1, y2)
| ord => ord
end
This is functionally the same, but it is typically more ergonomic and leads to more readable code.Analogous syntactic sugar is available when a functor is applied, allowing the struct and end of an argument to be left off:structure ChessOrdered =
PairOrdered
( structure X = CharOrdered
structure Y = IntOrdered
)
1737Definition150-005P150-005P.xmlHigher-order functionA higher-order function is a function that takes a function as input or produces a function as output.1738Definition150-00A7150-00A7.xmlIdentity elementLet z : t and g : t * t -> t. We say that z is an identity element for g when for all a: \texttt {g (a, z)} \cong \texttt {a} \cong \texttt {g (z, a)}.1739Concept150-006Q150-006Q.xmlInfix >>= notation for bindSimilar to the pipe function, we can reverse the argument order of list bind and view it as an infix function:infix 4 >>=
(* op >>= : 'a list * ('a -> 'b list) -> 'b list *)
fun l >>= f = bind f l
1740Definition150-007Z150-007Z.xmlLazy state machineWe define state machines (sometimes known as automata) as a lazy datatype like streams, but instead of having a single tail via unit ->, we have one tail per character with char ->.datatype machine = Machine of bool * (char -> machine)
We always expect a current value of type bool, representing whether or not the machine is in an accepting state (i.e., would accept the empty string). We could suspend the bool, but we choose not to for convenience.Similar to head and tail for streams, we define the following helpers:(* status : machine -> bool *)
fun status (Machine (b, _)) = b
(* feed : machine -> char -> machine *)
fun feed (Machine (_, f)) c = f c
1741Concept150-005S150-005S.xmlLeft-associativity of function applicationFunction application is left-associative. In other words, when f : t1 -> t2 -> t3, e1 : t1, and e2 : t2, the application f e1 e2 is the same as (f e1) e2, applying function f to input e1, and then applying that function to e2.1742Concept150-00A4150-00A4.xmlLimited sequence signature: free monoidWe can also view sequences inductively, where every sequence arises as the combination of some singletons:signature SEQUENCE =
sig
(* ...as before... *)
val singleton : 'a -> 'a seq
val empty : unit -> 'a seq
val append : 'a seq * 'a seq -> 'a seq
val mapreduce : ('a -> 'b) -> 'b -> ('b * 'b -> 'b) -> 'a seq -> 'b
(* ...more to come... *)
end
The functions singleton, empty, and append can be implemented using sequence tabulate. (Alternatively, they can be viewed as the primitive way to construct sequences, where sequence tabulate is implemented in terms of them; however, when working with an array-based implementation of sequences, this cost bound will be worse.) The mapreduce function is the fold abstraction for sequences built this way.1743Concept150-009X150-009X.xmlLimited sequence signature: indexed collectionThe sequence signature includes the following specifications:signature SEQUENCE =
sig
type 'a t (* abstract *)
type 'a seq = 'a t (* concrete *)
val tabulate : (int -> 'a) -> int -> 'a seq
val length : 'a seq -> int
val nth : 'a seq -> int -> 'a
(* ...more to come... *)
end
The abstract type 'a t represents a sequence of 'as, where 'a seq is an alias for signature readability.The implementation of SEQUENCE is called Seq:structure Seq :> SEQUENCE = (* ... *)
The full signature and documentation is available on the course website.1744Concept150-0063150-0063.xmlList foldrConsider the following functions:(* sum : int list -> int
* REQUIRES: true
* ENSURES: sum [x1, ..., xn] = x1 + (x2 + (... + (xn + 0)))
*)
fun sum nil = 0
| sum (x :: xs) = x + sum xs
(* concat : 'a list list -> 'a list
* REQUIRES: true
* ENSURES: concat [x1, ..., xn] = x1 @ (x2 @ (... @ (xn @ nil)))
*)
fun concat nil = nil
| concat (x :: xs) = x @ concat xs
(* commas : string list -> string
* REQUIRES: true
* ENSURES: commas [x1, ..., xn] = (x1 ^ ", ") ^ ((x2 ^ ", ") ^ (... ^ ((xn ^ ", ") ^ ".")))
*)
fun commas nil = "."
| commas (x :: xs) = (x ^ ", ") ^ commas xs
(* rebuild : 'a list -> 'a list *)
fun rebuild nil = nil
| rebuild (x :: xs) = x :: rebuild xs
(* isort : int list -> int list *)
fun isort nil = nil
| isort (x :: xs) = insert (x, isort xs)
All of these functions share a common structure, combining x into the recursive call on xs. For a base case init : t2 and a recursive case f : t1 * t2 -> t2, we have:(* combine : t1 list -> t2
* REQUIRES: true
* ENSURES: combine [x1, ..., xn] = f (x1, f (x2, ... f (xn, init)))
*)
fun combine nil = init
| combine (x :: xs) = f (x, combine xs)
So, we can define a higher-order function, foldr, that takes in such an initial value init and a combining function f and produces the corresponding combine function:(* foldr : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b
* REQUIRES: true
* ENSURES: foldr f init [x1, ..., xn] = f (x1, f (x2, ... f (xn, init)))
*)
fun foldr f init nil = init
| foldr f init (x :: xs) = f (x, foldr f init xs)
Then, we can define the other functions very simply:val sum = foldr (op +) 0
val concat = foldr (op @) nil
val commas = foldr (fn (x, y) => x ^ ", " ^ y) "."
val rebuild = foldr (op ::) nil
val isort = foldr insert nil
1745Concept150-0065150-0065.xmlList foldlWe can also traverse a list in the other direction:(* foldl : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b
* REQUIRES: true
* ENSURES: foldl f acc [x1, ..., xn] = f (xn, ... f (x2, f (x1, acc)))
*)
fun foldl f acc nil = acc
| foldl f acc (x :: xs) = foldl f (f (x, acc)) xs
Here, we traverse the list in the other direction. Rather than the 'b input serving as a base case, it serves as an accumulator.Equivalently, we can implement foldl using foldr and list reverse:fun foldl f acc = foldr f acc o rev
This makes it clear that if we choose the first implementation, we could implement rev using foldl:val rev = foldl (op ::) nil
1746Concept150-006L150-006L.xmlList bindThe function bind takes in a function f : 'a -> 'b list that produces as many 'bs as it wishes; we accumulate all of them in a list.(* bind : ('a -> 'b list) -> 'a list -> 'b list *)
fun bind f nil = nil
| bind f (x :: xs) = f x @ bind f xs
It generalizes list map, whose function input must always produce exactly one 'b.1747Concept150-0062150-0062.xmlList filterConsider the following functions:(* keepEvens : int list -> int list
* REQUIRES: true
* ENSURES: keepEvens l ==> l', where l' contains the even elements of l in the same order
*)
fun keepEvens nil = nil
| keepEvens (x :: xs) =
if isEven x
then x :: keepEvens xs
else keepEvens xs
(* keepMammals : animal list -> animal list
* REQUIRES: true
* ENSURES: keepMammals l ==> l', where l' contains the mammals of l in the same order
*)
fun keepMammals nil = nil
| keepMammals (x :: xs) =
if isMammal x
then x :: keepMammals xs
else keepMammals xs
Both share a common structure, only keeping the elements of the input list satisfying some condition. For a predicate p : t -> bool, we have:(* keepP : t list -> t list
* REQUIRES: true
* ENSURES: keepP l ==> l', where l' contains the elements of l satisfying p in the same order
*)
fun keepP nil = nil
| keepP (x :: xs) =
if p x
then x :: keepP xs
else keepP xs
So, we can define a higher-order function, filter, that takes in such a predicate p and produces the corresponding keepP function:(* filter : ('a -> bool) -> 'a list -> 'a list
* REQUIRES: true
* ENSURES: filter p l ==> l', where l' contains the elements of l satisfying p in the same order
*)
fun filter p nil = nil
| filter p (x :: xs) =
if p x
then x :: filter p xs
else filter p xs
Then, we can define the other functions very simply:val keepEvens = filter isEven
val keepMammals = filter isMammal
1748Concept150-005X150-005X.xmlList mapConsider the following functions:(* incAll : int list -> int list
* REQUIRES: true
* ENSURES: incAll [x1, ..., xn] = [x1 + 1, ..., xn + 1]
*)
fun incAll nil = nil
| incAll (x :: xs) = (x + 1) :: incAll xs
(* stringAll : int list -> string list
* REQUIRES: true
* ENSURES: stringAll [x1, ..., xn] = [Int.toString x1, ..., Int.toString xn]
*)
fun stringAll nil = nil
| stringAll (x :: xs) = Int.toString x :: stringAll xs
(* flipAll : bool list -> bool list
* REQUIRES: true
* ENSURES: flipAll [x1, ..., xn] = [not x1, ..., not xn]
*)
fun flipAll nil = nil
| flipAll (x :: xs) = not x :: flipAll xs
All share a common structure, applying a function to each element of the input list. For a function f : t1 -> t2, we have:(* fAll : t1 list -> t2 list
* REQUIRES: true
* ENSURES: fAll [x1, ..., xn] = [f x1, ..., f xn]
*)
fun fAll nil = nil
| fAll (x :: xs) = f x :: fAll xs
So, we can define a higher-order function, map, that takes in such a function f and produces the corresponding fAll function:(* map : ('a -> 'b) -> 'a list -> 'b list
* REQUIRES: true
* ENSURES: map f [x1, ..., xn] = [f x1, ..., f xn]
*)
fun map f nil = nil
| map f (x :: xs) = f x :: map f xs
Then, we can define the other functions very simply:val incAll = map (fn x => x + 1)
val stringAll = map Int.toString
val flipAll = map not
1749Concept150-0022150-0022.xmlListsFor all types t, the type t list represents ordered lists of values of type t.The values of type t list are:nil, the empty list
v1 :: v2 (pronounced "cons"), where v1 : t is an element and v2 : t list is the remainder of the listSyntactic sugar [v1, v2, ..., vn] is equivalent to v1 :: v2 :: ... :: vn, i.e. v1 :: (v2 :: (... :: (vn :: nil))).There are corresponding expressions that evaluate left-to-right.1750Concept150-006D150-006D.xmlMap abstractionWe previously saw map, which takes a function f : 'a -> 'b and a list 'a list and applies the function on each 'a to get a resulting 'b list.This specification can be generalized beyond 'a list to arbitrary types 'a t:(* map : ('a -> 'b) -> 'a t -> 'b t
* REQUIRES: true
* ENSURES:
* - map id = id, ie map id s = s
* - map (f o g) = map f o map g, ie map f (map g s) = map (f o g) s
*)
In other words, the ENSURES guarantees that map is structure-preserving.1751Concept150-007A150-007A.xmlMaximal lazinessWe say that a function on streams is maximally lazy when it exposes as few elements of input streams as possible at any given point in evaluation.1752Concept150-00AV150-00AV.xmlMiddle-tree viewWe can implement a view of sequences as trees with data at the nodes as follows:signature SEQUENCE =
sig
(* ...as before... *)
datatype 'a mview = Bud | Branch of 'a seq * 'a * 'a seq
val join : 'a seq * 'a * 'a seq -> 'a seq
val showm : 'a seq -> 'a mview
val hidem : 'a mview -> 'a seq
end
These functions make sequences look like trees, hiding away some indexing:fun join (S1, x, S2) =
Seq.append (S1, Seq.append (Seq.singleton x, S2))
fun showm (S : 'a Seq.t) : 'a Seq.mview =
if Seq.null S then Seq.Bud else
let
val n = Seq.length S div 2
in
Seq.Branch (Seq.take S n, Seq.nth S n, Seq.drop S (n + 1))
end
fun hidem Seq.Bud = Seq.empty ()
| hidem (Seq.Branch (s1, x, s2)) = join (s1, x, s2)
Note: while other views (lview and tview) are available in the given sequence signature, this mview is not included by default.1753Definition150-00A6150-00A6.xmlMonoidA monoid consists of:a type t,
some z : t,
and some g : t * t -> t such that
z is an identity element for g, and
g is an associative function.1754Concept150-0057150-0057.xmlMost general typeThe most general type of an expression e is the type t such that all other types t' that could be assigned to e can be achieved by plugging in for type variables in t.We say that these other types t' are instances of type t.When we say that "e has type t", we implicitly mean that e has most general type t.1755Concept150-002E150-002E.xmlOption typesFor all types t, the type t option represents at most one value of type t.The values of type t option are:NONE, with no other data
SOME v, where v : t is the single element containedThere are analogous expressions and patterns.1756Concept150-003M150-003M.xmlParallel evaluation of tuplesIn Standard ML, we can evaluate components of a tuple in parallel.1757Principle150-000X150-000X.xmlParallelizing functional programsThe "how" of imperative programming does not parallelize easily, since instructions can accidentally interact with each other. The "what" of functional programming does.1758Concept150-005C150-005C.xmlParameterized type and datatype declarationsA datatype declaration can include type variable parameters:datatype ('a, 'b, 'c, ...) t = ...
In the common case that only one type variable parameter is included, the parentheses and commas are excluded:datatype 'a t = ...
Similarly, type alias declaration can include type variable parameters, too:type ('a, 'b, 'c, ...) t = ...
type 'a t = ...
1759Concept150-0094150-0094.xmlPartial transparency using where typeUsing the signature former where type, we can make parts of a signature transparent, even if parts remain opaque. We write MY_SIGATURE where type t = someKnownType to make type t transparently be someKnownType, leaving all other types abstract.This feature is commonly used alongside type classes to reveal the definition of some type in a signature.1760Concept150-001E150-001E.xmlPattern inputs in fun declarationsfun f <pattern> : t2 = e
1761Concept150-00BK150-00BK.xmlPattern matching and purityIn the presence of effects, a function defined by pattern matching only says what happens given a pure argument, since in a function application, arguments are evaluated first.1762Concept150-006P150-006P.xmlPipe functionThe following function, pronounced "pipe", is useful for building data pipelines:infix 4 |>
(* op |> : 'a * ('a -> 'b) -> 'b *)
fun x |> f = f x
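For example, a small pipeline using basis-library list functions (a hypothetical usage):

```sml
(* Square each element, then sum the results. *)
val n = [1, 2, 3]
        |> List.map (fn x => x * x)   (* [1, 4, 9] *)
        |> foldl op+ 0                (* 14 *)
```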
1763Concept150-005A150-005A.xmlPolymorphic quantification in proofsWhen proving a fact about polymorphic functions, we must be careful with quantification.❌ If we say "for all l : 'a list, we have \texttt {rev (rev l)} \cong \texttt {l}", this means "for all l such that (for all types t, l : t list), we have \texttt {rev (rev l)} \cong \texttt {l}". However, the only list l satisfying "for all types t, l : t list" is nil.
✅ If we say "for all types t, for all l : t list, we have \texttt {rev (rev l)} \cong \texttt {l}", this generalizes the proof that "for all l : int list, we have \texttt {rev (rev l)} \cong \texttt {l}", replacing int with an arbitrary type t.1764Principle150-000V150-000V.xmlPrinciples of functional programmingSimplicity: pure, functional code is easy to reason about, test, and parallelize.
Compositionality: build bigger programs out of smaller ones, taking advantage of patterns.
Abstraction: use types/specification to guide program development.1765Concept150-000G150-000G.xmlProduct typesThe product type t1 * t2 represents pairs whose first component is a value of type t1 and whose second component is a value of type t2.
Type
Values
t1 * t2
(v1, v2)
If e_1 : t_1 and e_2 : t_2, then (e_1, e_2) : t_1 \texttt {*} t_2.1766Principle150-000W150-000W.xmlProgramming as a linguistic processImperative programming is telling a computer how to compute a result. \begin {aligned} x &\leftarrow 2; \\ y &\leftarrow x + x \end {aligned} Functional programming is explaining what you want to compute. 2 + 2Functional programming is applicable in all "high-level" programming languages.1767Principle150-001V150-001V.xmlProof structure mirrors program structureThe structure of a proof should mirror the structure of the program.If the program uses recursion on a natural number n, the proof should use induction on n.
If the program uses recursion with cases 0, 1, and n, the proof should use induction with base cases for 0 and 1 and an inductive case for n.
If the program cases on b : bool, the proof should case in the same way.1768Definition150-00B4150-00B4.xmlPureSay that an expression e is pure when there exists some value v such that e \hookrightarrow v without performing any observable effects.1769Definition150-009H150-009H.xmlRed-black invariantsA full, balanced tree has the same number of nodes on every path from the root to each Empty. However, such trees only can have 2^d - 1 nodes, where d is the height (depth) of the tree. In order to maintain a similar invariant, we color some nodes black and some nodes red and only count the black nodes. The red nodes are just to fix "off-by-one" errors, where we want to add more data to a tree but don't want to increase the black height. This leads us to the following pair of invariants.The red-black tree invariants require that:
Every path from the root to each Empty have the same number of black nodes, called the black height. (We treat Empty as black with black height zero.)
There are no two red nodes adjacent to each other (referred to as red-red violations), i.e. every red parent node has two black child nodes.
The first invariant guarantees that the trees are balanced ignoring red nodes, and the second invariant ensures that there aren't "too many" red nodes in a given tree.1770Definition150-00BP150-00BP.xmlReference primitivesThe standard library includes the following signature:signature REF =
sig
type 'a ref
val ref : 'a -> 'a ref
val ! : 'a ref -> 'a
val := : 'a ref * 'a -> unit (* infix *)
(* ...some helper functions... *)
end
The type t ref represents mutable reference cells that store a value of type t.
The function ref allocates a new reference cell, where the starting value of the cell is the input.
The function ! accesses the current value of the reference cell given.
The infix function op := takes a reference cell (of type t ref) and a compatible value (of type t) and replaces the data in the reference cell with the given value.All of these definitions are available at the top level. The use of references is considered an effect.1771Definition150-0088150-0088.xmlRegular expression matching using machinesWe can compile every regular expression to a lazy state machine, and then we can use run to figure out if a given string is accepted.(* compile : regexp -> machine
* REQUIRES: true
* ENSURES: A(compile r) = L(r)
*)
fun compile (Char a) = char a
| compile Zero = zero ()
| compile One = one ()
| compile (Plus (r1, r2)) = plus (compile r1, compile r2)
| compile (Times (r1, r2)) = times (compile r1, compile r2)
| compile (Star r) = star (compile r)
(* accept : regexp -> string -> bool *)
fun accept r s = run (compile r) (String.explode s)
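For example, assuming the machine combinators (char, zero, one, plus, times, star) and run from these notes are in scope, we might test accept on a small regular expression (hypothetical usage):

```sml
(* (ab)* accepts "", "ab", "abab", ...; everything else is rejected. *)
val r : regexp = Star (Times (Char #"a", Char #"b"))

val true  = accept r "abab"
val false = accept r "aba"
```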
1772Definition150-007U150-007U.xmlRegular expressions
Regex \texttt {r}
Language \mathcal {L}(\tt r)
s \in \mathcal {L}(r) when...
\text {a}
\{\text {a}\}
s = \text {a}
\mathbf {0}
\varnothing
never
\mathbf {1}
\{\texttt {""}\}
s is empty
{r_1 + r_2}
\mathcal {L}(\tt r_1) \cup \mathcal {L}(\tt r_2)
s matches r_1 or r_2
r_1r_2
\{s_1s_2 \mid {s_1 \in \mathcal {L}(\tt r_1)} \text { and } {s_2 \in \mathcal {L}(\tt r_2)}\}
s = s_1s_2, where s_1 matches r_1 and s_2 matches r_2
r^\ast
\{s_1 \cdots s_n \mid {s_i \in \mathcal {L}(\tt r)} {\text { for all } i}, \text {where } n \ge 0\}
s is empty, or s = s_1s_2 where s_1 matches r and s_2 matches r^\ast
In these notes, we conflate strings and character lists, e.g. "ab" with [#"a", #"b"] and "" with [].1773Concept150-005R150-005R.xmlRight-associativity of arrowsFunction types are right-associative. In other words, the type t1 -> t2 -> t3 means t1 -> (t2 -> t3), taking an input of type t1 and producing a function of type t2 -> t3.1774Definition150-0082150-0082.xmlRunning a matching machineWe can run a machine m : machine on a string s : char list by recursively traversing s, feeding each character to m's transition function and reading the status at the end.(* run : machine -> char list -> bool *)
fun run m nil = status m
| run m (c :: cs) = run (feed m c) cs
1775Definition150-00BQ150-00BQ.xmlSemicolon expressionThe expression e1 ; e2 is syntactic sugar for the expression let val _ = e1 in e2 end. In other words, it evaluates e1 (running any effects but ignoring any returned value) and then evaluates e2 (keeping the effects and return value).1776Definition150-009Y150-009Y.xmlSequence tabulateThe function Seq.tabulate creates a new sequence of length n, calling a function on 0 through n - 1:(* Seq.tabulate : (int -> 'a) -> int -> 'a Seq.t
* REQUIRES: n >= 0
* ENSURES: Seq.tabulate f n ~= <f 0, f 1, ..., f (n - 1)>
*)
Its cost graph is depicted as follows:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (start) at (0, 1) {$\bullet $};
\node [hexagon] (f0) at (-2.5, 0) {\texttt {f}};
\node [hexagon] (f1) at (-1, 0) {\texttt {f}};
\node at (0, 0) {$\cdots $};
\node [hexagon] (f2) at (1, 0) {\texttt {f}};
\node [hexagon] (f3) at (2.5, 0) {\texttt {f}};
\node (end) at (0, -1) {$\bullet $};
\path (start) edge (f0);
\path (start) edge (f1);
\path (start) edge (f2);
\path (start) edge (f3);
\path (f0) edge (end);
\path (f1) edge (end);
\path (f2) edge (end);
\path (f3) edge (end);
\end {tikzpicture}
Its work and span depend on the cost of f; assuming f is constant-time, tabulate f n has work \mathcal {O}(n) and span \mathcal {O}(1).
1777Definition150-00A0150-00A0.xmlSequence lengthThe function Seq.length computes the length of a sequence:(* Seq.length : 'a Seq.t -> int
* REQUIRES: true
* ENSURES: Seq.length <x0, ..., x_{n-1}> ~= n
*)
Its cost graph is depicted as a single node, which we assume has constant-time cost:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] at (0, 0) {\small \texttt {length}};
\end {tikzpicture}
1778Definition150-00A1150-00A1.xmlSequence nthThe function Seq.nth retrieves an element of a sequence:(* Seq.nth : 'a Seq.t -> int -> 'a
* REQUIRES: 0 <= i < Seq.length S
* ENSURES: Seq.nth <x0, ..., x_{n-1}> i ~= x_i
*)
Its cost graph is depicted as a single node, which we assume has constant-time cost:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] at (0, 0) {\texttt {nth}};
\end {tikzpicture}
1779Definition150-00A2150-00A2.xmlSequence mapUsing sequence tabulate, sequence length, and sequence nth, we can define a map function:(* map : ('a -> 'b) -> 'a Seq.t -> 'b Seq.t
* REQUIRES: true
* ENSURES: map f <x0, ..., x_{n-1}> ~= <f x0, ..., f x_{n-1}>
*)
fun map f S = Seq.tabulate (fn i => f (Seq.nth S i)) (Seq.length S)
(* or equivalently: *)
fun map f S = Seq.tabulate (f o Seq.nth S) (Seq.length S)
Its cost graph is depicted as follows:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] (start) at (0, 2) {\small \texttt {length}};
\node [hexagon] (nth0) at (-2.5, 1) {\texttt {nth}};
\node [hexagon] (f0) at (-2.5, 0) {\texttt {f}};
\node [hexagon] (nth1) at (-1, 1) {\texttt {nth}};
\node [hexagon] (f1) at (-1, 0) {\texttt {f}};
\node at (0, 0) {$\cdots $};
\node [hexagon] (nth2) at (1, 1) {\texttt {nth}};
\node [hexagon] (f2) at (1, 0) {\texttt {f}};
\node [hexagon] (nth3) at (2.5, 1) {\texttt {nth}};
\node [hexagon] (f3) at (2.5, 0) {\texttt {f}};
\node (end) at (0, -1) {$\bullet $};
\path (start) edge (nth0);
\path (start) edge (nth1);
\path (start) edge (nth2);
\path (start) edge (nth3);
\path (nth0) edge (f0);
\path (nth1) edge (f1);
\path (nth2) edge (f2);
\path (nth3) edge (f3);
\path (f0) edge (end);
\path (f1) edge (end);
\path (f2) edge (end);
\path (f3) edge (end);
\end {tikzpicture}
The work and span of map depend on the cost of f; assuming f is constant-time, map f S has work \mathcal {O}(n) and span \mathcal {O}(1).
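For example, at a concrete type (hypothetical usage, assuming the Seq structure from these notes):

```sml
val S  : int Seq.t = Seq.tabulate (fn i => i) 3   (* <0, 1, 2> *)
val S' : int Seq.t = map (fn x => x * 2) S        (* <0, 2, 4> *)
```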
Although it can be easily implemented as above, this function is included in the SEQUENCE signature for convenience.1780Definition150-00AB150-00AB.xmlSequence reduceThe function Seq.reduce combines the data in a sequence using a monoid:(* Seq.reduce : ('a * 'a -> 'a) -> 'a -> 'a Seq.t -> 'a
* REQUIRES: g and z form a monoid
* ENSURES: Seq.reduce g z <x0, x1, ..., x_{n-1}> ~= g (x0, g (x1, ..., g (x_{n-1}, z)))
*)
Notice that the behavior of reduce exactly mirrors list foldr, and its type is an instance of the type of list foldr. However, thanks to the assumption that g and z form a monoid, reduce is more efficient than foldr in parallel.
Its cost graph is depicted as follows:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (start) at (0, 1) {$\bullet $};
\node [hexagon] (g0) at (-2.5, 0) {\texttt {g}};
\node [hexagon] (g1) at (-1, 0) {\texttt {g}};
\node at (0, 0) {$\cdots $};
\node [hexagon] (g2) at (1, 0) {\texttt {g}};
\node [hexagon] (g3) at (2.5, 0) {\texttt {g}};
\node [hexagon] (gg0) at (-1.75, -1) {\texttt {g}};
\node [hexagon] (gg1) at (1.75, -1) {\texttt {g}};
\node [hexagon] (ggg) at (0, -2) {\texttt {g}};
\path (start) edge (g0);
\path (start) edge (g1);
\path (start) edge (g2);
\path (start) edge (g3);
\path (g0) edge (gg0);
\path (g1) edge (gg0);
\path (g2) edge (gg1);
\path (g3) edge (gg1);
\path (gg0) edge (ggg);
\path (gg1) edge (ggg);
\end {tikzpicture}
Its work and span depend on the cost of g; assuming g is constant-time, reduce g z S has work \mathcal {O}(n) and span \mathcal {O}(\log n).
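For example, since op+ and 0 form a monoid on int, we can sum a sequence (hypothetical usage, assuming the Seq structure from these notes):

```sml
(* <1, 2, 3, 4> summed with the monoid (op+, 0). *)
val sum = Seq.reduce op+ 0 (Seq.tabulate (fn i => i + 1) 4)  (* 10 *)
```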
1781Concept150-00AE150-00AE.xmlSequence mapreduceThe pattern of sequence map followed by sequence reduce is very common. We define a hybrid function mapreduce accordingly:(* mapreduce : ('a -> 'b) -> 'b -> ('b * 'b -> 'b) -> 'a Seq.t -> 'b
* REQUIRES: g and z form a monoid
* ENSURES: Seq.mapreduce f z g ~= Seq.reduce g z o Seq.map f
*)
fun mapreduce f z g = Seq.reduce g z o Seq.map f
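For example, we can count the elements of a sequence satisfying a predicate (count is a hypothetical helper, not part of the SEQUENCE signature):

```sml
(* Map each element to 1 or 0, then combine with the monoid (op+, 0). *)
fun count (p : 'a -> bool) : 'a Seq.t -> int =
  mapreduce (fn x => if p x then 1 else 0) 0 op+
```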
In fact, this function is the fold for sequences defined using singleton, empty, and append:
\begin {aligned} \texttt {mapreduce f z g (Seq.singleton a)} &\cong \texttt {f a} \\ \texttt {mapreduce f z g (Seq.empty ())} &\cong \texttt {z} \\ \texttt {mapreduce f z g (Seq.append (s1, s2))} &\cong \texttt {g (mr f z g s1, mr f z g s2)} \end {aligned}
We abbreviate mapreduce as mr here for brevity.The monoid requirements on reduce (and mapreduce) are justified by the behavior of append and empty. For example:
\begin {aligned} &\texttt {g (z, mapreduce f z g s)} \\ &\cong \texttt {g (mapreduce f z g (Seq.empty ()), mapreduce f z g s)} \\ &\cong \texttt {mapreduce f z g (Seq.append (Seq.empty (), s))} \\ &\cong \texttt {mapreduce f z g s} \end {aligned}
Here, since \texttt {Seq.append (Seq.empty (), s)} \cong \texttt {s}, we must have that z is a left identity for g. Similar reasoning justifies that z must be a right identity and g must be associative.
1782Definition150-00AI150-00AI.xmlSequence appendUsing sequence tabulate, sequence length, and sequence nth, we can define a function to append two sequences:(* append : 'a Seq.t * 'a Seq.t -> 'a Seq.t
* REQUIRES: true
* ENSURES: append (<x0, ..., x_{m-1}>, <y0, ..., y_{n-1}>) ~= <x0, ..., x_{m-1}, y0, ..., y_{n-1}>
*)
fun append (S1, S2) =
Seq.tabulate
(fn i => if i < Seq.length S1 then Seq.nth S1 i else Seq.nth S2 (i - Seq.length S1))
(Seq.length S1 + Seq.length S2)
Based on the cost graphs for sequence tabulate, sequence length, and sequence nth, we find that this function has work \mathcal {O}(m + n) and span \mathcal {O}(1).1783Concept150-000N150-000N.xmlShadowingval tau : real = 6.28
val radius : real = 5.0
val area : real = tau * radius
val radius : real = 10.0
The value bound to area is 31.4 at the end (not, e.g., 62.8). The new definition radius does not affect the previous definition, which area still refers back to.1784Concept150-008H150-008H.xmlSignatureA signature is the type of a structure. We say that a structure ascribes to a signature.We can declare a signature using the signature keyword, and we can write a signature using sig ... end.signature MY_SIGNATURE =
sig
(* signature specification here *)
end
1785Concept150-001T150-001T.xmlSimple induction on natural numbersTo prove that a property holds on all natural numbers n \in \{0, 1, 2, 3, \cdots \}:Base Case: Prove that the property holds on 0.
Inductive Case: Prove that if the property holds on n, then the property holds on n + 1.Then:The property holds on 0.
The property holds on 1 = 0 + 1, since the property holds on 0.
The property holds on 2 = 1 + 1, since the property holds on 1.
The property holds on 3 = 2 + 1, since the property holds on 2.
...and so on.1786Definition150-00AG150-00AG.xmlSingleton sequenceUsing sequence tabulate, we can define a function to create a sequence with one element:(* singleton : 'a -> 'a Seq.t
* REQUIRES: true
* ENSURES: singleton a ~= <a>
*)
fun singleton a = Seq.tabulate (fn _ => a) 1
This function has constant work and span.1787Definition150-004I150-004I.xmlSorting algorithm specificationThe result of sorting l : int list should be:Sorted (nondecreasing/weakly ascending) according to Int.compare.
A permutation of l.For cost, we count the number of comparisons performed.1788Concept150-0067150-0067.xmlStagingCurried functions can perform some intermediate computation before receiving all of their arguments.1789Definition150-0073150-0073.xmlStreamUsing a suspension, we can define a type of streams as follows:datatype 'a stream = Stream of unit -> 'a * 'a stream
Here, Stream takes the role of ::, but it stores a suspension of the first element and the remainder of the stream.Note that in this formulation, every stream is infinite.The following helper function computes the first element of a stream and its tail:(* expose : 'a stream -> 'a * 'a stream *)
fun expose (Stream susp : 'a stream) : 'a * 'a stream = susp ()
We call the first element of a stream its head, and the remainder its tail.fun fst (x, y) = x
fun snd (x, y) = y
fun head (s : 'a stream) : 'a = fst (expose s)
fun tail (s : 'a stream) : 'a stream = snd (expose s)
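For example, we can build the stream of integers counting up from n (a hypothetical example, using the head and tail helpers above):

```sml
(* from n is the stream n, n+1, n+2, ... *)
fun from (n : int) : int stream = Stream (fn () => (n, from (n + 1)))

val nats = from 0
val 0 = head nats
val 1 = head (tail nats)
```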
1790Concept150-001X150-001X.xmlStrong induction on natural numbersTo prove that a property holds on all natural numbers n \in \{0, 1, 2, 3, \cdots \}:Base Case: Prove that the property holds on 0.
Inductive Case: Prove that if the property holds on all m \le n, then the property holds on n + 1.Then:The property holds on 0.
The property holds on 1 = 0 + 1, since the property holds on 0.
The property holds on 2 = 1 + 1, since the property holds on 0 and 1.
The property holds on 3 = 2 + 1, since the property holds on 0, 1, and 2.
...and so on.Contrast this technique with simple induction.1791Concept150-0027150-0027.xmlStructural induction on int listTo prove that a property holds on all list values l : int list:Base Case: Prove that the property holds on nil.
Inductive Case: Prove that for all x : int and xs : int list, if the property holds on xs, then the property holds on x :: xs.1792Concept150-002T150-002T.xmlStructural induction on treeTo prove that a property holds on all tree values t : tree:Base Case: Prove that the property holds on Empty.
Inductive Case: Prove that for all x : int and l, r : tree, if the property holds on both l and r (inductive hypotheses), then the property holds on Node (l, x, r).1793Concept150-008J150-008J.xmlStructureWe can declare a structure using the structure keyword, and we can write a structure using struct ... end.structure MyStructure =
struct
(* declarations here *)
end
1794Concept150-008V150-008V.xmlStructure equivalence via representation independenceTwo structures M1, M2 : S are equivalent when:For each abstract type t, we give a relation R_\texttt {t}(-, -) relating M1.t to M2.t.
All values declared are \cong , where R_\texttt {t} is taken as the notion of equivalence for type t.1795Definition150-0072150-0072.xmlSuspensionA value v : unit -> t is called a suspension (or thunk), since it contains a "suspended", not-yet-evaluated expression of type t.We suspend an expression e : t via fn () => e.To compute the result of the expression e, we evaluate v ().1797Definition150-007N150-007N.xmlThe match algorithm using combinatorsWe can implement a staged version of the match algorithm elegantly using some combinators.(* match : regexp -> char list -> char list validator *)
fun match (r : regexp) (s : char list) : char list validator =
case r of
Char a =>
( case s of
nil => FALSE
| c :: cs => if a = c then TEST cs else FALSE
)
| Zero => FALSE
| One => TEST s
| Plus (r1, r2) => match r1 s ORELSE match r2 s
| Times (r1, r2) => match r1 s >>= match r2
| Star r =>
TEST s
ORELSE match r s >>= (fn s' => if s' << s then match (Star r) s' else FALSE)
This is extensionally equivalent to the match algorithm from before, but it uses combinators to avoid threading p through the program manually, and it uses staging to recur over the regular expression up-front.1798Definition150-00BF150-00BF.xmlTotalA value f : t1 -> t2 is total when for all values x : t1, we have that f x is a pure expression.1799Concept150-008P150-008P.xmlTransparent and opaque ascriptionWhen we write MyStruct : MY_SIG, the structure transparently ascribes to MY_SIG: all types in the signature are visible from the outside.When we write MyStruct :> MY_SIG, the structure opaquely ascribes to MY_SIG: all type t specifications in the signature are hidden from the outside.1800Concept150-003H150-003H.xmlTree method technique
To guess a solution to a recurrence with multiple recursive calls, the simplest thing to do is to consider the tree of costs.
Determine the following quantities:
Symbol
Description
L
number of levels in the computation tree
n_i
nodes at level i, where 0 is the top level
w_i
non-recursive work at level i
e
number of leaves
b
cost per leaf/base case
Then, the cost should be e \cdot b + \sum _{i = 0}^{L - 1} n_i \cdot w_i.1801Concept150-003D150-003D.xmlTree size metrics and assumptionsWhen analyzing the cost of a tree algorithm, we typically use one of the following approaches:
Assume the tree is a left or right spine, where every node has at most one nonempty child, all on the same side.
Here, we may use either depth d or number of nodes n; both are equivalent.
Assume the tree is (full and) balanced, where every node has two children and all paths from leaves to the root are of the same length.
We may use depth d, equal to \log _2(n + 1).
Alternatively, we may use number of nodes n, equal to 2^d - 1.1802Concept150-0019150-0019.xmlTuple pattern matchingval (x, y) = e
1803Concept150-0004150-0004.xmlTypeA type is a prediction about the kind of value an expression will evaluate to. When an expression e has type t, we write e : t.An expression is well-typed if it has a type and ill-typed otherwise.Type-checking happens prior to evaluation: only well-typed programs are evaluated.1804Concept150-002K150-002K.xmlType alias declarationTo alias a type to a new name, we may use:type newName = someType
This is solely for readability, reducing redundancy.1805Definition150-0091150-0091.xmlType classA type class is a signature containing a type parameter (meant to be transparent) alongside some operations involving the type.signature MY_TYPE_CLASS =
sig
type t (* parameter *)
val f1 : (* ...involving t... *)
val f2 : (* ...involving t... *)
(* ... *)
end
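For instance, a hypothetical type class of ordered types, with a structure ascribing to it transparently so that t remains visible to clients:

```sml
signature ORDERED =
sig
  type t (* parameter *)
  val compare : t * t -> order
end

(* Transparent ascription: clients see that IntOrdered.t is int. *)
structure IntOrdered : ORDERED =
struct
  type t = int
  val compare = Int.compare
end
```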
The type should be transparent, since a client is meant to use the operations freely. Type classes do not hide type information; they simply classify types supporting some operations.1806Concept150-0053150-0053.xmlType inference algorithmTo infer the most general type of a function in SML:Give each variable, including the function being defined, an arbitrary type variable.
Add constraints based on the usage of each variable, and add constraints to make sure all clauses have the same type.
Solve the constraints.
Optionally, re-letter the type variables in the answer for convenience.For simple functions, this process often occurs implicitly in one's head.1807Concept150-004T150-004T.xmlType variableA type variable stands for an arbitrary type, denoted by an ' followed by a variable name. We pronounce type variables as Greek letters:
SML Syntax
Greek Letter
Pronunciation
'a
\alpha
alpha
'b
\beta
beta
'c
\gamma
gamma
'd
\delta
delta
'e
\epsilon
epsilon
1808Concept150-005G150-005G.xmlUnit typeThe type unit has a single value, () : unit, the empty tuple.1809Concept150-0033150-0033.xmlUnrolling techniqueTo guess a solution to a recurrence, the simplest thing to do is unfold the definition repeatedly and observe the behavior.1811Definition150-007O150-007O.xmlValidatorA t validator computes a bool, potentially asking "questions" to the input predicate p : t -> bool in the process.type 'a validator = ('a -> bool) -> bool
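For example, a hypothetical int validator that asks the predicate about 0 and 1:

```sml
(* Returns true iff p accepts 0 or accepts 1. *)
val v : int validator = fn p => p 0 orelse p 1

val true  = v (fn x => x = 1)
val false = v (fn x => x > 5)
```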
1812Concept150-0006150-0006.xmlValueA value v is a final answer that cannot be simplified further.1813Concept150-0093150-0093.xmlVarieties of types in signaturesEvery type in a signature can be annotated to be abstract, parameter, or concrete.If the type is unspecified via type t, it can be:
abstract, if it is meant to be hidden with opaque ascription; or
parameter, if it is meant to be known to clients with transparent ascription.
If the type is specified via type t = ..., it is concrete.1814Concept150-00C8150-00C8.xmlVisualizing cost with the print effectTo visualize the cost of a program, we can run print "$" every time our cost model says we used one abstract unit of cost.1815Concept150-001B150-001B.xmlWildcard patternThe wildcard pattern _ behaves like a variable but without binding any variables. It is useful when an input is not used.1816Concept150-003K150-003K.xmlWork and spanWork: the cost of evaluating an expression sequentially.
Span: the cost of evaluating an expression in parallel, assuming unlimited parallel processors.In reality, given finite parallel processors, the cost will be between the work and the span.1817Definition150-009V150-009V.xmlWork and span of a cost graphThe work of a cost graph is the sum of the costs of all hexagonal nodes in the graph.
The span of a cost graph is the sum of the costs of the hexagonal nodes along the highest-cost path from the start node to the end node.
1820150-lecture150-lecture.xmlLecturesHarrison Grodin288Lecture150-lect01150-lect01.xmlTypes, expressions, and evaluation2024514Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
In this lecture, we begin to discuss the foundations of functional programming in Standard ML.222150-0009150-0009.xmlMotivation219Principle150-000W150-000W.xmlProgramming as a linguistic processImperative programming is telling a computer how to compute a result. \begin {aligned} x &\leftarrow 2; \\ y &\leftarrow x + x \end {aligned} Functional programming is explaining what you want to compute. 2 + 2Functional programming is applicable in all "high-level" programming languages.220Principle150-000V150-000V.xmlPrinciples of functional programmingSimplicity: pure, functional code is easy to reason about, test, and parallelize.
Compositionality: build bigger programs out of smaller ones, taking advantage of patterns.
Abstraction: use types/specification to guide program development.221Principle150-000X150-000X.xmlParallelizing functional programsThe "how" of imperative programming does not parallelize easily, since instructions can accidentally interact with each other. The "what" of functional programming does.281150-0002150-0002.xmlValues, expressions, and typesHarrison Grodin223Concept150-0006150-0006.xmlValueA value v is a final answer that cannot be simplified further.224Example150-000C150-000C.xmlExamples of values150, "hello", and true are all values.225Concept150-0005150-0005.xmlExpressionAn expression e is a program that can be evaluated.Every value is also an expression.
Until the end of the course, we make the blanket assumption that all expressions e evaluate to some value v.226Example150-000D150-000D.xmlExamples of expressions149 + 1, 150, and "hello " ^ "world" are all expressions.240Notation150-000E150-000E.xmlEvaluation notationHarrison GrodinWe use the following notation for evaluation.
Notation
Meaning
e \Longrightarrow ^{n} e'
e evaluates to e' in n steps
e \Longrightarrow e'
e evaluates to e' in an unspecified number of steps
e \hookrightarrow v
e evaluates to the value v
Note that e \hookrightarrow v iff e \Longrightarrow v.242Example150-0008150-0008.xmlEvaluation of numeric expressions \begin {aligned} \texttt {(3 + 4) * 2} &\Longrightarrow ^{1} \texttt {7 * 2} \\ &\Longrightarrow ^{1} \texttt {14} \end {aligned} \begin {aligned} \texttt {(3 + 4) * (2 + 1)} &\Longrightarrow ^{3} \texttt {21} \end {aligned} \begin {aligned} \texttt {(3 + 4) * (2 + 1)} &\hookrightarrow \texttt {21} \end {aligned} 243Example150-000A150-000A.xmlEvaluation of string expressions \begin {aligned} \texttt {"the " \textasciicircum {} "walrus"} &\Longrightarrow \texttt {"the walrus"} \end {aligned} \begin {aligned} \texttt {("the " \textasciicircum {} "walrus") \textasciicircum {} " leaps"} &\Longrightarrow \texttt {"the walrus" \textasciicircum {} " leaps"} \\ &\Longrightarrow \texttt {"the walrus leaps"} \end {aligned} 244Example150-000B150-000B.xmlIll-typed expressions do not evaluateThe expression \texttt {"the walrus" + 1} does not have a type, so it cannot be evaluated.245Concept150-0004150-0004.xmlTypeA type is a prediction about the kind of value an expression will evaluate to. When an expression e has type t, we write e : t.An expression is well-typed if it has a type and ill-typed otherwise.Type-checking happens prior to evaluation: only well-typed programs are evaluated.265Concept150-0007150-0007.xmlBase types
Type
Values
int
0, 1, 150, ~12, ...
real
1.5, 3.14, 0.0001, ...
bool
false, true
char
#"a", #"b", #"7", ...
string
"", "hello world", ...
266Example150-000F150-000F.xmlPrimitive expressionsIf e_1 : \texttt {int} and e_2 : \texttt {int}, then e_1 \texttt {+} e_2 : \texttt {int}. Same for e_1 \texttt {-} e_2, e_1 \texttt {*} e_2, e_1 \texttt {div} e_2, e_1 \texttt {mod} e_2, ...
If e_1 : \texttt {string} and e_2 : \texttt {string}, then e_1 \texttt {\textasciicircum {}} e_2 : \texttt {string}.268Warning150-000I150-000I.xmlDivision by zeroThe expression 2 div 0 : int is well-typed, but it does not evaluate to a value; instead, it raises an exception. 2 div 0;
uncaught exception Div [divide by zero]
raised at: stdIn:1.4-1.7Here, 0 violates an assumption that the second argument to div is nonzero.(* div : int * int -> int
* REQUIRES: y is nonzero
* ENSURES: `x div y` computes the integer division of `x` by `y`
*)
We implicitly assume that all well-typed expressions evaluate to values; in other words, we manually verify that we never use div with the invalid denominator 0.269Definition150-000Q150-000Q.xmlExtensional equivalence at base typesTwo expressions e and e' (that evaluate to values) are extensionally equivalent, written e \cong e', when they evaluate to the same value.270Example150-000R150-000R.xmlExtensional equivalence of int expressions\texttt {21 + 21} \cong \texttt {42} \cong \texttt {6 * 7}, but \texttt {21 + 21} \ncong \texttt {7 * 7}.278Concept150-000G150-000G.xmlProduct typesThe product type t1 * t2 represents pairs whose first component is a value of type t1 and whose second component is a value of type t2.
Type
Values
t1 * t2
(v1, v2)
If e_1 : t_1 and e_2 : t_2, then (e_1, e_2) : t_1 \texttt {*} t_2.279Example150-000H150-000H.xmlExample pairs(3 + 4, true) : int * bool
(1.0, ~6.28) : real * real
(1, 50, false, "hi") : int * int * bool * string
(1, (50, false), "hi") : int * (int * bool) * stringNotice in the last example that parentheses matter!280Definition150-000S150-000S.xmlExtensional equivalence at product typesIt is the case that (e_1, e_2) \cong (e_1', e_2') when e_1 \cong e_1' and e_2 \cong e_2'.287150-0003150-0003.xmlDeclarations and variables284Concept150-000M150-000M.xmlval declarationsA val declaration gives a variable name to the result of an expression evaluation.val x : t = e
286Concept150-000N150-000N.xmlShadowingval tau : real = 6.28
val radius : real = 5.0
val area : real = tau * radius
val radius : real = 10.0
The value bound to area is 31.4 at the end (not, e.g., 62.8). The new definition radius does not affect the previous definition, which area still refers back to.354Lecture150-lect02150-lect02.xmlFunctions and patterns2024516Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
299150-0012150-0012.xmlPrivate declarations296Example150-000O150-000O.xmllet expressionsval a : int =
let
val b : int = 15
val c : int = b + 150
in
b * c
end + 1
(* ERROR: b not in scope *)
val d : int = a + b
298Example150-000P150-000P.xmllocal declarationslocal
val b : int = 15
val c : int = b + 150
in
val a : int = b * c + 1
end
(* ERROR: b not in scope *)
val d : int = a + b
325150-0010150-0010.xmlFunctions307Concept150-000L150-000L.xmlFunction typesIn math, we talk about functions f : X \to Y between sets X and Y. In SML, we do the same, but where X and Y are types.If t1 and t2 are types, then t1 -> t2 is the type of functions that take a value of type t1 as input and produce a value of type t2 as an output.
Type
Values
t1 -> t2
fn (x : t1) => e
If assuming that x : t1 makes e : t2, then (fn (x : t1) => e) : t1 -> t2.309Example150-000Y150-000Y.xmlDoubling functionval double : int -> int =
fn (x : int) => x + x
310Concept150-001L150-001L.xmlFunctions are valuesFunctions are values: they do not evaluate further.311Example150-001M150-001M.xmlFunction value exampleAll of the following are values of type int -> int:fn (x : int) => x
fn (x : int) => x + 2
fn (x : int) => 2 + 2
fn (x : int) => 4In other words, none of them evaluate to any of the others.312Concept150-000Z150-000Z.xmlFunction applicationFunction application is written using a space. If e : t1 -> t2 and e1 : t1, then e e1 : t2.When evaluating e e1, SML does the following:e is evaluated to fn (x : t1) => e'.
e1 is evaluated to v1 (of type t1).
e' is evaluated, where v1 is now bound to x.313Example150-0014150-0014.xmlApplying the double functionRecall the doubling function. \begin {aligned} \texttt {double (70 + 5)} &\Longrightarrow \texttt {(fn (x : int) => x + x) (70 + 5)} \\ &\Longrightarrow \texttt {(fn (x : int) => x + x) 75} \\ &\Longrightarrow \texttt {75 + 75} \\ &\Longrightarrow \texttt {150} \end {aligned} 314Definition150-000T150-000T.xmlExtensional equivalence at function typesSuppose f and f' are both of type t1 -> t2. Then, \texttt {f} \cong \texttt {f'} when for all values x and x' of type t1, \texttt {x} \cong \texttt {x'} implies \texttt {f x} \cong \texttt {f' x'}.When t1 is a base type, this is equivalent to: for all values x : t1, \texttt {f x} \cong \texttt {f' x}.316Example150-000U150-000U.xmlExtensional equivalence of doubling functionsRecall the double function, and consider the following alternate implementation:val double' : int -> int =
fn (x : int) => 2 * x
It is the case that \texttt {double} \cong \texttt {double'}, since for all x : int, we have \texttt {x + x} \cong \texttt {2 * x} by arithmetic reasoning.318Concept150-0015150-0015.xmlfun declarationsfun f (x : t1) : t2 = e
The value assigned to f is fn (x : t1) => e.320Concept150-0013150-0013.xmlFunction specifications(* f : t1 -> t2
* REQUIRES: ...some assumptions about x...
* ENSURES: ...some guarantees about (f x)...
*)
fun f (x : t1) : t2 = e
322Example150-000K150-000K.xmlDoubling function using fun(* double : int -> int
* REQUIRES: true
* ENSURES: `double x` evaluates to double `x`
*)
fun double (x : int) : int = x + x
val () = Test.int ("double 75 test", 150, double 75)
324Example150-0016150-0016.xmlStatic variable scope in functionsRecall the concept of shadowing.val pi : real = 3.14
fun area (r : real) : real = pi * r * r
val pi : real = 3.14159
In the function area, the variable pi is always 3.14, never 3.14159.353150-0011150-0011.xmlPatterns326Concept150-001B150-001B.xmlWildcard patternThe wildcard pattern _ behaves like a variable but without binding any variables. It is useful when an input is not used.329Example150-001C150-001C.xmlWildcard patternfun onefifty (_ : int) : int = 150
λ> fun onefifty (x : int) : int = 150;
stdIn:2.5-2.35 Warning: variable x is defined but not used
val onefifty = fn : int -> int
λ> fun onefifty (_ : int) : int = 150;
val onefifty = fn : int -> int
331Concept150-0019150-0019.xmlTuple pattern matchingval (x, y) = e
333Example150-001A150-001A.xmlTuple pattern matchingval name_and_age : string * int = ("Polly", 5)
val (name, age) : string * int = name_and_age
(* OR: *)
val (name : string, age : int) = name_and_age
(* OR: *)
val (name, age) = name_and_age
val age' : int =
let
val (_, age) = name_and_age
in
age + 1
end
val ((a : string, b : int), (c : string, d : int)) =
(name_and_age, name_and_age)
335Concept150-001E150-001E.xmlPattern inputs in fun declarationsfun f <pattern> : t2 = e
337Example150-001F150-001F.xmlFunction with a tuple inputfun f (x : int, y : int) : int = 2 * x + y * y
339Warning150-001N150-001N.xmlLinear variable usageVariables must occur exactly once in a pattern. The following is invalid:fun f (x : int, x : int) : int = x
340Concept150-001D150-001D.xmlConstant patternConstants, such as int, string, and bool values, are patterns. Make sure to match them all!342Example150-001H150-001H.xmlPattern matching on booleansfun not (false : bool) : bool = true
| not (true : bool) : bool = false
(* equivalent, but without type annotations *)
fun not false = true
| not true = false
344Example150-001G150-001G.xmlPattern matching on multiple integersfun usesSML (150 : int) : bool = true
| usesSML (210 : int) : bool = true
| usesSML (312 : int) : bool = true
| usesSML (_ : int) : bool = false
(* equivalent, but without type annotations *)
fun usesSML 150 = true
| usesSML 210 = true
| usesSML 312 = true
| usesSML _ = false
346Warning150-001I150-001I.xmlRedundant and missing branchesSML will give a warning/error if branches are missing/redundant.fun f (0 : int) : int = 150
(* WARNING: missing branches *)
fun f (x : int) : int = 150
| f (y : int) : int = 122
(* ERROR: redundant branches *)
fun f (x : int) : int = 150
| f (_ : int) : int = 122
(* ERROR: redundant branches *)
fun f (0 : int) : int = 150
| f (0 : int) : int = 122
(* ERROR: missing and redundant branches *)
348Concept150-001J150-001J.xmlcase expressionscase e of
pat1 => e1
| pat2 => e2
...
| patn => en
To evaluate a case expression:Evaluate e to a value.
Then, evaluate the first branch matching the value.350Concept150-001P150-001P.xmlif expressionsSML has shorthand notation ("syntactic sugar") for casing on a boolean.case e of
true => e1
| false => e0
if e then e1 else e0
In other languages:Python: e1 if e else e0
C: e ? e1 : e0Not to be confused with if "statements"!The following further syntactic sugars are available:e1 andalso e2 is sugar for if e1 then e2 else false
e1 orelse e2 is sugar for if e1 then true else e2352Example150-001K150-001K.xmlCasing on an expression(* mod : int * int -> int
* REQUIRES: y > 0
* ENSURES: 0 <= x mod y < y
*)
(* isEven : int -> bool
* REQUIRES: n >= 0
* ENSURES:
* isEven n ==> true if n is even, and
* isEven n ==> false if not
*)
fun isEven (n : int) : bool =
case n mod 2 of
0 => true
| 1 => false
| _ => raise Fail "impossible"
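Another common use of a case expression is branching on the built-in order datatype. The following sign function is our illustrative sketch (not from the lecture), using Int.compare : int * int -> order from the standard basis:

```sml
(* sign : int -> int
 * REQUIRES: true
 * ENSURES: sign n ==> ~1, 0, or 1, according to whether n is negative, zero, or positive
 *)
fun sign (n : int) : int =
  case Int.compare (n, 0) of
    LESS => ~1
  | EQUAL => 0
  | GREATER => 1
```

Since order has exactly the constructors LESS, EQUAL, and GREATER, this case expression has no missing or redundant branches.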
388Lecture150-lect03150-lect03.xmlRecursion and induction2024521Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
370150-0018150-0018.xmlRecursion
362Example150-0017150-0017.xmlFactorial functionThe factorial of n, written n!, is 1 \times 2 \times \cdots \times n.(* fact : int -> int
* REQUIRES: n >= 0
* ENSURES: (fact n) evaluates to n!
*)
fun fact (0 : int) : int = 1
| fact (n : int) : int = n * fact (n - 1)
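Mirroring the earlier evaluation trace for double, an application of fact unrolls as follows (our worked example):

```latex
\begin {aligned} \texttt {fact 3} &\Longrightarrow \texttt {3 * fact 2} \\ &\Longrightarrow \texttt {3 * (2 * fact 1)} \\ &\Longrightarrow \texttt {3 * (2 * (1 * fact 0))} \\ &\Longrightarrow \texttt {3 * (2 * (1 * 1))} \\ &\Longrightarrow \texttt {6} \end {aligned}
```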
365Example150-001O150-001O.xmlFibonacci function(* fib : int -> int
* REQUIRES: n >= 0
* ENSURES: fib n ==> the nth Fibonacci number
*)
fun fib 0 = 0
| fib 1 = 1
| fib n = fib (n - 1) + fib (n - 2)
local
(* helper : int -> int * int
* REQUIRES: n >= 0
* ENSURES: helper n ==> (the nth Fibonacci number, the (n+1)th Fibonacci number)
*)
fun helper 0 = (0, 1)
| helper n =
let
val (a, b) = helper (n - 1)
in
(b, a + b)
end
in
(* fib' : int -> int
* REQUIRES: n >= 0
* ENSURES: fib n ==> the nth Fibonacci number
*)
fun fib' n =
let
val (a, _) = helper n
in
a
end
end
367Example150-000J150-000J.xmlPassing a function as an argumentfun repeat (f : string -> string, x : string, 0 : int) : string = x
| repeat (f, x, n) = f (repeat (f, x, n - 1))
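For instance, instantiating f with a small string function (our hypothetical exclaim, not from the lecture):

```sml
val exclaim : string -> string = fn s => s ^ "!"
val hi3 : string = repeat (exclaim, "hi", 3)  (* evaluates to "hi!!!" *)
```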
369Example150-001S150-001S.xmlEven tester(* isEven : int -> bool
* REQUIRES: n >= 0
* ENSURES: isEven n ==> true if n is even,
isEven n ==> false otherwise
*)
fun isEven (0 : int) : bool = true
| isEven n = not (isEven (n - 1))
Here, we use not as a helper function.383150-001Q150-001Q.xmlPower function372Example150-001R150-001R.xmlSimple power implementation(* pow : int * int -> int
* REQUIRES: k >= 0
* ENSURES: pow (n, k) evaluates to n^k
*)
fun pow (_ : int, 0 : int) : int = 1
| pow (n, k) = n * pow (n, k - 1)
The application pow (n, k) evaluates k multiplications.373Concept150-001T150-001T.xmlSimple induction on natural numbersTo prove that a property holds on all natural numbers n \in \{0, 1, 2, 3, \cdots \}:Base Case: Prove that the property holds on 0.
Inductive Case: Prove that if the property holds on n, then the property holds on n + 1.Then:The property holds on 0.
The property holds on 1 = 0 + 1, since the property holds on 0.
The property holds on 2 = 1 + 1, since the property holds on 1.
The property holds on 3 = 2 + 1, since the property holds on 2.
...and so on.375Theorem150-001U150-001U.xmlCorrectness of powFor all values n : int and k : int such that k >= 0, we have \texttt {pow (n, k)} \Longrightarrow n^k.
374Proof#211unstable-211.xml150-001U
By induction on k.
Case 0: By the first clause of pow, we have \texttt {pow (n, 0)} \Longrightarrow \texttt {1}, and n^0 = 1.
Case k + 1:
IH: \texttt {pow (n, k)} \Longrightarrow n^k
WTS: \texttt {pow (n, k+1)} \Longrightarrow n^{k + 1}
\begin {aligned} &\texttt {pow (n, k + 1)} \\ &\Longrightarrow \texttt {n * pow (n, k)} &&\text {(\texttt {pow} clause 2)} \\ &\Longrightarrow \texttt {n * }n^{k} &&\text {(inductive hypothesis)} \\ &\Longrightarrow n^{k + 1} &&\text {(math)} \end {aligned}
376Principle150-001V150-001V.xmlProof structure mirrors program structureThe structure of a proof should mirror the structure of the program.If the program uses recursion on a natural number n, the proof should use induction on n.
If the program uses recursion with cases 0, 1, and n, the proof should use induction with base cases for 0 and 1 and an inductive case for n.
If the program cases on b : bool, the proof should case in the same way.379Example150-001W150-001W.xmlFast exponentiationfun fpow (_ : int, 0 : int) : int = 1
| fpow (n, k) =
case isEven k of
true =>
let
val halfAns = fpow (n, k div 2)
in
halfAns * halfAns
end
| false => n * fpow (n, k - 1)
The application fpow (n, k) evaluates between log2 k and 2 * log2 k multiplications, where we say log2 0 = 0.The following alternative also works for the true case: true => fpow (n * n, k div 2)
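Written out in full, the alternative for the true case mentioned above gives this variant (a sketch; the name fpow' is ours, chosen to avoid clashing with fpow):

```sml
fun fpow' (_ : int, 0 : int) : int = 1
  | fpow' (n, k) =
      case isEven k of
        true => fpow' (n * n, k div 2)
      | false => n * fpow' (n, k - 1)
```

This version squares the base instead of squaring the recursive result, but performs the same number of multiplications.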
380Concept150-001X150-001X.xmlStrong induction on natural numbersTo prove that a property holds on all natural numbers n \in \{0, 1, 2, 3, \cdots \}:Base Case: Prove that the property holds on 0.
Inductive Case: Prove that if the property holds on all m \le n, then the property holds on n + 1.Then:The property holds on 0.
The property holds on 1 = 0 + 1, since the property holds on 0.
The property holds on 2 = 1 + 1, since the property holds on 0 and 1.
The property holds on 3 = 2 + 1, since the property holds on 0, 1, and 2.
...and so on.Contrast this technique with simple induction.382Theorem150-001Y150-001Y.xmlCorrectness of fpowFor all values n : int and k : int such that k >= 0, we have \texttt {fpow (n, k)} \Longrightarrow n^k.
381Proof#210unstable-210.xml150-001Y
By strong induction on k.
Case 0: By the first clause of fpow, we have \texttt {fpow (n, 0)} \Longrightarrow \texttt {1}, and n^0 = 1.
Case k + 1:
IH: for all k' \le k, we have \texttt {fpow (n, k')} \Longrightarrow n^{k'}.
WTS: \texttt {fpow (n, k + 1)} \Longrightarrow n^{k + 1}.
The code cases on whether k + 1 is even, so we will, too.
Case k + 1 is even:
By the IH, since (k + 1) div 2 <= k, we have \texttt {fpow (n, (k + 1) div 2)} \Longrightarrow n^\texttt {(k + 1) div 2}.
\begin {aligned} &\texttt {fpow (n, k + 1)} \\ &\Longrightarrow \texttt {let ... in ... end} &&\text {(\texttt {true} branch of clause 2)} \\ &\Longrightarrow n^\texttt {(k + 1) div 2}\texttt { * }n^\texttt {(k + 1) div 2} &&\text {(IH)} \\ &\Longrightarrow n^\texttt {k + 1} &&\text {(math)} \end {aligned}
Case k + 1 is odd:
\begin {aligned} &\texttt {fpow (n, k + 1)} \\ &\Longrightarrow \texttt {n * fpow (n, k)} &&\text {(\texttt {false} branch of clause 2)} \\ &\Longrightarrow \texttt {n * }n^k &&\text {(IH)} \\ &\Longrightarrow n^{k + 1} &&\text {(math)} \end {aligned}
387150-001Z150-001Z.xmlLists384Concept150-0022150-0022.xmlListsFor all types t, the type t list represents ordered lists of values of type t.The values of type t list are:nil, the empty list
v1 :: v2 (pronounced "cons"), where v1 : t is an element and v2 : t list is the remainder of the listSyntactic sugar [v1, v2, ..., vn] is equivalent to v1 :: v2 :: ... :: vn, i.e. v1 :: (v2 :: (... :: (vn :: nil))).There are corresponding expressions that evaluate left-to-right.386Example150-0023150-0023.xmlList length functionfun length (nil : string list) : int = 0
| length (_ :: xs) = 1 + length xs
417Lecture150-lect04150-lect04.xmlLists and structural induction2024523Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
405150-0025150-0025.xmlList append395Concept150-0022150-0022.xmlListsFor all types t, the type t list represents ordered lists of values of type t.The values of type t list are:nil, the empty list
v1 :: v2 (pronounced "cons"), where v1 : t is an element and v2 : t list is the remainder of the listSyntactic sugar [v1, v2, ..., vn] is equivalent to v1 :: v2 :: ... :: vn, i.e. v1 :: (v2 :: (... :: (vn :: nil))).There are corresponding expressions that evaluate left-to-right.397Example150-0020150-0020.xmlList appendinfixr 5 @
(* op @ : int list * int list -> int list
* REQUIRES: true
* ENSURES: l1 @ l2 ==> l, where l is the elements of l1 followed by the elements of l2 in order
*)
fun (l1 : int list) @ (l2 : int list) : int list =
case l1 of
nil => l2
| x :: xs => x :: (xs @ l2)
These declarations (both the infix and function declarations) are present in the prelude and are always available.399Lemma150-0021150-0021.xmlAppending nil on the leftFor all values l : int list, \texttt {nil @ l} \hookrightarrow l.
398Proof#209unstable-209.xml150-0021
Immediate, by clause 1 of the definition of @.
400Concept150-0027150-0027.xmlStructural induction on int listTo prove that a property holds on all list values l : int list:Base Case: Prove that the property holds on nil.
Inductive Case: Prove that for all x : int and xs : int list, if the property holds on xs, then the property holds on x :: xs.402Lemma150-0024150-0024.xmlAppending nil on the rightFor all values l : int list, \texttt {l @ nil} \hookrightarrow \texttt {l}.
401Proof#208unstable-208.xml150-0024
Let l : int list be an arbitrary value.
By structural induction on l, following the definition of @.
Case nil:
By the first clause of @, \texttt {nil @ nil} \Longrightarrow \texttt {nil}.
Case x :: xs:
IH: \texttt {xs @ nil} \hookrightarrow \texttt {xs}.
WTS: \texttt {(x :: xs) @ nil} \hookrightarrow \texttt {x :: xs}.
We proceed as follows:
\begin {aligned} &\texttt {(x :: xs) @ nil} \\ &\Longrightarrow \texttt {x :: (xs @ nil)} &&\text {(\texttt {@} clause 2)} \\ &\Longrightarrow \texttt {x :: xs} &&\text {(IH)} \end {aligned}
404Lemma150-0028150-0028.xmlAssociativity of appendFor all values l1, l2, l3 : int list, \texttt {(l1 @ l2) @ l3} \cong \texttt {l1 @ (l2 @ l3)}.
403Proof#207unstable-207.xml150-0028
Let l1, l2, l3 : int list be arbitrary values.
By structural induction on l1, using the definition of @.
Case nil:
First, we reason about the left side:
\begin {aligned} &\texttt {(nil @ l2) @ l3} \\ &\cong \texttt {l2 @ l3} &&\text {(\texttt {@} clause 1)} \end {aligned}
Then, we reason about the right side:
\begin {aligned} &\texttt {nil @ (l2 @ l3)} \\ &\cong \texttt {l2 @ l3} &&\text {(\texttt {@} clause 1)} \end {aligned}
Both sides are equivalent to l2 @ l3, so the case is proven.
Case x :: xs:
IH: \texttt {(xs @ l2) @ l3} \cong \texttt {xs @ (l2 @ l3)}.
WTS: \texttt {((x :: xs) @ l2) @ l3} \cong \texttt {(x :: xs) @ (l2 @ l3)}.
First, we reason about the left side:
\begin {aligned} &\texttt {((x :: xs) @ l2) @ l3} \\ &\cong \texttt {(x :: (xs @ l2)) @ l3} &&\text {(\texttt {@} clause 2)} \\ &\cong \texttt {x :: ((xs @ l2) @ l3)} &&\text {(\texttt {@} clause 2)} \\ &\cong \texttt {x :: (xs @ (l2 @ l3))} &&\text {(IH)} \end {aligned}
Then, we reason about the right side:
\begin {aligned} &\texttt {(x :: xs) @ (l2 @ l3)} \\ &\cong \texttt {x :: (xs @ (l2 @ l3))} &&\text {(\texttt {@} clause 2)} \\ \end {aligned}
Both sides are equivalent, so the case is proven.
416150-0026150-0026.xmlList reverse407Example150-0029150-0029.xmlSlow list reverse(* revSlow : int list -> int list
* REQUIRES: true
* ENSURES: revSlow l ==> l', where l' is l reversed
*)
fun revSlow (l : int list) : int list =
case l of
nil => nil
| x :: xs => revSlow xs @ [x]
Notice that appending to the end of a list is slow, so revSlow is very slow (quadratic time).409Example150-002A150-002A.xmlReverse-append hybrid(* revApp : int list * int list -> int list
* REQUIRES: true
* ENSURES: revApp (l, acc) ~= revSlow l @ acc
*)
fun revApp (l : int list, acc : int list) : int list =
case l of
nil => acc
| x :: xs => revApp (xs, x :: acc)
We use a second "accumulator" argument, acc, to avoid appending each time.411Theorem150-002B150-002B.xmlCorrectness of revAppThe ENSURES of revApp is correct: for all values l, acc : int list, we have \texttt {revApp (l, acc)} \cong \texttt {revSlow l @ acc}.
410Proof#206unstable-206.xml150-002B
We use the definitions of @, revSlow, and revApp.
Let l : int list be an arbitrary value. We prove "for all values acc' : int list, \texttt {revApp (l, acc')} \cong \texttt {revSlow l @ acc'}" by induction on l.
Case nil:
Let acc : int list be arbitrary. First, we reason about the left side:
\begin {aligned} &\texttt {revApp (nil, acc)} \\ &\cong \texttt {acc} &&\text {(clause 1 of \texttt {revApp})} \end {aligned}
Then, we reason about the right side:
\begin {aligned} &\texttt {revSlow nil @ acc} \\ &\cong \texttt {nil @ acc} &&\text {(first clause of \texttt {revSlow})} \\ &\cong \texttt {acc} &&\text {(first clause of \texttt {@})} \end {aligned}
Both sides are equivalent, so the case is proven.
Case x :: xs:
IH: for all acc' : int list, \texttt {revApp (xs, acc')} \cong \texttt {revSlow xs @ acc'}.
WTS: for all acc : int list, \texttt {revApp (x :: xs, acc)} \cong \texttt {revSlow (x :: xs) @ acc}.
We name acc and acc' separately to reduce confusion.
Let acc : int list be arbitrary. First, we reason about the left side:
\begin {aligned} &\texttt {revApp (x :: xs, acc)} \\ &\cong \texttt {revApp (xs, x :: acc)} &&\text {(clause 2 of \texttt {revApp})} \\ &\cong \texttt {revSlow xs @ (x :: acc)} &&\text {(IH)} \end {aligned}
Here, we use the IH with acc' as x :: acc. Then, we reason about the right side:
\begin {aligned} &\texttt {revSlow (x :: xs) @ acc} \\ &\cong \texttt {(revSlow xs @ [x]) @ acc} &&\text {(clause 2 of \texttt {revSlow})} \\ &\cong \texttt {revSlow xs @ ([x] @ acc)} &&\text {(associativity of \texttt {@})} \\ &\cong \texttt {revSlow xs @ ((x :: nil) @ acc)} \\ &\cong \texttt {revSlow xs @ (x :: (nil @ acc))} &&\text {(clause 2 of \texttt {@})} \\ &\cong \texttt {revSlow xs @ (x :: acc)} &&\text {(clause 1 of \texttt {@})} \\ \end {aligned}
Here, we used the associativity of @. Both sides are then equivalent, so the case is proven.
413Example150-002C150-002C.xmlList reverse(* rev : int list -> int list
* REQUIRES: true
* ENSURES: rev l ==> l', where l' is l reversed
*)
fun rev (l : int list) : int list = revApp (l, nil)
We use the stronger revApp to implement rev efficiently.415Corollary150-002D150-002D.xmlCorrectness of revrev and revSlow are extensionally equivalent: \texttt {rev} \cong \texttt {revSlow}.
414Proof#205unstable-205.xml150-002D
To show that two functions are extensionally equivalent, we take in an arbitrary input and show that its application makes both sides equivalent. Let l : int list be arbitrary; we show that \texttt {rev l} \cong \texttt {revSlow l}.
By the correctness of revApp, our lemma about appending nil on the right, and the definition of rev, we have:
\begin {aligned} &\texttt {rev l} \\ &\cong \texttt {revApp (l, nil)} &&\text {(definition of \texttt {rev})} \\ &\cong \texttt {revSlow l @ nil} &&\text {(correctness of \texttt {revApp})} \\ &\cong \texttt {revSlow l} &&\text {(appending \texttt {nil} lemma)} \end {aligned}
This completes the proof.
459Lecture150-lect05150-lect05.xmlDatatypes and trees2024528Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
441150-002F150-002F.xmlDatatypes424Concept150-002E150-002E.xmlOption typesFor all types t, the type t option represents at most one value of type t.The values of type t option are:NONE, with no other data
SOME v, where v : t is the single element containedThere are analogous expressions and patterns.427Example150-002G150-002G.xmlList minimum(* listMin : int list -> int
* REQUIRES: l nonempty
* ENSURES: listMin l ==> the smallest integer contained in l
*)
fun listMin (nil : int list) : int = raise Fail "violates REQUIRES"
| listMin [x] = x
| listMin (x :: xs) = Int.min (x, listMin xs)
Alternatively, using option types, we can implement a function listMin that computes the minimum of a list if one exists, or evaluates to NONE otherwise.(* listMin : int list -> int option
* REQUIRES: true
* ENSURES: listMin l ==> SOME (the smallest integer contained in l), or NONE if l is empty
*)
fun listMin (nil : int list) : int option = NONE
| listMin (x :: xs) =
case listMin xs of
NONE => SOME x
| SOME y => SOME (Int.min (x, y))
Here, NONE behaves like an \infty : extending int to int option is like adding a value \infty that we treat as larger than all integers.429Concept150-002H150-002H.xmlDatatype declarationA datatype declaration lets us define a new type that can be pattern-matched on.datatype newTypeName
= Constructor1 of dataToContain1
| Constructor2 of dataToContain2
| Constructor3 (* does not contain any data *)
| ...
| ConstructorN of dataToContainN
431Example150-002I150-002I.xmlExisting types as datatype declarationsThe types bool, option, and list can all be implemented via datatype declarations.datatype bool
= false
| true
datatype intoption
= NONE
| SOME of int
datatype intlist
= nil
| :: of int * intlist
Note that here, we define intoption and intlist (as one word), rather than int option and int list. We will define the more general latter versions soon.433Concept150-002K150-002K.xmlType alias declarationTo alias a type to a new name, we may use:type newName = someType
This is solely for readability, reducing redundancy.436Example150-002L150-002L.xmlCoordinate type aliasUsing a type alias declaration, we can define aliases for points and vectors in a 2D plane:type point = int * int
type vector = int * int
We then use point and vector as if they are int * int:(* distance : point * point -> vector *)
fun distance ((x1, y1) : point, (x2, y2) : point) : vector =
(x2 - x1, y2 - y1)
Warning: This code would still typecheck if vector was swapped for point or vice versa, since both are int * int.440Example150-002J150-002J.xmlCustom datatype for messagesWhen implementing a messaging app, we may consider three varieties of messages:A simple text message, consisting of a string.
An image, and optionally a caption for the image.
A voice message.We can represent these options using the following datatype declaration:datatype message
= Text of string
| Image of image * string option
| Voice of audio
We assume that image and audio are defined prior.To implement a search functionality, we can pattern match on the message type.fun contains (m : message, s : string) : bool =
case m of
Text s' => isSubstring (s, s')
| Image (_, NONE) => false
| Image (_, SOME s') => isSubstring (s, s')
| Voice _ => false
We may also use a type alias declaration as follows:type messages = message list
fun anyContains (l : messages, s : string) : bool =
case l of
nil => false
| m :: ms => contains (m, s) orelse anyContains (ms, s)
458150-002M150-002M.xmlTrees443Concept150-002N150-002N.xmlBinary tree with ints at the nodesWe define the following datatype declaration to represent binary trees:datatype tree
= Empty
| Node of tree * int * tree
Note that tree is used recursively.445Example150-002O150-002O.xmlSample treeUsing the definition of binary trees, we can write down sample trees:fun leaf (x : int) : tree = Node (Empty, x, Empty)
val myTree : tree =
Node
( Node
( leaf 1
, 2
, leaf 3
)
, 4
, leaf 5
)
447Example150-002P150-002P.xmlSize of a tree(* size : tree -> int
* REQUIRES: true
* ENSURES: size t ==> the number of nodes in t
*)
fun size (Empty : tree) : int = 0
| size (Node (l, _, r)) = size l + 1 + size r
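Following the same recursion pattern as size, a height function can be written (our illustrative addition, not from the lecture), using Int.max from the standard basis:

```sml
(* height : tree -> int
 * REQUIRES: true
 * ENSURES: height t ==> the number of nodes on the longest root-to-leaf path in t
 *)
fun height (Empty : tree) : int = 0
  | height (Node (l, _, r)) = 1 + Int.max (height l, height r)
```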
449Example150-002Q150-002Q.xmlIn-order traversal of a tree(* inordSlow : tree -> int list
* REQUIRES: true
* ENSURES: inordSlow t ==> a list of elements of t in left-to-right order
*)
fun inordSlow (Empty : tree) : int list = nil
| inordSlow (Node (l, x, r)) = inordSlow l @ x :: inordSlow r
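On the sample tree myTree from before, for instance, we would expect (our worked example):

```sml
val l : int list = inordSlow myTree  (* evaluates to [1, 2, 3, 4, 5] *)
```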
450Concept150-002T150-002T.xmlStructural induction on treeTo prove that a property holds on all tree values t : tree:Base Case: Prove that the property holds on Empty.
Inductive Case: Prove that for all x : int and l, r : tree, if the property holds on both l and r (inductive hypotheses), then the property holds on Node (l, x, r).452Theorem150-002S150-002S.xmlLength of in-order traversal is tree sizeRecall the definitions of size, length, and inordSlow.For all t : tree, it is the case that \texttt {size t} \cong \texttt {length (inordSlow t)}.
451Proof#204unstable-204.xml150-002S
Let t : tree be arbitrary; we go by structural induction on t.
Case Empty:
On the left side:
\begin {aligned} \texttt {size Empty} \cong \texttt {0} &&\text {(clause 1 of \texttt {size})} \end {aligned}
On the right side:
\begin {aligned} &\texttt {length (inordSlow Empty)} \\ &\cong \texttt {length nil} &&\text {(first clause of \texttt {inordSlow})} \\ &\cong \texttt {0} &&\text {(first clause of \texttt {length})} \end {aligned}
Both sides are equivalent, so the case is proven.
Case Node (l, x, r):
IH1: \texttt {size l} \cong \texttt {length (inordSlow l)}
IH2: \texttt {size r} \cong \texttt {length (inordSlow r)}
WTS: \texttt {size (Node (l, x, r))} \cong \texttt {length (inordSlow (Node (l, x, r)))}.
First, we reason about the left side:
\begin {aligned} &\texttt {size (Node (l, x, r))} \\ &\cong \texttt {size l + 1 + size r} &&\text {(clause 2 of \texttt {size})} \\ &\cong \texttt {length (inordSlow l) + 1 + length (inordSlow r)} &&\text {(IHs)} \end {aligned}
Then, we reason about the right side:
\begin {aligned} &\texttt {length (inordSlow (Node (l, x, r)))} \\ &\cong \texttt {length (inordSlow l @ x :: inordSlow r)} &&\text {(clause 2 of \texttt {inordSlow})} \\ &\cong \texttt {length (inordSlow l) + length (x :: inordSlow r)} &&\text {(lemma)} \\ &\cong \texttt {length (inordSlow l) + (1 + length (inordSlow r))} &&\text {(clause 2 of \texttt {length})} \\ &\cong \texttt {length (inordSlow l) + 1 + length (inordSlow r)} &&\text {(math)} \end {aligned}
Here, we used a lemma about the length of appended lists. Both sides are then equivalent, so the case is proven.
454Example150-002U150-002U.xmlEfficient in-order traversal(* inordApp : tree * int list -> int list
* REQUIRES: true
* ENSURES: inordApp (t, acc) ~= inordSlow t @ acc
*)
fun inordApp (Empty : tree, acc : int list) : int list = acc
| inordApp (Node (l, x, r), acc) = inordApp (l, x :: inordApp (r, acc))
(* inord : tree -> int list
* REQUIRES: true
* ENSURES: inord t ==> a list of elements of t in left-to-right order
*)
fun inord (t : tree) : int list = inordApp (t, nil)
455Theorem150-002V150-002V.xmlCorrectness of inordAppFor all v : tree and l : int list, we have \texttt {inordApp (t, acc)} \cong \texttt {inordSlow t @ acc}.457Corollary150-002W150-002W.xmlCorrectness of inordinord and inordSlow are extensionally equivalent: \texttt {inord} \cong \texttt {inordSlow}.
456Proof#203unstable-203.xml150-002W
This follows immediately from the correctness of inordApp, our lemma about appending nil on the right, and the definition of inord.
606Lecture150-lect06150-lect06.xmlCost analysis2024530Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
478150-002X150-002X.xmlBasic cost analysis466Concept150-0031150-0031.xmlCost analysisGoal: understand the cost of programs. Some choices:Time each execution. However, this is machine-dependent.
Count a given metric (recursive calls; additions; evaluation steps; etc.). This is abstract enough to be proved, and it corresponds to real time.First, we choose a cost metric and size metrics for inputs. Then, we:Write a recurrence following the structure of the code, computing cost from input sizes.
Solve for a closed form.
Give a simple asymptotic (big-\mathcal {O}) solution.468Example150-0032150-0032.xmlRecurrence for slow list reverseRecall the definition of revSlow:467Example150-0029150-0029.xmlSlow list reverse(* revSlow : int list -> int list
* REQUIRES: true
* ENSURES: revSlow l ==> l', where l' is l reversed
*)
fun revSlow (l : int list) : int list =
case l of
nil => nil
| x :: xs => revSlow xs @ [x]
Notice that appending to the end of a list is slow, so revSlow is very slow (quadratic time).We give its recurrence as follows, counting recursive calls as our cost metric, in terms of the length of the input list. \begin {aligned} W(0) &= 0 \\ W(n) &= W(n - 1) + W_\texttt {@}(n-1, 1) + 1 \\ &= W(n - 1) + (n - 1) + 1 \\ &= W(n - 1) + n \end {aligned} 469Concept150-0033150-0033.xmlUnrolling techniqueTo guess a solution to a recurrence, the simplest thing to do is unfold the definition repeatedly and observe the behavior.471Example150-0034150-0034.xmlClosed form for slow list reverseTo solve the recurrence for slow list reverse, we first guess a solution by unrolling.
\begin {aligned} W(n) &= W(n-1) + n \\ &= W(n-2) + (n-1) + n \\ &= W(n-3) + (n-2) + (n-1) + n \\ &= \cdots \\ &= W(0) + 1 + 2 + \cdots + (n-1) + n \\ &= 0 + 1 + 2 + \cdots + (n-1) + n \end {aligned}
It is well-known that \sum _{i = 0}^n = \frac {n(n + 1)}{2}.
To validate our guess, we prove it by induction.
470Proof#202unstable-202.xml150-0034
We show that W(n) = \frac {n(n+1)}{2} by induction on n.
Case 0:
\begin {aligned} W(0) &= 0 \\ &= \frac {0(1)}{2} \end {aligned}
Case n + 1:
IH: W(n) = \frac {n(n+1)}{2}
WTS: W(n+1) = \frac {(n+1)(n+2)}{2}
\begin {aligned} W(n + 1) &= W(n) + (n + 1) &&\text {(definition)} \\ &= \frac {n(n+1)}{2} + (n + 1) &&\text {(IH)} \\ &= \frac {n(n+1)}{2} + \frac {2(n+1)}{2} &&\text {(math)} \\ &= \frac {n(n+1) + 2(n+1)}{2} &&\text {(math)} \\ &= \frac {(n + 2)(n + 1)}{2} &&\text {(math)} \\ &= \frac {(n + 1)(n + 2)}{2} &&\text {(math)} \end {aligned}
473Example150-0035150-0035.xmlRecurrence for reverse-append hybridRecall the definition of revAppend:472Example150-002A150-002A.xmlReverse-append hybrid(* revApp : int list * int list -> int list
* REQUIRES: true
* ENSURES: revApp (l, acc) ~= revSlow l @ acc
*)
fun revApp (l : int list, acc : int list) : int list =
case l of
nil => acc
| x :: xs => revApp (xs, x :: acc)
We use a second "accumulator" argument, acc, to avoid appending each time.We give its recurrence as follows, counting recursive calls as our cost metric, in terms of the length of the input lists. \begin {aligned} W(0, m) &= 0 \\ W(n, m) &= W(n - 1, m + 1) + 1 \end {aligned} 475Example150-0036150-0036.xmlClosed form for reverse-append hybridTo solve the recurrence for reverse-append hybrid, we first guess a solution by unrolling.
\begin {aligned} W(n, m) &= W(n-1, m+1) + 1 \\ &= W(n-2, m+2) + 2 \\ &= W(n-3, m+3) + 3 \\ &= \cdots \\ &= W(0, m+n) + n \\ &= 0 + n \\ &= n \end {aligned}
To validate our guess, we prove it by induction.
474Proof#201unstable-201.xml150-0036
We show that "for all m, W(n, m) = n" by induction on n.
Case 0:
Let m be arbitrary.
\begin {aligned} W(0, m) &= 0 \end {aligned}
Case n + 1:
IH: for all m', W(n, m') = n
WTS: for all m, W(n + 1, m) = n + 1
Let m be arbitrary.
\begin {aligned} W(n + 1, m) &= W(n, m + 1) + 1 &&\text {(definition)} \\ &= n + 1 &&\text {(IH)} \end {aligned}
Here, we use the IH with m' = m + 1.
477Example150-0037150-0037.xmlCost of list reverseRecall the definition of rev:476Example150-002C150-002C.xmlList reverse(* rev : int list -> int list
* REQUIRES: true
* ENSURES: rev l ==> l', where l' is l reversed
*)
fun rev (l : int list) : int list = revApp (l, nil)
We use the stronger revApp to implement rev efficiently.We give its "recurrence" (it's not recursive, since the code isn't recursive!) as follows, counting recursive calls as our cost metric, in terms of the length of the input list. \begin {aligned} W(n) &= W_\texttt {revApp}(n, 0) \end {aligned} So, by the closed form for reverse-append hybrid, we have W(n) = n. This is much better than W_\texttt {slowRev}(n) = \frac {1}{2}(n^2 + n)!537150-002Z150-002Z.xmlAsymptotic analysis479Concept150-0038150-0038.xmlBig-\mathcal {O}Sometimes, we wish to simplify exact bounds, ignoring constant factors. To do this, we use big-\mathcal {O} notation.Let X be a set and let f, g : X \to \mathbb {N}. We say that f \in \mathcal {O}(g) when there exist constants a, b : \mathbb {N} such that f \le ag + b, i.e. \forall x : X, f(x) \le ag(x) + b.We write \mathcal {O}(g) for the set of all functions f bounded by g, i.e. \mathcal {O}(g) = \{f : X \to \mathbb {N} \mid f \in \mathcal {O}(g) \}.Traditionally, X = \mathbb {N} and function inputs are assumed to be named n: for example, \mathcal {O}(n^2) is syntactic sugar for \mathcal {O}(n \mapsto n^2).482Example150-003B150-003B.xmlBig-\mathcal {O} boundsWe have n^2 \in \mathcal {O}(n^2 + n + 3).
480Proof#199unstable-199.xml150-003B
By the definition, we must choose a, b : \mathbb {N} such that n^2 \le a(n^2 + n + 3) + b.
Let a = 1 and b = 0; we have n^2 \le n^2 + n + 3 immediately.
We have n^2 + n + 3 \in \mathcal {O}(n^2).
481Proof#200unstable-200.xml150-003B
By the definition, we must choose a, b : \mathbb {N} such that n^2 + n + 3 \le an^2 + b.
Let a = 2 and b = 3; we have n^2 + n + 3 \le 2n^2 + 3 = n^2 + n^2 + 3, since n^2 \le n^2, n \le n^2, and 3 \le 3.
483Theorem150-0039150-0039.xmlProperties of big-\mathcal {O}\mathcal {O} is a preorder:
\mathcal {O} is reflexive: for all f, we have f \in \mathcal {O}(f).
\mathcal {O} is transitive: for all f,g,h, if f \in \mathcal {O}(g) and g \in \mathcal {O}(h), then f \in \mathcal {O}(h).
If f \in \mathcal {O}(g) and f' \in \mathcal {O}(g'), then f + f' \in \mathcal {O}(g + g'), where (f + f')(x) = f(x) + f'(x).
For all f,g : X \to \mathbb {N}, we have \mathcal {O}(f + g) = \mathcal {O}(\max (f, g)).
For all a, b : \mathbb {N} and f : X \to \mathbb {N}, we have \mathcal {O}(f) = \mathcal {O}(af + b), where (af + b)(x) = a \cdot f(x) + b by definition.509Concept150-003A150-003A.xmlCommon big-\mathcal {O} classesThe following classes are distinct, ordered by inclusion from top to bottom:
Class
Common Name
\mathcal {O}(1)
constant
\mathcal {O}(\log n)
logarithmic
\mathcal {O}(n)
linear
\mathcal {O}(n \log n)
quasilinear/log-linear
\mathcal {O}(n^2)
quadratic
\mathcal {O}(n^3)
cubic
\mathcal {O}(2^n)
exponential
510Example150-003C150-003C.xmlBig-\mathcal {O} bounds for list reverse algorithmsRecall the bounds for revSlow, \frac {1}{2}(n^2 + n), and for rev, n.We have a tight big-\mathcal {O} bound W_\texttt {revSlow}(n) \in \mathcal {O}(n^2). We can prove it using properties of big-\mathcal {O}:
\begin {aligned} \frac {1}{2}(n^2 + n) &\in \mathcal {O}\left (\frac {1}{2}(n^2 + n)\right ) \\ &= \mathcal {O}(n^2 + n) \\ &= \mathcal {O}(\max (n^2, n)) \\ &= \mathcal {O}(n^2) \end {aligned}
In your analyses, you need not justify your asymptotic bounds formally in this way; for example, you are welcome to immediately state that \frac {1}{2}(n^2 + n) \in \mathcal {O}(n^2) without justification.
We also have a tight big-\mathcal {O} bound W_\texttt {rev}(n) \in \mathcal {O}(n), by reflexivity.536Table150-003E150-003E.xmlSolutions to common recurrencesIn the below table, we include solutions to common recurrences, where we let T(0) = c_0.
Recurrence T(n) = \cdots
Exact Solution
\mathcal {O}(-)
T(n-1) + c_1
c_0 + c_1n
\mathcal {O}(n)
2T(n-1) + c_1
c_02^n + c_1\left (2^n-1\right )
\mathcal {O}(2^n)
T(n-1) + c_1n + c_2
c_0 + c_1\frac {n(n+1)}{2} + c_2n
\mathcal {O}(n^2)
2T(n-1) + c_1n + c_2
c_02^n + c_1\left (2^{n+1}-n-2\right ) + c_2(2^n - 1)
\mathcal {O}(2^n)
T(n-1) + c_1\log _2(n) + c_2
c_0 + c_1\log _2(n!) + c_2n
\mathcal {O}(n \log n)
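As a spot check of the first row (a derivation added here for illustration), unrolling T(n) = T(n-1) + c_1 with T(0) = c_0 gives:
\begin {aligned} T(n) &= T(n-1) + c_1 \\ &= T(n-2) + 2c_1 \\ &= \cdots \\ &= T(0) + c_1n \\ &= c_0 + c_1n \end {aligned}
which is in \mathcal {O}(n), matching the table.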
598150-002Y150-002Y.xmlCost analysis of trees538Concept150-003D150-003D.xmlTree size metrics and assumptionsWhen analyzing the cost of a tree algorithm, we typically use one of the following approaches:
Assume the tree is a left (or right) spine: every node's right (respectively, left) child is Empty.
Here, we may use either depth d or number of nodes n; both are equivalent.
Assume the tree is (full and) balanced, where every node has two children and all paths from leaves to the root are of the same length.
We may use depth d, equal to \log _2(n + 1).
Alternatively, we may use number of nodes n, equal to 2^d - 1.540Example150-003F150-003F.xmlTree sum(* sum : tree -> int *)
fun sum Empty = 0
| sum (Node (l, x, r)) = sum l + x + sum r
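A small concrete instance (restating the int-valued tree datatype that sum assumes, so the snippet is self-contained; the example tree is ours):

```sml
datatype tree = Empty | Node of tree * int * tree

(* sum : tree -> int *)
fun sum Empty = 0
  | sum (Node (l, x, r)) = sum l + x + sum r

(* A balanced tree of depth 1:    2
                                 / \
                                1   3   *)
val t = Node (Node (Empty, 1, Empty), 2, Node (Empty, 3, Empty))
val six = sum t  (* 1 + 2 + 3 = 6 *)
```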
For our cost analyses, we choose to count the number of additions evaluated.541Example150-003G150-003G.xmlTree sum work analysis, assuming spineWe analyze tree sum in terms of the number of nodes n in the tree, assuming the tree is a left spine. We follow the template:
First, we write the recurrence:
\begin {aligned} W(0) &= 0 \\ W(n) &= W(n - 1) + W(0) + 2 \\ &= W(n - 1) + 0 + 2 \\ &= W(n - 1) + 2 \end {aligned}
By the table (or unrolling and induction), we have that W(n) = 2n.
So, W(n) \in \mathcal {O}(n).561Concept150-003H150-003H.xmlTree method technique
To guess a solution to a recurrence with multiple recursive calls, the simplest thing to do is to consider the tree of costs.
Determine the following quantities:
Symbol
Description
L
number of levels in the computation tree
n_i
nodes at level i, where 0 is the top level
w_i
non-recursive work at level i
e
number of leaves
b
cost per leaf/base case
Then, the cost should be e \cdot b + \sum _{i = 0}^{L - 1} n_i \cdot w_i.562Table150-003J150-003J.xmlCommon summations \begin {aligned} \sum _{i = 0}^{n - 1} c &= cn \\ \sum _{i = 0}^{n - 1} i &= \frac {n(n - 1)}{2} \\ \sum _{i = 0}^{n - 1} a^i &= \frac {a^n - 1}{a - 1} \\ \sum _{i = 0}^{n - 1} 2^i &= 2^n - 1 \\ \sum _{i = 0}^{n - 1} 2^{-i} &\le \sum _{i = 0}^\infty 2^{-i} = 2 \end {aligned} 597Example150-003I150-003I.xmlTree sum work analysis, assuming balanced treeWe now analyze tree sum in terms of the depth d of the tree, assuming the tree is balanced. We follow the template:
First, we write the recurrence:
\begin {aligned} W(0) &= 0 \\ W(d) &= W(d - 1) + W(d - 1) + 2 \\ &= 2W(d - 1) + 2 \end {aligned}
To guess a cost bound, we use the tree method.
L
d
n_i
2^i
w_i
2
e
2^d
b
0
So, we guess
\begin {aligned} \sum _{i = 0}^{d - 1} 2 \cdot 2^i &= 2 \cdot \sum _{i = 0}^{d - 1} 2^i \\ &= 2 \cdot (2^d - 1) \\ &= 2^{d + 1} - 2 \end {aligned}
using the table of common summations.
We prove it as follows:
579Proof#198unstable-198.xml150-003I
We show that W(d) = 2^{d + 1} - 2 by induction on d.
Case 0:
\begin {aligned} W(0) &= 0 \\ &= 2^1 - 2 \end {aligned}
Case d + 1:
IH: W(d) = 2^{d + 1} - 2
WTS: W(d + 1) = 2^{d + 2} - 2
\begin {aligned} W(d + 1) &= 2W(d) + 2 &&\text {(definition)} \\ &= 2(2^{d + 1} - 2) + 2 &&\text {(IH)} \\ &= 2^{d + 2} - 2 &&\text {(math)} \end {aligned}
This completes the proof.
Alternatively, we can just cite the table of common recurrences, where c_0 = 0 and c_1 = 2.
So, W(d) \in \mathcal {O}(2^d).Intuitively, this makes sense: in a balanced tree of depth d, there are \mathcal {O}(2^d) nodes, and we do a constant amount of work at each node.We can also analyze in terms of the number of nodes, n = 2^d-1:
First, we write the recurrence:
\begin {aligned} W(0) &= 0 \\ W(2n + 1) &= W(n) + W(n) + 2 \\ &= 2W(n) + 2 \end {aligned}
We only give cases for 0 and 2n + 1 since the tree is balanced.
To guess a cost bound, we use the tree method.
L
\log _2(n + 1)
n_i
2^i
w_i
2
e
n+1
b
0
So, we guess
\begin {aligned} \sum _{i = 0}^{\log _2(n + 1) - 1} 2 \cdot 2^i &= 2 \cdot \sum _{i = 0}^{\log _2(n + 1) - 1} 2^i \\ &= 2 \cdot (2^{\log _2(n + 1)} - 1) \\ &= 2 \cdot ((n + 1) - 1) \\ &= 2n \end {aligned}
using the table of common summations.
We prove it as follows:
596Proof#197unstable-197.xml150-003I
We show that W(n) = 2n by induction on n, where we assume n is either 0 or 2n' + 1 by the balance assumption.
Case 0:
\begin {aligned} W(0) &= 0 \\ &= 2 \cdot 0 \end {aligned}
Case 2n + 1:
IH: W(n) = 2n
WTS: W(2n + 1) = 4n + 2
\begin {aligned} W(2n + 1) &= 2W(n) + 2 &&\text {(definition)} \\ &= 2(2n) + 2 &&\text {(IH)} \\ &= 4n + 2 &&\text {(math)} \end {aligned}
This completes the proof.
Alternatively, we can just cite the table of common recurrences if we define W(n) = W'(\log _2(n + 1)), where W' is the recurrence in terms of depth (still letting c_0 = 0 and c_1 = 2).
So, W(n) \in \mathcal {O}(n).Intuitively, this also makes sense: we have a tree with n nodes and we do a constant amount of work at each node.605150-0030150-0030.xmlParallelism599Concept150-003K150-003K.xmlWork and spanWork: the cost of evaluating an expression sequentially.
Span: the cost of evaluating an expression in parallel, assuming unlimited parallel processors.In reality, given finite parallel processors, the cost will be between the work and the span.600Remark150-003L150-003L.xmlDependence and independenceNote that unlimited parallel processors does not mean that span is always trivial.To put on two socks, parallelism helps: with enough hands, they can be put on simultaneously.
To put on one sock and one shoe, parallelism does not help: no matter how many hands are willing to help, the sock has to be put on before the shoe.
To put on n socks and n shoes, parallelism helps partially: all of the socks can be put on in parallel, and then all of the shoes can be put on in parallel.Independent tasks can be evaluated in parallel, but dependent tasks have to be evaluated sequentially.601Concept150-003M150-003M.xmlParallel evaluation of tuplesIn Standard ML, we can evaluate components of a tuple in parallel.602Example150-003N150-003N.xmlParallel evaluation of sample expressionsWe demonstrate parallel evaluation of tuples. In parallel: \begin {aligned} (((1 + 1) + 1) + 1, (1 + 1) + 1) &\Longrightarrow ^{1} ((2 + 1) + 1, 2 + 1) \\ &\Longrightarrow ^{1} (3 + 1, 3) \\ &\Longrightarrow ^{1} (4, 3) \end {aligned} Notice that the cost of evaluating the tuple is the maximum of both components, since we wait for both components to compute.Binary infix operations evaluate in parallel, as well, since they are syntactic sugar for tuples. For example, e1 + e2 is syntactic sugar for (op +) (e1, e2). Re-parenthesizing: \begin {aligned} ((1 + 1) + (1 + 1), 1 + 1 + 1) &\Longrightarrow ^{1} (2 + 2, 2 + 1) \\ &\Longrightarrow ^{1} (4, 3) \end {aligned} 603Example150-003O150-003O.xmlTree sum span analysis, assuming spineRecall tree sum and the tree sum work analysis, assuming spine. We repeat this analysis but assuming parallelism.
First, we write the recurrence:
\begin {aligned} S(0) &= 0 \\ S(n) &= \max (S(n - 1), S(0)) + 2 \\ &= \max (S(n - 1), 0) + 2 \\ &= S(n - 1) + 2 \end {aligned}
By the table (or unrolling and induction), we have that S(n) = 2n.
So, S(n) \in \mathcal {O}(n).Notice that this is exactly the same as the work: there are no opportunities for parallelism, since the tree is a spine and must be traversed in order (like a list).604Example150-003P150-003P.xmlTree sum span analysis, assuming balanced treeWe now analyze the span of tree sum in terms of the depth d of the tree, assuming the tree is balanced. We follow the template:
First, we write the recurrence:
\begin {aligned} S(0) &= 0 \\ S(d) &= \max (S(d - 1), S(d - 1)) + 2 \\ &= S(d - 1) + 2 \end {aligned}
We take the maximum of the recursive calls: they are the two operands of an addition, which is syntactic sugar for application to a tuple, and tuples are evaluated in parallel.
By the table of common recurrences, S(d) = 2d.
So, S(d) \in \mathcal {O}(d).Intuitively, this makes sense: in a balanced tree of depth d, we have d span, since both subtrees can be evaluated in parallel.We can also analyze in terms of the number of nodes, n = 2^d-1:
First, we write the recurrence:
\begin {aligned} S(0) &= 0 \\ S(2n + 1) &= \max (S(n), S(n)) + 2 \\ &= S(n) + 2 \end {aligned}
We only give cases for 0 and 2n + 1 since the tree is balanced.
We take the maximum of the recursive calls: they are the two operands of an addition, which is syntactic sugar for application to a tuple, and tuples are evaluated in parallel.
By the table of common recurrences, S(n) = 2\log _2(n+1).
So, S(n) \in \mathcal {O}(\log n).Intuitively, this also makes sense: we have a tree with n nodes and we do a constant amount of work at each node, and the depth of the tree is \mathcal {O}(\log n) since the tree is balanced.638Lecture150-lect07150-lect07.xmlSequential and parallel sorting202464Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
Expanding on our discussion of cost analysis, we consider sequential and parallel sorting algorithms as a case study.618150-004D150-004D.xmlSorting specification614Concept150-004E150-004E.xmlorder datatypeThe following datatype is built into the standard library of Standard ML:datatype order = LESS | EQUAL | GREATER
As the constructor names indicate, these constructors indicate the result of a comparison of elements in a trichotomous relation.616Example150-004F150-004F.xmlint comparisonThe built-in function Int.compare : int * int -> order compares two integers. We often think of this function as the primitive notion of comparison. For example, we think of op <= implemented as:fun (x : int) <= (y : int) : bool =
case Int.compare (x, y) of
LESS => true
| EQUAL => true
| GREATER => false
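Concretely, Int.compare produces each of the three order constructors (these sample bindings are ours; a binding like val LESS = ... pattern-matches the result against the constructor and raises Bind if it differs):

```sml
(* Each binding asserts the result of the comparison. *)
val LESS    = Int.compare (1, 150)
val EQUAL   = Int.compare (150, 150)
val GREATER = Int.compare (150, 1)
```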
617Definition150-004I150-004I.xmlSorting algorithm specificationThe result of sorting l : int list should be:Sorted (nondecreasing/weakly ascending) according to Int.compare.
A permutation of l.For cost, we count the number of comparisons performed.625150-004G150-004G.xmlInsertion sortInsertion sort works by inserting each element into a list in sorted order, one at a time.620Example150-004J150-004J.xmlInsert auxiliary functionThe insert function adds a single element into a sorted list.(* insert : int * int list -> int list
* REQUIRES: l is sorted
* ENSURES: insert (x, l) is a sorted permutation of x :: l
*)
fun insert (x, nil) = [x]
| insert (x, y :: ys) =
case Int.compare (x, y) of
GREATER => y :: insert (x, ys)
| _ => x :: y :: ys
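A trace of one insertion (restating insert so the snippet stands alone; the sample input is ours):

```sml
fun insert (x, nil) = [x]
  | insert (x, y :: ys) =
      case Int.compare (x, y) of
        GREATER => y :: insert (x, ys)
      | _ => x :: y :: ys

(* insert (3, [1, 2, 4])
   => 1 :: insert (3, [2, 4])     since 3 > 1
   => 1 :: 2 :: insert (3, [4])   since 3 > 2
   => 1 :: 2 :: 3 :: 4 :: nil     since 3 < 4
   =  [1, 2, 3, 4]                3 comparisons on a 3-element list *)
val example = insert (3, [1, 2, 4])
```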
622Algorithm150-004K150-004K.xmlInsertion sortIterating insert, we may sort a list.(* isort : int list -> int list
* REQUIRES: true
* ENSURES: isort l is a sorted permutation of l
*)
fun isort nil = nil
| isort (x :: xs) = insert (x, isort xs)
623Example150-004L150-004L.xmlCost analysis of insertionWe analyze the cost of insert as follows:
First, we give the recurrence:
\begin {aligned} W(0) &= 0 \\ W(n) &= 1 + \max (W(n - 1), 0) \\ &= W(n - 1) + 1 \end {aligned}
Here, we take the maximum of both branches of the case expression to get an upper bound.
By the table of common recurrences, we get that W(n) = n.
By reflexivity, W(n) \in \mathcal {O}(n).
There are no opportunities for parallelism, so the span is the same.624Example150-004M150-004M.xmlCost analysis of insertion sortWe analyze the cost of isort as follows:
First, we give the recurrence:
\begin {aligned} W(0) &= 0 \\ W(n) &= W(n - 1) + W_\texttt {insert}(n - 1) \\ &= W(n - 1) + n - 1 \end {aligned}
We get W_\texttt {insert}(n - 1) from the call to insert on a list of length n - 1; we know isort xs has length n - 1 since isort xs is a permutation of xs.
By the table of common recurrences, we get that W(n) = \frac {n(n-1)}{2}.
So, W(n) \in \mathcal {O}(n^2).
There are no opportunities for parallelism, so the span is the same.637150-004H150-004H.xmlMerge sortMerge sort works by splitting a list in half, sorting the halves, and combining the sorted results.627Example150-004N150-004N.xmlSplit auxiliary functionThe split function splits a list into two halves.(* split : int list -> int list * int list
* REQUIRES: true
* ENSURES: split l ==> (l1, l2), where
* - l1 @ l2 is a permutation of l
* - length l1 = length l div 2 (rounding down)
*)
fun split nil = (nil, nil)
| split [x] = (nil, [x])
| split (x1 :: x2 :: xs) =
let
val (xs1, xs2) = split xs
in
(x1 :: xs1, x2 :: xs2)
end
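Note that split deals elements out alternately rather than cutting the list into a prefix and a suffix; a quick example (restating split so the block runs on its own):

```sml
fun split nil = (nil, nil)
  | split [x] = (nil, [x])
  | split (x1 :: x2 :: xs) =
      let
        val (xs1, xs2) = split xs
      in
        (x1 :: xs1, x2 :: xs2)
      end

(* The halves interleave: odd positions left, even positions right. *)
val halves = split [1, 2, 3, 4, 5]  (* ([1, 3], [2, 4, 5]) *)
```

Both ENSURES clauses still hold: [1, 3] @ [2, 4, 5] is a permutation of the input, and the first half has length 5 div 2 = 2.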
629Example150-004O150-004O.xmlMerge auxiliary functionThe merge function combines two sorted lists to produce a larger sorted list.(* merge : int list * int list -> int list
* REQUIRES: l1, l2 are sorted
* ENSURES: merge (l1, l2) ==> l, where l is a sorted permutation of l1 @ l2
*)
fun merge (nil, l2) = l2
| merge (l1, nil) = l1
| merge (x :: xs, y :: ys) =
case Int.compare (x, y) of
LESS => x :: merge (xs, y :: ys)
| GREATER => y :: merge (x :: xs, ys)
| EQUAL => x :: y :: merge (xs, ys)
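A short trace of merging two sorted lists (merge restated so the block stands alone; the sample inputs are ours):

```sml
fun merge (nil, l2) = l2
  | merge (l1, nil) = l1
  | merge (x :: xs, y :: ys) =
      case Int.compare (x, y) of
        LESS => x :: merge (xs, y :: ys)
      | GREATER => y :: merge (x :: xs, ys)
      | EQUAL => x :: y :: merge (xs, ys)

(* merge ([1, 3], [2, 4, 5])
   => 1 :: merge ([3], [2, 4, 5])    since 1 < 2: take from the left
   => 1 :: 2 :: merge ([3], [4, 5])  since 3 > 2: take from the right
   => 1 :: 2 :: 3 :: [4, 5]          since 3 < 4
   =  [1, 2, 3, 4, 5] *)
val merged = merge ([1, 3], [2, 4, 5])
```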
Notice that merge does not go by structural recursion on l1 or l2; in some recursive calls one shrinks, and in some the other shrinks. Rather, it goes by induction on the sum of their lengths, which always shrinks.631Algorithm150-004P150-004P.xmlMerge sortBy splitting a list in halves, recursively sorting the halves, and merging the sorted results back together, we may sort a list.(* msort : int list -> int list
* REQUIRES: true
* ENSURES: msort l ==> l', where l' is a sorted permutation of l
*)
fun msort nil = nil
| msort [x] = [x]
| msort l =
let
val (l1, l2) = split l
val (l1', l2') = (msort l1, msort l2)
in
merge (l1', l2')
end
Notice that msort does not go by structural recursion on l; we make recursive calls msort l1 and msort l2 to two lists that are not structural subcomponents of l. However, by the guarantees of the split auxiliary function, we know the lengths of l1 and l2 are smaller than the length of l. Thus, msort goes by recursion on the length of l.632Example150-004Q150-004Q.xmlCost analysis of splitWe analyze the cost of split as follows:
First, we give the recurrence:
\begin {aligned} W(0) &= 0 \\ W(1) &= 0 \\ W(n) &= W(n - 2) \end {aligned}
We never incur cost according to the cost model that counts comparisons, since we never use Int.compare.
By the table of common recurrences, we get that W(n) = 0.
By reflexivity, W(n) \in \mathcal {O}(0) = \mathcal {O}(1); the work is constant (in fact, zero).
There are no opportunities for parallelism, so the span is the same.633Example150-004R150-004R.xmlCost analysis of mergeWe analyze the cost of merge in terms of the sum of the lengths of the inputs, s, since that is what the implementation goes by recursion on.
First, we give the recurrence:
\begin {aligned} W(0) &= 0 \\ W(s) &= \max (0, 0, 1 + \max (W(s - 1), W(s - 1), W(s - 2))) \\ &= 1 + W(s - 1) \end {aligned}
By the table of common recurrences, we get that W(s) = s.
By reflexivity, W(s) \in \mathcal {O}(s).
There are no opportunities for parallelism, so the span is the same.636Example150-004S150-004S.xmlCost analysis of merge sortWe analyze the cost of msort as follows, assuming the length of the input list is a power of two for simplicity:
First, we give the recurrence:
\begin {aligned} W(1) &= 0 \\ W(n) &= W_\texttt {split}(n) + (W(n / 2) + W(n / 2)) + W_\texttt {merge}(n / 2 + n /2) \\ &= 2W(n / 2) + n \end {aligned}
In the inductive case, note that both l1' and l2' have length n / 2 since msort always computes a permutation and l1 and l2 have length n / 2.
By the tree method, we guess that W(n) = n \log _2(n).
We prove it as follows:
634Proof#190unstable-190.xml150-004S
We show that W(n) = n \log _2(n) by induction.
Case 1:
\begin {aligned} W(1) &= 0 \\ &= 1 \cdot \log _2(1) \end {aligned}
Case 2n:
IH: W(n) = n\log _2(n)
WTS: W(2n) = 2n\log _2(2n)
\begin {aligned} W(2n) &= 2W(n) + 2n &&\text {(definition)} \\ &= 2(n\log _2(n)) + 2n &&\text {(IH)} \\ &= 2n(\log _2(n) + 1) &&\text {(math)} \\ &= 2n(\log _2(2n)) &&\text {(math)} \end {aligned}
This completes the proof.
So, W(n) \in \mathcal {O}(n\log _2(n)) = \mathcal {O}(n\log n).
Since the recursive calls are made in a tuple, they are evaluated in parallel. Thus, we analyze the span of msort separately.
First, we give the recurrence:
\begin {aligned} S(1) &= 0 \\ S(n) &= S_\texttt {split}(n) + \max (S(n / 2), S(n / 2)) + S_\texttt {merge}(n / 2 + n /2) \\ &= S(n / 2) + n \end {aligned}
By unrolling, we notice that
\begin {aligned} S(n) &= n + \frac {n}{2} + \frac {n}{4} + \cdots + 4 + 2 + 0 \\ &= 2\left (\frac {n}{2} + \frac {n}{4} + \cdots + 2 + 1\right ) \end {aligned} so we guess that
\begin {aligned} S(n) &= 2\left (\sum _{i = 0}^{\log _2(n) - 1} 2^i\right ) \\ &= 2(2^{\log _2(n)} - 1) \\ &= 2(n - 1) \\ &= 2n - 2 \end {aligned} using the table of common summations.
We prove it as follows:
635Proof#189unstable-189.xml150-004S
We show that S(n) = 2n - 2 by induction.
Case 1:
\begin {aligned} S(1) &= 0 \\ &= 2\cdot 1 - 2 \end {aligned}
Case 2n:
IH: S(n) = 2n - 2
WTS: S(2n) = 2(2n) - 2
\begin {aligned} S(2n) &= S(n) + 2n &&\text {(definition)} \\ &= 2n - 2 + 2n &&\text {(IH)} \\ &= 4n - 2 &&\text {(math)} \\ &= 2(2n) - 2 &&\text {(math)} \end {aligned}
This completes the proof.
So, S(n) \in \mathcal {O}(n).
In the worst case, we found that insertion sort has work and span \mathcal {O}(n^2).
However, merge sort has worst-case work of only \mathcal {O}(n \log n) and worst-case span of \mathcal {O}(n), which is a substantial improvement.
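For reference, the pieces developed above assemble into one self-contained, runnable sorter (a sketch combining the split, merge, and msort definitions, with msort recursing on the halves l1 and l2):

```sml
fun split nil = (nil, nil)
  | split [x] = (nil, [x])
  | split (x1 :: x2 :: xs) =
      let val (xs1, xs2) = split xs
      in (x1 :: xs1, x2 :: xs2) end

fun merge (nil, l2) = l2
  | merge (l1, nil) = l1
  | merge (x :: xs, y :: ys) =
      case Int.compare (x, y) of
        LESS => x :: merge (xs, y :: ys)
      | GREATER => y :: merge (x :: xs, ys)
      | EQUAL => x :: y :: merge (xs, ys)

(* msort : int list -> int list *)
fun msort nil = nil
  | msort [x] = [x]
  | msort l =
      let
        val (l1, l2) = split l
      in
        merge (msort l1, msort l2)
      end
```

Since the two recursive calls appear as the components of the tuple passed to merge, they may evaluate in parallel, giving the \mathcal {O}(n) span analyzed above.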
777Lecture150-lect08150-lect08.xmlPolymorphism and parameterized datatypes202466Harrison Grodin
This lecture is inspired by analogous lectures by Michael Erdmann and Brandon Wu.
655150-004Y150-004Y.xmlType inference646Example150-004U150-004U.xmlBasic type inferenceWhen a certain type is required within a program, Standard ML will infer this constraint. For example, all of the following declarations are equivalent and have type int -> int * bool, since the use of x in the expression x + 1 forces x to have type int:fun f (x : int) : int * bool = (x + 1, true)
fun f (x : int) = (x + 1, true)
fun f x : int * bool = (x + 1, true)
fun f x = (x + 1, true)
647Concept150-004V150-004V.xmlContradiction in type inferenceIf a variable is used in such a way that it has two incompatible types, a type error will be produced.649Example150-004W150-004W.xmlContradictory base typesThe following code does not typecheck, since x is used as both an int and as a string:fun f x = (x + 1, x ^ "!")
651Example150-004X150-004X.xmlContradictory patternsThe following code does not typecheck, since the input is matched as both an int and a tuple:fun f 0 = 0
| f (x, y) = 1
654Concept150-0059150-0059.xmlCircularity errorWith recursion, sometimes there is no valid type for an expression because the output type contains itself as a component. For example:fun f 0 = 0
| f n = (f (n - 1), 0)
This function is not well typed.
653Proof#188unstable-188.xml150-0059
Assume f : int -> t, for some t.
Then, in the second clause, (f (n - 1), 0) : t * int.
However, since this is returned by f itself, this would mean that t = t * int, leading to a contradiction.
754150-004Z150-004Z.xmlPolymorphism681Concept150-004T150-004T.xmlType variableA type variable stands for an arbitrary type, denoted by an ' followed by a variable name. We pronounce type variables as Greek letters:
SML Syntax
Greek Letter
Pronunciation
'a
\alpha
alpha
'b
\beta
beta
'c
\gamma
gamma
'd
\delta
delta
'e
\epsilon
epsilon
685Example150-0052150-0052.xmlFirst functionConsider the following function:fun fst (x : int, y : string) : int = x
We can remove the type annotations:fun fst (x, y) = x
Nothing restricts the type of x or y, and the result type is whatever the type of x was. Therefore, this declaration of fst has type 'a * 'b -> 'a. We can optionally include explicit annotations:fun fst (x : 'a, y : 'b) : 'a = x
686Concept150-0057150-0057.xmlMost general typeThe most general type of an expression e is the type t such that all other types t' that could be assigned to e can be achieved by plugging in for type variables in t.We say that these other types t' are instances of type t.When we say that "e has type t", we implicitly mean that e has most general type t.687Concept150-0053150-0053.xmlType inference algorithmTo infer the most general type of a function in SML:Give each variable, including the function being defined, an arbitrary type variable.
Add constraints based on the usage of each variable, and add constraints to make sure all clauses have the same type.
Solve the constraints.
Optionally, re-letter the type variables in the answer for convenience.For simple functions, this process often occurs implicitly in one's head.708Example150-0054150-0054.xmlType inference of a simple functionWe formally perform type inference on the fst function.
We give each variable an arbitrary type:
fst
'a
x
'b
y
'c
We generate constraints:
Since fst is a function, we must have 'a = 'd -> 'e.
Since fst matches on (x, y), we must have 'd = 'b * 'c.
Since fst returns x, we must have 'b = 'e.
We solve the constraints to get:
fst
'e * 'c -> 'e
x
'e
y
'c
We re-letter to get fst : 'a * 'b -> 'a.
748Example150-0055150-0055.xmlType inference of a complicated functionWe formally perform type inference on the following function:fun f (a, b, c, d) =
case a of
nil => (b + 1, c)
| e :: _ => (d , e)
We give each variable an arbitrary type:
a
'a
b
'b
c
'c
d
'd
e
'e
f
'f
We generate constraints:
Since f is a function, we must have 'f = 'g -> 'h.
Since f matches on (a, b, c, d), we must have 'g = 'a * 'b * 'c * 'd.
Since we match a against list patterns, we must have 'a = 'i list.
Since e :: _ is a pattern for 'i list, we must have 'e = 'i.
Since we use b + 1, we must have 'b = int.
Since the branches return (b + 1, c) and (d, e), we must have 'h = int * 'c = 'd * 'e.
We solve the constraints to get:
a
'i list
b
int
c
'i
d
int
e
'i
f
'i list * int * 'i * int -> int * 'i
We re-letter to get f : 'a list * int * 'a * int -> int * 'a.
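One way to sanity-check the inferred type is to instantiate 'a concretely, e.g. at string (the sample calls are ours):

```sml
fun f (a, b, c, d) =
  case a of
    nil => (b + 1, c)
  | e :: _ => (d, e)

(* f : 'a list * int * 'a * int -> int * 'a, instantiated at 'a = string *)
val fromCons = f (["x"], 1, "y", 2)  (* takes the e :: _ branch: (2, "x") *)
val fromNil  = f ([], 1, "y", 2)     (* takes the nil branch: (2, "y") *)
```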
750Example150-0056150-0056.xmlIntuitive examples of type inferenceBy inspection, we infer the following most general types:(* id : 'a -> 'a *)
fun id x = x
(* two : 'a -> 'a * 'a *)
fun two x = (x, x)
(* f : 'a -> 'b list *)
fun f _ = nil
(* f : int list -> int list *)
fun f [0] = nil
| f l = l
(* f : bool * 'a * 'a -> 'a * 'a *)
fun f (b, x, y) =
if b
then (x, y)
else (y, x)
(* length : 'a list -> int *)
fun length nil = 0
| length (_ :: xs) = 1 + length xs
(* op @ : 'a list * 'a list -> 'a list *)
fun nil @ l2 = l2
| (x :: xs) @ l2 = x :: (xs @ l2)
(* zip : 'a list * 'b list -> ('a * 'b) list *)
fun zip (nil , _ ) = nil
| zip (_ , nil ) = nil
| zip (x :: xs, y :: ys) = (x, y) :: zip (xs, ys)
(* zip' : 'a list * 'a list -> ('a * 'a) list *)
fun zip' (nil , _ ) = nil
| zip' (_ , nil ) = nil
| zip' (x :: xs, y :: ys) = (x, y) :: zip' (ys, xs) (* note (ys, xs), not (xs, ys) *)
751Concept150-005A150-005A.xmlPolymorphic quantification in proofsWhen proving a fact about polymorphic functions, we must be careful with quantification.❌ If we say "for all l : 'a list, we have \texttt {rev (rev l)} \cong \texttt {l}", this means "for all l such that (for all types t, l : t list), we have we have \texttt {rev (rev l)} \cong \texttt {l}". However, the only list l satisfying "for all types t, l : t list" is nil.
✅ If we say "for all types t, for all l : t list, we have \texttt {rev (rev l)} \cong \texttt {l}", this generalizes the proof that "for all l : int list, we have \texttt {rev (rev l)} \cong \texttt {l}", replacing int with an arbitrary type t.753Theorem150-005B150-005B.xmlCorrect polymorphic quantificationFor all types t, for all x : t, we have \texttt {fst (two x)} \cong \texttt {x}.
752Proof#187unstable-187.xml150-005B
Let type t be arbitrary, and let x : t be arbitrary.
Then:
\begin {aligned} &\texttt {fst (two x)} \\ &\cong \texttt {fst (x, x)} &&\text {(definition of \texttt {two})} \\ &\cong \texttt {x} &&\text {(definition of \texttt {fst})} \\ \end {aligned}
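The theorem holds at every instance of t; for example (with fst and two restated so the block stands alone):

```sml
fun fst (x, y) = x   (* fst : 'a * 'b -> 'a *)
fun two x = (x, x)   (* two : 'a -> 'a * 'a *)

val n = fst (two 150)      (* instance at t = int *)
val s = fst (two "hello")  (* instance at t = string *)
```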
768150-0050150-0050.xmlParameterized datatypes758Concept150-005C150-005C.xmlParameterized type and datatype declarationsA datatype declaration can include type variable parameters:datatype ('a, 'b, 'c, ...) t = ...
In the common case that only one type variable parameter is included, the parentheses and commas are excluded:datatype 'a t = ...
Similarly, type alias declaration can include type variable parameters, too:type ('a, 'b, 'c, ...) t = ...
type 'a t = ...
760Example150-005D150-005D.xmlBuilt-in polymorphic datatypesGeneralizing existing types as datatype declarations, we may include parameters:datatype 'a option
= NONE
| SOME of 'a
datatype 'a list
= nil
| :: of 'a * 'a list
762Example150-005E150-005E.xmlPolymorphic treesGeneralizing binary tree with ints at the nodes, we may have a tree storing any element type we wish:datatype 'a tree
= Empty
| Node of 'a tree * 'a * 'a tree
To recover the trees of integers, we use int tree. Now, though, we may have string tree, int option tree, int list tree, int tree tree, and more!764Example150-005F150-005F.xmlTrees with data at the leaves and nodesIf we wish to store 'as at the nodes of a tree but also 'bs at the leaves, we may use two type variable parameters:datatype ('a, 'b) bush
= Berry of 'b
| Branch of ('a, 'b) bush * 'a * ('a, 'b) bush
765Concept150-005G150-005G.xmlUnit typeThe type unit has a single value, () : unit, the empty tuple.767Example150-005H150-005H.xmlTrees from bushesTo recover a type equivalent to polymorphic trees using trees with data at the leaves and nodes, we may define:type 'a tree = ('a, unit) bush
val Empty : 'a tree = Berry ()
val Node : 'a tree * 'a * 'a tree -> 'a tree = Branch
We keep 'as at the nodes, but we choose to include only trivial data of unit type at the leaves, analogous to Empty.776150-0051150-0051.xmlPolymorphic sortingUsing polymorphism, we may now hope to sort lists containing data of an arbitrary type.769Concept150-005I150-005I.xmlComparison functionIn the implementation of the insert auxiliary function, we used Int.compare : int * int -> order. To sort a list of 'as, we need a function of type 'a * 'a -> order.771Example150-005J150-005J.xmlPolymorphic insert auxiliary functionAssume we have some compare : 'a * 'a -> order. Then, we may implement:(* insert : 'a * 'a list -> 'a list
* REQUIRES: l is sorted, according to compare
* ENSURES: insert (x, l) is a sorted permutation of x :: l, according to compare
*)
fun insert (x, nil) = [x]
| insert (x, y :: ys) =
case compare (x, y) of
GREATER => y :: insert (x, ys)
| _ => x :: y :: ys
Note that we only replace Int.compare with compare, and we change the specification to say that the list is sorted according to compare.773Example150-005K150-005K.xmlPolymorphic insertion sortIn order to sort some l : 'a list, we must also take in a function compare : 'a * 'a -> order by which we will compare elements of the list.(* isort : ('a * 'a -> order) * 'a list -> 'a list
* REQUIRES: compare is a valid comparison function
* ENSURES: isort (compare, l) is a sorted permutation of l, according to compare
*)
fun isort (compare : 'a * 'a -> order, l : 'a list) : 'a list =
let
fun insert (x, nil) = [x]
| insert (x, y :: ys) =
case compare (x, y) of
GREATER => y :: insert (x, ys)
| _ => x :: y :: ys
fun sorter nil = nil
| sorter (x :: xs) = insert (x, sorter xs)
in
sorter l
end
We use polymorphic insert auxiliary function as an auxiliary function inside the body of isort, once we have access to compare. Here, the logic of sorter is analogous to isort of before, so we conclude by calling sorter l.775Example150-005L150-005L.xmlSorting with custom comparison functionsWe may use polymorphic insertion sort with various choices of comparison functions.val () =
Test.int_list
( "normal integer sorting"
, [0, 1, 5]
, isort (Int.compare, [1, 5, 0])
)
val () =
Test.string_list
( "normal string sorting"
, ["alice", "bob", "charlie"]
, isort (String.compare, ["charlie", "alice", "bob"])
)
fun intCompareBackwards (x, y) =
case Int.compare (x, y) of
LESS => GREATER
| EQUAL => EQUAL
| GREATER => LESS
val () =
Test.int_list
( "reverse integer sorting"
, [5, 1, 0]
, isort (intCompareBackwards, [1, 5, 0])
)
841Lecture150-lect09150-lect09.xmlHigher-order functions I: currying and list abstractions2024611Harrison Grodin
This lecture is inspired by analogous lectures by Michael Erdmann and Brandon Wu.
In this lecture, we begin to take seriously the slogan functions are values.812150-005M150-005M.xmlCurrying784Definition150-005P150-005P.xmlHigher-order functionA higher-order function is a function that takes a function as input or produces a function as output.785Concept150-005R150-005R.xmlRight-associativity of arrowsFunction types are right-associative. In other words, the type t1 -> t2 -> t3 means t1 -> (t2 -> t3), taking an input of type t1 and producing a function of type t2 -> t3.786Concept150-005S150-005S.xmlLeft-associativity of function applicationFunction application is left-associative. In other words, when f : t1 -> t2 -> t3, e1 : t1, and e2 : t2, the application f e1 e2 is the same as (f e1) e2, applying function f to input e1, and then applying that function to e2.787Concept150-0068150-0068.xmlCurryingWe say that a function is curried, named for mathematician Haskell Curry, when it takes in multiple arguments one at a time, producing a function accepting the rest of the arguments.For example, the type t1 -> t2 -> t3 is curried, but the type t1 * t2 -> t3 is not (sometimes called "uncurried").790Example150-005Q150-005Q.xmlCurried addition(* add : int * int -> int *)
fun add (x : int, y : int) : int = x + y
val () = Test.int ("uncurried", 51, add (1, 50))
(* cadd : int -> int -> int *)
fun cadd (x : int) : int -> int = (fn y => x + y)
val () = Test.int ("curried", 51, cadd 1 50)
We need not give cadd all of its arguments simultaneously:val incr = cadd 1
val () = Test.int ("curried again", 51, incr 50)
The type of incr is int -> int, and the value is fn y => 1 + y.793Example150-005T150-005T.xmlCurried polymorphic sorting algorithmRecall polymorphic insertion sort. Rather than taking the comparison function and list to sort simultaneously, we can take a comparison function as input and produce a sorting function:(* isort : ('a * 'a -> order) -> ('a list -> 'a list) *)
fun isort (compare : 'a * 'a -> order) (l : 'a list) : 'a list =
let
fun insert (x, nil) = [x]
| insert (x, y :: ys) =
case compare (x, y) of
GREATER => y :: insert (x, ys)
| _ => x :: y :: ys
fun sorter nil = nil
| sorter (x :: xs) = insert (x, sorter xs)
in
sorter l
end
Here, isort is a higher-order function: it takes in a function compare : 'a * 'a -> order, and it produces a function of type 'a list -> 'a list.Since we don't use l in the implementation of insert or sorter, we can equivalently avoid taking in l ourselves, instead just producing the function sorter : 'a list -> 'a list directly:(* isort : ('a * 'a -> order) -> ('a list -> 'a list) *)
fun isort (compare : 'a * 'a -> order) : 'a list -> 'a list =
let
(* insert : 'a * 'a list -> 'a list *)
fun insert (x, nil) = [x]
| insert (x, y :: ys) =
case compare (x, y) of
GREATER => y :: insert (x, ys)
| _ => x :: y :: ys
(* sorter : 'a list -> 'a list *)
fun sorter nil = nil
| sorter (x :: xs) = insert (x, sorter xs)
in
sorter
end
794Remark150-005V150-005V.xmlRedundant lambdas (\eta -reduction)For all values f : t1 -> t2, we have \texttt {f} \cong \texttt {fn x => f x}. Therefore, if you ever write fn x => f x, you might as well just write f.797Example150-005W150-005W.xmlInstantiating curried polymorphic sorting algorithmUsing the definition of isort, we can define sorters for various comparison functions:fun intsort (l : int list) : int list = isort Int.compare l
fun stringsort (l : string list) : string list = isort String.compare l
We can simplify this using \eta -reduction, avoiding the inputs l entirely:val intsort = isort Int.compare
val stringsort = isort String.compare
800Concept150-005Z150-005Z.xmlFunction compositionTo compose two functions f : 'a -> 'b and g : 'b -> 'c, we can define (g o f) : 'a -> 'c:fun (op o) (g : 'b -> 'c, f : 'a -> 'b) : 'a -> 'c = fn (x : 'a) => g (f x)
We can equivalently define composition in the following ways:fun g o f = fn x => g (f x)
fun (g o f) x = g (f x)
805Example150-0060150-0060.xmlReverse list sortingTo sort a list backwards, we can invert the result of the comparison function.fun invert LESS = GREATER
| invert EQUAL = EQUAL
| invert GREATER = LESS
fun intCompareBackwards (x, y) = invert (Int.compare (x, y))
Using function composition, this is even simpler:val intCompareBackwards = invert o Int.compare
We can view this inversion as a function, too:fun invertCompare (compare : 'a * 'a -> order) : 'a * 'a -> order =
invert o compare
We may define a reverse sorting algorithm as follows:val revSort = isort o invertCompare
806Warning150-0061150-0061.xmlValue restrictionSometimes, val declarations involving polymorphism can trigger a warning called the "value restriction", rendering type variables like 'a as ?.X1. You may safely ignore such warnings in this class.807Concept150-0067150-0067.xmlStagingCurried functions can perform some intermediate computation before receiving all of their arguments.811Example150-0069150-0069.xmlStaged list searchConsider the below function:(* nth : 'a list -> int -> 'a
* REQUIRES: 0 <= i < length l
* ENSURES: nth l i ==> x, where x is the i'th element of l
*)
fun nth nil _ = raise Fail "impossible by REQUIRES"
| nth (x :: _ ) 0 = x
| nth (_ :: xs) i = nth xs (i - 1)
(* earliest : string list -> int -> string
* REQUIRES: 0 <= i < length l
* ENSURES: earliest l i ==> s, where s is the i'th earliest string in l alphabetically
*)
fun earliest l i = nth (msort String.compare l) i
(* equivalent: *)
val earliest = fn l => fn i => nth (msort String.compare l) i
Here, each call to earliest l i takes \mathcal {O}(n\log n) time. Instead, we can stage the call to msort:fun earliest (l : string list) : int -> string =
let
val sorted = msort String.compare l
in
nth sorted
end
(* equivalent: *)
val earliest = fn l => nth (msort String.compare l)
Each call to earliest l i still takes \mathcal {O}(n\log n) time. However:val f : int -> string = earliest ["Bob", "Charlie", "Alice"]
This call to earliest still takes \mathcal {O}(n\log n), but each call f i only takes \mathcal {O}(n).827150-005N150-005N.xmlList transformers813Remark150-005U150-005U.xmlCompositionality via higher-order functionsIn our principles of functional programming, we include compositionality, the idea that big problems can be broken down into smaller components which are often reusable. Higher-order functions let us accomplish this.818Concept150-005X150-005X.xmlList mapConsider the following functions:(* incAll : int list -> int list
* REQUIRES: true
* ENSURES: incAll [x1, ..., xn] = [x1 + 1, ..., xn + 1]
*)
fun incAll nil = nil
| incAll (x :: xs) = (x + 1) :: incAll xs
(* stringAll : int list -> string list
* REQUIRES: true
* ENSURES: stringAll [x1, ..., xn] = [Int.toString x1, ..., Int.toString xn]
*)
fun stringAll nil = nil
| stringAll (x :: xs) = Int.toString x :: stringAll xs
(* flipAll : bool list -> bool list
* REQUIRES: true
* ENSURES: flipAll [x1, ..., xn] = [not x1, ..., not xn]
*)
fun flipAll nil = nil
| flipAll (x :: xs) = not x :: flipAll xs
All share a common structure, applying a function to each element of the input list. For a function f : t1 -> t2, we have:(* fAll : t1 list -> t2 list
* REQUIRES: true
* ENSURES: fAll [x1, ..., xn] = [f x1, ..., f xn]
*)
fun fAll nil = nil
| fAll (x :: xs) = f x :: fAll xs
So, we can define a higher-order function, map, that takes in such a function f and produces the corresponding fAll function:(* map : ('a -> 'b) -> 'a list -> 'b list
* REQUIRES: true
* ENSURES: map f [x1, ..., xn] = [f x1, ..., f xn]
*)
fun map f nil = nil
| map f (x :: xs) = f x :: map f xs
Then, we can define the other functions very simply:val incAll = map (fn x => x + 1)
val stringAll = map Int.toString
val flipAll = map not
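As a quick sanity check, we can exercise these derived definitions with the same Test harness used in earlier examples:

```sml
val () = Test.int_list ("incAll via map", [2, 3, 4], incAll [1, 2, 3])
val () = Test.string_list ("stringAll via map", ["1", "2", "3"], stringAll [1, 2, 3])
```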
821Example150-005Y150-005Y.xmlAdding a number to every element of a listUsing list map, we can easily implement the following specification:(* addToAll : int * int list -> int list
* REQUIRES: true
* ENSURES: addToAll (x, [y1, ..., yn]) = [x + y1, ..., x + yn]
*)
fun addToAll (x, l) = map (fn y => x + y) l
Or, even more concisely using curried addition:fun addToAll (x, l) = map (cadd x) l
826Concept150-0062150-0062.xmlList filterConsider the following functions:(* keepEvens : int list -> int list
* REQUIRES: true
* ENSURES: keepEvens l ==> l', where l' contains the even elements of l in the same order
*)
fun keepEvens nil = nil
| keepEvens (x :: xs) =
if isEven x
then x :: keepEvens xs
else keepEvens xs
(* keepMammals : animal list -> animal list
* REQUIRES: true
* ENSURES: keepMammals l ==> l', where l' contains the mammals of l in the same order
*)
fun keepMammals nil = nil
| keepMammals (x :: xs) =
if isMammal x
then x :: keepMammals xs
else keepMammals xs
Both share a common structure, only keeping the elements of the input list satisfying some condition. For a predicate p : t -> bool, we have:(* keepP : t list -> t list
* REQUIRES: true
* ENSURES: keepP l ==> l', where l' contains the elements of l satisfying p in the same order
*)
fun keepP nil = nil
| keepP (x :: xs) =
if p x
then x :: keepP xs
else keepP xs
So, we can define a higher-order function, filter, that takes in such a predicate p and produces the corresponding keepP function:(* filter : ('a -> bool) -> 'a list -> 'a list
* REQUIRES: true
* ENSURES: filter p l ==> l', where l' contains the elements of l satisfying p in the same order
*)
fun filter p nil = nil
| filter p (x :: xs) =
if p x
then x :: filter p xs
else filter p xs
Then, we can define the other functions very simply:val keepEvens = filter isEven
val keepMammals = filter isMammal
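As a quick check (assuming the isEven predicate from the example above):

```sml
val () = Test.int_list ("keepEvens via filter", [0, 2, 4], keepEvens [0, 1, 2, 3, 4, 5])
```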
840150-005O150-005O.xmlList folds832Concept150-0063150-0063.xmlList foldrConsider the following functions:(* sum : int list -> int
* REQUIRES: true
* ENSURES: sum [x1, ..., xn] = x1 + (x2 + (... + (xn + 0)))
*)
fun sum nil = 0
| sum (x :: xs) = x + sum xs
(* concat : 'a list list -> 'a list
* REQUIRES: true
* ENSURES: concat [x1, ..., xn] = x1 @ (x2 @ (... @ (xn @ nil)))
*)
fun concat nil = nil
| concat (x :: xs) = x @ concat xs
(* commas : string list -> string
* REQUIRES: true
* ENSURES: commas [x1, ..., xn] = (x1 ^ ", ") ^ ((x2 ^ ", ") ^ (... ^ ((xn ^ ", ") ^ ".")))
*)
fun commas nil = "."
| commas (x :: xs) = (x ^ ", ") ^ commas xs
(* rebuild : 'a list -> 'a list *)
fun rebuild nil = nil
| rebuild (x :: xs) = x :: rebuild xs
(* isort : int list -> int list *)
fun isort nil = nil
| isort (x :: xs) = insert (x, isort xs)
All of these functions share a common structure, combining x into the recursive call on xs. For a base case init : t2 and a recursive case f : t1 * t2 -> t2, we have:(* combine : t1 list -> t2
* REQUIRES: true
* ENSURES: combine [x1, ..., xn] = f (x1, f (x2, ... f (xn, init)))
*)
fun combine nil = init
| combine (x :: xs) = f (x, combine xs)
So, we can define a higher-order function, foldr, that takes in such an initial value init and a combining function f and produces the corresponding combine function:(* foldr : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b
* REQUIRES: true
* ENSURES: foldr f init [x1, ..., xn] = f (x1, f (x2, ... f (xn, init)))
*)
fun foldr f init nil = init
| foldr f init (x :: xs) = f (x, foldr f init xs)
Then, we can define the other functions very simply:val sum = foldr (op +) 0
val concat = foldr (op @) nil
val commas = foldr (fn (x, y) => x ^ ", " ^ y) "."
val rebuild = foldr (op ::) nil
val isort = foldr insert nil
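To see why foldr matches the ENSURES specification, we can unfold a small example by hand:

```sml
(* foldr (op +) 0 [1, 2, 3]
 * == 1 + foldr (op +) 0 [2, 3]
 * == 1 + (2 + foldr (op +) 0 [3])
 * == 1 + (2 + (3 + foldr (op +) 0 nil))
 * == 1 + (2 + (3 + 0))
 * == 6
 *)
val () = Test.int ("sum via foldr", 6, sum [1, 2, 3])
```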
836Concept150-0065150-0065.xmlList foldlWe can also traverse a list in the other direction:(* foldl : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b
* REQUIRES: true
* ENSURES: foldl f acc [x1, ..., xn] = f (xn, ... f (x2, f (x1, acc)))
*)
fun foldl f acc nil = acc
| foldl f acc (x :: xs) = foldl f (f (x, acc)) xs
Here, we traverse the list in the other direction. Rather than the 'b input serving as a base case, it serves as an accumulator.Equivalently, we can implement foldl using foldr and list reverse:fun foldl f acc = foldr f acc o rev
This makes it clear that if we choose the first implementation, we could implement rev using foldl:val rev = foldl (op ::) nil
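Rebuilding a list with op :: makes the difference in traversal order visible:

```sml
val () = Test.int_list ("foldr rebuilds the list", [1, 2, 3], foldr (op ::) nil [1, 2, 3])
val () = Test.int_list ("foldl reverses the list", [3, 2, 1], foldl (op ::) nil [1, 2, 3])
```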
838Taxon150-0066150-0066.xmlList folds and for loopsThe for loops of other languages are analogous to foldl. The imperative pseudocodeacc = init
for x in l:
acc = f(x, acc)
corresponds to the Standard ML code foldl f init l.839Remark150-0064150-0064.xmlUniversality of list foldrThe definition of foldr is very natural.It takes in a replacement for each constructor: init for nil and f for ::.
By construction, every function defined by structural recursion can be implemented using foldr: the base case is init, and the recursive case is f.
It goes right-to-left (i.e., bottom-to-top) on lists, which are constructed right-to-left (i.e., bottom-to-top): nil is all the way to the right (or at the bottom, depending on how you think about it).These properties naturally generalize to other datatypes. In contrast, foldl does not: datatypes in general need not have a notion of "top down", as trees may have more than one branch.907Lecture150-lect10150-lect10.xmlHigher-order functions II: map, bind, and fold2024613Harrison Grodin
This lecture is inspired by lectures by Michael Erdmann, Stephen Brookes, and Brandon Wu.
852150-006X150-006X.xmlWarm-up: data pipelines849Concept150-006P150-006P.xmlPipe functionThe following function, pronounced "pipe", is useful for building data pipelines:infix 4 |>
(* op |> : 'a * ('a -> 'b) -> 'b *)
fun x |> f = f x
851Example150-006T150-006T.xmlPipeline of higher-order functions using |>Using |>, we can take the sum of the squares of the numbers in a list l greater than some threshold n : int.fun sumSquaresAbove (l : int list, n : int) : int =
l
|> List.filter (fn x => x > n)
|> List.map (fn x => x * x)
|> List.foldr op+ 0
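For instance, running the same pipeline on a small input (with threshold 2, only 3 and 4 survive the filter, and 9 + 16 = 25):

```sml
val () =
  Test.int
    ( "sum of squares above 2"
    , 25
    , [1, 2, 3, 4]
      |> List.filter (fn x => x > 2)
      |> List.map (fn x => x * x)
      |> List.foldr op+ 0
    )
```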
862150-006B150-006B.xmlGeneralized map: one-to-one transformations854Concept150-006D150-006D.xmlMap abstractionWe previously saw map, which takes a function f : 'a -> 'b and a list 'a list and applies the function on each 'a to get a resulting 'b list.This specification can be generalized beyond 'a list to arbitrary types 'a t:(* map : ('a -> 'b) -> 'a t -> 'b t
* REQUIRES: true
* ENSURES:
* - map id = id, ie map id s = s
* - map (f o g) = map f o map g, ie map f (map g s) = map (f o g) s
*)
In other words, the ENSURES guarantees that map is structure-preserving.856Example150-006E150-006E.xmlTree mapRecall polymorphic trees. We can implement tmap to map over trees:datatype 'a tree = Empty | Node of 'a tree * 'a * 'a tree
(* tmap : ('a -> 'b) -> 'a tree -> 'b tree
* REQUIRES: true
* ENSURES:
* - tmap id = id, ie tmap id t = t
* - tmap (f o g) = tmap f o tmap g, ie tmap f (tmap g t) = tmap (f o g) t
*)
fun tmap f Empty = Empty
| tmap f (Node (l, x, r)) = Node (tmap f l, f x, tmap f r)
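For example, on a small tree, tmap applies the function at every node while preserving the shape:

```sml
val t : int tree = Node (Node (Empty, 1, Empty), 2, Node (Empty, 3, Empty))
(* tmap Int.toString t
 * == Node (Node (Empty, "1", Empty), "2", Node (Empty, "3", Empty))
 *)
```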
859Example150-006F150-006F.xmlShrub mapConsider the following type of polymorphic "shrubs", trees which store data at the leaves rather than the nodes.datatype 'a shrub = SEmpty | SLeaf of 'a | SNode of 'a shrub * 'a shrub
We can implement smap to map over shrubs:(* smap : ('a -> 'b) -> 'a shrub -> 'b shrub
* REQUIRES: true
* ENSURES:
* - smap id = id, ie smap id s = s
* - smap (f o g) = smap f o smap g, ie smap f (smap g s) = smap (f o g) s
*)
fun smap f SEmpty = SEmpty
| smap f (SLeaf x) = SLeaf (f x)
| smap f (SNode (l, r)) = SNode (smap f l, smap f r)
861Example150-006G150-006G.xmlOption mapWe can implement omap to map over options:(* omap : ('a -> 'b) -> 'a option -> 'b option
* REQUIRES: true
* ENSURES:
* - omap id = id, ie omap id opt = opt
* - omap (f o g) = omap f o omap g, ie omap f (omap g opt) = omap (f o g) opt
*)
fun omap f NONE = NONE
| omap f (SOME x) = SOME (f x)
875150-006A150-006A.xmlGeneralized foldr: natural folds865Concept150-006H150-006H.xmlFold abstractionWe previously saw foldr. Crucially, it sent [x1, x2, ..., xn], i.e., op:: (x1, op:: (x2, ..., op:: (xn, nil))) to f (x1, f (x2, ..., f (xn, init))) by replacing op:: with f and nil with init.If we rewrite the list datatype as follows:datatype 'a list = Cons of 'a * 'a list | Nil
We might as well write foldr as:(* foldr : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b *)
fun foldr (cons : 'a * 'b -> 'b) (nil : 'b) (l : 'a list) : 'b =
case l of
Cons (x, xs) => cons (x, foldr cons nil xs)
| Nil => nil
The type of each argument matches the type of the constructor, swapping 'a list for 'b. Here, cons is just a function (not a constructor!) to replace every Cons with, and nil is just a value to replace every Nil with.The general recipe is as follows:For each constructor, replace the name of the type with 'b, including recursive uses.
Take in each of these functions/values meant to replace the constructor as arguments.
In the implementation, replace each constructor with its function, performing recursive calls on substructures if there are any.For example:We have Cons : 'a * 'a list -> 'a list and Nil : 'a list, so we get cons : 'a * 'b -> 'b and nil : 'b.
We take in cons and nil as arguments.
The implementation is as above.This perspective justifies the universality of list foldr.868Example150-006I150-006I.xmlTree foldWe can implement a fold for type 'a tree as follows:(* tfold : 'b -> ('b * 'a * 'b -> 'b) -> 'a tree -> 'b *)
fun tfold (empty : 'b) (node : 'b * 'a * 'b -> 'b) (t : 'a tree) : 'b =
case t of
Empty => empty
| Node (l, x, r) => node (tfold empty node l, x, tfold empty node r)
We can use it to concisely implement tree functions, such as tree sum and in-order traversal of a tree:val sum : int tree -> int =
tfold 0 (fn (m, x, n) => m + x + n)
val inordSlow : 'a tree -> 'a list =
tfold nil (fn (l1, x, l2) => l1 @ x :: l2)
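For example, on a small tree:

```sml
val t : int tree = Node (Node (Empty, 1, Empty), 2, Node (Empty, 3, Empty))
val () = Test.int ("tree sum via tfold", 6, sum t)
val () = Test.int_list ("in-order traversal via tfold", [1, 2, 3], inordSlow t)
```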
871Example150-006J150-006J.xmlShrub foldWe can implement a fold for type 'a shrub as follows:(* sfold : 'b -> ('a -> 'b) -> ('b * 'b -> 'b) -> 'a shrub -> 'b *)
fun sfold (sempty : 'b) (sleaf : 'a -> 'b) (snode : 'b * 'b -> 'b) (s : 'a shrub) : 'b =
case s of
SEmpty => sempty
| SLeaf x => sleaf x
| SNode (l, r) => snode (sfold sempty sleaf snode l, sfold sempty sleaf snode r)
We can use it to concisely implement shrub functions. For example, we can take the in-order traversal, and we can count the characters used in a string shrub:val sinord : 'a shrub -> 'a list =
sfold nil (fn x => [x]) (op @)
val countChars : string shrub -> int =
sfold 0 String.size (op +)
874Example150-006K150-006K.xmlOption foldWe can implement a fold for 'a option as follows:(* ofold : 'b -> ('a -> 'b) -> 'a option -> 'b *)
fun ofold (none : 'b) (some : 'a -> 'b) (opt : 'a option) : 'b =
case opt of
NONE => none
| SOME x => some x
Note that we never take in a 'b or perform recursive calls since 'a option is not recursive.We can use ofold to implement functions concisely without explicitly pattern matching. For example, since we sometimes use int option as integers extended with \infty , we can convert one to a string:val toString : int option -> string =
ofold "infinity" Int.toString
906150-006C150-006C.xmlGeneralized filter: one-to-many transformationsRecall list filter, where filter : ('a -> bool) -> 'a list -> 'a list. This function sends each element in a list to at most one element. We can generalize this to a new tool, bind, that sends each element to arbitrarily many elements.877Concept150-006L150-006L.xmlList bindThe function bind takes in a function f : 'a -> 'b list that produces as many 'bs as it wishes; we accumulate all of them in a list.(* bind : ('a -> 'b list) -> 'a list -> 'b list *)
fun bind f nil = nil
| bind f (x :: xs) = f x @ bind f xs
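For example, each element may contribute any number of results, which are appended in order:

```sml
(* bind (fn x => [x, x + 10]) [1, 2]
 * == [1, 11] @ [2, 12]
 * == [1, 11, 2, 12]
 *)
val () = Test.int_list ("bind example", [1, 11, 2, 12], bind (fn x => [x, x + 10]) [1, 2])
```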
It generalizes list map, whose function input must always produce exactly one 'b.879Example150-006Y150-006Y.xmlList filter using bindWe can implement list filter concisely using list bind:(* filter : ('a -> bool) -> 'a list -> 'a list *)
fun filter p = bind (fn a => if p a then [a] else [])
Here, filter either produces a singleton list or an empty list. In this way, bind is a generalization: we can return any length list we choose.881Example150-006M150-006M.xmlTree roots via bindWe can use bind to extract data, in addition to merely filtering. For example, we can use bind to get all of the roots from a list of potentially-empty trees:(* roots : 'a tree list -> 'a list *)
val roots = bind (fn Empty => [] | Node (_, x, _) => [x])
883Example150-006N150-006N.xmlTree elements using bindWe can use bind to turn each element into multiple elements. For example, we can get all of the elements out of a list of trees:(* elements : 'a tree list -> 'a list *)
val elements = bind inord
885Example150-006W150-006W.xmlCartesian product using bindWe can use bind to get all possible pairs of 'as and 'bs:fun product (l1 : 'a list, l2 : 'b list) : ('a * 'b) list =
bind (fn a => bind (fn b => [(a, b)]) l2) l1
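For example:

```sml
(* product ([1, 2], ["a", "b"])
 * == [(1, "a"), (1, "b"), (2, "a"), (2, "b")]
 *)
```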
887Example150-006Z150-006Z.xmlConditional duplication using bindWe may choose to send some elements to multiple values in the result list. For example:val f : int list -> int list =
bind (fn x => if x >= 2 then [x, x] else [x])
If we apply this function to the list [1, 2, 3], then 2 and 3 get duplicated in the result list:\texttt {f [1, 2, 3]} \Longrightarrow \texttt {[1, 2, 2, 3, 3]}893Example150-006O150-006O.xmlExact change using bindUsing bind, we can figure out how to make exact change using 25¢ (quarter), 10¢ (dime), and 5¢ (nickel) coins. First, we implement a helper function that generates all of the possible ways to use a given coin to shrink the total money needed:(* INVARIANT: >= 0 *)
type money = int
(* INVARIANT: >= 0 *)
type coin = int
(* useCoin : coin -> money -> (int * money) list
* REQUIRES: true
* ENSURES: useCoin c total ==> l, where l contains all pairs (n, total') with n,total' >= 0 such that c*n + total' = total
*)
fun useCoin (c : coin) (total : money) : (int * money) list =
(0, total) ::
(if total < c
then []
else map (fn (n, total') => (n + 1, total')) (useCoin c (total - c)))
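For example, with dimes and 20¢ remaining, we may use zero, one, or two of them:

```sml
(* useCoin 10 20 == [(0, 20), (1, 10), (2, 0)] *)
```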
First, we can use bind on the result of useCoin to filter the valid results. Assume we only wish to use quarters:fun makeChangeQ (total : int) : int list =
bind (fn (q, 0) => [q] | _ => [])
(useCoin 25 total)
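For example, since useCoin 25 50 == [(0, 50), (1, 25), (2, 0)], only the (2, 0) pair survives the filter:

```sml
val () = Test.int_list ("50 cents in quarters", [2], makeChangeQ 50)
```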
Rather than immediately filtering, we can try to use smaller coins instead:fun makeChange (total : int) : (int * int * int) list =
bind (fn (q, d, n, 0) => [(q, d, n)] | _ => []) (
bind (fn (q, d, total'') => map (fn (n, total''') => (q, d, n, total''')) (useCoin 5 total'')) (
bind (fn (q, total') => map (fn (d, total'') => (q, d, total'')) (useCoin 10 total'))
(useCoin 25 total)
)
)
We can rewrite this in the order in which the operations happen, using the pipe function:fun makeChange (total : int) : (int * int * int) list =
useCoin 25 total
|> bind (fn (q, total') => map (fn (d, total'') => (q, d, total'')) (useCoin 10 total'))
|> bind (fn (q, d, total'') => map (fn (n, total''') => (q, d, n, total''')) (useCoin 5 total''))
|> bind (fn (q, d, n, 0) => [(q, d, n)] | _ => [])
To avoid constructing the intermediate tuples (q, d, total'') and (q, d, n, total''') just to immediately pattern match on them in the next bind, we can nest the binds as follows. In this way, q and d remain in scope for the inner binds, and we avoid the maps.fun makeChange (total : int) : (int * int * int) list =
bind (fn (q, total') =>
bind (fn (d, total'') =>
bind (fn (n, 0) => [(q, d, n)] | _ => [])
(useCoin 5 total''))
(useCoin 10 total'))
(useCoin 25 total)
895Concept150-006Q150-006Q.xmlInfix >>= notation for bindSimilar to the pipe function, we can reverse the argument order of list bind and view it as an infix function:infix 4 >>=
(* op >>= : 'a list * ('a -> 'b list) -> 'b list *)
fun l >>= f = bind f l
898Example150-006U150-006U.xmlExact change using >>=We can rewrite using the infix function >>=:fun makeChange (total : int) : (int * int * int) list =
useCoin 25 total
>>= (fn (q, total') => map (fn (d, total'') => (q, d, total'')) (useCoin 10 total'))
>>= (fn (q, d, total'') => map (fn (n, total''') => (q, d, n, total''')) (useCoin 5 total''))
>>= (fn (q, d, n, 0) => [(q, d, n)] | _ => [])
fun makeChange (total : int) : (int * int * int) list =
useCoin 25 total >>= (fn (q, total') =>
useCoin 10 total' >>= (fn (d, total'') =>
useCoin 5 total'' >>= (fn (n, total''') =>
case total''' of
0 => [(q, d, n)]
| _ => [])))
Notice that each line is inside a fn started by the previous line.You may observe a similarity to imperative programming languages. Informally: (q, total') <- useCoin 25 total;
(d, total'') <- useCoin 10 total';
(n, total''') <- useCoin 5 total'';
match total''':
case 0:
return (q, d, n)
case _:
error()
901Concept150-006R150-006R.xmlBind abstractionWe previously saw bind, which takes a function f : 'a -> 'b list and a list 'a list and applies the function on each 'a to get a resulting flattened 'b list.This specification can be generalized beyond 'a list to arbitrary types 'a t:(* bind : ('a -> 'b t) -> 'a t -> 'b t
* REQUIRES: true
* ENSURES: ...
*)
The ENSURES should contain some conditions similar to those given for the map abstraction, but we elide them in this class.We can always implement the infix >>= using a bind implementation:fun (x : 'a t) >>= (f : 'a -> 'b t) : 'b t = bind f x
903Example150-006S150-006S.xmlOption bindWe can implement obind to map over the element of an option, if there is one:(* obind : ('a -> 'b option) -> 'a option -> 'b option *)
fun obind f NONE = NONE
| obind f (SOME a) = f a
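For example, using a hypothetical halve function that fails on odd inputs, obind chains the possible failures:

```sml
(* halve : int -> int option, a hypothetical helper *)
fun halve n = if n mod 2 = 0 then SOME (n div 2) else NONE
(* obind halve (SOME 12) == SOME 6
 * obind halve (obind halve (SOME 12)) == SOME 3
 * obind halve (SOME 3) == NONE, and NONE propagates through any further obinds
 *)
```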
905Example150-006V150-006V.xmlChecking an in-order traversal using option bindUsing option bind, we can propagate failures implicitly.(* tmatch : int list -> int tree -> int list option
* REQUIRES: true
* ENSURES: tmatch goal t ==>
* - SOME remainder, if inord t @ remainder = goal
* - NONE otherwise
*)
fun tmatch goal Empty = SOME goal
| tmatch goal (Node (l, x, r)) =
tmatch goal l >>= (fn remainder =>
case remainder of
nil => NONE
| y :: ys => if x = y then tmatch ys r else NONE)
If checking l produces NONE, then >>= will guarantee that the entire computation produces NONE.948Lecture150-lect11150-lect11.xmlLazy programming2024618Harrison Grodin
This lecture is inspired by lectures by Michael Erdmann, Stephen Brookes, and Giselle Reis.
936150-0070150-0070.xmlProgramming with streams915Idea150-0071150-0071.xmlInfinite listsLots of data in the world comes "lazily", sometimes even as an "infinite list":texts, messages, and emails;
packets for a video call;
song bytes;
digits of \frac {1}{7} or \pi ;
shipping orders;
tapped buttons on a keyboard or mobile app;
...You never have all the data at once, but it comes one element at a time.What happens if we try to make an infinite list?(* repeat : 'a -> 'a list *)
fun repeat (x : 'a) : 'a list = x :: repeat x
This definition does not go by recursion on anything (no parameter shrinks), and an evaluation trace would go on forever: \begin {aligned} &\texttt {repeat x} \\ &\Longrightarrow \texttt {x :: repeat x} \\ &\Longrightarrow \texttt {x :: x :: repeat x} \\ &\Longrightarrow \texttt {x :: x :: x :: repeat x} \\ &\Longrightarrow \cdots \end {aligned} What if all of the data wasn't computed immediately, but instead it was available on request?Since functions are values, fn () => repeat x is a value. However, applying it to () still loops.What if we could define (informally) repeat x = fn () => x :: repeat x? This doesn't exactly typecheck, since x : 'a but repeat x : unit -> 'a list, so we cannot use ::. But, we can define a different type to make this work.916Definition150-0072150-0072.xmlSuspensionA value v : unit -> t is called a suspension (or thunk), since it contains a "suspended", not-yet-evaluated expression of type t.We suspend an expression e : t via fn () => e.To compute the result of the expression e, we evaluate v ().920Definition150-0073150-0073.xmlStreamUsing a suspension, we can define a type of streams as follows:datatype 'a stream = Stream of unit -> 'a * 'a stream
Here, Stream takes the role of ::, but storing a suspension of a first element and the remainder of the stream.Note that in this formulation, every stream is infinite.The following helper function computes the first element of a stream and its tail:(* expose : 'a stream -> 'a * 'a stream *)
fun expose (Stream susp : 'a stream) : 'a * 'a stream = susp ()
We call the first element of a stream its head, and the remainder its tail.fun fst (x, y) = x
fun snd (x, y) = y
fun head (s : 'a stream) : 'a = fst (expose s)
fun tail (s : 'a stream) : 'a stream = snd (expose s)
923Example150-0074150-0074.xmlInfinite stream of a repeated valueWe can make repeat into a reality by defining a stream instead of a list:fun repeat (x : 'a) : 'a stream = Stream (fn () => (x, repeat x))
We can ask for a few elements from the stream as follows:val ones : int stream = repeat 1
val (a, ones') = expose ones
val () = Test.int ("first element of ones", 1, a)
val (b, ones'') = expose ones'
val () = Test.int ("second element of ones", 1, b)
val (c, ones''') = expose ones''
val () = Test.int ("third element of ones", 1, c)
924Concept150-0077150-0077.xmlCorecursionDefinitions such as repeat do not go by recursion on an input; nothing needs to ever shrink. Instead, they go by corecursion, producing a finite amount of data but offering to produce more if desired.925Remark150-0075150-0075.xmlInfinite data, finite observationEven though a stream represents infinite data, we can only ask to compute finitely much of it.Similarly, the function fn n => n * n stores infinitely much data (0, 1, 4, 9, ...), and we can ask for whatever data we want, but only finitely many times.928Example150-0076150-0076.xmlStream of natural numbersHow could we make the stream 0, 1, 2, 3, 4, ... of all natural numbers? We might try:val nats : int stream =
Stream (fn () => (0,
Stream (fn () => (1,
Stream (fn () => (2, ...
))))))
However, we can never finish typing the .... Instead, we compute something more general: all of the natural numbers starting from n. Then, nats is a special case, choosing 0 for n.(* natsFrom : int -> int stream
* REQUIRES: true
* ENSURES: natsFrom n ==> s, where the elements of s are n, (n + 1), (n + 2), (n + 3), ...
*)
fun natsFrom (n : int) : int stream =
Stream (fn () => (n, natsFrom (n + 1)))
val nats : int stream = natsFrom 0
930Example150-0078150-0078.xmlFinite prefix of a streamTo get the first n elements of a stream, we can go by recursion on n:(* take : 'a stream * int -> 'a list
* REQUIRES: n >= 0
* ENSURES: take (s, n) ==> l, where l is the first n elements of s
*)
fun take (s, 0) = nil
| take (s, n) =
let
val (a, s') = expose s
in
a :: take (s', n - 1)
end
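For example, using nats from above:

```sml
val () = Test.int_list ("first five naturals", [0, 1, 2, 3, 4], take (nats, 5))
```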
934Example150-0079150-0079.xmlStream mapWe can implement the map abstraction for streams:fun map (f : 'a -> 'b) (s : 'a stream) : 'b stream =
Stream (fn () =>
let
val (a, s') = expose s
in
(f a, map f s')
end
)
To get the stream 0, 2, 4, 6, ... of even numbers, we can use map on nats:val evens : int stream = map (fn x => 2 * x) nats
Notice that this function is lazy, only exposing s to get its head when necessary.In contrast, the following function is not lazy, staging expose s unnecessarily:fun badMap (f : 'a -> 'b) (s : 'a stream) : 'b stream =
let
val (a, s') = expose s
in
Stream (fn () => (f a, badMap f s'))
end
It is not desirable to expose s immediately, since we only need a once asked to compute the first element of the result.935Concept150-007A150-007A.xmlMaximal lazinessWe say that a function on streams is maximally lazy when it exposes as few elements of input streams as possible at any given point in evaluation.947150-007C150-007C.xmlCoinduction937Concept150-007D150-007D.xmlExtensional equivalence at stream type: coinductionLet t be an arbitrary type, and let s0 and s0' be of type t stream. To show that \texttt {s0} \cong \texttt {s0'}:Choose a relation R(-, -) on pairs of t streams that relates pairs of streams that you expect to be equivalent.
Start State: Show that R(\texttt {s0}, \texttt {s0'}), guaranteeing that the streams you care about are related.
Preservation: Then, show that for all s and s', if R(\texttt {s}, \texttt {s'}), then:
the heads are the same, \texttt {head s} \cong \texttt {head s'} (the "co-base case", since no more stream data comes after the head); and
the tails stay related, R(\texttt {tail s}, \texttt {tail s'}) (the "coinductive conclusion", dual to the inductive hypothesis).This proof technique is called coinduction.Notice that this definition has some similarities with extensional equivalence at function types: both check that you see equivalent results when you use the expressions in equivalent ways.939Theorem150-007G150-007G.xmlMapping over a repeated streamFor all types t1 and t2 and values f : t1 -> t2 and x : t1, we have \texttt {map f (repeat x)} \cong \texttt {repeat (f x)}.
938Proof#184unstable-184.xml150-007G
Let f and x be arbitrary.
We prove that \texttt {map f (repeat x)} \cong \texttt {repeat (f x)} by coinduction.
Choose R = \{(\texttt {map f (repeat x)}, \texttt {repeat (f x)})\}. In other words, choose to relate exactly the two sides of the equation.
Start State: Clearly, we have R(\texttt {map f (repeat x)}, \texttt {repeat (f x)}) by construction.
Preservation:
Let s and s' be arbitrary, and assume that R(\texttt {s}, \texttt {s'}).
In other words, we have that s is map f (repeat x) and s' is repeat (f x).
First, we show that \texttt {head s} \cong \texttt {head s'} (co-base case).
\begin {aligned} &\texttt {head s} \\ &\cong \texttt {head (map f (repeat x))} \\ &\cong \texttt {f (head (repeat x))} &&\text {(def of \texttt {map})} \\ &\cong \texttt {f x} &&\text {(def of \texttt {repeat})} \end {aligned}
\begin {aligned} &\texttt {head s'} \\ &\cong \texttt {head (repeat (f x))} \\ &\cong \texttt {f x} &&\text {(def of \texttt {repeat})} \end {aligned}
Both sides are equivalent, as desired.
Then, we show that R(\texttt {tail s}, \texttt {tail s'}) (coinductive conclusion).
\begin {aligned} &\texttt {tail s} \\ &\cong \texttt {tail (map f (repeat x))} \\ &\cong \texttt {map f (tail (repeat x))} &&\text {(def of \texttt {map})} \\ &\cong \texttt {map f (repeat x)} &&\text {(def of \texttt {repeat})} \end {aligned}
\begin {aligned} &\texttt {tail s'} \\ &\cong \texttt {tail (repeat (f x))} \\ &\cong \texttt {repeat (f x)} &&\text {(def of \texttt {repeat})} \end {aligned}
Both sides are related by R, as desired. (In fact, they don't change from where we are now!)
This completes the proof.
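The equational steps above (and in the next proof) unfold the definitions of repeat, map, and natsFrom. For reference, here is one standard presentation of these functions; this is a sketch assuming the course's lazy stream datatype, and the official definitions may differ in minor details:

```sml
(* Lazy streams, as in lecture: a suspension of a head and a tail. *)
datatype 'a stream = Stream of unit -> 'a * 'a stream

fun expose (Stream susp) = susp ()

(* repeat x = (x, x, x, ...) *)
fun repeat (x : 'a) : 'a stream =
  Stream (fn () => (x, repeat x))

(* map f (x0, x1, x2, ...) = (f x0, f x1, f x2, ...) *)
fun map (f : 'a -> 'b) (s : 'a stream) : 'b stream =
  Stream (fn () =>
    let
      val (x, s') = expose s
    in
      (f x, map f s')
    end)

(* natsFrom n = (n, n + 1, n + 2, ...) *)
fun natsFrom (n : int) : int stream =
  Stream (fn () => (n, natsFrom (n + 1)))
```

With these definitions, head (map f s) is equivalent to f (head s) and tail (map f s) to map f (tail s), which are exactly the unfolding steps used in the proofs.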
942Theorem150-007F150-007F.xmlIncreased stream of natural numbersDefine the following helper function:fun add m x = m + x
For all m and n, we have \texttt {map (add m) (natsFrom n)} \cong \texttt {natsFrom (m + n)}.
941Proof#185unstable-185.xml150-007F
Let m and n be arbitrary.
We prove that \texttt {map (add m) (natsFrom n)} \cong \texttt {natsFrom (m + n)} by coinduction.
Choose R = \{(\texttt {map (add m) (natsFrom n')}, \texttt {natsFrom (m + n')}) \mid \texttt {n' : int} \}. In other words, choose to relate both sides of the equation, for all n', not just the n we have.
Start State: Clearly, we have R(\texttt {map (add m) (natsFrom n)}, \texttt {natsFrom (m + n)}) by construction.
Preservation:
Let s and s' be arbitrary, and assume that R(\texttt {s}, \texttt {s'}).
In other words, we have that s is map (add m) (natsFrom n') and s' is natsFrom (m + n'), for some n'.
First, we show that \texttt {head s} \cong \texttt {head s'} (co-base case).
\begin {aligned} &\texttt {head s} \\ &\cong \texttt {head (map (add m) (natsFrom n'))} \\ &\cong \texttt {add m (head (natsFrom n'))} &&\text {(def of \texttt {map})} \\ &\cong \texttt {add m n'} &&\text {(def of \texttt {natsFrom})} \\ &\cong \texttt {m + n'} \end {aligned}
\begin {aligned} &\texttt {head s'} \\ &\cong \texttt {head (natsFrom (m + n'))} \\ &\cong \texttt {m + n'} &&\text {(def of \texttt {natsFrom})} \end {aligned}
Both sides are equivalent, as desired.
Then, we show that R(\texttt {tail s}, \texttt {tail s'}) (coinductive conclusion).
\begin {aligned} &\texttt {tail s} \\ &\cong \texttt {tail (map (add m) (natsFrom n'))} \\ &\cong \texttt {map (add m) (tail (natsFrom n'))} &&\text {(def of \texttt {map})} \\ &\cong \texttt {map (add m) (natsFrom (n' + 1))} &&\text {(def of \texttt {natsFrom})} \end {aligned}
\begin {aligned} &\texttt {tail s'} \\ &\cong \texttt {tail (natsFrom (m + n'))} \\ &\cong \texttt {natsFrom ((m + n') + 1)} &&\text {(def of \texttt {natsFrom})} \\ &\cong \texttt {natsFrom (m + (n' + 1))} &&\text {(math)} \end {aligned}
Both sides are related by R, as desired.
This completes the proof.
944Example150-007B150-007B.xmlStream scanWe can implement the following stream function to accumulate the results of a function incorporating each element of a stream:(* scanl : ('a * 'b -> 'b) -> 'b -> 'a stream -> 'b stream
* REQUIRES: true
* ENSURES: scanl f acc (x0, x1, x2, ...) ==>
* (acc, f (x0, acc), f (x1, f (x0, acc)), ...)
*)
fun scanl (f : 'a * 'b -> 'b) (acc : 'b) (s : 'a stream) : 'b stream =
Stream (fn () =>
let
val (a, s') = expose s
in
(acc, scanl f (f (a, acc)) s')
end
)
This function is reminiscent of list foldl, but it produces a stream of the results over time rather than a single value.Warning: This implementation is not maximally lazy, since it exposes s even though the first element is not necessary to compute the first element of the result. However, this code will be more straightforward to prove correct.946Theorem150-007E150-007E.xmlNatural numbers using stream scan and repeatFor all n, we have \texttt {scanl op+ n (repeat 1)} \cong \texttt {natsFrom n}.
945Proof#186unstable-186.xml150-007E
Let n be arbitrary.
We prove that \texttt {scanl op+ n (repeat 1)} \cong \texttt {natsFrom n} by coinduction.
Choose R = \{(\texttt {scanl op+ n' (repeat 1)}, \texttt {natsFrom n'}) \mid \texttt {n' : int} \}.
Start State: This clearly holds by construction.
Preservation:
Let s and s' be arbitrary, and assume that R(\texttt {s}, \texttt {s'}).
In other words, s is scanl op+ n' (repeat 1) and s' is natsFrom n', for some n'.
First, we show that \texttt {head s} \cong \texttt {head s'}.
\begin {aligned} &\texttt {head s} \\ &\cong \texttt {head (scanl op+ n' (repeat 1))} \\ &\cong \texttt {n'} &&\text {(def of \texttt {scanl})} \end {aligned}
\begin {aligned} &\texttt {head s'} \\ &\cong \texttt {head (natsFrom n')} \\ &\cong \texttt {n'} &&\text {(def of \texttt {natsFrom})} \end {aligned}
Both sides are equivalent, as desired.
Then, we show that R(\texttt {tail s}, \texttt {tail s'}).
\begin {aligned} &\texttt {tail s} \\ &\cong \texttt {tail (scanl op+ n' (repeat 1))} \\ &\cong \texttt {scanl op+ (1 + n') (repeat 1)} &&\text {(def of \texttt {scanl})} \end {aligned}
\begin {aligned} &\texttt {tail s'} \\ &\cong \texttt {tail (natsFrom n')} \\ &\cong \texttt {natsFrom (n' + 1)} &&\text {(def of \texttt {natsFrom})} \\ &\cong \texttt {natsFrom (1 + n')} &&\text {(math)} \end {aligned}
Both sides are related by R, as desired.
This completes the proof.
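As noted earlier, this scanl is not maximally lazy: it exposes s before producing acc. A maximally lazy variant (our own sketch, not part of the official notes) defers the expose to the suspension of the tail, via a hypothetical helper scanl':

```sml
(* scanl' f acc s: the stream of accumulated results *after* acc.
 * Exposing it performs exactly one expose of s. *)
fun scanl' (f : 'a * 'b -> 'b) (acc : 'b) (s : 'a stream) : 'b stream =
  Stream (fn () =>
    let
      val (a, s') = expose s
      val acc' = f (a, acc)
    in
      (acc', scanl' f acc' s')
    end)

(* Maximally lazy scanl: exposing the result yields acc without touching s. *)
fun scanl (f : 'a * 'b -> 'b) (acc : 'b) (s : 'a stream) : 'b stream =
  Stream (fn () => (acc, scanl' f acc s))
```

This version produces the same stream extensionally, but the correctness proof would require a slightly different unfolding.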
1018Lecture150-lect12150-lect12.xmlRegular expressions I: the inductive approach2024625Harrison Grodin
This lecture is inspired by lectures by Michael Erdmann, Robert Harper, and Frank Pfenning.
1006150-007H150-007H.xmlRegular languages955Example150-007V150-007V.xmlReal-world string filter goalsWe hope to easily find strings of various forms. For example:Instances of Carnegie Mellon and Carnegie Melon, accounting for typos in a dataset.
Image files of a certain form, like IMG-<num>.png, where <num> represents a number with potentially many digits.
CMU emails, either @cs or @andrew.
Usernames containing polly.985Definition150-007U150-007U.xmlRegular expressions
Regex \texttt {r} | Language \mathcal {L}(\tt r) | s \in \mathcal {L}(r) when...
\text {a} | \{\text {a}\} | s = \text {a}
\mathbf {0} | \varnothing | never
\mathbf {1} | \{\texttt {""}\} | s is empty
r_1 + r_2 | \mathcal {L}(\tt r_1) \cup \mathcal {L}(\tt r_2) | s matches r_1 or r_2
r_1r_2 | \{s_1s_2 \mid {s_1 \in \mathcal {L}(\tt r_1)} \text { and } {s_2 \in \mathcal {L}(\tt r_2)}\} | s = s_1s_2, where s_1 matches r_1 and s_2 matches r_2
r^\ast | \{s_1 \cdots s_n \mid {s_i \in \mathcal {L}(\tt r)} {\text { for all } i}, \text {where } n \ge 0\} | s is empty, or s = s_1s_2 where s_1 matches r and s_2 matches r^\ast
In these notes, we conflate strings and character lists, e.g. "ab" with [#"a", #"b"] and "" with [].986Example150-007W150-007W.xmlSample regular expressionsWe can represent our goals as regular expressions.First, we give the following definitions for matching letters and numbers, treating \text {0} and \text {1} as characters (vs. regular expression primitives \mathbf {0} and \mathbf {1}): \begin {aligned} r_{\text {a-z}} &= \text {a} + \text {b} + \cdots + \text {z} \\ r_{\text {0-9}} &= \text {0} + \text {1} + \cdots + \text {9} \end {aligned} Now, we can define regular expressions for our examples.\text {Carnegie Mel}(\text {l} + 1)\text {on}.
\text {IMG-}r_{\text {0-9}}r_{\text {0-9}}^\ast \text {.png}, assuming the number should be nonempty.
r_{\text {a-z}} r_{\text {a-z}}^\ast r_{\text {0-9}}^\ast \text {@}(\text {andrew} + \text {cs})\text {.cmu.edu}.
r_{\text {a-z}}^\ast \text {polly} r_{\text {a-z}}^\ast , assuming usernames only have letters.There are infinitely many other options for each. For example, the first could alternatively be written as \text {Carnegie Mellon} + \text {Carnegie Melon}.1003Example150-007T150-007T.xmlSearching for patterns with regular expressionsAssume for simplicity that we only have two characters, \text {a} and \text {b}.
Regular Expression r | Language \mathcal {L}(\tt r)
\text {aa} | just "aa"
(\text {a} + \text {b})^\ast | all strings
(\text {a} + \text {b})^\ast \text {aa} (\text {a} + \text {b})^\ast | all strings with two consecutive #"a"s
(\text {a} + \mathbf {1})(\text {b} + \text {ba})^\ast | all strings without two consecutive #"a"s
1005Definition150-007K150-007K.xmlregexp datatypedatatype regexp
= Char of char
| Zero
| One
| Plus of regexp * regexp
| Times of regexp * regexp
| Star of regexp
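For instance (an illustration of the representation, not from the notes), the regular expression \text {a}(\text {b} + \text {c})^\ast is written as the value:

```sml
(* a(b + c)* as a regexp value, assuming the datatype above *)
val r : regexp =
  Times (Char #"a", Star (Plus (Char #"b", Char #"c")))
```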
1017150-007I150-007I.xmlThe inductive regular expression matcherWe now try to efficiently determine if a given string matches a regular expression.1008Example150-007M150-007M.xmlAn inefficient attempt at matchWhen we try to implement the match function directly, we notice that the Times and Star cases are very slow:(* match : regexp -> char list -> bool
* REQUIRES: true
* ENSURES: match r s ~= true iff s in L(r)
*)
fun match (r : regexp) (s : char list) : bool =
case r of
Char a =>
( case s of
nil => false
| c :: cs => a = c andalso List.null cs
)
| Zero => false
| One => List.null s
| Plus (r1, r2) => match r1 s orelse match r2 s
| Times (r1, r2) => (* check all splits of s *)
| Star r' => List.null s orelse (* check all splits of s *)
(* accept : regexp -> string -> bool
* REQUIRES: true
* ENSURES: accept r s ~= true iff s in L(r)
*)
fun accept (r : regexp) (s : string) : bool = match r (String.explode s)
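To make the elided clauses concrete: one way to "check all splits" in the Times case is to enumerate every division of s into a prefix and a suffix. The helper splits below is our own illustration, not part of the course code:

```sml
(* splits l = all ways to divide l into a prefix and a suffix,
 * e.g. splits [#"a", #"b"] =
 *   [(nil, [#"a", #"b"]), ([#"a"], [#"b"]), ([#"a", #"b"], nil)] *)
fun splits (nil : char list) : (char list * char list) list = [(nil, nil)]
  | splits (x :: xs) =
      (nil, x :: xs) :: List.map (fn (l, r) => (x :: l, r)) (splits xs)

(* The Times clause could then read:
 *   | Times (r1, r2) =>
 *       List.exists (fn (s1, s2) => match r1 s1 andalso match r2 s2)
 *                   (splits s)
 * which is very slow in the worst case, motivating what follows. *)
```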
These cases have to try all possible splits of the string. For example, consider matching the string "helloworld" against the regexp (\texttt {hello})(\texttt {world}). This algorithm behaves as follows:First, try to match "h" against \texttt {hello} and "elloworld" against \texttt {world}, which fails.
Then, try to match "he" against \texttt {hello} and "lloworld" against \texttt {world}, which fails.
Then, try to match "hel" against \texttt {hello} and "loworld" against \texttt {world}, which fails.
Then, try to match "hell" against \texttt {hello} and "oworld" against \texttt {world}, which fails.
Then, try to match "hello" against \texttt {hello} and "world" against \texttt {world}, which (finally!) succeeds.We may wish to avoid the first four attempts, though, by being more clever: ideally, we could match whatever string is possible against \texttt {hello} (here, "hello") and find out that the remainder is "world", which we can also immediately match.
* REQUIRES: true
* ENSURES:
* match r s p ~= true iff there exist x and y with x @ y ~= s and
* 1. x in L(r) and
* 2. p y ~= true.
*)
Using this stronger function, we can implement accept as desired:(* accept : regexp -> char list -> bool
* REQUIRES: true
* ENSURES: accept r s ~= true iff s in L(r)
*)
fun accept (r : regexp) (s : char list) : bool =
match r s List.null
1013Algorithm150-007L150-007L.xmlThe match algorithmWe implement this specification as follows:infix <<
(* op << : char list * char list -> bool
* REQUIRES: s' is a suffix of s
* ENSURES:
* s' << s ==> true iff s' is a proper suffix of s, and
* s' << s ==> false iff s' = s.
*)
fun s' << s = length s' < length s
(* match : regexp -> char list -> (char list -> bool) -> bool
* REQUIRES: true
* ENSURES:
* match r s p ~= true iff there exist x and y with x @ y ~= s and
* 1. x in L(r) and
* 2. p y ~= true.
*)
fun match (r : regexp) (s : char list) (p : char list -> bool) : bool =
case r of
Char a =>
( case s of
nil => false
| c :: cs => a = c andalso p cs
)
| Zero => false
| One => p s
| Plus (r1, r2) => match r1 s p orelse match r2 s p
| Times (r1, r2) => match r1 s (fn s' => match r2 s' p)
| Star r' =>
p s orelse
match r' s (fn s' => s' << s andalso match (Star r') s' p)
The first four cases are similar to the inefficient implementation, using a predicate p : char list -> bool in place of List.null. The Times and Star cases are more interesting:In the Times (r1, r2) case, we recursively change the predicate being used on the tail. We match s against r1, and then we ask that the remainder match r2, which in turn asks that its remainder meet p as needed.
In the Star r' case, we essentially match for Plus (One, Times (r', Star r')). First, we check if s is already sufficient. If not, we match s against r' once, and ask that the remainder s' match Star r' again.In all cases but Star r', we are going by recursion on the regular expression. In the second branch of the Star r' clause, though, we match against Star r' again. To guarantee termination, we make sure that s' is strictly smaller than s, so this function goes by lexicographic (dictionary-order) recursion on the regular expression r and then the character list s. Either r shrinks, or r stays the same size and s shrinks.
1014Proof#182unstable-182.xml150-007X
We prove soundness: if \texttt {match r s p} \cong \texttt {true}, then \exists \texttt {x}, \texttt {y}.\ \texttt {x @ y} \cong \texttt {s} \text { and } \texttt {x} \in \mathcal {L}(\tt r) \text { and } \texttt {p y} \cong \texttt {true}.
We go by lexicographic induction on r and then s.
Case One:
Suppose \texttt {match One s p} \cong \texttt {true}.
Then:
\begin {aligned} &\texttt {p s} \\ &\cong \texttt {match One s p} &&\text {(\texttt {One} clause of \texttt {match})} \\ &\cong \texttt {true} &&\text {(assumption)} \end {aligned}
We have to show that there exist \texttt {x} and \texttt {y} such that \texttt {x} \in \mathcal {L}(\tt One) = \{\texttt {""}\} and \texttt {p y} \cong \texttt {true}.
We are forced to choose \texttt {x} = \texttt {""}, since that is the only element of \mathcal {L}(\tt One).
Choosing y = s gets us \texttt {p s} \cong \texttt {true} by the above argument, and we have \texttt {x @ y} \cong \texttt {"" @ s} \cong \texttt {s}, as desired.
Case Plus (r1, r2):
IH1: if \texttt {match r1 s p} \cong \texttt {true}, then \exists \texttt {x1}, \texttt {y1}.\ \texttt {x1 @ y1} \cong \texttt {s} \text { and } \texttt {x1} \in \mathcal {L}(\tt r1) \text { and } \texttt {p y1} \cong \texttt {true}.
IH2: if \texttt {match r2 s p} \cong \texttt {true}, then \exists \texttt {x2}, \texttt {y2}.\ \texttt {x2 @ y2} \cong \texttt {s} \text { and } \texttt {x2} \in \mathcal {L}(\tt r2) \text { and } \texttt {p y2} \cong \texttt {true}.
WTS: if \texttt {match (Plus (r1, r2)) s p} \cong \texttt {true}, then \exists \texttt {x}, \texttt {y}.\ \texttt {x @ y} \cong \texttt {s} \text { and } \texttt {x} \in \mathcal {L}(\tt Plus (r1, r2)) \text { and } \texttt {p y} \cong \texttt {true}.
So, suppose \texttt {match (Plus (r1, r2)) s p} \cong \texttt {true}.
By definition:
\begin {aligned} &\texttt {match (Plus (r1, r2)) s p} \\ &\cong \texttt {match r1 s p orelse match r2 s p} &&\text {(\texttt {Plus} clause of \texttt {match})} \end {aligned}
By the assumption and the definition of orelse, we have that either \texttt {match r1 s p} \cong \texttt {true} or \texttt {match r2 s p} \cong \texttt {true}.
Without loss of generality, assume it is the former.
Then, we get some \texttt {x1} and \texttt {y1} such that \texttt {x1 @ y1} \cong \texttt {s} and \texttt {x1} \in \mathcal {L}(\tt r1) and \texttt {p y1} \cong \texttt {true}.
Let \texttt {x} = \texttt {x1} and \texttt {y} = \texttt {y1}.
Since \texttt {x1} \in \mathcal {L}(\tt r1), it is also in \mathcal {L}(\tt Plus (r1, r2)) = \mathcal {L}(\tt r1) \cup \mathcal {L}(\tt r2).
We omit the remaining cases for brevity.
1015Proof#183unstable-183.xml150-007X
We prove completeness: if \exists \texttt {x}, \texttt {y}.\ \texttt {x @ y} \cong \texttt {s} \text { and } \texttt {x} \in \mathcal {L}(\tt r) \text { and } \texttt {p y} \cong \texttt {true}, then \texttt {match r s p} \cong \texttt {true}.
We go by lexicographic induction on r and then s.
Case One:
Suppose \exists \texttt {x}, \texttt {y}.\ \texttt {x @ y} \cong \texttt {s} \text { and } \texttt {x} \in \mathcal {L}(\tt One) \text { and } \texttt {p y} \cong \texttt {true}.
If \texttt {x} \in \mathcal {L}(\tt One), then \texttt {x} = \texttt {""}, since that is the only element of \mathcal {L}(\tt One).
Then, since we have \texttt {x @ y} \cong \texttt {s}, we must have \texttt {y} = \texttt {s}, and so \texttt {p s} \cong \texttt {true}.
By the One clause of match, this means that \texttt {match One s p} \cong \texttt {true}, as desired.
Case Plus (r1, r2):
IH1: if \exists \texttt {x1}, \texttt {y1}.\ \texttt {x1 @ y1} \cong \texttt {s} \text { and } \texttt {x1} \in \mathcal {L}(\tt r1) \text { and } \texttt {p y1} \cong \texttt {true}, then \texttt {match r1 s p} \cong \texttt {true}.
IH2: if \exists \texttt {x2}, \texttt {y2}.\ \texttt {x2 @ y2} \cong \texttt {s} \text { and } \texttt {x2} \in \mathcal {L}(\tt r2) \text { and } \texttt {p y2} \cong \texttt {true}, then \texttt {match r2 s p} \cong \texttt {true}.
WTS: if \exists \texttt {x}, \texttt {y}.\ \texttt {x @ y} \cong \texttt {s} \text { and } \texttt {x} \in \mathcal {L}(\tt Plus (r1, r2)) \text { and } \texttt {p y} \cong \texttt {true}, then \texttt {match (Plus (r1, r2)) s p} \cong \texttt {true}.
So, suppose \exists \texttt {x}, \texttt {y}.\ \texttt {x @ y} \cong \texttt {s} \text { and } \texttt {x} \in \mathcal {L}(\tt Plus (r1, r2)) \text { and } \texttt {p y} \cong \texttt {true}.
If \texttt {x} \in \mathcal {L}(\tt Plus (r1, r2)) = \mathcal {L}(\tt r1) \cup \mathcal {L}(\tt r2), then we must have \texttt {x} \in \mathcal {L}(\tt r1) or \texttt {x} \in \mathcal {L}(\tt r2).
Without loss of generality, assume it is the former.
Then, by IH1, we have \texttt {match r1 s p} \cong \texttt {true}.
So, \texttt {match (Plus (r1, r2)) s p} \cong \texttt {true}, by the definition of orelse.
We omit the remaining cases for brevity.
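As a quick sanity check of the matcher (our own example, using the accept and match defined above):

```sml
(* ab* : an #"a" followed by zero or more #"b"s *)
val r : regexp = Times (Char #"a", Star (Char #"b"))

val true = accept r [#"a", #"b", #"b"]  (* "abb" matches *)
val false = accept r [#"b", #"a"]       (* "ba" does not *)
```

Recall that a val declaration with a constant pattern like val true = ... raises Bind if the right-hand side disagrees, so these lines double as tests.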
1056Lecture150-lect13150-lect13.xmlRegular expressions II: the coinductive approach2024627Harrison Grodin
1045150-007Y150-007Y.xmlLazy state machinesWe now use lazy programming to implement regular expression matching.1026Definition150-007Z150-007Z.xmlLazy state machineWe define state machines (sometimes known as automata) as a lazy datatype like streams, but instead of having a single tail via unit ->, we have one tail per character with char ->.datatype machine = Machine of bool * (char -> machine)
We always expect a current value of type bool, representing whether or not the machine is in an accepting state (i.e., would accept the empty string). We could suspend the bool, but we choose not to for convenience.Similar to head and tail for streams, we define the following helpers:(* status : machine -> bool *)
fun status (Machine (b, _)) = b
(* feed : machine -> char -> machine *)
fun feed (Machine (_, f)) c = f c
1028Definition150-0082150-0082.xmlRunning a matching machineWe can run a machine m : machine on a string s : char list by recursively traversing s, feeding each character to m's transition function and reading the status at the end.(* run : machine -> char list -> bool *)
fun run m nil = status m
| run m (c :: cs) = run (feed m c) cs
1029Definition150-0080150-0080.xmlAccepted language of a machineWe say that a string \texttt {s} is accepted by a machine \texttt {m} when \texttt {run m s} \cong \texttt {true}. We write \mathcal {A}(\tt m) = \{\texttt {s : char list} \mid \texttt {run m s} \cong \texttt {true}\} for the set of all strings accepted by machine \texttt {m}.1032Example150-0081150-0081.xmlAlways-reject machineWe can implement a machine that rejects every string:(* zero : unit -> machine
* REQUIRES: true
* ENSURES: A(zero ()) is empty
*)
fun zero () =
Machine (false, fn _ => zero ())
We prove the specification as follows.
1031Proof#181unstable-181.xml150-0081
We show that \mathcal {A}(\tt \texttt {zero ()}) is empty.
To show a set is empty, we give a proof of negation: we show that for all s : char list, if \texttt {s} \in \mathcal {A}(\tt \texttt {zero ()}), then we have a contradiction.
Equivalently, by the definition of accepted language of a machine, we must show that \texttt {run (zero ()) s} \cong \texttt {false}.
We proceed by structural induction on s.
Case nil:
\begin {aligned} &\texttt {run (zero ()) nil} \\ &\cong \texttt {false} &&\text {(clause 1 of \texttt {run})} \end {aligned}
Case c :: cs:
IH: \texttt {run (zero ()) cs} \cong \texttt {false}.
WTS: \texttt {run (zero ()) (c :: cs)} \cong \texttt {false}.
\begin {aligned} &\texttt {run (zero ()) (c :: cs)} \\ &\cong \texttt {run (zero ()) cs} &&\text {(clause 2 of \texttt {run})} \\ &\cong \texttt {false} &&\text {(IH)} \end {aligned}
This concludes the proof.
1034Example150-0083150-0083.xmlAccept empty string machineWe can implement a machine that accepts the empty string but rejects everything after that:(* one : unit -> machine
* REQUIRES: true
* ENSURES: A(one ()) = {""}
*)
fun one () =
Machine (true, fn _ => zero ())
The status is true, but no matter what character we receive afterwards, we give back the always-reject machine.1036Example150-0084150-0084.xmlAccept single character machineUsing the always-reject machine and accept empty string machine, we can implement a machine that only accepts the string "a":(* char : char -> machine
* REQUIRES: true
* ENSURES: A(char a) = {"a"}
*)
fun char a =
Machine (false, fn c => if a = c then one () else zero ())
We do not accept the empty string initially. After receiving character c, we check if it is equal to a. If so, we provide accept empty string machine, accepting the empty string afterwards; if not, we provide always-reject machine, failing to accept.1038Example150-0085150-0085.xmlUnion of two machinesWe can take the union of two machines, running them in parallel and using orelse to see if either will accept:(* plus : machine * machine -> machine
* REQUIRES: true
* ENSURES: A(plus (m1, m2)) = A(m1) union A(m2)
*)
fun plus (m1, m2) =
Machine
( status m1 orelse status m2
, fn c => plus (feed m1 c, feed m2 c)
)
1040Example150-0086150-0086.xmlConcatenation of two machinesWe can concatenate two machines, matching via the first one until an accept state and then matching via the second one in parallel:(* times : machine * machine -> machine
* REQUIRES: true
* ENSURES: A(times (m1, m2)) = {s1s2 | s1 in A(m1) and s2 in A(m2)}
*)
fun times (m1, m2) =
Machine
( status m1 andalso status m2
, fn c =>
if status m1
then plus (feed m2 c, times (feed m1 c, m2))
else times (feed m1 c, m2)
)
We accept the empty string if both machines do. Given a new character c, we consider whether or not the first machine would accept the empty string.If so, we start up the second machine with feed m2 c, while simultaneously continuing to feed c to the first machine with times (feed m1 c, m2).
If not, we keep matching c on the first machine with times (feed m1 c, m2), waiting to start the second machine.1042Example150-0087150-0087.xmlIteration of a machineWe can iterate a machine using the concatenation of two machines:(* star : machine -> machine
* REQUIRES: true
* ENSURES: A(star m) = {s1s2...sn | n >= 0, and forall i, si in A(m)}
*)
fun star m =
Machine (true, fn c => times (feed m c, star m))
We always accept the empty string. Upon receiving a character c, we ask that m match it, and then we match star m again.1044Definition150-0088150-0088.xmlRegular expression matching using machinesWe can compile every regular expression to a lazy state machine, and then we can use run to figure out if a given string is accepted.(* compile : regexp -> machine
* REQUIRES: true
* ENSURES: A(compile r) = L(r)
*)
fun compile (Char a) = char a
| compile Zero = zero ()
| compile One = one ()
| compile (Plus (r1, r2)) = plus (compile r1, compile r2)
| compile (Times (r1, r2)) = times (compile r1, compile r2)
| compile (Star r) = star (compile r)
(* accept : regexp -> string -> bool *)
fun accept r s = run (compile r) (String.explode s)
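For example (our own sanity check, assuming the definitions above), the machine compiled from \text {a}\text {b}^\ast agrees with the inductive matcher:

```sml
val m : machine = compile (Times (Char #"a", Star (Char #"b")))

val true = run m [#"a", #"b", #"b"]  (* "abb" accepted *)
val false = run m [#"b", #"a"]       (* "ba" rejected *)
```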
1055150-0089150-0089.xmlCoinductive proofs of language equivalence1046Concept150-008A150-008A.xmlExtensional equivalence of lazy state machinesLet m0 and m0' be of type machine. To show that \texttt {m0} \cong \texttt {m0'}:Choose a relation R(-, -) on pairs of machines that relates pairs of machines that you expect to be equivalent.
Start State: Show that R(\texttt {m0}, \texttt {m0'}), guaranteeing that the machines you care about are related.
Preservation: Then, show that for all m and m', if R(\texttt {m}, \texttt {m'}), then:
the statuses are the same, \texttt {status m} \cong \texttt {status m'} (the "co-base case", since no more characters are read after the status is checked); and
for all c : char, feeding the machines c causes them to stay related, R(\texttt {feed m c}, \texttt {feed m' c}) (the "coinductive conclusion", dual to the inductive hypothesis).This proof technique is called coinduction.This definition is analogous to extensional equivalence of streams.
1048Proof#180unstable-180.xml150-008B
Let m be arbitrary.
We prove that \texttt {plus (m, zero ())} \cong \texttt {m} by coinduction.
Choose R = \{(\texttt {plus (m', zero ())}, \texttt {m'}) \mid \texttt {m' : machine}\}. In other words, choose to relate exactly the two sides of the equation, for all m'.
Start State: Clearly, we have R(\texttt {plus (m, zero ())}, \texttt {m}) by construction.
Preservation:
Let m1 be arbitrary, and assume that R(\texttt {plus (m1, zero ())}, \texttt {m1}).
First, we show that \texttt {status (plus (m1, zero ()))} \cong \texttt {status m1}.
\begin {aligned} &\texttt {status (plus (m1, zero ()))} \\ &\cong \texttt {status m1 orelse status (zero ())} &&\text {(def of \texttt {plus})} \\ &\cong \texttt {status m1 orelse false} &&\text {(def of \texttt {zero})} \\ &\cong \texttt {status m1} \end {aligned}
Let c be arbitrary.
We show that R(\texttt {feed (plus (m1, zero ())) c}, \texttt {feed m1 c}).
\begin {aligned} &\texttt {feed (plus (m1, zero ())) c} \\ &\cong \texttt {plus (feed m1 c, feed (zero ()) c)} &&\text {(def of \texttt {plus})} \\ &\cong \texttt {plus (feed m1 c, zero ())} &&\text {(def of \texttt {zero})} \end {aligned}
We have that \texttt {plus (feed m1 c, zero ())} and \texttt {feed m1 c} are R-related, as desired.
This completes the proof.
1050Theorem150-008C150-008C.xmlzero is the left annihilator for timesFor all m : machine, we have that \texttt {times (zero (), m)} \cong \texttt {zero ()}.1052Lemma150-008D150-008D.xmlProving extensional equivalence instead of relatedness in coinductionWhen proving the feed case of preservation for extensional equivalence of lazy state machines, it is sufficient to show that \texttt {feed m c} \cong \texttt {feed m' c} (rather than showing that the sides are merely related).
1051Proof#179unstable-179.xml150-008D
Suppose we have a relation R' for coinduction, and suppose R'(\texttt {m}, \texttt {m'}) implies \texttt {status m} \cong \texttt {status m'} and for all c, \texttt {feed m c} \cong \texttt {feed m' c}.
Then, we can prove that any machines related by R' are extensionally equivalent, by coinduction.
Choose R = R' \cup \{(\texttt {m}, \texttt {m'}) \mid \texttt {m} \cong \texttt {m'}\}. In other words, relate everything related by R', and relate everything that is extensionally equivalent.
Start State: The start states are R-related because they are R'-related by assumption.
Preservation:
Let m and m' be arbitrary, and assume that R(\texttt {m}, \texttt {m'}).
First, we show that \texttt {status m} \cong \texttt {status m'}.
Since R(\texttt {m}, \texttt {m'}), by the definition of R, either R'(\texttt {m}, \texttt {m'}) or \texttt {m} \cong \texttt {m'}.
If the former, we use the assumed proof; if the latter, the result is immediate.
Let c be arbitrary.
We show that R(\texttt {feed m c}, \texttt {feed m' c}).
Since R(\texttt {m}, \texttt {m'}), by the definition of R, either R'(\texttt {m}, \texttt {m'}) or \texttt {m} \cong \texttt {m'}.
If the former, we use the assumed proof showing extensional equivalence, which is sufficient for R'; if the latter, the result is immediate.
1054Theorem150-008E150-008E.xmlone is the left identity for timesFor all m : machine, we have that \texttt {times (one (), m)} \cong \texttt {m}.
1053Proof#178unstable-178.xml150-008E
Let m be arbitrary.
We prove that \texttt {times (one (), m)} \cong \texttt {m} by coinduction.
Choose R = \{(\texttt {times (one (), m')}, \texttt {m'}) \mid \texttt {m' : machine}\}. In other words, choose to relate exactly the two sides of the equation, for all m'.
Start State: Clearly, we have R(\texttt {times (one (), m)}, \texttt {m}) by construction.
Preservation:
Let m1 be arbitrary, and assume that R(\texttt {times (one (), m1)}, \texttt {m1}).
First, we show that \texttt {status (times (one (), m1))} \cong \texttt {status m1}.
\begin {aligned} &\texttt {status (times (one (), m1))} \\ &\cong \texttt {status (one ()) andalso status m1} &&\text {(def of \texttt {times})} \\ &\cong \texttt {true andalso status m1} &&\text {(def of \texttt {one})} \\ &\cong \texttt {status m1} \end {aligned}
Let c be arbitrary.
We show that R(\texttt {feed (times (one (), m1)) c}, \texttt {feed m1 c}).
\begin {aligned} &\texttt {feed (times (one (), m1)) c} \\ &\cong \texttt {if status (one ()) then ... else ...} &&\text {(def of \texttt {times})} \\ &\cong \texttt {if true then plus... else ...} &&\text {(def of \texttt {one})} \\ &\cong \texttt {plus (feed m1 c, times (feed (one ()) c, m1))} \\ &\cong \texttt {plus (feed m1 c, times (zero (), m1))} &&\text {(def of \texttt {one})} \\ &\cong \texttt {plus (feed m1 c, zero ())} &&\text {(\texttt {zero} is left annihilator)} \\ &\cong \texttt {feed m1 c} &&\text {(\texttt {zero} is right identity)} \end {aligned}
Here, we used that zero is the left annihilator for times and that zero is the right identity for plus.
Since we found that both sides were \texttt {feed m1 c}, by the lemma allowing extensional equivalence in place of relatedness, we are done.
This completes the proof.
1096Lecture150-lect14150-lect14.xmlModules I: signatures and structures202472Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
1075150-008G150-008G.xmlStructures and signatures1063Idea150-008I150-008I.xmlStructures as namespacesWe can organize our declarations into namespaces, called structures. For example:We have List.map, List.foldr, List.filter, etc. in the List structure.
We have Int.compare in the Int structure, and String.compare in the String structure.1065Concept150-008J150-008J.xmlStructureWe can declare a structure using the structure keyword, and we can write a structure using struct ... end.structure MyStructure =
struct
(* declarations here *)
end
1066Convention150-008L150-008L.xmlStructure namingWe usually name structures using UpperCamelCase.1068Example150-008K150-008K.xmlList structureThe List structure is defined as follows:structure List =
struct
datatype 'a list = nil | :: of 'a * 'a list
val null = fn nil => true | _ => false
fun revAppend (nil , acc) = acc
| revAppend (x :: xs, acc) = revAppend (xs, x :: acc)
fun rev l = revAppend (l, nil)
fun map f nil = nil
| map f (x :: xs) = f x :: map f xs
(* ...and more... *)
end
1070Concept150-008H150-008H.xmlSignatureA signature is the type of a structure. We say that a structure ascribes to a signature.We can declare a signature using the signature keyword, and we can write a signature using sig ... end.signature MY_SIGNATURE =
sig
(* signature specification here *)
end
1071Convention150-008M150-008M.xmlSignature namingWe usually name signatures name using SCREAMING_SNAKE_CASE.1074Example150-008N150-008N.xmlStream signatureWe can define a signature for streams:signature STREAM =
sig
datatype 'a stream = Stream of unit -> 'a * 'a stream
val expose : 'a stream -> 'a * 'a stream
val head : 'a stream -> 'a
val tail : 'a stream -> 'a stream
val take : 'a stream * int -> 'a list
val map : ('a -> 'b) -> 'a stream -> 'b stream
end
Then, we can define a structure Stream that ascribes to STREAM:structure Stream : STREAM =
struct
datatype 'a stream = Stream of unit -> 'a * 'a stream
fun expose (Stream susp) = susp ()
fun fst (x, y) = x
fun head s = fst (expose s)
(* ... *)
end
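The elided functions might be completed as follows, continuing the Stream structure above. This is one possible sketch consistent with the STREAM signature, not necessarily the lecture's exact code:

```sml
(* project the second component of a pair *)
fun snd (x, y) = y
fun tail s = snd (expose s)
(* take the first n elements of the stream as a list *)
fun take (s, 0) = nil
  | take (s, n) =
      let val (x, s') = expose s
      in x :: take (s', n - 1) end
(* apply f to every element, lazily *)
fun map f s =
  Stream (fn () =>
    let val (x, s') = expose s
    in (f x, map f s') end)
```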
1090150-008O150-008O.xmlAbstract typesIn principles of functional programming, we discussed abstraction as a goal: types should guide our program structure. Some types might even be hidden from view, or left abstract, to avoid clouding our thinking.1076Concept150-008P150-008P.xmlTransparent and opaque ascriptionWhen we write MyStruct : MY_SIG, the structure transparently ascribes to MY_SIG: all types in the signature are visible from the outside.When we write MyStruct :> MY_SIG, the structure opaquely ascribes to MY_SIG: all type t specifications in the signature are hidden from the outside.1081Example150-008Q150-008Q.xmlQueues as listsWe can describe a queue implementation via the following signature:signature QUEUE =
sig
type 'a queue (* abstract *)
val empty : 'a queue
val enqueue : 'a queue -> 'a -> 'a queue
val dequeue : 'a queue -> ('a * 'a queue) option
end
We can implement the signature using lists:structure ListQueue : QUEUE =
struct
type 'a queue = 'a list
val empty = nil
fun enqueue (l : 'a queue) (x : 'a) : 'a queue = l @ [x]
fun dequeue nil = NONE
| dequeue (x :: xs) = SOME (x, xs)
end
Here, since we used transparent ascription, we have that [1, 2, 3] : int ListQueue.queue. To hide the implementation type, we use opaque ascription:structure ListQueue :> QUEUE =
We can use ListQueue as follows:- structure LQ = ListQueue;
- val q : int LQ.queue = LQ.enqueue (LQ.enqueue LQ.empty 1) 2;
val q = - : int ListQueue.queue
- LQ.dequeue q;
val it = SOME (1,-) : (int * int ListQueue.queue) option
1083Example150-008R150-008R.xmlBatched queueWe can implement the queues of the previous example more efficiently using pairs of lists:structure BatchedQueue :> QUEUE =
struct
type 'a queue = 'a list * 'a list
val empty = (nil, nil)
fun enqueue (front, back) x = (front, x :: back)
fun dequeue (x :: front, back) = SOME (x, (front, back))
| dequeue (nil, back) =
case List.rev back of
nil => NONE
| x :: front => SOME (x, (front, nil))
end
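To see the two-list representation in action, consider the following trace; the internal pairs shown in comments are hidden from clients by the opaque ascription:

```sml
val q1 = BatchedQueue.enqueue BatchedQueue.empty 1   (* internally (nil, [1]) *)
val q2 = BatchedQueue.enqueue q1 2                   (* internally (nil, [2, 1]) *)
val r  = BatchedQueue.dequeue q2
(* rev [2, 1] moves to the front, so r is SOME (1, q')
 * with q' internally ([2], nil) *)
```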
We represent a queue via a back list and a front list. We enqueue to the back and dequeue from the front. If we try to dequeue from an empty front, we reverse the back and move it to the front.The list represented by (front, back) is front @ rev back. For example, the queue 1, 2, 3, 4, 5 can be represented by ([1, 2, 3], [5, 4]) or ([], [5, 4, 3, 2, 1]).When queues are implemented as plain lists, each enqueue takes \mathcal {O}(n) recursive calls. Here, most enqueues and dequeues perform no recursive calls; occasionally, we get an \mathcal {O}(n) call to List.rev, but this averages out over time to \mathcal {O}(1) per operation (formally, via amortized analysis).1085Example150-008S150-008S.xmlDictionary signatureWe can define a signature for dictionaries, which are (finite) mappings from keys (here, strings) to values (here, 'a):signature DICT =
sig
type key = string (* concrete *)
type 'a entry = key * 'a (* concrete *)
type 'a dict (* abstract *)
val empty : 'a dict
val find : key -> 'a dict -> 'a option
val insert : 'a entry -> 'a dict -> 'a dict
end
1087Example150-008W150-008W.xmlDictionaries as listsWe can implement the dictionary signature using sorted lists:structure ListDict :> DICT =
struct
type key = string
type 'a entry = key * 'a
(* invariant: elements are key-sorted *)
type 'a dict = 'a entry list
val empty = nil
fun find _ nil = NONE
| find k' ((k, v) :: d) =
case String.compare (k', k) of
EQUAL => SOME v
| LESS => NONE
| GREATER => find k' d
fun insert (k', v') nil = [(k', v')]
| insert (k', v') ((k, v) :: d) =
case String.compare (k', k) of
EQUAL => (k', v') :: d
| LESS => (k', v') :: (k, v) :: d
| GREATER => (k, v) :: insert (k', v') d
end
However, this has the disadvantage that larger keys will be towards the end of the list, leading to slow lookup times.1089Example150-008Z150-008Z.xmlUsing the dictionary signature as a clientUsing the pipe function for readability, we can interact with the implementations as follows:structure D = ListDict
val d : int D.dict =
D.empty
|> D.insert ("Polly", 150)
|> D.insert ("Honk", 122)
|> D.insert ("Theo", 251)
val answer : int option = D.find "Polly" d
1095150-008T150-008T.xmlStructure equivalence1091Concept150-008V150-008V.xmlStructure equivalence via representation independenceTwo structures M1, M2 : S are equivalent when:For each abstract type t, we give a relation R_\texttt {t}(-, -) relating M1.t to M2.t.
All values declared are \cong , where R_\texttt {t} is taken as the notion of equivalence for type t.1094Example150-008U150-008U.xmlEquivalence of queuesWe show that ListQueue and BatchedQueue are equivalent by giving a relation on their types and showing that the relation is preserved.Recall the signature QUEUE:signature QUEUE =
sig
type 'a queue (* abstract *)
val empty : 'a queue
val enqueue : 'a queue -> 'a -> 'a queue
val dequeue : 'a queue -> ('a * 'a queue) option
end
1093Proof#177unstable-177.xml150-008U
Let R_\texttt {t}(\texttt {l}, \texttt {(front, back)}) be the relation \texttt {l} \cong \texttt {front @ rev back}.
We must prove that:
R_\texttt {t}(\texttt {LQ.empty}, \texttt {BQ.empty})
If R_\texttt {t}(\texttt {lq}, \texttt {bq}), then for all \texttt {x}, we have R_\texttt {t}(\texttt {LQ.enqueue lq x}, \texttt {BQ.enqueue bq x}).
If R_\texttt {t}(\texttt {lq}, \texttt {bq}), then either:
\texttt {LQ.dequeue lq} \cong \texttt {BQ.dequeue bq} \cong \texttt {NONE}, or
\texttt {LQ.dequeue lq} \cong \texttt {SOME (x, lq')} and \texttt {BQ.dequeue bq} \cong \texttt {SOME (x, bq')} where R_\texttt {t}(\texttt {lq'}, \texttt {bq'}).
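As a sanity check, the first obligation holds directly from the definitions \texttt {LQ.empty} = \texttt {nil} and \texttt {BQ.empty} = \texttt {(nil, nil)}:

```latex
R_\texttt{t}(\texttt{LQ.empty}, \texttt{BQ.empty})
  \iff \texttt{nil} \cong \texttt{nil @ rev nil}
  \iff \texttt{nil} \cong \texttt{nil}
```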
1157Lecture150-lect15150-lect15.xmlModules II: functors and type classes202479Harrison Grodin
This lecture is heavily inspired by an analogous lecture by Michael Erdmann.
1109150-008Y150-008Y.xmlDictionaries revisited1103Example150-008S150-008S.xmlDictionary signatureWe can define a signature for dictionaries, which are (finite) mappings from keys (here, strings) to values (here, 'a):signature DICT =
sig
type key = string (* concrete *)
type 'a entry = key * 'a (* concrete *)
type 'a dict (* abstract *)
val empty : 'a dict
val find : key -> 'a dict -> 'a option
val insert : 'a entry -> 'a dict -> 'a dict
end
1105Example150-008X150-008X.xmlDictionaries as treesAlternatively, for lower cost, we can implement the dictionary signature using sorted trees (also called binary search trees):structure TreeDict :> DICT =
struct
type key = string
type 'a entry = key * 'a
datatype 'a tree = Empty | Node of 'a tree * 'a * 'a tree
(* invariant: elements are key-sorted *)
type 'a dict = 'a entry tree
val empty = Empty
fun find _ Empty = NONE
| find k' (Node (l, (k, v), r)) =
case String.compare (k', k) of
EQUAL => SOME v
| LESS => find k' l
| GREATER => find k' r
fun insert (k', v') Empty = Node (Empty, (k', v'), Empty)
| insert (k', v') (Node (l, (k, v), r)) =
case String.compare (k', k) of
EQUAL => Node (l, (k', v'), r)
| LESS => Node (insert (k', v') l, (k, v), r)
| GREATER => Node (l, (k, v), insert (k', v') r)
end
1108Issue150-0096150-0096.xmlKey-polymorphic dictionariesHow could we implement the dictionary signature such that we're not committed to the key type being string? In the implementations, we only used the fact that strings can be compared. So, one idea would be as follows, making everything key-polymorphic via a type variable 'k:signature DICT =
sig
type ('k, 'a) entry = 'k * 'a (* concrete *)
type ('k, 'a) dict (* abstract *)
val empty : ('k, 'a) dict
val find : ('k * 'k -> order) -> 'k -> ('k, 'a) dict -> 'a option
val insert : ('k * 'k -> order) -> ('k, 'a) entry -> ('k, 'a) dict -> ('k, 'a) dict
end
We parameterize everything by 'k, and we have find and insert take in comparison functions 'k * 'k -> order. However, there is a problem: what if a client used different comparison functions for different function calls? For example, adapting the previous usage adversarially using the invert : order -> order function:structure D = TreeDict
val d : (string, int) D.dict =
D.empty
|> D.insert String.compare ("Polly", 150)
|> D.insert (invert String.compare) ("Honk", 122)
val answer : int option = D.find String.compare "Honk" d (* NONE *)
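The invert function used here flips the outcome of a comparison. It is not in the Basis Library; a definition might look like:

```sml
fun invert LESS    = GREATER
  | invert EQUAL   = EQUAL
  | invert GREATER = LESS
```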
Here, we make a dictionary with "Polly", and then we add "Honk" to the dictionary using the reversed string comparison function, placing "Honk" to the right of "Polly". However, when we look for "Honk", we use the usual string comparison function, resulting in an answer of NONE even though "Honk" was present.To improve the situation, we will make sure that the key type is fixed in the functor along with a known comparison function, before a client even has the opportunity to use the find or insert functions.1124150-0090150-0090.xmlType classes1111Definition150-0091150-0091.xmlType classA type class is a signature containing a type parameter (meant to be transparent) alongside some operations involving the type.signature MY_TYPE_CLASS =
sig
type t (* parameter *)
val f1 : (* ...involving t... *)
val f2 : (* ...involving t... *)
(* ... *)
end
The type should be transparent, since a client is meant to use the operations freely. Type classes do not hide type information; they simply classify types supporting some operations.1116Example150-0092150-0092.xmlORDERED type classThe ORDERED type class classifies types t whose elements can be compared:signature ORDERED =
sig
type t (* parameter *)
val compare : t * t -> order
end
We can implement this typeclass using integers compared in the usual way:structure IntOrdered : ORDERED =
struct
type t = int
val compare = Int.compare
end
Note the use of transparent ascription.Alternatively, we can compare integers using the reverse ordering:structure IntOrdered' : ORDERED =
struct
type t = int
fun compare (x, y) = Int.compare (y, x)
end
Or, we can compare strings:structure StringOrdered : ORDERED =
struct
type t = string
val compare = String.compare
end
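These instances behave as expected; in particular, note how the two integer orderings disagree:

```sml
val a = IntOrdered.compare (1, 2)             (* LESS *)
val b = IntOrdered'.compare (1, 2)            (* GREATER *)
val c = StringOrdered.compare ("abc", "abd")  (* LESS *)
```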
1117Concept150-0093150-0093.xmlVarieties of types in signaturesEvery type in a signature can be annotated to be abstract, parameter, or concrete.If the type is unspecified via type t, it can be:
abstract, if it is meant to be hidden with opaque ascription; or
parameter, if it is meant to be known to clients with transparent ascription.
If the type is specified via type t = ..., it is concrete.1118Concept150-0094150-0094.xmlPartial transparency using where typeUsing the signature former where type, we can make parts of a signature transparent, even if parts remain opaque. We write MY_SIGNATURE where type t = someKnownType to make type t transparently be someKnownType, leaving all other types abstract.This feature is commonly used alongside type classes to reveal the definition of some type in a signature.1123Example150-0095150-0095.xmlDictionary signature with ordered keysWe can update the dictionary signature to include ordered keys that are a parameter:signature DICT =
sig
structure Key : ORDERED (* parameter *)
type 'a entry = Key.t * 'a (* concrete *)
type 'a dict (* abstract *)
val empty : 'a dict
val find : Key.t -> 'a dict -> 'a option
val insert : 'a entry -> 'a dict -> 'a dict
end
Via partial transparency using where type, we can reveal Key.t while keeping 'a dict hidden:signature STRING_DICT = DICT where type Key.t = string
signature INT_DICT = DICT where type Key.t = int
When we implement a structure, we must follow the given type specifications. For example, using STRING_DICT:structure StringTreeDict :> STRING_DICT =
struct
structure Key = StringOrdered
type 'a entry = Key.t * 'a
type 'a dict = 'a entry tree
(* ... *)
end
Often, we will not name signatures like STRING_DICT, instead using their definitions inline:structure StringTreeDict :> DICT where type Key.t = string =
(* ... *)
How could we make an implementation of TreeDict for an arbitrary key Key : ORDERED?1156150-0097150-0097.xmlFunctors1138Concept150-0098150-0098.xmlFunctorA functor is a function that takes in a structure and produces another structure. The analogy is:
Expression Level | Module Level
type | signature
expression | structure
function | functor
(Unfortunately, ideas such as "functors are values", "higher-order functors", and "functor signatures" are not present in Standard ML itself.)1141Example150-0099150-0099.xmlParametric tree dictionariesWe can define a functor that takes in a structure K : ORDERED and produces an implementation of dictionaries where the key type is known to be K.t, realized as signature DICT where type Key.t = K.t.functor TreeDict (K : ORDERED) :> DICT where type Key.t = K.t =
struct
structure Key = K
type 'a entry = Key.t * 'a
datatype 'a tree = Empty | Node of 'a tree * 'a * 'a tree
(* invariant: elements are key-sorted *)
type 'a dict = 'a entry tree
val empty = Empty
fun find _ Empty = NONE
| find k' (Node (l, (k, v), r)) =
case Key.compare (k', k) of
EQUAL => SOME v
| LESS => find k' l
| GREATER => find k' r
fun insert (k', v') Empty = Node (Empty, (k', v'), Empty)
| insert (k', v') (Node (l, (k, v), r)) =
case Key.compare (k', k) of
EQUAL => Node (l, (k', v'), r)
| LESS => Node (insert (k', v') l, (k, v), r)
| GREATER => Node (l, (k, v), insert (k', v') r)
end
The implementation is the same as before, except writing structure Key = K, using Key.t instead of key, and using Key.compare instead of String.compare.Now, we can create dictionaries with any comparable key type. For example:structure StringTreeDict = TreeDict (StringOrdered)
structure IntTreeDict = TreeDict (IntOrdered)
1143Concept150-009A150-009A.xmlFreshness of typesIf a functor is called multiple times, the opaquely-ascribed types created at each call will be fresh. For example:structure D1 = TreeDict (StringOrdered)
structure D2 = TreeDict (StringOrdered)
val d : int D1.dict = D2.empty (* type error: int D2.dict is different from int D1.dict *)
1148Example150-009B150-009B.xmlLexicographically ordered pairsSuppose we wish to implement a Chess game, where the pieces are stored as values in a dictionary. The key type might be char * int, where the char represents the column and the int represents the row. Suppose we have:structure CharOrdered : ORDERED =
struct
type t = char
val compare = Char.compare
end
How could we combine this with IntOrdered to get a structure with type t = char * int? We can define the following functor:functor PairOrdered
( Arg :
sig
structure X : ORDERED
structure Y : ORDERED
end
) : ORDERED =
struct
type t = Arg.X.t * Arg.Y.t
fun compare ((x1, y1), (x2, y2)) =
case Arg.X.compare (x1, x2) of
EQUAL => Arg.Y.compare (y1, y2)
| ord => ord
end
Here, we take in a single structure Arg which contains two sub-structures, X : ORDERED and Y : ORDERED. Like functions, structures only take in a single argument; as functions take in tuples of values, functors can take structures containing multiple structures.This functor can be applied as follows, passing in a structure written using struct ... end:structure ChessOrdered =
PairOrdered
( struct
structure X = CharOrdered
structure Y = IntOrdered
end
)
Then, we can apply the TreeDict functor to get a dictionary for storing data at chess locations:structure Board = TreeDict (ChessOrdered)
(* or, equivalently, inlining the definition of ChessOrdered: *)
structure Board =
TreeDict
( PairOrdered
( struct
structure X = CharOrdered
structure Y = IntOrdered
end
)
)
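As a client, we might then store pieces at chess coordinates; the piece names used here are purely illustrative:

```sml
val board : string Board.dict =
  Board.insert ((#"d", 8), "queen")
    (Board.insert ((#"e", 1), "king") Board.empty)
val piece : string option = Board.find (#"e", 1) board  (* SOME "king" *)
```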
1151Concept150-009C150-009C.xmlFunctor argument syntactic sugarWhen structures take multiple arguments, it is cumbersome to write Arg. before every sub-component. So, Standard ML provides syntactic sugar where the Arg : sig and end can be left off of inputs:functor PairOrdered
( structure X : ORDERED
structure Y : ORDERED
) : ORDERED =
struct
type t = X.t * Y.t
fun compare ((x1, y1), (x2, y2)) =
case X.compare (x1, x2) of
EQUAL => Y.compare (y1, y2)
| ord => ord
end
This is functionally the same, but it is typically more ergonomic and leads to more readable code.Analogous syntactic sugar is available when a functor is applied, allowing the struct and end of an argument to be left off:structure ChessOrdered =
PairOrdered
( structure X = CharOrdered
structure Y = IntOrdered
)
1155Warning150-009D150-009D.xmlSingle-argument functorsThe following pairs are different:functor TreeDict1 (K : ORDERED) = (* ... *)
structure StringTreeDict1 = TreeDict1 (StringOrdered)
functor TreeDict2 (structure K : ORDERED) = (* ... *)
structure StringTreeDict2 = TreeDict2 (structure K = StringOrdered)
Both pairs individually typecheck. However, the second is syntactic sugar for taking in a structure containing a single structure (analogous to a one-element tuple):functor TreeDict2' (Arg : sig structure K : ORDERED end) = (* ... *)
structure StringTreeDict2' = TreeDict2' (struct structure K = StringOrdered end)
This version is equivalent to TreeDict2, just expanding out the syntactic sugar.In this class, we will always use the first approach, choosing not to wrap a single structure argument in a struct ... end.Note that in the following, Alternative does not typecheck:structure Alternative = TreeDict2 (structure K' = StringOrdered)
We only changed K to K' compared to StringTreeDict2. However, since we took in a structure with a single element structure K : ORDERED, passing in structure K' : ORDERED does not meet the criterion (just like if the signature asked for val insert, it would not be sufficient to write val insert').1186Lecture150-lect16150-lect16.xmlModules III: red-black trees2024711Harrison Grodin
This lecture is inspired by lectures by Michael Erdmann and Brandon Wu.
1170150-009E150-009E.xmlIntuition for red-black trees1164Goal150-009G150-009G.xmlSelf-balancing binary search treeOur implementation of dictionaries using trees has a major cost issue: while the operations are efficient (logarithmic time) when the tree is balanced, nothing prevents the tree from getting unbalanced.We hope to implement dictionaries using trees with invariants that force them to remain balanced. Recall from earlier that a perfectly balanced tree has depth \log _2(n + 1) when there are n nodes in the tree.1165Definition150-009H150-009H.xmlRed-black invariantsA full, balanced tree has the same number of nodes on every path from the root to each Empty. However, such trees only can have 2^d - 1 nodes, where d is the height (depth) of the tree. In order to maintain a similar invariant, we color some nodes black and some nodes red and only count the black nodes. The red nodes are just to fix "off-by-one" errors, where we want to add more data to a tree but don't want to increase the black height. This leads us to the following pair of invariants.The red-black tree invariants require that:
Every path from the root to each Empty have the same number of black nodes, called the black height. (We treat Empty as black with black height zero.)
There are no two red nodes adjacent to each other (referred to as red-red violations), i.e. every red parent node has two black child nodes.
The first invariant guarantees that the trees are balanced ignoring red nodes, and the second invariant ensures that there aren't "too many" red nodes in a given tree.1167Theorem150-009I150-009I.xmlRed-black balance propertiesEvery red-black tree is somewhat balanced: we have d \le 2\log _2(n + 1) + 1, where d is the depth of the tree and n is the number of nodes.
1166Proof#176unstable-176.xml150-009I
By the red-black invariants, we have that:
Every red-black tree t has depth d between the black height \text {BH}(t) (black nodes only) and 2\text {BH}(t) + 1 (black and red alternating): \text {BH}(t) \le d \le 2\text {BH}(t) + 1.
Every red-black tree t has size n between 2^{\text {BH}(t)} - 1 and 2^{2\text {BH}(t) + 1} - 1: 2^{\text {BH}(t)} - 1 \le n \le 2^{2\text {BH}(t) + 1} - 1. Or, equivalently: 2^{\text {BH}(t)} \le n + 1 \le 2^{2\text {BH}(t) + 1}.
So, therefore:
\begin {aligned} d &\le 2\text {BH}(t) + 1 \\ &= 2\log _2(2^{\text {BH}(t)}) + 1 \\ &\le 2\log _2(n + 1) + 1 \end {aligned}
1168Example150-009J150-009J.xmlSample red-black treeThe following is a valid red-black tree:
\usepackage {tikz}
\usetikzlibrary {arrows}
\tikzset {
treenode/.style = {align=center, inner sep=0pt, text centered, font=\sffamily },
blacknode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=black, fill=black, text width=1.5em},
rednode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=red, fill=red, text width=1.5em}
}
\begin {tikzpicture}[->,>=stealth']
\node [blacknode] {2}
child{ node [blacknode] {1}}
child{ node [rednode] {7}
child{ node [blacknode] {4}
child{ node [rednode] {3}}
child{ node [rednode] {5}}
}
child{ node [blacknode] {8}}
}
;
\end {tikzpicture}
Every path from the root to a leaf has two black nodes, so the black-height is 2. Additionally, there are no two adjacent red nodes.1169Example150-009K150-009K.xmlNode insertion into sample red-black treeTo insert into a red-black tree, we find its correct position and add a red node, resolving problems recursively. Recall the sample red-black tree; consider inserting a node with the data 6 to maintain the order of the tree. We add a red node:
\usepackage {tikz}
\usetikzlibrary {arrows}
\tikzset {
treenode/.style = {align=center, inner sep=0pt, text centered, font=\sffamily },
blacknode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=black, fill=black, text width=1.5em},
rednode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=red, fill=red, text width=1.5em}
}
\begin {tikzpicture}[->,>=stealth']
\node [blacknode] {2}
child{ node [blacknode] {1}}
child{ node [rednode] {7}
child{ node [blacknode] {4}
child{ node [rednode] {3}}
child{ node [rednode] {5}
child [missing]
child{ node [rednode] {6}}
}
}
child{ node [blacknode] {8}}
}
;
\end {tikzpicture}
On its own, the red 6 node is fine, but it sits beneath the red 5 node, so at the prior recursive layer we detect a red-red violation. However, since the tree was originally valid, the next node above (4) must be black. Therefore, we can "rotate" the nodes 4, 5, and 6, recoloring 6 black to remove the red-red violation:
\usepackage {tikz}
\usetikzlibrary {arrows}
\tikzset {
treenode/.style = {align=center, inner sep=0pt, text centered, font=\sffamily },
blacknode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=black, fill=black, text width=1.5em},
rednode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=red, fill=red, text width=1.5em}
}
\begin {tikzpicture}[->,>=stealth']
\node [blacknode] {2}
child{ node [blacknode] {1}}
child{ node [rednode] {7}
child{ node [rednode] {5}
child{ node [blacknode] {4}
child{ node [rednode] {3}}
child [missing]
}
child{ node [blacknode] {6}}
}
child{ node [blacknode] {8}}
}
;
\end {tikzpicture}
Now, this subtree has root 5 instead of 4 and is a valid red-black tree. However, the new red root 5 clashes with the red 7 above it! So, we perform the same trick with nodes 2, 5, and 7, rotating the tree and recoloring.
\usepackage {tikz}
\usetikzlibrary {arrows}
\tikzset {
treenode/.style = {align=center, inner sep=0pt, text centered, font=\sffamily },
blacknode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=black, fill=black, text width=1.5em},
rednode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=red, fill=red, text width=1.5em}
}
\begin {tikzpicture}[->,>=stealth',level/.style={sibling distance = 2.5cm/#1}]
\node [rednode] {5}
child{ node [blacknode] {2}
child{ node [blacknode] {1}}
child{ node [blacknode] {4}
child{ node [rednode] {3}}
child [missing]
}
}
child{ node [blacknode] {7}
child{ node [blacknode] {6}}
child{ node [blacknode] {8}}
}
;
\end {tikzpicture}
Finally, we have a valid red-black tree, and a very balanced one at that! By inserting 6, we were forced to reorganize the rest of the nodes, leading to a very balanced tree.Notice that at each step, the black height was preserved: even when the tree was invalid, the black height was always 2.Now, we will implement this algorithm.1185150-009F150-009F.xmlImplementation of red-black trees1172Example150-009L150-009L.xmlRed-black tree type and starter codeTo implement the dictionary signature, we write some starter code similar to the parametric tree dictionaries:functor RBTDict (K : ORDERED) :> DICT where type Key.t = K.t =
struct
structure Key = K
type 'a entry = Key.t * 'a
(* INVARIANTS:
* 1. number of black nodes on all paths from root to Empty are the same (black height)
* 2. all Red nodes have black children (no red-red violations)
*)
datatype 'a rbt
= Empty
| Red of 'a rbt * 'a * 'a rbt
| Black of 'a rbt * 'a * 'a rbt
(* INVARIANT: elements are ordered by Key.compare *)
type 'a dict = 'a entry rbt
val empty = Empty
fun find _ Empty = NONE
| find k' (Red (l, (k, v), r)) =
( case Key.compare (k', k) of
EQUAL => SOME v
| LESS => find k' l
| GREATER => find k' r
)
| find k' (Black (l, (k, v), r)) =
( case Key.compare (k', k) of
EQUAL => SOME v
| LESS => find k' l
| GREATER => find k' r
)
(* TODO: insert *)
end
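Since find treats Red and Black nodes identically, one way to avoid the duplicated branches is a small helper that forgets colors. This is a sketch of an alternative inside the functor, not the code these notes proceed with:

```sml
(* node : 'a rbt -> ('a rbt * 'a * 'a rbt) option, ignoring colors *)
fun node Empty             = NONE
  | node (Red (l, x, r))   = SOME (l, x, r)
  | node (Black (l, x, r)) = SOME (l, x, r)

fun find k' t =
  case node t of
    NONE => NONE
  | SOME (l, (k, v), r) =>
      ( case Key.compare (k', k) of
          EQUAL => SOME v
        | LESS => find k' l
        | GREATER => find k' r
      )
```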
The 'a rbt type comes equipped with the red-black invariants. The implementation of find stays the same, aside from the fact that we must search both Red and Black nodes.1181Algorithm150-009M150-009M.xmlInsert algorithm for red-black treesThe goal is to implement the following function:(* insert : 'a entry -> 'a dict -> 'a dict
* REQUIRES: true
* ENSURES: insert (k, v) t ==> t', which is t with (k, v) inserted
*)
We require true, since we assume all dictionaries satisfy the given invariant implicitly due to the (* INVARIANT *) comment. To do this, we will write two helper functions that factor the problem through helper type 'a almost:datatype 'a almost
= OK of 'a rbt
| BadL of ('a rbt * 'a * 'a rbt) * 'a * 'a rbt (* INVARIANT: all three rbts are black with the same black height *)
| BadR of 'a rbt * 'a * ('a rbt * 'a * 'a rbt) (* INVARIANT: all three rbts are black with the same black height *)
Here, 'a almost represents a red-black tree that may have a single red-red violation at the root, based on the issue demonstrated. An 'a almost can be:OK t, where t is a valid red-black tree.
BadL ((t1, x, t2), y, t3), representing the data associated with a red-red violation on the left:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
treenode/.style = {align=center, inner sep=0pt, text centered, font=\sffamily },
blacknode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=black, fill=black, text width=1.5em},
rednode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=red, fill=red, text width=1.5em},
subtree/.style = {text centered, regular polygon, regular polygon sides = 3, inner sep=0pt, white, draw=black, fill=black, text width=1.5em}
}
\begin {tikzpicture}[->,>=stealth']
\node [rednode] {$y$}
child{ node [rednode] {$x$}
child{ node [subtree] {$t_1$}}
child{ node [subtree] {$t_2$}}
}
child{ node [subtree] {$t_3$}}
;
\end {tikzpicture}
BadR (t1, x, (t2, y, t3)), representing the data associated with a red-red violation on the right:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
treenode/.style = {align=center, inner sep=0pt, text centered, font=\sffamily },
blacknode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=black, fill=black, text width=1.5em},
rednode/.style = {treenode, circle, white, font=\sffamily \bfseries , draw=red, fill=red, text width=1.5em},
subtree/.style = {text centered, regular polygon, regular polygon sides = 3, inner sep=0pt, white, draw=black, fill=black, text width=1.5em}
}
\begin {tikzpicture}[->,>=stealth']
\node [rednode] {$x$}
child{ node [subtree] {$t_1$}}
child{ node [rednode] {$y$}
child{ node [subtree] {$t_2$}}
child{ node [subtree] {$t_3$}}
}
;
\end {tikzpicture}
We will then write the following helper functions:(* ins : 'a entry -> 'a entry rbt -> 'a entry almost
* REQUIRES: true
* ENSURES:
* 1. ins (k, v) t ==> a, representing t with (k, v) inserted
* 2. BH(a) = BH(t)
* 3. if t is black, then a is OK
*)
(* recolor : 'a almost -> 'a rbt
* REQUIRES: true
* ENSURES: recolor a ==> t, where inord a = inord t and BH(t) <= BH(a) + 1
*)
fun recolor (OK t) = t
| recolor (BadL (d1, y, t2)) = Black (Red d1, y, t2)
| recolor (BadR (t1, x, d2)) = Black (t1, x, Red d2)
(* insert : 'a entry -> 'a entry rbt -> 'a entry rbt *)
fun insert (k, v) t = recolor (ins (k, v) t)
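For instance, if a red-red violation survives all the way to the root, recolor resolves it by blackening the rebuilt root; this is the one step that can increase the black height:

```sml
(* recolor (BadL ((t1, x, t2), y, t3))
 *   ==> Black (Red (t1, x, t2), y, t3)
 * The red x under y is now legal because y is black, and every path
 * through this node gains exactly one black node. *)
```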
The ins function will recursively insert, preserving black height and producing an almost. The recolor function will do at most one recoloring step (in case a red-red violation propagates all the way to the top), increasing the black height by at most 1.Now, it remains to implement ins, the central function. The invariants guarantee that the result is correct, that the black height is preserved (a property we noticed in our example), and that inserting into a black tree never immediately causes a violation. We begin as follows:(* ins : 'a entry -> 'a entry rbt -> 'a entry almost
* REQUIRES: true
* ENSURES:
* 1. ins (k, v) t ==> a, representing t with (k, v) inserted
* 2. BH(a) = BH(t)
* 3. if t is black, then a is OK
*)
fun ins ((k', v') : 'a entry) (t : 'a entry rbt) : 'a entry almost =
case t of
Empty => OK (Red (Empty, (k', v'), Empty))
As informally specified, we always start by creating a red node. | Red (l, (k, v), r) =>
( case Key.compare (k', k) of
EQUAL => OK (Red (l, (k', v'), r))
| LESS =>
( case ins (k', v') l of
OK (Red data) => BadL (data, (k, v), r)
| OK l' => OK (Red (l', (k, v), r))
| _ => raise Fail "impossible by ENSURES"
)
| GREATER =>
( case ins (k', v') r of
OK (Red data) => BadR (l, (k, v), data)
| OK r' => OK (Red (l, (k, v), r'))
| _ => raise Fail "impossible by ENSURES"
)
)
When we see a red node, we look at the key. If the key is at the current node, we simply replace the data. Otherwise, suppose the key is to the left; we recursively insert into l. Since this node is red, we know that l must be black; so by the ENSURES, we know that ins (k', v') l is OK. When forming the node, though, we no longer know the color of ins (k', v') l - it may be either black or red. If the left tree is red, we give back BadL (a red-red violation on the left), or otherwise we create a usual Red node. The other case is symmetric.Finally, consider the Black case. | Black (l, (k, v), r) =>
OK
( case Key.compare (k', k) of
EQUAL => Black (l, (k, v), r)
| LESS =>
( case ins (k', v') l of
OK l' => Black (l', (k, v), r)
| BadL ((t1, x, t2), y, t3) =>
Red (Black (t1, x, t2), y, Black (t3, (k, v), r))
| BadR (t1, x, (t2, y, t3)) =>
Red (Black (t1, x, t2), y, Black (t3, (k, v), r))
)
| GREATER =>
( case ins (k', v') r of
OK r' => Black (l, (k, v), r')
| BadL ((t1, x, t2), y, t3) =>
Red (Black (l, (k, v), t1), x, Black (t2, y, t3))
| BadR (t1, x, (t2, y, t3)) =>
Red (Black (l, (k, v), t1), x, Black (t2, y, t3))
)
)
To meet the ENSURES, we must always give back OK. If the key is at the current node, we still replace the data. Otherwise, suppose the key is to the left; we recursively insert into l. We find that the insertion was either OK (in which case we rebuild a Black node) or a red-red violation. In the case of a violation, we perform the given tree rotations, which preserve black height and always give back valid red-black trees.This completes the implementation! While the details are complex, the interface remains the same: a client can use the red-black dictionary just like any of the other dictionaries and get the same resulting behavior. We include the full code below:
1180Snippet#175unstable-175.xml150-009Mfunctor RBTDict (K : ORDERED) :> DICT where type Key.t = K.t =
struct
structure Key = K
type 'a entry = Key.t * 'a
(* INVARIANTS:
* 1. number of black nodes on all paths from root to Empty are the same (black height)
* 2. all Red nodes have black children (no red-red violations)
*)
datatype 'a rbt
= Empty
| Red of 'a rbt * 'a * 'a rbt
| Black of 'a rbt * 'a * 'a rbt
(* INVARIANT: elements are ordered by Key.compare *)
type 'a dict = 'a entry rbt
val empty = Empty
fun find _ Empty = NONE
| find k' (Red (l, (k, v), r)) =
( case Key.compare (k', k) of
EQUAL => SOME v
| LESS => find k' l
| GREATER => find k' r
)
| find k' (Black (l, (k, v), r)) =
( case Key.compare (k', k) of
EQUAL => SOME v
| LESS => find k' l
| GREATER => find k' r
)
local
datatype 'a almost
= OK of 'a rbt
| BadL of ('a rbt * 'a * 'a rbt) * 'a * 'a rbt (* INVARIANT: all three rbts are black with the same black height *)
| BadR of 'a rbt * 'a * ('a rbt * 'a * 'a rbt) (* INVARIANT: all three rbts are black with the same black height *)
(* ins : 'a entry -> 'a entry rbt -> 'a entry almost
* REQUIRES: true
* ENSURES:
* 1. ins (k, v) t ==> a, representing t with (k, v) inserted
* 2. BH(a) = BH(t)
* 3. if t is black, then a is OK
*)
fun ins ((k', v') : 'a entry) (t : 'a entry rbt) : 'a entry almost =
case t of
Empty => OK (Red (Empty, (k', v'), Empty))
| Red (l, (k, v), r) =>
( case Key.compare (k', k) of
EQUAL => OK (Red (l, (k', v'), r))
| LESS =>
( case ins (k', v') l of
OK (Red data) => BadL (data, (k, v), r)
| OK l' => OK (Red (l', (k, v), r))
| _ => raise Fail "impossible by ENSURES"
)
| GREATER =>
( case ins (k', v') r of
OK (Red data) => BadR (l, (k, v), data)
| OK r' => OK (Red (l, (k, v), r'))
| _ => raise Fail "impossible by ENSURES"
)
)
| Black (l, (k, v), r) =>
OK
( case Key.compare (k', k) of
EQUAL => Black (l, (k, v), r)
| LESS =>
( case ins (k', v') l of
OK l' => Black (l', (k, v), r)
| BadL ((t1, x, t2), y, t3) =>
Red (Black (t1, x, t2), y, Black (t3, (k, v), r))
| BadR (t1, x, (t2, y, t3)) =>
Red (Black (t1, x, t2), y, Black (t3, (k, v), r))
)
| GREATER =>
( case ins (k', v') r of
OK r' => Black (l, (k, v), r')
| BadL ((t1, x, t2), y, t3) =>
Red (Black (l, (k, v), t1), x, Black (t2, y, t3))
| BadR (t1, x, (t2, y, t3)) =>
Red (Black (l, (k, v), t1), x, Black (t2, y, t3))
)
)
(* recolor : 'a almost -> 'a rbt
* REQUIRES: true
* ENSURES: recolor a ==> t, a valid red-black tree with inord t = inord a and BH(t) <= BH(a) + 1
*)
fun recolor (OK t) = t
| recolor (BadL (d1, y, t2)) = Black (Red d1, y, t2)
| recolor (BadR (t1, x, d2)) = Black (t1, x, Red d2)
in
(* insert : 'a entry -> 'a dict -> 'a dict
* REQUIRES: true
* ENSURES: insert (k, v) t ==> t', which is t with (k, v) inserted
*)
fun insert (k, v) t = recolor (ins (k, v) t)
end
end
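To illustrate the unchanged interface, a hypothetical client might look as follows. This is a sketch: IntOrdered is an assumed ORDERED structure with type t = int (not defined in these notes), and we use the empty, insert, and find operations as they appear in the implementation above.

```sml
structure D = RBTDict (IntOrdered)

(* The balancing machinery is hidden by the opaque ascription;
 * the client sees only the DICT operations. *)
val d = D.insert (2, "two") (D.insert (1, "one") D.empty)
val SOME "two" = D.find 2 d
val NONE = D.find 3 d
```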
1184Example150-009N150-009N.xmlAlternative implementationTo avoid the raise Fail "impossible by ensures", we can design our types differently. For rbt, we can give the following mutually-recursive definitions:datatype 'a rbt = Black of 'a black | Red of 'a red
and 'a black = Empty | Node of 'a rbt * 'a * 'a rbt
withtype 'a red = 'a black * 'a * 'a black
This defines a datatype 'a rbt, a datatype 'a black, and a type 'a red. We have that an 'a rbt is either an 'a black or an 'a red; an 'a black is either Empty or a Node of 'a rbts; and an 'a red is always two 'a black children. These types guarantee that the color invariant is always met: it is impossible to have a red-red violation in an rbt, by construction.Then, we adjust the other definitions as follows:fun find k' (Black b) = findBlack k' b
| find k' (Red (l, (k, v), r)) =
( case Key.compare (k', k) of
EQUAL => SOME v
| LESS => findBlack k' l
| GREATER => findBlack k' r
)
and findBlack k' Empty = NONE
| findBlack k' (Node (l, (k, v), r)) =
case Key.compare (k', k) of
EQUAL => SOME v
| LESS => find k' l
| GREATER => find k' r
local
datatype 'a almost
= OK of 'a rbt
| BadL of ('a black * 'a * 'a black) * 'a * 'a black
| BadR of 'a black * 'a * ('a black * 'a * 'a black)
fun ins ((k', v') : 'a entry) (t : 'a entry rbt) : 'a entry almost =
case t of
Black t => OK (insBlack (k', v') t)
| Red (l, (k, v), r) =>
case Key.compare (k', k) of
EQUAL => OK (Red (l, (k', v'), r))
| LESS =>
( case insBlack (k', v') l of
Red data => BadL (data, (k, v), r)
| Black l' => OK (Red (l', (k, v), r))
)
| GREATER =>
( case insBlack (k', v') r of
Red data => BadR (l, (k, v), data)
| Black r' => OK (Red (l, (k, v), r'))
)
and insBlack ((k', v') : 'a entry) (t : 'a entry black) : 'a entry rbt =
case t of
Empty => Red (Empty, (k', v'), Empty)
| Node (l, (k, v), r) =>
case Key.compare (k', k) of
EQUAL => Black (Node (l, (k', v'), r))
| LESS =>
( case ins (k', v') l of
OK l' => Black (Node (l', (k, v), r))
| BadL ((t1, x, t2), y, t3) =>
Red (Node (Black t1, x, Black t2), y, Node (Black t3, (k, v), r))
| BadR (t1, x, (t2, y, t3)) =>
Red (Node (Black t1, x, Black t2), y, Node (Black t3, (k, v), r))
)
| GREATER =>
( case ins (k', v') r of
OK r' => Black (Node (l, (k, v), r'))
| BadL ((t1, x, t2), y, t3) =>
Red (Node (l, (k, v), Black t1), x, Node (Black t2, y, Black t3))
| BadR (t1, x, (t2, y, t3)) =>
Red (Node (l, (k, v), Black t1), x, Node (Black t2, y, Black t3))
)
fun recolor (OK t) = t
| recolor (BadL (d1, y, t2)) = Black (Node (Red d1, y, Black t2))
| recolor (BadR (t1, x, d2)) = Black (Node (Black t1, x, Red d2))
in
fun insert (k, v) t = recolor (ins (k, v) t)
end
Here, the types tell us what invariants we have about the colors, so we can make insBlack always return an 'a rbt (a valid tree) whereas ins returns an 'a almost (which may have a violation).1275Lecture150-lect17150-lect17.xmlSequences I: introduction2024716Harrison Grodin
This lecture is inspired by an analogous lecture by Michael Erdmann and Brandon Wu.
1231150-009O150-009O.xmlMotivation1193Principle150-009P150-009P.xmlFunctional parallelismParallelism and functional programming go hand-in-hand.At a low level, parallelism involves scheduling work to processors;
but at a high level, parallelism involves indicating which expressions can be evaluated in parallel, without baking in a schedule.Functional programming helps:Since there are no effects (like memory updates) available, evaluation order doesn't matter, and race conditions are impossible to even describe in code.
Higher-order functions and abstract types allow complex parallelism techniques to be implemented under the hood but retain a simple interface.
Work and span analysis lets us predict the parallel speedup without fixing the number of processors in advance.1230Goal150-009Q150-009Q.xmlSequences: parallel listsLists are a common type to use for solving problems; however, their definition is inherently sequential. What if we could define a type like list that had better parallel cost? We will call such an abstract type a sequence.We would want operations like nth (to get an element of the sequence), tabulate (to create a new sequence), and append (to append new sequences, where cons is a special case of appending a singleton).There are many ways we could implement these operations on various underlying types, including lists, trees, and arrays. These operations have the following work/span bounds when implemented using each of these:
Operation | List (W/S) | Tree (W) | Tree (S) | Array (W) | Array (S)
nth | \mathcal {O}(n) | \mathcal {O}(\log n) | \mathcal {O}(\log n) | \mathcal {O}(1) | \mathcal {O}(1)
tabulate | \mathcal {O}(n) | \mathcal {O}(n) | \mathcal {O}(\log n) | \mathcal {O}(n) | \mathcal {O}(1)
append | \mathcal {O}(m) | \mathcal {O}(\log m + \log n) | \mathcal {O}(\log m + \log n) | \mathcal {O}(m + n) | \mathcal {O}(1)
cons | \mathcal {O}(1) | \mathcal {O}(\log n) | \mathcal {O}(\log n) | \mathcal {O}(n) | \mathcal {O}(1)
Lists are entirely sequential, beating the other implementations only on the work of cons and of append when m is small relative to n. Sequentially, trees are a nice middle ground between lists and arrays, always having good work. In parallel, though, arrays are unbeatable. Therefore, in this course, we will analyze the cost of sequences assuming they are implemented by arrays.
Atomic units are variables representing cost of an abstract operation, drawn using a hexagon:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] {\texttt {f}};
\end {tikzpicture}
There is an empty cost graph 0:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node {$\bullet $};
\end {tikzpicture}
Two cost graphs G_1 and G_2 can be composed in sequence, written G_1 \triangleright G_2, representing data dependency:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (G1) at (0,1) {$G_1$};
\node (G2) at (0,0) {$G_2$};
\path (G1) edge (G2);
\end {tikzpicture}
Two cost graphs G_1 and G_2 can be composed in parallel, written G_1 \otimes G_2, representing data independence:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (G1) at (-1,0) {$G_1$};
\node (G2) at (1,0) {$G_2$};
\node (start) at (0,1) {$\bullet $};
\node (end) at (0,-1) {$\bullet $};
\path (start) edge (G1);
\path (start) edge (G2);
\path (G1) edge (end);
\path (G2) edge (end);
\end {tikzpicture}
1233Definition150-009V150-009V.xmlWork and span of a cost graphThe work of a cost graph is the sum of the costs of all hexagonal nodes in the graph.
The span of a cost graph is the sum of the costs of the hexagonal nodes along the highest-cost path from the start node to the end node. Work and span compose with the graph constructions as follows: W(G_1 \triangleright G_2) = W(G_1 \otimes G_2) = W(G_1) + W(G_2), while S(G_1 \triangleright G_2) = S(G_1) + S(G_2) and S(G_1 \otimes G_2) = \max (S(G_1), S(G_2)).1234Example150-009T150-009T.xmlCost graph of arithmetic expression
The cost graph of (1 + 2) * (3 + 4) is (\texttt {+} \otimes \texttt {+}) \triangleright \texttt {*}, depicted visually as:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (start) at (0, 2) {$\bullet $};
\node [hexagon] (P1) at (-1, 1) {\texttt {+}};
\node [hexagon] (P2) at (1, 1) {\texttt {+}};
\node [hexagon] (T) at (0, 0) {\texttt {*}};
\path (start) edge (P1);
\path (start) edge (P2);
\path (P1) edge (T);
\path (P2) edge (T);
\end {tikzpicture}
Assuming the cost of each arithmetic operation is 1, the work of this graph is 3 and the span is 2.1235Assumption150-009U150-009U.xmlIgnoring constantsAt this point in the course, we will ignore constants and cost metrics, instead only counting evaluation steps asymptotically for simplicity. Therefore, we will count all atomic hexagonal nodes as taking constant cost.1249150-009W150-009W.xmlSequences: indexed collections1239Concept150-009X150-009X.xmlLimited sequence signature: indexed collectionThe sequence signature includes the following specifications:signature SEQUENCE =
sig
type 'a t (* abstract *)
type 'a seq = 'a t (* concrete *)
val tabulate : (int -> 'a) -> int -> 'a seq
val length : 'a seq -> int
val nth : 'a seq -> int -> 'a
(* ...more to come... *)
end
The abstract type 'a t represents a sequence of 'as, where 'a seq is an alias for signature readability.The implementation of SEQUENCE is called Seq:structure Seq :> SEQUENCE = (* ... *)
The full signature and documentation is available on the course website.1240Notation150-009Z150-009Z.xmlMathematical representation of sequencesWe denote a sequence using the mathematical notation (not SML syntax) as \langle x_0, x_1, \cdots , x_{n-1}\rangle or <x0, x1, ..., x_{n-1}>.1242Definition150-009Y150-009Y.xmlSequence tabulateThe function Seq.tabulate creates a new sequence of length n, calling a function on 0 through n - 1:(* Seq.tabulate : (int -> 'a) -> int -> 'a Seq.t
* REQUIRES: n >= 0
* ENSURES: Seq.tabulate f n ~= <f 0, f 1, ..., f (n - 1)>
*)
Its cost graph is depicted as follows:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (start) at (0, 1) {$\bullet $};
\node [hexagon] (f0) at (-2.5, 0) {\texttt {f}};
\node [hexagon] (f1) at (-1, 0) {\texttt {f}};
\node at (0, 0) {$\cdots $};
\node [hexagon] (f2) at (1, 0) {\texttt {f}};
\node [hexagon] (f3) at (2.5, 0) {\texttt {f}};
\node (end) at (0, -1) {$\bullet $};
\path (start) edge (f0);
\path (start) edge (f1);
\path (start) edge (f2);
\path (start) edge (f3);
\path (f0) edge (end);
\path (f1) edge (end);
\path (f2) edge (end);
\path (f3) edge (end);
\end {tikzpicture}
Its work and span depend on the cost of f, but assuming f is constant-time, then tabulate f n has work \mathcal {O}(n) and span \mathcal {O}(1).
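For instance (a small sketch using the Seq structure above), tabulating the squaring function yields the first five squares:

```sml
(* squares ~= <0, 1, 4, 9, 16> *)
val squares : int Seq.t = Seq.tabulate (fn i => i * i) 5
```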
1244Definition150-00A0150-00A0.xmlSequence lengthThe function Seq.length computes the length of a sequence:(* Seq.length : 'a Seq.t -> int
* REQUIRES: true
* ENSURES: Seq.length <x0, ..., x_{n-1}> ~= n
*)
Its cost graph is depicted as a single node, which we assume has constant-time cost:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] (0, 0) {\small \texttt {length}};
\end {tikzpicture}
1246Definition150-00A1150-00A1.xmlSequence nthThe function Seq.nth retrieves an element of a sequence:(* Seq.nth : 'a Seq.t -> int -> 'a
* REQUIRES: 0 <= i < Seq.length S
* ENSURES: Seq.nth <x0, ..., x_{n-1}> i ~= x_i
*)
Its cost graph is depicted as a single node, which we assume has constant-time cost:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] (0, 0) {\texttt {nth}};
\end {tikzpicture}
1248Definition150-00A2150-00A2.xmlSequence mapUsing sequence tabulate, sequence length, and sequence nth, we can define a map function:(* map : ('a -> 'b) -> 'a Seq.t -> 'b Seq.t
* REQUIRES: true
* ENSURES: map f <x0, ..., x_{n-1}> ~= <f x0, ..., f x_{n-1}>
*)
fun map f S = Seq.tabulate (fn i => f (Seq.nth S i)) (Seq.length S)
(* or equivalently: *)
fun map f S = Seq.tabulate (f o Seq.nth S) (Seq.length S)
Its cost graph is depicted as follows:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] (start) at (0, 2) {\small \texttt {length}};
\node [hexagon] (nth0) at (-2.5, 1) {\texttt {nth}};
\node [hexagon] (f0) at (-2.5, 0) {\texttt {f}};
\node [hexagon] (nth1) at (-1, 1) {\texttt {nth}};
\node [hexagon] (f1) at (-1, 0) {\texttt {f}};
\node at (0, 0) {$\cdots $};
\node [hexagon] (nth2) at (1, 1) {\texttt {nth}};
\node [hexagon] (f2) at (1, 0) {\texttt {f}};
\node [hexagon] (nth3) at (2.5, 1) {\texttt {nth}};
\node [hexagon] (f3) at (2.5, 0) {\texttt {f}};
\node (end) at (0, -1) {$\bullet $};
\path (start) edge (nth0);
\path (start) edge (nth1);
\path (start) edge (nth2);
\path (start) edge (nth3);
\path (nth0) edge (f0);
\path (nth1) edge (f1);
\path (nth2) edge (f2);
\path (nth3) edge (f3);
\path (f0) edge (end);
\path (f1) edge (end);
\path (f2) edge (end);
\path (f3) edge (end);
\end {tikzpicture}
The work and span of map depend on the cost of f, but assuming f is constant-time, then map f S has work \mathcal {O}(n) and span \mathcal {O}(1).
Although it can be easily implemented as above, this function is included in the SEQUENCE signature for convenience.1274150-00A3150-00A3.xmlSequences: the free monoidAs given, it is not very easy to combine the data within a sequence: we have to explicitly use recursion, parallel evaluation of tuples, and sequence nth. Here, we view sequences as an inductive tree-like structure, allowing them to come equipped with a fold abstraction akin to list foldr.1251Concept150-00A4150-00A4.xmlLimited sequence signature: free monoidWe can also view sequences inductively, where every sequence arises as the combination of some singletons:signature SEQUENCE =
sig
(* ...as before... *)
val singleton : 'a -> 'a seq
val empty : unit -> 'a seq
val append : 'a seq * 'a seq -> 'a seq
val mapreduce : ('a -> 'b) -> 'b -> ('b * 'b -> 'b) -> 'a seq -> 'b
(* ...more to come... *)
end
The functions singleton, empty, and append can be implemented using sequence tabulate. (Alternatively, they can be viewed as the primitive way to construct sequences, where sequence tabulate is implemented in terms of them; however, when working with an array-based implementation of sequences, this cost bound will be worse.) The mapreduce function is the fold abstraction for sequences built this way.1253Definition150-00AG150-00AG.xmlSingleton sequenceUsing sequence tabulate, we can define a function to create a sequence with one element:(* singleton : 'a -> 'a Seq.t
* REQUIRES: true
* ENSURES: singleton a ~= <a>
*)
fun singleton a = Seq.tabulate (fn _ => a) 1
This function has constant work and span.1255Definition150-00AH150-00AH.xmlEmpty sequenceUsing sequence tabulate, we can define a function to create an empty sequence:(* empty : unit -> 'a Seq.t
* REQUIRES: true
* ENSURES: empty () ~= <>
*)
fun empty () = Seq.tabulate (fn _ => raise Fail "impossible") 0
This function has constant work and span.1257Definition150-00AI150-00AI.xmlSequence appendUsing sequence tabulate, sequence length, and sequence nth, we can define a function to append two sequences:(* append : 'a Seq.t * 'a Seq.t -> 'a Seq.t
* REQUIRES: true
* ENSURES: append (<x0, ..., x_{m-1}>, <y0, ..., y_{n-1}>) ~= <x0, ..., x_{m-1}, y0, ..., y_{n-1}>
*)
fun append (S1, S2) =
Seq.tabulate
(fn i => if i < Seq.length S1 then Seq.nth S1 i else Seq.nth S2 (i - Seq.length S1))
(Seq.length S1 + Seq.length S2)
Based on the cost graphs for sequence tabulate, sequence length, and sequence nth, we find that this function has work \mathcal {O}(m + n) and span \mathcal {O}(1).1258Idea150-00A5150-00A5.xmlParallel reduction of dataWe might wish to combine data in a sequence in parallel. For example, we might wish to compute:
\texttt {sum <1, 2, 3, 4, 5, 6>} \Longrightarrow \texttt {21}.
If we extended the SEQUENCE signature with a foldr primitive, though, we would have no better parallelism than lists:
\begin {aligned} &\texttt {foldr op+ 0 <1, 2, 3, 4, 5, 6>} \\ &\Longrightarrow \texttt {1 + (2 + (3 + (4 + (5 + (6 + 0)))))} \end {aligned}
Instead, we can reparenthesize, pairing numbers up and evaluating the sums in parallel. We call this function reduce:
\begin {aligned} &\texttt {reduce op+ 0 <1, 2, 3, 4, 5, 6>} \\ &\Longrightarrow \texttt {((1 + 2) + (3 + 0)) + ((4 + 5) + (6 + 0))} \end {aligned} Here, we use the fact that 0 is the identity for + to add 0s at will, balancing out the computation tree.What happens if we did the same thing for subtraction, though?
\begin {aligned} &\texttt {foldr op- 0 <1, 2, 3, 4, 5, 6>} \\ &\Longrightarrow \texttt {1 - (2 - (3 - (4 - (5 - (6 - 0)))))} \\ &\Longrightarrow \texttt {\textasciitilde 3} \\ &\texttt {reduce op- 0 <1, 2, 3, 4, 5, 6>} \\ &\Longrightarrow \texttt {((1 - 2) - (3 - 0)) - ((4 - 5) - (6 - 0))} \\ &\Longrightarrow \texttt {3} \end {aligned}
The different parenthesizations give different results, ~3 and 3!
To avoid this issue, we restrict the inputs with which we can use reduce.
1259Definition150-00A7150-00A7.xmlIdentity elementLet z : t and g : t * t -> t. We say that z is an identity element for g when for all a: \texttt {g (a, z)} \cong \texttt {a} \cong \texttt {g (z, a)}.1260Definition150-00A8150-00A8.xmlAssociative functionLet g : t * t -> t. We say that g is associative when for all a, b, c: \texttt {g (g (a, b), c)} \cong \texttt {g (a, g (b, c))}.1261Definition150-00A6150-00A6.xmlMonoidA monoid consists of:a type t,
some z : t,
and some g : t * t -> t such that
z is an identity element for g, and
g is an associative function.1262Example150-00A9150-00A9.xmlInteger addition monoidThe following data form a monoid:type t is int,
z is 0,
g is op+.1263Example150-00AA150-00AA.xmlString concatenation monoidThe following data form a monoid:type t is string,
z is "",
g is op^.1265Definition150-00AB150-00AB.xmlSequence reduceThe function Seq.reduce combines the data in a sequence using a monoid:(* Seq.reduce : ('a * 'a -> 'a) -> 'a -> 'a Seq.t -> 'a
* REQUIRES: g and z form a monoid
* ENSURES: Seq.reduce g z <x0, x1, ..., x_{n-1}> ~= g (x0, g (x1, ..., g (x_{n-1}, z)))
*)
Notice that the behavior of reduce exactly mirrors list foldr, and its type is an instance of the type of list foldr. However, thanks to the assumption that g and z form a monoid, reduce is more efficient than foldr in parallel.
Its cost graph is depicted as follows:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (start) at (0, 1) {$\bullet $};
\node [hexagon] (g0) at (-2.5, 0) {\texttt {g}};
\node [hexagon] (g1) at (-1, 0) {\texttt {g}};
\node at (0, 0) {$\cdots $};
\node [hexagon] (g2) at (1, 0) {\texttt {g}};
\node [hexagon] (g3) at (2.5, 0) {\texttt {g}};
\node [hexagon] (gg0) at (-1.75, -1) {\texttt {g}};
\node [hexagon] (gg1) at (1.75, -1) {\texttt {g}};
\node [hexagon] (ggg) at (0, -2) {\texttt {g}};
\path (start) edge (g0);
\path (start) edge (g1);
\path (start) edge (g2);
\path (start) edge (g3);
\path (g0) edge (gg0);
\path (g1) edge (gg0);
\path (g2) edge (gg1);
\path (g3) edge (gg1);
\path (gg0) edge (ggg);
\path (gg1) edge (ggg);
\end {tikzpicture}
Its work and span depend on the cost of g, but assuming g is constant-time, then reduce g z S has work \mathcal {O}(n) and span \mathcal {O}(\log n).
1267Example150-00AC150-00AC.xmlSequence sumUsing sequence reduce, we can sum a sequence as follows:val sum : int seq -> int = Seq.reduce op+ 0
The code is analogous to list sum, where the call to Seq.reduce is justified by the integer addition monoid.1269Example150-00AD150-00AD.xmlSequence filterWe can attempt to implement an analogue to list filter using sequence reduce, using the pipe function for readability:fun filter p S =
S (* <1, 2, 3, 4> *)
|> Seq.map
(fn i =>
if p (Seq.nth S i)
then Seq.singleton (Seq.nth S i)
else Seq.empty ()) (* <<>, <2>, <>, <4>> *)
|> Seq.reduce Seq.append (Seq.empty ()) (* <2, 4> *)
The comments on the right show how filtering the even elements from a sequence would proceed. Using the cost graphs for sequence map and sequence reduce, we find that the work of this implementation (assuming a constant-time predicate p) is \mathcal {O}(n\log n) and the span is \mathcal {O}(\log n).
In fact, we can modify this implementation to recover an implementation with \mathcal {O}(n) work (and the same span); however, while we use the \mathcal {O}(n) cost bound in this course, we leave the development of the more efficient algorithm to 15-210: Parallel and Sequential Data Structures and Algorithms.1271Concept150-00AE150-00AE.xmlSequence mapreduceThe pattern of sequence map followed by sequence reduce is very common. We define a hybrid function mapreduce accordingly:(* mapreduce : ('a -> 'b) -> 'b -> ('b * 'b -> 'b) -> 'a Seq.t -> 'b
* REQUIRES: g and z form a monoid
* ENSURES: Seq.mapreduce f z g ~= Seq.reduce g z o Seq.map f
*)
fun mapreduce f z g = Seq.reduce g z o Seq.map f
In fact, this function is the fold for sequences defined using singleton, empty, and append:
\begin {aligned} \texttt {mapreduce f z g (Seq.singleton a)} &\cong \texttt {f a} \\ \texttt {mapreduce f z g (Seq.empty ())} &\cong \texttt {z} \\ \texttt {mapreduce f z g (Seq.append (s1, s2))} &\cong \texttt {g (mr f z g s1, mr f z g s2)} \end {aligned}
We abbreviate mapreduce as mr here for brevity.The monoid requirements on reduce (and mapreduce) are justified by the behavior of append and empty. For example:
\begin {aligned} &\texttt {g (z, mapreduce f z g s)} \\ &\cong \texttt {g (mapreduce f z g (Seq.empty ()), mapreduce f z g s)} \\ &\cong \texttt {mapreduce f z g (Seq.append (Seq.empty (), s))} \\ &\cong \texttt {mapreduce f z g s} \end {aligned}
Here, since \texttt {Seq.append (Seq.empty (), s)} \cong \texttt {s}, we must have that z is a left identity for g. Similar reasoning justifies that z must be a right identity and g must be associative.
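As another instance of this pattern (a sketch; count is not part of the SEQUENCE signature), we can count the elements satisfying a predicate by mapping each element into the integer addition monoid:

```sml
(* count p S ~= the number of elements of S satisfying p.
 * The monoid precondition holds: (int, 0, op+) is a monoid. *)
fun count (p : 'a -> bool) : 'a Seq.t -> int =
  Seq.mapreduce (fn x => if p x then 1 else 0) 0 op+
```

Assuming p is constant-time, count has \mathcal {O}(n) work and \mathcal {O}(\log n) span, by the cost graphs for sequence map and sequence reduce.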
1273Example150-00AF150-00AF.xmlMatrix sumRepresenting an m \times n matrix as an m-length sequence of n-length sequences, we can ask for the sum of the elements of a matrix using sequence sum:type 'a matrix = 'a Seq.t Seq.t
fun msum (S : int Seq.t Seq.t) : int =
S (* <<1,2>, <3,4>, <5,6>> *)
|> Seq.map sum (* <3, 7, 11> *) (* W: O(mn), S: O(log(n)) *)
|> sum (* 21 *) (* W: O(m) , S: O(log(m)) *)
Each line is annotated with cost. Thus, the total work of msum S is O(mn), and the total span is O(\log m + \log n).1304Lecture150-lect18150-lect18.xmlSequences II: sorting2024718Harrison Grodin
This lecture is inspired by a similar lecture by Michael Erdmann.
In the previous lecture, we saw that sequences can be viewed as indexed collections and trees (that "self-balance", since they are implemented using arrays). Now, we use the tree-based perspective to implement a divide-and-conquer algorithm we previously implemented on lists: merge sort.1289150-00AJ150-00AJ.xmlSequence mergeFirst, we implement the merge auxiliary function.1282Idea150-00AK150-00AK.xmlParallel mergeThe implementation of the merge auxiliary function on lists was inherently sequential, traversing the lists one-by-one. To parallelize, we hope to divide a sequence in halves before recursively merging both sides.Suppose we split both sequences in half naively: \begin {aligned} \langle a, c, d, g, i\rangle &\mapsto \langle a, c\rangle , \langle d, g, i\rangle \\ \langle b, e, f, h\rangle &\mapsto \langle b, e\rangle , \langle f, h\rangle \end {aligned} We could recursively merge both sides to get \langle a,b,c,e\rangle and \langle d,f,g,h,i\rangle , but it's not immediately clear how to combine these results to get the full sequence \langle a,b,c,d,e,f,g,h,i\rangle .Instead, what if we split the first sequence in halves, and then split the second sequence to match the split of the first sequence? \begin {aligned} \langle a, c, d, g, i\rangle &\mapsto \langle a, c\rangle , d, \langle g, i\rangle \\ \langle b, e, f, h\rangle &\mapsto \langle b\rangle , \langle e, f, h\rangle \end {aligned} We split the first sequence with midpoint d, and then we split the second sequence into elements less than d and greater than d. Then, we can recursively merge to get \langle a,b,c\rangle and \langle e,f,g,h,i\rangle , which we can append with d in the middle to get the final result. To find the split point of the second sequence, we can use binary search.1284Algorithm150-00AL150-00AL.xmlBinary search on sequencesWe can implement binary search as follows:(* binarySearch : string Seq.t -> string -> int
* REQUIRES: S is sorted
* ENSURES: binarySearch S x ~= i, such that
- 0 <= i <= Seq.length S and
- Seq.split S i ~= (Sa, Sb) such that
- every element of Sa is <= x and
- every element of Sb is >= x.
*)
fun binarySearch S x =
if Seq.null S then 0 else
let
val n = Seq.length S div 2
val (Sa, y, Sb) = (Seq.take S n, Seq.nth S n, Seq.drop S (n + 1))
in
case String.compare (x, y) of
EQUAL => n
| LESS => binarySearch Sa x
| GREATER => n + 1 + binarySearch Sb x
end
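As a quick sanity check (a sketch, assuming a conversion function Seq.fromList : 'a list -> 'a Seq.t, which is not shown in these notes):

```sml
(* "b" falls between "a" and "c": splitting <"a", "c", "e"> at
 * index 1 gives (<"a">, <"c", "e">), as the ENSURES requires. *)
val 1 = binarySearch (Seq.fromList ["a", "c", "e"]) "b"
```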
The work and span are \mathcal {O}(\log n), as all of the sequence functions involved are constant time, and we repeatedly divide n, the length of input S, in half.1286Algorithm150-00AM150-00AM.xmlSequence mergeWe implement parallel merge as follows:(* merge : string Seq.t * string Seq.t -> string Seq.t
* REQUIRES: S1 and S2 are sorted
* ENSURES: merge (S1, S2) ~= S, where S is a sorted permutation of Seq.append (S1, S2)
*)
fun merge (S1, S2) =
if Seq.null S1 then S2 else (* O(1) *)
let
val n = Seq.length S1 div 2 (* O(1) *)
val (S1a, x, S1b) = (Seq.take S1 n, Seq.nth S1 n, Seq.drop S1 (n + 1)) (* O(1) *)
val i = binarySearch S2 x (* O(log(n)) *)
val (S2a, S2b) = Seq.split S2 i (* O(1) *)
val (Sa, Sb) = (merge (S1a, S2a), merge (S1b, S2b))
in
Seq.append (Sa, Seq.append (Seq.singleton x, Sb)) (* W: O(m + n), S: O(1) *)
end
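For example (again a sketch, assuming a hypothetical Seq.fromList conversion):

```sml
(* Both inputs are sorted; the result is the sorted permutation
 * <"a", "b", "c", "d", "e"> of their concatenation. *)
val merged = merge (Seq.fromList ["a", "c", "e"], Seq.fromList ["b", "d"])
```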
1287Example150-00AN150-00AN.xmlSequence merge work analysisWe analyze the work of sequence merge using an informal recurrence: \begin {aligned} W(0, n) &= \mathcal {O}(1) \\ W(m, n) &\le 2W(m/2, n) + \mathcal {O}(m + n) \end {aligned} Since we don't know how the second sequence (of length n) will be split, we give an upper bound in the recursive case by assuming that n stays fixed, even though it must shrink in at least one of the branches.Solving this recurrence, we find that W(m, n) \in \mathcal {O}(mn).Unfortunately, this is severely worse than the \mathcal {O}(m + n) cost we achieved for sequential merge! Notice, though, that the dominating cost is that of appending the sequences at the end. Using techniques developed in 15-210: Parallel and Sequential Data Structures and Algorithms, we can avoid the cost of the append, recovering a merge algorithm with the more reasonable work \mathcal {O}(m + n). In our further analyses (and the sequence documentation), we will use this bound instead.1288Example150-00AO150-00AO.xmlSequence merge span analysisWe analyze the span of sequence merge using an informal recurrence: \begin {aligned} S(0, n) &= \mathcal {O}(1) \\ S(m, n) &\le S(m/2, n) + \mathcal {O}(\log (n)) \end {aligned} As described in the sequence merge work analysis, we upper bound the recursive call by leaving n unchanged. Solving this recurrence, we find that S(m, n) \in \mathcal {O}(\log (m)\log (n)).Using techniques developed in 15-210: Parallel and Sequential Data Structures and Algorithms, we can improve this bound to \mathcal {O}(\log (m) + \log (n)) = \mathcal {O}(\log (mn)). 
In our further analyses (and the sequence documentation), we will use this bound instead.1295150-00AP150-00AP.xmlSequence merge sort1290Idea150-00AQ150-00AQ.xmlParallel merge sortWhen implementing merge sort, we described essentially the following idea: \begin {aligned} \texttt {msort}~\langle x\rangle &= \langle x\rangle \\ \texttt {msort}~\langle \rangle &= \langle \rangle \\ \texttt {msort}~(\texttt {append}(S_1, S_2)) &= \texttt {merge}(\texttt {msort}~S_1, \texttt {msort}~S_2) \end {aligned} 1292Algorithm150-00AR150-00AR.xmlSequence merge sortWe can implement parallel merge sort cleanly using sequence mapreduce:(* msort : string Seq.t -> string Seq.t
* REQUIRES: true
* ENSURES: msort S ~= S', where S' is a sorted permutation of S
*)
val msort = Seq.mapreduce Seq.singleton (Seq.empty ()) merge
Observe that merge and Seq.empty () form a monoid, meeting the precondition for Seq.mapreduce.1293Example150-00AS150-00AS.xmlSequence merge sort work analysisWe analyze the work of sequence merge sort using the cost graphs for sequence mapreduce, singleton sequence, and empty sequence. Assuming the work of merge is \mathcal {O}(m + n) as discussed, we find that the work of msort is \mathcal {O}(n\log (n)).1294Example150-00AT150-00AT.xmlSequence merge sort span analysisWe analyze the span of sequence merge sort using the cost graphs for sequence mapreduce, singleton sequence, and empty sequence. Assuming the span of merge is \mathcal {O}(\log (m) + \log (n)) as discussed, we find that the span of msort is \mathcal {O}(\log ^2(n)).This is an improvement over the span found earlier, \mathcal {O}(n), thanks to the use of a parallel merge algorithm.1303150-00AU150-00AU.xmlSequence viewsTo avoid excessive use of indexing and if Seq.null checks, we can use an idea called views to pattern match on sequences instead, viewing sequences as if they were balanced trees.1298Concept150-00AV150-00AV.xmlMiddle-tree viewWe can implement a view of sequences as trees with data at the nodes as follows:signature SEQUENCE =
sig
(* ...as before... *)
datatype 'a mview = Bud | Branch of 'a seq * 'a * 'a seq
val join : 'a seq * 'a * 'a seq -> 'a seq
val showm : 'a seq -> 'a mview
val hidem : 'a mview -> 'a seq
end
These functions make sequences look like trees, hiding away some indexing:fun join (S1, x, S2) =
Seq.append (S1, Seq.append (Seq.singleton x, S2))
fun showm (S : 'a Seq.t) : 'a Seq.mview =
if Seq.null S then Seq.Bud else
let
val n = Seq.length S div 2
in
Seq.Branch (Seq.take S n, Seq.nth S n, Seq.drop S (n + 1))
end
fun hidem Seq.Bud = Seq.empty ()
| hidem (Seq.Branch (s1, x, s2)) = join (s1, x, s2)
Note: while other views (lview and tview) are available in the given sequence signature, this mview is not included by default.1300Example150-00AW150-00AW.xmlBinary search on sequences using middle-tree viewWe can reimplement binary search on sequences without explicit indexing using the middle-tree view (which we assume is included in the sequence signature).fun binarySearch S x =
case Seq.showm S of
Seq.Bud => 0
| Seq.Branch (Sa, y, Sb) =>
case String.compare (x, y) of
EQUAL => Seq.length Sa
| LESS => binarySearch Sa x
| GREATER => Seq.length Sa + 1 + binarySearch Sb x
1302Example150-00AX150-00AX.xmlSequence merge using middle-tree viewWe can reimplement sequence merge without as much explicit indexing using the middle-tree view (which we assume is included in the sequence signature).fun merge (S1, S2) =
case Seq.showm S1 of
Seq.Bud => S2
| Seq.Branch (S1a, x, S1b) =>
let
val (S2a, S2b) = Seq.split S2 (binarySearch S2 x)
in
Seq.join (merge (S1a, S2a), x, merge (S1b, S2b))
end
1349Lecture150-lect19150-lect19.xmlImperative programming I: effects2024723Harrison Grodin
This lecture is inspired by lectures by Michael Erdmann and Brandon Wu.
1329150-00AY150-00AY.xmlExceptions1311Concept150-00AZ150-00AZ.xmlraise expressionThe expression raise Fail "TODO" has most general type 'a, filling in for any type we wish. More generally, raise e has most general type 'a, for any exception e.Unlike other expressions, it does not evaluate to any value.1313Concept150-00B0150-00B0.xmlexn typeAn exception, like Fail "TODO" or Div, has type exn. So, note that Fail : string -> exn. We can write raise e for any e : exn.The type exn can be thought of as a datatype with infinitely many constructors:datatype exn = Fail of string | Div | ...
1315Concept150-00B1150-00B1.xmlException declarationAn exception can be declared as follows:exception Constructor1
exception Constructor2 of dataToContain2
Notice the similarity to a datatype declaration. However, here, we only give one constructor per declaration: since the exn type has infinitely many constructors, we only provide one more.Like a datatype declaration, an exception declaration can also go in a signature, requiring that the structure provide a matching exception declaration.1318Example150-00B2150-00B2.xmlQueue with exceptionsWe can augment the queue signature to include an exception Empty, to be raised if there is no data remaining when dequeue is used:signature QUEUE =
sig
type 'a queue (* abstract *)
exception Empty
val empty : 'a queue
val enqueue : 'a queue -> 'a -> 'a queue
val dequeue : 'a queue -> 'a * 'a queue (* may raise Empty *)
end
The structure implementing this QUEUE signature must define a matching exception Empty:structure ListQueue :> QUEUE =
struct
type 'a queue = 'a list
exception Empty
val empty = nil
fun enqueue (l : 'a queue) (x : 'a) : 'a queue = l @ [x]
fun dequeue nil = raise Empty
| dequeue (x :: xs) = (x, xs)
end
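For example, we might use this structure as follows (a sketch; the particular values are illustrative, not from the original notes):val q = ListQueue.enqueue ListQueue.empty 150
val (x, q') = ListQueue.dequeue q (* x is 150; q' is the empty queue *)
(* ListQueue.dequeue ListQueue.empty raises the exception Empty *)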
1320Concept150-00B5150-00B5.xmlhandle expressionA handle expression has the following structure:e handle pat1 => e1
| pat2 => e2
| ...
| patn => en
Note the similarity to a case expression. For this to typecheck, we must have that e : t for some type t, and each ei : t, and each pati is a pattern matching the exn type. A handle expression will first evaluate e. If it evaluates to a value, that value is provided immediately; or, if it raises an exception, the corresponding handler is evaluated. If no patterns match, the exception is simply re-raised.1322Example150-00B6150-00B6.xmlList averageUsing handle, we can choose a default value in case an exception is raised. For example, we may choose that the average of an empty list is 0:fun average (l : int list) : int =
(sum l div length l)
handle Div => 0
1324Example150-00B7150-00B7.xmlSafe division with optionsUsing a handler, we can catch the Div exception, if it is raised, and give back NONE:fun safeDiv (x : int, y : int) : int option =
SOME (x div y) handle Div => NONE
Note that we wrap x div y in SOME, since if it evaluates successfully, we must also return an option.1326Example150-00B9150-00B9.xmlBypassed handlerConsider the following:fun avert (f : unit -> string) : string =
f () handle Fail s => s ^ " averted"
| ListQueue.Empty => "empty queue averted"
When we evaluate the following two expressions, they evaluate to values: \begin {aligned} \texttt {avert (fn () => "done")} &\hookrightarrow \texttt {"done"} \\ \texttt {avert (fn () => raise Fail "explosion")} &\hookrightarrow \texttt {"explosion averted"} \end {aligned} However, avert (fn () => Int.toString (150 div 0)) raises Div, since Div is never handled by a clause of the handle expression in avert.1328Warning150-00B8150-00B8.xmlHandlers and evaluation orderConsider the following:fun puzzle (x : int) : string =
Int.toString x
handle Div => "divided by zero!"
Since the argument is evaluated first in a function application, puzzle (1 div 0) does not evaluate to "divided by zero!". Instead, it raises Div, never stepping into the definition of puzzle.1348150-00BA150-00BA.xmlExtensional equivalence with effects1330Definition150-00BB150-00BB.xmlEffectAn effect is something the evaluation of a program can do aside from returning a value.1331Example150-00BC150-00BC.xmlException effectRaising an exception is an effect.1333Example150-00BD150-00BD.xmlInfinite loopTypically, we ensure that programs terminate by well-founded recursion on some input. However, nothing prevents us from writing nonsensical programs such as the following:fun evil n = evil (n - 1)
Since evil has no base case, evaluating evil n for any n will never terminate. Thus, infinite looping is another effect: rather than return a value, one can infinite loop.1334Example150-00BI150-00BI.xmlprint effectThe print : string -> unit function performs an effect, causing the given string to be displayed in the terminal.1335Definition150-00B4150-00B4.xmlPureSay that an expression e is pure when there exists some value v such that e \hookrightarrow v without performing any observable effects.1336Definition150-00B3150-00B3.xmlExtensional equivalence with effectsWhen considering effects, we say that e_1 \cong e_2 when both:e_1 and e_2 perform indistinguishable effects; for example, they raise the same exceptions, loop infinitely, or print the same string.
If e_1 \hookrightarrow v_1 and e_2 \hookrightarrow v_2, then v_1 \cong v_2 as pure expressions (i.e., as described before).1337Example150-00BJ150-00BJ.xmlExtensional equivalence with handlersThe expressions 150 and 1 div 0 handle Div => 150 are extensionally equivalent: even though the latter raises Div initially, the exception is handled, so the result values are ultimately the same.1338Example150-00BE150-00BE.xmlNoncommutativity of addition with effectsIn general, it need not be true that e_1 \texttt { + } e_2 \cong e_2 \texttt { + } e_1. For example, if e1 = raise Fail "A" and e2 = raise Fail "B", then these are distinguishable: the first raises "A", while the second raises "B".However, if e_1 and e_2 are pure, this equivalence does hold.1339Concept150-00BK150-00BK.xmlPattern matching and purityIn the presence of effects, a function defined by pattern matching only says what happens given a pure argument, since in a function application, arguments are evaluated first.1340Example150-00BL150-00BL.xmlPattern matching with an effectful argumentRecall the slow list reverse. Its definition can be interpreted as the following two statements:\texttt {revSlow nil} \cong \texttt {nil}, and
for all pure e1 and e2, we have \texttt {revSlow (e1 :: e2)} \cong \texttt {revSlow e2 @ [e1]}.Notice that the statement for the second clause is not true for general e1 and e2 with effects. For example, if e1 is raise Fail "A" and e2 is raise Fail "B", then the former raises Fail "A" and the latter raises Fail "B".When stepping through revSlow using the second clause, we must justify that our arguments are pure.1341Definition150-00BF150-00BF.xmlTotalA value f : t1 -> t2 is total when for all values x : t1, we have that f x is a pure expression.1347Example150-00BG150-00BG.xmlTotality citation in a proofRecall the slow list reverse. Now, consider the following lemmas:1342Lemma150-00BM150-00BM.xmlTotality of revSlowWe have that revSlow is total.1343Lemma150-00BN150-00BN.xmlmap is a monoid homomorphismFor all total functions f and pure expressions e1 and e2, we have that \texttt {map f (e1 @ e2)} \cong \texttt {map f e1 @ map f e2}.1344Lemma150-00BO150-00BO.xmlTotality of map fFor all total functions f, we have that map f is total.We now prove the following theorem.
1345Theorem#173unstable-173.xml150-00BGFor all total f, \texttt {map f (revSlow l)} \cong \texttt {revSlow (map f l)}.
1346Proof#174unstable-174.xml150-00BG
We use the definitions of revSlow and map. Let l : int list be an arbitrary value; we prove the theorem statement by induction on l.
Case nil:
First, we reason about the left side:
\begin {aligned} &\texttt {map f (revSlow nil)} \\ &\cong \texttt {map f nil} &&\text {(clause 1 of \texttt {revSlow})} \\ &\cong \texttt {nil} &&\text {(clause 1 of \texttt {map})} \end {aligned}
Then, we reason about the right side:
\begin {aligned} &\texttt {revSlow (map f nil)} \\ &\cong \texttt {revSlow nil} &&\text {(clause 1 of \texttt {map})} \\ &\cong \texttt {nil} &&\text {(clause 1 of \texttt {revSlow})} \end {aligned}
Both sides are equivalent, so the case is proven.
Case x :: xs:
IH: \texttt {map f (revSlow xs)} \cong \texttt {revSlow (map f xs)}.
WTS: \texttt {map f (revSlow (x :: xs))} \cong \texttt {revSlow (map f (x :: xs))}.
First, we reason about the left side:
\begin {aligned} &\texttt {map f (revSlow (x :: xs))} \\ &\cong \texttt {map f (revSlow xs @ [x])} &&\text {(clause 2 of \texttt {revSlow})} \\ &\cong \texttt {map f (revSlow xs) @ map f [x]} &&(\ast ) \\ &\cong \texttt {map f (revSlow xs) @ [f x]} &&\text {(clauses 2 and 1 of \texttt {map})} \\ &\cong \texttt {revSlow (map f xs) @ [f x]} &&\text {(IH)} \end {aligned}
Here, the step (\ast ) is justified by the lemma that map is a monoid homomorphism, since:
f is total by assumption,
revSlow xs is pure by totality of revSlow, and
[x] is pure since it is a value.
Then, we reason about the right side:
\begin {aligned} &\texttt {revSlow (map f (x :: xs))} \\ &\cong \texttt {revSlow (f x :: map f xs)} &&\text {(clause 2 of \texttt {map})} \\ &\cong \texttt {revSlow (map f xs) @ [f x]} &&(\ast \ast ) \end {aligned}
Here, the step (\ast \ast ) is justified by clause 2 of revSlow, since:
f x is pure since f is total by assumption and
map f xs is pure by totality of map f.
Both sides are then equivalent, so the case is proven.
1381Lecture150-lect20150-lect20.xmlImperative programming II: mutable state2024725Harrison Grodin
This lecture is inspired by a similar lecture by Michael Erdmann.
1372150-00BH150-00BH.xmlReference cellsStandard ML supports mutable reference cells. However, this feature does not "infect" the purely functional code we have written before: mutation is isolated to select points in the code using types.1357Definition150-00BP150-00BP.xmlReference primitivesThe standard library includes the following signature:signature REF =
sig
type 'a ref
val ref : 'a -> 'a ref
val ! : 'a ref -> 'a
val := : 'a ref * 'a -> unit (* infix *)
(* ...some helper functions... *)
end
The type t ref represents mutable reference cells that store a value of type t.
The function ref allocates a new reference cell, where the starting value of the cell is the input.
The function ! accesses the current value of the reference cell given.
The infix function op := takes a reference cell (of type t ref) and a compatible value (of type t) and replaces the data in the reference cell with the given value.All of these definitions are available at the top level. The use of references is considered an effect.1358Concept150-00BT150-00BT.xmlEquality of reference cellsReference cells can be compared for equality using op = : 'a ref * 'a ref -> bool. This compares the "addresses", not the contained data. Every reference cell created (using ref) is fresh and not equal to any previously-defined reference cells.1361Example150-00BS150-00BS.xmlPuzzles with refConsider the following example:val r1 : int ref = ref 0
val r2 : int ref = ref 0
val () = r1 := 1
val result : int * int * bool = (!r1, !r2, r1 = r2)
The value bound to result is (1, 0, false). Alternatively, we could set r2 to be an alias for r1:val r1 : int ref = ref 0
val r2 : int ref = r1
val () = r1 := 1
val result : int * int * bool = (!r1, !r2, r1 = r2)
The value bound to result is (1, 1, true).1364Example150-00BU150-00BU.xmlPuzzles with nested refsConsider the following example:val r1 : int ref = ref 0
val r2 : int ref = ref 0
val () = r1 := 1
val r3 : int ref ref = ref r1
val () = !r3 := 2
val () = r3 := r2
val () = !r3 := 3
val result = (!r1, !r2)
Here, r3 is a reference cell containing a reference cell. The value bound to result is (2, 3). Alternatively, we could set r2 to be an alias for r1:val r1 : int ref = ref 0
val r2 : int ref = r1
val () = r1 := 1
val r3 : int ref ref = ref r1
val () = !r3 := 2
val () = r3 := r2
val () = !r3 := 3
val result = (!r1, !r2)
The value bound to result is (3, 3).1365Definition150-00BQ150-00BQ.xmlSemicolon expressionThe expression e1 ; e2 is syntactic sugar for the expression let val _ = e1 in e2 end. In other words, it evaluates e1 (running any effects but ignoring any returned value) and then evaluates e2 (keeping the effects and return value).1367Example150-00BR150-00BR.xmlAuxiliary standard library functions for imperative programmingThe following functions are sometimes useful when writing imperative code:infix 0 before
fun (x : 'a) before () : 'a = x
fun ignore (_ : 'a) : unit = ()
(* Ref.modify : ('a -> 'a) -> 'a ref -> unit *)
fun modify f r = r := f (!r)
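As a sketch of how the semicolon sugar combines with these helpers, consider a small counter (the names counter and tick are illustrative, using the modify function defined just above):val counter : int ref = ref 0
fun tick () : int =
( modify (fn n => n + 1) counter (* increment, discarding the unit result *)
; !counter (* then return the new count *)
)
(* tick () evaluates to 1, a second tick () to 2, and so on *)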
1371Example150-00BV150-00BV.xmlImperative factorial algorithmWe can implement the factorial function using imperative programming. A first attempt might look like this:val acc = ref 1
fun fact 0 = !acc
| fact n =
( acc := n * !acc
; fact (n - 1)
)
val example1 = fact 5
val example2 = fact 0
However, this code contains a bug: acc is never reset, so 120 would be bound to example2, not 1. There are two ways to fix this problem. First, we can reset the reference cell after each call:val acc = ref 1
fun fact 0 = !acc before acc := 1
| fact n =
( acc := n * !acc
; fact (n - 1)
)
Alternatively, we can allocate a fresh ref cell for each call to fact:fun fact n =
let
val acc = ref 1
fun loop 0 = !acc
| loop n =
( acc := n * !acc
; loop (n - 1)
)
in
loop n
end
Every time we call fact, we allocate a new reference cell and run a small "loop" to update it repeatedly.1380150-00BW150-00BW.xmlMutable data structures1374Example150-00BX150-00BX.xmlImperative queue signatureWe can augment the QUEUE signature to give a signature for imperative queues:signature IQUEUE =
sig
type 'a queue (* abstract *)
val empty : unit -> 'a queue
val enqueue : 'a queue -> 'a -> unit
val dequeue : 'a queue -> 'a option
end
Unlike in QUEUE, the operations in IQUEUE do not return an updated queue; they mutate the given queue.1376Example150-00BY150-00BY.xmlImperative queues using listsWe can implement the imperative queue signature using lists, adapting our implementation of abstract types:structure ListIQueue :> IQUEUE =
struct
type 'a queue = 'a list ref
fun empty () = ref nil
fun enqueue r x = r := !r @ [x]
fun dequeue r =
case !r of
nil => NONE
| x :: xs => SOME x before r := xs
end
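For example, we might use this structure as follows; note that enqueue returns unit, mutating q in place (a sketch; the bindings are illustrative):val q = ListIQueue.empty ()
val () = ListIQueue.enqueue q 1
val () = ListIQueue.enqueue q 2
val x = ListIQueue.dequeue q (* SOME 1, leaving only 2 in q *)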
The cost is the same as before, but now implemented using mutable state.1379Example150-00BZ150-00BZ.xmlImperative queues using linked listsFor constant-time efficiency, we can implement queues using linked lists. A linked list is defined as follows:datatype 'a front = Nil | Cons of 'a * 'a front ref
type 'a llist = 'a front ref
In other words, a linked list is a reference cell containing either Nil or Cons, where the tail in the Cons case is also a reference cell.structure LinkedIQueue :> IQUEUE =
struct
datatype 'a front = Nil | Cons of 'a * 'a front ref
type 'a llist = 'a front ref
(* INVARIANTs: for (front, back), we have that
* 1. !(!back) = Nil, and
* 2. !back is at the end of !front.
*)
type 'a queue = 'a llist ref * 'a llist ref
fun empty () : 'a queue =
let
val r : 'a llist = ref Nil
in
(ref r, ref r)
end
fun enqueue (_, back : 'a llist ref) (x : 'a) : unit =
let
val r = ref Nil
in
!back := Cons (x, r);
back := r
end
fun dequeue (front, _) =
case !(!front) of
Nil => NONE
| Cons (x, r) => SOME x before front := r
end
The operations work as follows.To make an empty queue, we make an empty linked list, ref Nil, and create two pointers that both point to this same linked list.
To enqueue, we create a new empty linked list r, we mutate the back of the linked list to be Cons (x, r) instead of Nil, and then we update the back pointer to the new back, r.
To dequeue, we look at the data contained at the front of the queue, and we case to see if it is Nil or Cons (x, r). In the former case, the queue has no data remaining. In the latter case, we give back SOME x, and we change the front to point to the tail, r.All three operations are implemented to have constant time. However, since we use mutation, they are not safe for use in parallel: different processes enqueueing and dequeueing in parallel could lead to race conditions.1408Lecture150-lect21150-lect21.xmlAmortized analysis2024730Harrison Grodin
In this lecture, we will explore amortized analysis viewed through structure-preserving maps, using an asymmetric analogue of representation independence.1399150-00C0150-00C0.xmlHomomorphisms: structure-preserving transformationsWe consider what it means to give a structure-preserving transformation between structures of the same signature.1388Example150-00C1150-00C1.xmlType transformationConsider the following simple signature:signature S =
sig
type t
end
A transformation from M1 : S to M2 : S is a function M1.t -> M2.t.1391Example150-00C2150-00C2.xmlORDERED type class transformationRecall the ORDERED type class:signature ORDERED =
sig
type t
val compare : t * t -> order
end
Consider the following implementations:structure NatOrdered : ORDERED =
struct
type t = int (* INVARIANT: non-negative *)
val compare = Int.compare
end
structure StringOrdered : ORDERED =
struct
type t = string
val compare = String.compare
end
A structure-preserving transformation from NatOrdered to StringOrdered should consist of:A function f : NatOrdered.t -> StringOrdered.t, i.e. a function f : int -> string, such that
for all i1 and i2, we have \texttt {Int.compare (i1, i2)} \cong \texttt {String.compare (f i1, f i2)}, also visualized as the following commutative diagram, where f *** g is defined as fn (x, y) => (f x, g y):
\usepackage {tikz,tikz-cd}
\usetikzlibrary {arrows}
\usetikzlibrary {backgrounds,fit,positioning,calc,shapes}
\tikzset {
diagram/.style = {
on grid,
node distance=4cm,
commutative diagrams/every diagram,
line width = .5pt,
every node/.append style = {
commutative diagrams/every cell,
}
}
}
\begin {tikzpicture}[diagram]
\node (nw) {$\texttt {int * int}$};
\node [below = of nw] (ne) {$\texttt {string * string}$};
\node [right = of ne] (se) {$\texttt {order}$};
\draw [->] (nw) to node[sloped,above] {$\texttt {Int.compare}$} (se);
\draw [->] (ne) to node[below] {$\texttt {String.compare}$} (se);
\draw [->] (nw) to node[left] {$\texttt {f *** f}$} (ne);
\end {tikzpicture}
Some examples of such functions f would be:fn i => repeat #"a" i,
fn i => "b" ^ repeat #"a" i, and
fn i => repeat #"a" (log10 i) ^ Int.toString i.Some non-examples would be:Int.toString and
fn _ => "".1393Example150-00C3150-00C3.xmlSCALE transformationConsider the following signature, SCALE:signature SCALE =
sig
type t
val scale : int -> t -> t
end
A structure-preserving transformation from M1 : SCALE to M2 : SCALE should consist of:A function f : M1.t -> M2.t such that
for all i : int, we have \texttt {f o M1.scale i} \cong \texttt {M2.scale i o f}, also visualized as the following commutative diagram:
\usepackage {tikz,tikz-cd}
\usetikzlibrary {arrows}
\usetikzlibrary {backgrounds,fit,positioning,calc,shapes}
\tikzset {
diagram/.style = {
on grid,
node distance=4cm,
commutative diagrams/every diagram,
line width = .5pt,
every node/.append style = {
commutative diagrams/every cell,
}
}
}
\begin {tikzpicture}[diagram]
\node (nw) {$\texttt {M1.t}$};
\node [right = of nw] (ne) {$\texttt {M1.t}$};
\node [below = of nw] (sw) {$\texttt {M2.t}$};
\node [below = of ne] (se) {$\texttt {M2.t}$};
\draw [->] (nw) to node[above] {$\texttt {M1.scale i}$} (ne);
\draw [->] (sw) to node[above] {$\texttt {M2.scale i}$} (se);
\draw [->] (nw) to node[right] {$\texttt {f}$} (sw);
\draw [->] (ne) to node[right] {$\texttt {f}$} (se);
\end {tikzpicture}
1397Example150-00C4150-00C4.xmlQUEUE transformationConsider the following simplification of the QUEUE signature:signature QUEUE =
sig
type queue
val empty : queue
val enqueue : int -> queue -> queue
val dequeue : queue -> int * queue
end
We fix the element type to be int. Also, we avoid options in the dequeue output; when the queue is empty, we always dequeue 0. As before, we can implement queues using lists or pairs of lists:structure LQ : QUEUE =
struct
type queue = int list
val empty = nil
fun enqueue x l = l @ [x]
fun dequeue nil = (0, nil)
| dequeue (x :: xs) = (x, xs)
end
structure BQ : QUEUE =
struct
type queue = int list * int list
val empty = (nil, nil)
fun enqueue x (front, back) = (front, x :: back)
fun dequeue (x :: front, back) = (x, (front, back))
| dequeue (nil, back) =
case List.rev back of
nil => (0, (nil, nil))
| x :: front => (x, (front, nil))
end
Note: although we typically use opaque ascription for the QUEUE signature, we will only use transparent ascription in this lecture.A structure-preserving transformation from BQ : QUEUE to LQ : QUEUE should consist of:A function f : BQ.queue -> LQ.queue, i.e. a function f : int list * int list -> int list, such that
\texttt {f BQ.empty} \cong \texttt {LQ.empty},
for all i : int, we have \texttt {f o BQ.enqueue i} \cong \texttt {LQ.enqueue i o f}, also visualized as the following commutative diagram:
\usepackage {tikz,tikz-cd}
\usetikzlibrary {arrows}
\usetikzlibrary {backgrounds,fit,positioning,calc,shapes}
\tikzset {
diagram/.style = {
on grid,
node distance=6cm,
commutative diagrams/every diagram,
line width = .5pt,
every node/.append style = {
commutative diagrams/every cell,
}
}
}
\begin {tikzpicture}[diagram]
\node (nw) {$\texttt {int list * int list}$};
\node [right = of nw] (ne) {$\texttt {int list * int list}$};
\node [below = of nw] (sw) {$\texttt {int list}$};
\node [below = of ne] (se) {$\texttt {int list}$};
\draw [->] (nw) to node[above] {$\texttt {BQ.enqueue i}$} (ne);
\draw [->] (sw) to node[above] {$\texttt {LQ.enqueue i}$} (se);
\draw [->] (nw) to node[right] {$\texttt {f}$} (sw);
\draw [->] (ne) to node[right] {$\texttt {f}$} (se);
\end {tikzpicture}
we have \texttt {(Fn.id *** f) o BQ.dequeue} \cong \texttt {LQ.dequeue o f}, also visualized as the following commutative diagram:
\usepackage {tikz,tikz-cd}
\usetikzlibrary {arrows}
\usetikzlibrary {backgrounds,fit,positioning,calc,shapes}
\tikzset {
diagram/.style = {
on grid,
node distance=6cm,
commutative diagrams/every diagram,
line width = .5pt,
every node/.append style = {
commutative diagrams/every cell,
}
}
}
\begin {tikzpicture}[diagram]
\node (nw) {$\texttt {int list * int list}$};
\node [right = of nw] (ne) {$\texttt {int * (int list * int list)}$};
\node [below = of nw] (sw) {$\texttt {int list}$};
\node [below = of ne] (se) {$\texttt {int * int list}$};
\draw [->] (nw) to node[above] {$\texttt {BQ.dequeue}$} (ne);
\draw [->] (sw) to node[above] {$\texttt {LQ.dequeue}$} (se);
\draw [->] (nw) to node[right] {$\texttt {f}$} (sw);
\draw [->] (ne) to node[right] {$\texttt {Fn.id *** f}$} (se);
\end {tikzpicture}
The classic example of such a function f is:fun f ((front, back) : int list * int list) : int list =
front @ List.rev back
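As a concrete sanity check of the enqueue square (a sketch, relying on the transparent ascription noted above so that queue values are visible as lists):val b : BQ.queue = BQ.enqueue 3 (BQ.enqueue 2 (BQ.enqueue 1 BQ.empty))
(* b is (nil, [3, 2, 1]), so f b is [1, 2, 3] *)
val l : LQ.queue = LQ.enqueue 3 (LQ.enqueue 2 (LQ.enqueue 1 LQ.empty))
(* l is [1, 2, 3], matching f b, as the enqueue square requires *)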
Notice the close parallels to the representation independence proof of the equivalence of queues! If we replace the function f with a relation R, we recover the representation independence conditions and proof exactly.1398Remark150-00C5150-00C5.xmlStacked commutative squaresNotice that the commutative squares of the QUEUE transformation can be stacked next to each other. For example:
\usepackage {tikz,tikz-cd}
\usetikzlibrary {arrows}
\usetikzlibrary {backgrounds,fit,positioning,calc,shapes}
\tikzset {
diagram/.style = {
on grid,
node distance=6cm,
commutative diagrams/every diagram,
line width = .5pt,
every node/.append style = {
commutative diagrams/every cell,
}
}
}
\begin {tikzpicture}[diagram]
\node (nw) {$\texttt {int list * int list}$};
\node [right = of nw] (ne) {$\texttt {int * (int list * int list)}$};
\node [below = of nw] (sw) {$\texttt {int list}$};
\node [below = of ne] (se) {$\texttt {int * int list}$};
\node [left = of nw] (nww) {$\texttt {int list * int list}$};
\node [left = of sw] (sww) {$\texttt {int list}$};
\node [left = of nww] (nwww) {$\texttt {int list * int list}$};
\node [left = of sww] (swww) {$\texttt {int list}$};
\draw [->] (nw) to node[above] {$\texttt {BQ.dequeue}$} (ne);
\draw [->] (sw) to node[above] {$\texttt {LQ.dequeue}$} (se);
\draw [->] (nw) to node[right] {$\texttt {f}$} (sw);
\draw [->] (ne) to node[right] {$\texttt {Fn.id *** f}$} (se);
\draw [->] (nww) to node[above] {$\texttt {BQ.enqueue i}$} (nw);
\draw [->] (sww) to node[above] {$\texttt {LQ.enqueue i}$} (sw);
\draw [->] (nww) to node[right] {$\texttt {f}$} (sww);
\draw [->] (nwww) to node[above] {$\texttt {BQ.enqueue i}$} (nww);
\draw [->] (swww) to node[above] {$\texttt {LQ.enqueue i}$} (sww);
\draw [->] (nwww) to node[right] {$\texttt {f}$} (swww);
\end {tikzpicture}
1407150-00C6150-00C6.xmlAmortized analysis via homomorphisms1400Idea150-00C7150-00C7.xmlAmortized analysisAmortized analysis formalizes the idea that an expensive operation can occur infrequently enough that the high cost "averages out" over time.For example, if you pay $300 of rent at the end of each month (30 days), you can fictionally imagine that you pay $10 per day. While reality and fiction do not line up exactly, they do at a large scale: at the end of a 30-day period, both reality and fiction agree that $300 must be the total paid during the month.1401Concept150-00C8150-00C8.xmlVisualizing cost with the print effectTo visualize the cost of a program, we can run print "$" every time our cost model says we used one abstract unit of cost.1403Example150-00C9150-00C9.xmlCost-annotated queuesWe can adapt our QUEUE implementations to visualize cost:structure BQ : QUEUE =
struct
type queue = int list * int list
val empty = (nil, nil)
fun enqueue x (front, back) = (front, x :: back)
fun dequeue (x :: front, back) = (x, (front, back))
| dequeue (nil, back) =
case (print (repeat #"$" (List.length back)); List.rev back) of
nil => (0, (nil, nil))
| x :: front => (x, (front, nil))
end
structure LQ : QUEUE =
struct
type queue = int list
val empty = nil
fun enqueue x l = (print "$"; l @ [x])
fun dequeue nil = (0, nil)
| dequeue (x :: xs) = (x, xs)
end
The BQ annotation is realistic, tracking the number of recursive calls. The LQ annotation, on the other hand, is entirely fictional: we will only read LQ as a specification of what BQ is intended to cost, up to amortization.1406Example150-00CA150-00CA.xmlTransformation for cost-visualized queuesIn the QUEUE transformation example, we gave a function that transformed BQ to LQ:fun f ((front, back) : int list * int list) : int list =
front @ List.rev back
Given our cost-annotated queues, is this still a transformation between the QUEUE implementations? While some of the squares commute, such as the stacked commutative squares shown, the requisite conditions do not all hold.To fix this, we can make f itself visualize some fictional cost:fun f ((front, back) : int list * int list) : int list =
( print (repeat #"$" (List.length back))
; front @ List.rev back
)
This causes all of the necessary conditions to hold. The number of $s printed by f represents the amount of $s that LQ has pretended we printed even though BQ hasn't gotten around to it yet. Traditionally, this quantity is called the potential of a data structure state (here, (front, back)).This function f, along with the proof that it satisfies the conditions, justifies that the operations for purely-functional batched queues have amortized constant cost: the same as imperative queues using linked lists.1511Lecture150-lect22150-lect22.xmlReview202481Harrison Grodin
This lecture is inspired by an analogous lecture by Michael Erdmann and Brandon Wu.
1424150-00CB150-00CB.xmlImperative programmingLesson: mutability is not so bad, as long as it's contained.1416Example150-00BX150-00BX.xmlImperative queue signatureWe can augment the QUEUE signature to give a signature for imperative queues:signature IQUEUE =
sig
type 'a queue (* abstract *)
val empty : unit -> 'a queue
val enqueue : 'a queue -> 'a -> unit
val dequeue : 'a queue -> 'a option
end
Unlike in QUEUE, the operations in IQUEUE do not return an updated queue; they mutate the given queue.1417Definition150-00BP150-00BP.xmlReference primitivesThe standard library includes the following signature:signature REF =
sig
type 'a ref
val ref : 'a -> 'a ref
val ! : 'a ref -> 'a
val := : 'a ref * 'a -> unit (* infix *)
(* ...some helper functions... *)
end
The type t ref represents mutable reference cells that store a value of type t.
The function ref allocates a new reference cell, where the starting value of the cell is the input.
The function ! accesses the current value of the reference cell given.
The infix function op := takes a reference cell (of type t ref) and a compatible value (of type t) and replaces the data in the reference cell with the given value.All of these definitions are available at the top level. The use of references is considered an effect.1418Definition150-00BF150-00BF.xmlTotalA value f : t1 -> t2 is total when for all values x : t1, we have that f x is a pure expression.1419Concept150-00BK150-00BK.xmlPattern matching and purityIn the presence of effects, a function defined by pattern matching only says what happens given a pure argument, since in a function application, arguments are evaluated first.1420Definition150-00B3150-00B3.xmlExtensional equivalence with effectsWhen considering exceptions, we say that e_1 \cong e_2 when both:e_1 and e_2 perform indistinguishable effects; for example, they raise the same exceptions, loop infinitely, or print the same string.
If e_1 \hookrightarrow v_1 and e_2 \hookrightarrow v_2, then v_1 \cong v_2 as pure expressions (i.e., as described before).1421Definition150-00B4150-00B4.xmlPureSay that an expression e is pure when there exists some value v such that e \hookrightarrow v without performing any observable effects.1422Definition150-00BB150-00BB.xmlEffectAn effect is something the evaluation of a program can do aside from returning a value.1423Concept150-00AZ150-00AZ.xmlraise expressionThe expression raise Fail "TODO" has most general type 'a, filling in for any type we wish. More generally, raise e has most general type 'a, for any exception e.Unlike other expressions, it does not evaluate to any value.1432150-00CC150-00CC.xmlSequencesLesson: functional programming and mathematical reasoning provide elegant primitives for parallelism.1426Definition150-00AB150-00AB.xmlSequence reduceThe function Seq.reduce combines the data in a sequence using a monoid:(* Seq.reduce : ('a * 'a -> 'a) -> 'a -> 'a Seq.t -> 'a
* REQUIRES: g and z form a monoid
* ENSURES: Seq.reduce g z <x0, x1, ..., x_{n-1}> ~= g (x0, g (x1, ..., g (x_{n-1}, z)))
*)
Notice that the behavior of reduce exactly mirrors list foldr, and its type is an instance of the type of list foldr. However, thanks to the assumption that g and z form a monoid, reduce is more efficient than foldr in parallel.
Its cost graph is depicted as follows:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (start) at (0, 1) {$\bullet $};
\node [hexagon] (g0) at (-2.5, 0) {\texttt {g}};
\node [hexagon] (g1) at (-1, 0) {\texttt {g}};
\node at (0, 0) {$\cdots $};
\node [hexagon] (g2) at (1, 0) {\texttt {g}};
\node [hexagon] (g3) at (2.5, 0) {\texttt {g}};
\node [hexagon] (gg0) at (-1.75, -1) {\texttt {g}};
\node [hexagon] (gg1) at (1.75, -1) {\texttt {g}};
\node [hexagon] (ggg) at (0, -2) {\texttt {g}};
\path (start) edge (g0);
\path (start) edge (g1);
\path (start) edge (g2);
\path (start) edge (g3);
\path (g0) edge (gg0);
\path (g1) edge (gg0);
\path (g2) edge (gg1);
\path (g3) edge (gg1);
\path (gg0) edge (ggg);
\path (gg1) edge (ggg);
\end {tikzpicture}
Its work and span depend on the cost of g, but assuming g is constant-time, then reduce g z S has work \mathcal {O}(n) and span \mathcal {O}(\log n).
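The implementation of Seq.reduce is abstract, but the balanced combination strategy pictured above can be sketched on plain lists (a toy sketch with a hypothetical name, not the course implementation):

```sml
(* reduceList : ('a * 'a -> 'a) -> 'a -> 'a list -> 'a
 * A toy sketch of the divide-and-conquer strategy behind reduce, on plain
 * lists. REQUIRES: g and z form a monoid. *)
fun reduceList (g : 'a * 'a -> 'a) (z : 'a) (xs : 'a list) : 'a =
  case xs of
    nil => z
  | [x] => x
  | _ =>
      let
        val half = length xs div 2
        val left = List.take (xs, half)
        val right = List.drop (xs, half)
      in
        (* the two recursive calls are independent, so they could run in parallel *)
        g (reduceList g z left, reduceList g z right)
      end

val ten = reduceList (op +) 0 [1, 2, 3, 4]  (* ==> 10, using the monoid (int, 0, +) *)
```

Associativity of g is what makes this tree-shaped regrouping agree with the right-nested grouping in the ENSURES clause.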
1427Definition150-00A6150-00A6.xmlMonoidA monoid consists of:a type t,
some z : t,
and some g : t * t -> t such that
z is an identity element for g, and
g is an associative function.1428Concept150-009X150-009X.xmlLimited sequence signature: indexed collectionThe sequence signature includes the following specifications:signature SEQUENCE =
sig
type 'a t (* abstract *)
type 'a seq = 'a t (* concrete *)
val tabulate : (int -> 'a) -> int -> 'a seq
val length : 'a seq -> int
val nth : 'a seq -> int -> 'a
(* ...more to come... *)
end
The abstract type 'a t represents a sequence of 'as, where 'a seq is an alias for signature readability.The implementation of SEQUENCE is called Seq:structure Seq :> SEQUENCE = (* ... *)
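For example, a client of this signature might build and inspect a sequence as follows (a sketch against the operations above):

```sml
(* A small client of the SEQUENCE operations above. *)
val squares : int Seq.seq = Seq.tabulate (fn i => i * i) 5
(* squares represents <0, 1, 4, 9, 16> *)
val five : int = Seq.length squares   (* ==> 5 *)
val nine : int = Seq.nth squares 3    (* ==> 9 *)
```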
The full signature and documentation are available on the course website.1429Definition150-009V150-009V.xmlWork and span of a cost graphThe work of a cost graph is the sum of the costs of all hexagonal nodes in the graph.
The span of a cost graph is the sum of the costs of the hexagonal nodes along the highest-cost path from the start node to the end node.1430Definition150-009S150-009S.xmlCost graphA cost graph is a visualization technique for parallel processes consisting of a directed acyclic graph with designated start and end nodes. They are defined inductively as follows, where we implicitly treat all edges as top-to-bottom:
Atomic units are variables representing the cost of an abstract operation, drawn using a hexagon:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node [hexagon] {\texttt {f}};
\end {tikzpicture}
There is an empty cost graph 0:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node {$\bullet $};
\end {tikzpicture}
Two cost graphs G_1 and G_2 can be composed in sequence, written G_1 \triangleright G_2, representing data dependency:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (G1) at (0,1) {$G_1$};
\node (G2) at (0,0) {$G_2$};
\path (G1) edge (G2);
\end {tikzpicture}
Two cost graphs G_1 and G_2 can be composed in parallel, written G_1 \otimes G_2, representing data independence:
\usepackage {tikz}
\usetikzlibrary {arrows,shapes.geometric}
\tikzset {
hexagon/.style = {text centered, regular polygon, regular polygon sides = 6, inner sep=0pt, draw=black, minimum width=1.5em}
}
\begin {tikzpicture}[->]
\node (G1) at (-1,0) {$G_1$};
\node (G2) at (1,0) {$G_2$};
\node (start) at (0,1) {$\bullet $};
\node (end) at (0,-1) {$\bullet $};
\path (start) edge (G1);
\path (start) edge (G2);
\path (G1) edge (end);
\path (G2) edge (end);
\end {tikzpicture}
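From these constructions and the definitions of work and span, the usual composition laws follow; sequential composition adds spans, while parallel composition takes the maximum:

```latex
\begin{aligned}
W(G_1 \triangleright G_2) &= W(G_1) + W(G_2) & S(G_1 \triangleright G_2) &= S(G_1) + S(G_2) \\
W(G_1 \otimes G_2) &= W(G_1) + W(G_2) & S(G_1 \otimes G_2) &= \max(S(G_1), S(G_2))
\end{aligned}
```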
1431Principle150-009P150-009P.xmlFunctional parallelismParallelism and functional programming go hand-in-hand.At a low level, parallelism involves scheduling work to processors;
but at a high level, parallelism involves indicating which expressions can be evaluated in parallel, without baking in a schedule.Functional programming helps:Since there are no effects (like memory updates) available, evaluation order doesn't matter, and race conditions are impossible to even describe in code.
Higher-order functions and abstract types allow complex parallelism techniques to be implemented under the hood but retain a simple interface.
Work and span analysis lets us predict the parallel speedup without fixing the number of processors in advance.1441150-00CD150-00CD.xmlModulesLesson: code should be organized around its interface, hiding irrelevant implementation details and maintaining internal invariants as desired.1434Definition150-009H150-009H.xmlRed-black invariantsA full, balanced tree has the same number of nodes on every path from the root to each Empty. However, such trees only can have 2^d - 1 nodes, where d is the height (depth) of the tree. In order to maintain a similar invariant, we color some nodes black and some nodes red and only count the black nodes. The red nodes are just to fix "off-by-one" errors, where we want to add more data to a tree but don't want to increase the black height. This leads us to the following pair of invariants.The red-black tree invariants require that:
Every path from the root to each Empty have the same number of black nodes, called the black height. (We treat Empty as black with black height zero.)
There are no two red nodes adjacent to each other (referred to as red-red violations), i.e. every red parent node has two black child nodes.
The first invariant guarantees that the trees are balanced ignoring red nodes, and the second invariant ensures that there aren't "too many" red nodes in a given tree.1435Goal150-009G150-009G.xmlSelf-balancing binary search treeOur implementation of dictionaries using trees has a major cost issue: while the operations are efficient (logarithmic time) when the tree is balanced, nothing prevents the tree from getting unbalanced.We hope to implement dictionaries using trees with invariants that force them to remain balanced. Recall from earlier that a perfectly balanced tree has depth \log _2(n + 1) when there are n nodes in the tree.1436Concept150-0098150-0098.xmlFunctorA functor is a function that takes in a structure and produces another structure. The analogy is:
Expression Level
Module Level
type
signature
expression
structure
function
functor
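As a small sketch of the analogy (all names hypothetical), a functor consumes one structure and produces another:

```sml
signature ORD =
sig
  type t
  val compare : t * t -> order
end

(* MaxOf is a functor: it maps any ORD structure to a structure with a max. *)
functor MaxOf (O : ORD) =
struct
  fun max (x : O.t, y : O.t) : O.t =
    case O.compare (x, y) of LESS => y | _ => x
end

structure IntMax = MaxOf (struct type t = int val compare = Int.compare end)
val five = IntMax.max (3, 5)  (* ==> 5 *)
```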
(Unfortunately, ideas such as "functors are values", "higher-order functors", and "functor signatures" are not present in Standard ML itself.)1437Concept150-0093150-0093.xmlVarieties of types in signaturesEvery type in a signature can be annotated to be abstract, parameter, or concrete.If the type is unspecified via type t, it can be:
abstract, if it is meant to be hidden with opaque ascription; or
parameter, if it is meant to be known to clients with transparent ascription.
If the type is specified via type t = ..., it is concrete.1438Definition150-0091150-0091.xmlType classA type class is a signature containing a type parameter (meant to be transparent) alongside some operations involving the type.signature MY_TYPE_CLASS =
sig
type t (* parameter *)
val f1 : (* ...involving t... *)
val f2 : (* ...involving t... *)
(* ... *)
end
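Concretely, a type class of comparable types and a structure belonging to it might look like this (hypothetical names; note the transparent ascription):

```sml
signature ORDERED =
sig
  type t (* parameter *)
  val compare : t * t -> order
end

(* Transparent ascription: clients still know that t = string. *)
structure StringOrdered : ORDERED =
struct
  type t = string
  val compare = String.compare
end
```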
The type should be transparent, since a client is meant to use the operations freely. Type classes do not hide type information; they simply classify types supporting some operations.1439Concept150-008V150-008V.xmlStructure equivalence via representation independenceTwo structures M1, M2 : S are equivalent when:For each abstract type t, we give a relation R_\texttt {t}(-, -) relating M1.t to M2.t.
All values declared are \cong , where R_\texttt {t} is taken as the notion of equivalence for type t.1440Example150-008S150-008S.xmlDictionary signatureWe can define a signature for dictionaries, which are (finite) mappings from keys (here, strings) to values (here, 'a):signature DICT =
sig
type key = string (* concrete *)
type 'a entry = key * 'a (* concrete *)
type 'a dict (* abstract *)
val empty : 'a dict
val find : key -> 'a dict -> 'a option
val insert : 'a entry -> 'a dict -> 'a dict
end
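Assuming a structure Dict :> DICT (the structure name is hypothetical), a client might use the signature as follows; the results reflect the intended dictionary behavior:

```sml
val d : int Dict.dict =
  Dict.insert ("cat", 1) (Dict.insert ("dog", 2) Dict.empty)

val r1 = Dict.find "dog" d    (* ==> SOME 2 *)
val r2 = Dict.find "bird" d   (* ==> NONE *)
```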
1449150-00CE150-00CE.xmlRegular expressionsLesson: specification drives implementation, especially with complex code.1443Example150-0085150-0085.xmlUnion of two machinesWe can take the union of two machines, running them in parallel and using orelse to see if either will accept:(* plus : machine * machine -> machine
* REQUIRES: true
* ENSURES: A(plus (m1, m2)) = A(m1) union A(m2)
*)
fun plus (m1, m2) =
Machine
( status m1 orelse status m2
, fn c => plus (feed m1 c, feed m2 c)
)
1444Example150-0084150-0084.xmlAccept single character machineUsing the always-reject machine and accept empty string machine, we can implement a machine that only accepts the string "a":(* char : char -> machine
* REQUIRES: true
* ENSURES: A(char a) = {"a"}
*)
fun char a =
Machine (false, fn c => if a = c then one () else zero ())
We do not accept the empty string initially. After receiving character c, we check if it is equal to a. If so, we provide accept empty string machine, accepting the empty string afterwards; if not, we provide always-reject machine, failing to accept.1445Definition150-007Z150-007Z.xmlLazy state machineWe define state machines (sometimes known as automata) as a lazy datatype like streams, but instead of having a single tail via unit ->, we have one tail per character with char ->.datatype machine = Machine of bool * (char -> machine)
We always expect a current value of type bool, representing whether or not the machine is in an accepting state (i.e., would accept the empty string). We could suspend the bool, but we choose not to for convenience.Similar to head and tail for streams, we define the following helpers:(* status : machine -> bool *)
fun status (Machine (b, _)) = b
(* feed : machine -> char -> machine *)
fun feed (Machine (_, f)) c = f c
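The always-reject machine and the accept-empty-string machine referenced elsewhere in these notes can be sketched as follows (a sketch, consistent with how char uses them):

```sml
(* zero : unit -> machine, accepting no strings: A(zero ()) = {} *)
fun zero () = Machine (false, fn _ => zero ())

(* one : unit -> machine, accepting exactly the empty string: A(one ()) = {""} *)
fun one () = Machine (true, fn _ => zero ())
```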
1446Algorithm150-007L150-007L.xmlThe match algorithmWe implement this specification as follows:infix <<
(* op << : char list * char list -> bool
* REQUIRES: s' is a suffix of s
* ENSURES:
* s' << s ==> true iff s' is a proper suffix of s, and
* s' << s ==> false iff s' = s.
*)
fun s' << s = length s' < length s
(* match : regexp -> char list -> (char list -> bool) -> bool
* REQUIRES: true
* ENSURES:
* match r s p ~= true iff there exist x and y with x @ y ~= s and
* 1. x in L(r) and
* 2. p y ~= true.
*)
fun match (r : regexp) (s : char list) (p : char list -> bool) : bool =
case r of
Char a =>
( case s of
nil => false
| c :: cs => a = c andalso p cs
)
| Zero => false
| One => p s
| Plus (r1, r2) => match r1 s p orelse match r2 s p
| Times (r1, r2) => match r1 s (fn s' => match r2 s' p)
| Star r' =>
p s orelse
match r' s (fn s' => s' << s andalso match (Star r') s' p)
The first four cases are similar to the inefficient implementation, using a predicate p : char list -> bool in place of List.null. The Times and Star cases are more interesting:In the Times (r1, r2) case, we recursively change the predicate being used on the tail. We match s against r1, and then we ask that the remainder match r2, which in turn asks that its remainder meets p as needed.
In the Star r' case, we essentially match for Plus (One, Times (r', Star r')). First, we check if s is already sufficient. If not, we match s against r' once, and ask that the remainder s' match Star r' again.In all cases but Star r', we are going by recursion on the regular expression. In the second branch of the Star r' clause, though, we match against Star r' again. To guarantee termination, we make sure that s' is strictly smaller than s, so this function goes by lexicographic (dictionary-order) recursion on the regular expression r and then the character list s. Either r shrinks, or r stays the same size and s shrinks.1447Idea150-007S150-007S.xmlaccept via auxiliary function match(* match : regexp -> char list -> (char list -> bool) -> bool
* REQUIRES: true
* ENSURES:
* match r s p ~= true iff there exist x and y with x @ y ~= s and
* 1. x in L(r) and
* 2. p y ~= true.
*)
Using this stronger function, we can implement accept as desired:(* accept : regexp -> char list -> bool
* REQUIRES: true
* ENSURES: accept r s ~= true iff s in L(r)
*)
fun accept (r : regexp) (s : char list) : bool =
match r s List.null
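For example, using the regexp datatype, the regular expression below denotes the language ab*:

```sml
val r = Times (Char #"a", Star (Char #"b"))

val yes = accept r [#"a", #"b", #"b"]  (* ==> true:  "abb" is in L(ab*) *)
val no  = accept r [#"b"]              (* ==> false: "b" is not in L(ab*) *)
```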
1448Definition150-007K150-007K.xmlregexp datatypedatatype regexp
= Char of char
| Zero
| One
| Plus of regexp * regexp
| Times of regexp * regexp
| Star of regexp
1455150-00CF150-00CF.xmlLazy programmingLesson: some data (inductive) is always available, while some data (coinductive) is only available on demand.1451Concept150-007D150-007D.xmlExtensional equivalence at stream type: coinductionLet t be an arbitrary type, and let s0 and s0' be of type t stream. To show that \texttt {s0} \cong \texttt {s0'}:Choose a relation R(-, -) on pairs of t streams that relates pairs of streams that you expect to be equivalent.
Start State: Show that R(\texttt {s0}, \texttt {s0'}), guaranteeing that the streams you care about are related.
Preservation: Then, show that for all s and s', if R(\texttt {s}, \texttt {s'}), then:
the heads are the same, \texttt {head s} \cong \texttt {head s'} (the "co-base case", since no more stream data comes after the head); and
the tails stay related, R(\texttt {tail s}, \texttt {tail s'}) (the "coinductive conclusion", dual to the inductive hypothesis).This proof technique is called coinduction.Notice that this definition has some similarities with extensional equivalence at function types: both check that you see equivalent results when you use the expressions in equivalent ways.1452Example150-0076150-0076.xmlStream of natural numbersHow could we make the stream 0, 1, 2, 3, 4, ... of all natural numbers? We might try:val nats : int stream =
Stream (fn () => (0,
Stream (fn () => (1,
Stream (fn () => (2, ...
))))))
However, we can never finish typing the .... Instead, we compute something more general: all of the natural numbers starting from n. Then, nats is a special case, choosing 0 for n.(* natsFrom : int -> int stream
* REQUIRES: true
* ENSURES: natsFrom n ==> s, where the elements of s are n, (n + 1), (n + 2), (n + 3), ...
*)
fun natsFrom (n : int) : int stream =
Stream (fn () => (n, natsFrom (n + 1)))
val nats : int stream = natsFrom 0
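To observe a finite prefix of nats, we can write a helper that exposes the stream repeatedly (a hypothetical helper, using expose from the stream definition):

```sml
(* take : 'a stream -> int -> 'a list
 * ENSURES: take s n ==> the list of the first n elements of s *)
fun take (s : 'a stream) (n : int) : 'a list =
  if n <= 0 then nil
  else
    let val (x, s') = expose s
    in x :: take s' (n - 1)
    end

val firstFive = take nats 5  (* ==> [0, 1, 2, 3, 4] *)
```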
1453Concept150-0077150-0077.xmlCorecursionDefinitions such as natsFrom do not go by recursion on an input; nothing needs to ever shrink. Instead, they go by corecursion, producing a finite amount of data but offering to produce more if desired.1454Definition150-0073150-0073.xmlStreamUsing a suspension, we can define a type of streams as follows:datatype 'a stream = Stream of unit -> 'a * 'a stream
Here, Stream takes the role of ::, but stores a suspension of the first element together with the remainder of the stream.Note that in this formulation, every stream is infinite.The following helper function computes the first element of a stream and its tail:(* expose : 'a stream -> 'a * 'a stream *)
fun expose (Stream susp : 'a stream) : 'a * 'a stream = susp ()
We call the first element of a stream its head, and the remainder its tail.fun fst (x, y) = x
fun snd (x, y) = y
fun head (s : 'a stream) : 'a = fst (expose s)
fun tail (s : 'a stream) : 'a stream = snd (expose s)
1468150-00CG150-00CG.xmlHigher-order functionsLesson: functions are data, and recurring patterns justify abstraction.1457Concept150-006R150-006R.xmlBind abstractionWe previously saw bind, which takes a function f : 'a -> 'b list and a list 'a list and applies the function on each 'a to get a resulting flattened 'b list.This specification can be generalized beyond 'a list to arbitrary types 'a t:(* bind : ('a -> 'b t) -> 'a t -> 'b t
* REQUIRES: true
* ENSURES: ...
*)
The ENSURES should contain some conditions similar to those given for the map abstraction, but we elide them in this class.We can always implement the infix >>= using a bind implementation:fun (x : 'a t) >>= (f : 'a -> 'b t) : 'b t = bind f x
1458Concept150-006L150-006L.xmlList bindThe function bind takes in a function f : 'a -> 'b list that produces as many 'bs as it wishes; we accumulate all of them in a list.(* bind : ('a -> 'b list) -> 'a list -> 'b list *)
fun bind f nil = nil
| bind f (x :: xs) = f x @ bind f xs
It generalizes list map, whose function input must always produce exactly one 'b.1459Concept150-006H150-006H.xmlFold abstractionWe previously saw foldr. Crucially, it sent [x1, x2, ..., xn], i.e., op:: (x1, op:: (x2, ..., op:: (xn, nil))) to f (x1, f (x2, ..., f (xn, init))) by replacing op:: with f and nil with init.If we rewrite the list datatype as follows:datatype 'a list = Cons of 'a * 'a list | Nil
We might as well write foldr as:(* foldr : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b *)
fun foldr (cons : 'a * 'b -> 'b) (nil : 'b) (l : 'a list) : 'b =
case l of
Cons (x, xs) => cons (x, foldr cons nil xs)
| Nil => nil
The type of each argument matches the type of the constructor, swapping 'a list for 'b. Here, cons is just a function (not a constructor!) to replace every Cons with, and nil is just a value to replace every Nil with.The general recipe is as follows:For each constructor, replace the name of the type with 'b, including recursive uses.
Take in each of these functions/values meant to replace the constructor as arguments.
In the implementation, replace each constructor with its function, performing recursive calls on substructures if there are any.For example:We have Cons : 'a * 'a list -> 'a list and Nil : 'a list, so we get cons : 'a * 'b -> 'b and nil : 'b.
We take in cons and nil as arguments.
The implementation is as above.This perspective justifies the universality of list foldr.1460Concept150-006D150-006D.xmlMap abstractionWe previously saw map, which takes a function f : 'a -> 'b and a list 'a list and applies the function on each 'a to get a resulting 'b list.This specification can be generalized beyond 'a list to arbitrary types 'a t:(* map : ('a -> 'b) -> 'a t -> 'b t
* REQUIRES: true
* ENSURES:
* - map id = id, ie map id s = s
* - map (f o g) = map f o map g, ie map f (map g s) = map (f o g) s
*)
In other words, the ENSURES guarantees that map is structure-preserving.1461Concept150-006P150-006P.xmlPipe functionThe following function, pronounced "pipe", is useful for building data pipelines:infix 4 |>
(* op |> : 'a * ('a -> 'b) -> 'b *)
fun x |> f = f x
1462Concept150-0063150-0063.xmlList foldrConsider the following functions:(* sum : int list -> int
* REQUIRES: true
* ENSURES: sum [x1, ..., xn] = x1 + (x2 + (... + (xn + 0)))
*)
fun sum nil = 0
| sum (x :: xs) = x + sum xs
(* concat : 'a list list -> 'a list
* REQUIRES: true
* ENSURES: concat [x1, ..., xn] = x1 @ (x2 @ (... @ (xn @ nil)))
*)
fun concat nil = nil
| concat (x :: xs) = x @ concat xs
(* commas : string list -> string
* REQUIRES: true
* ENSURES: commas [x1, ..., xn] = (x1 ^ ", ") ^ ((x2 ^ ", ") ^ (... ^ ((xn ^ ", ") ^ ".")))
*)
fun commas nil = "."
| commas (x :: xs) = (x ^ ", ") ^ commas xs
(* rebuild : 'a list -> 'a list *)
fun rebuild nil = nil
| rebuild (x :: xs) = x :: rebuild xs
(* isort : int list -> int list *)
fun isort nil = nil
| isort (x :: xs) = insert (x, isort xs)
All of these functions share a common structure, combining x into the recursive call on xs. For a base case init : t2 and a recursive case f : t1 * t2 -> t2, we have:(* combine : t1 list -> t2
* REQUIRES: true
* ENSURES: combine [x1, ..., xn] = f (x1, f (x2, ... f (xn, init)))
*)
fun combine nil = init
| combine (x :: xs) = f (x, combine xs)
So, we can define a higher-order function, foldr, that takes in such an initial value init and a combining function f and produces the corresponding combine function:(* foldr : ('a * 'b -> 'b) -> 'b -> 'a list -> 'b
* REQUIRES: true
* ENSURES: foldr f init [x1, ..., xn] = f (x1, f (x2, ... f (xn, init)))
*)
fun foldr f init nil = init
| foldr f init (x :: xs) = f (x, foldr f init xs)
Then, we can define the other functions very simply:val sum = foldr (op +) 0
val concat = foldr (op @) nil
val commas = foldr (fn (x, y) => x ^ ", " ^ y) "."
val rebuild = foldr (op ::) nil
val isort = foldr insert nil
1463Concept150-005X150-005X.xmlList mapConsider the following functions:(* incAll : int list -> int list
* REQUIRES: true
* ENSURES: incAll [x1, ..., xn] = [x1 + 1, ..., xn + 1]
*)
fun incAll nil = nil
| incAll (x :: xs) = (x + 1) :: incAll xs
(* stringAll : int list -> string list
* REQUIRES: true
* ENSURES: stringAll [x1, ..., xn] = [Int.toString x1, ..., Int.toString xn]
*)
fun stringAll nil = nil
| stringAll (x :: xs) = Int.toString x :: stringAll xs
(* bool list -> bool list
* REQUIRES: true
* ENSURES: flipAll [x1, ..., xn] = [not x1, ..., not xn]
*)
fun flipAll nil = nil
| flipAll (x :: xs) = not x :: flipAll xs
All share a common structure, applying a function to each element of the input list. For a function f : t1 -> t2, we have:(* fAll : t1 list -> t2 list
* REQUIRES: true
* ENSURES: fAll [x1, ..., xn] = [f x1, ..., f xn]
*)
fun fAll nil = nil
| fAll (x :: xs) = f x :: fAll xs
So, we can define a higher-order function, map, that takes in such a function f and produces the corresponding fAll function:(* map : ('a -> 'b) -> 'a list -> 'b list
* REQUIRES: true
* ENSURES: map f [x1, ..., xn] = [f x1, ..., f xn]
*)
fun map f nil = nil
| map f (x :: xs) = f x :: map f xs
Then, we can define the other functions very simply:val incAll = map (fn x => x + 1)
val stringAll = map Int.toString
val flipAll = map not
1464Concept150-0067150-0067.xmlStagingCurried functions can perform some intermediate computation before receiving all of their arguments.1465Concept150-005Z150-005Z.xmlFunction compositionTo compose two functions f : 'a -> 'b and g : 'b -> 'c, we can define (g o f) : 'a -> 'c:fun (op o) (g : 'b -> 'c, f : 'a -> 'b) : 'a -> 'c = fn (x : 'a) => g (f x)
We can equivalently define composition in the following ways:fun g o f = fn x => g (f x)
fun (g o f) x = g (f x)
1466Concept150-0068150-0068.xmlCurryingWe say that a function is curried, named for mathematician Haskell Curry, when it takes in multiple arguments one at a time, producing a function accepting the rest of the arguments.For example, the type t1 -> t2 -> t3 is curried, but the type t1 * t2 -> t3 is not (sometimes called "uncurried").1467Definition150-005P150-005P.xmlHigher-order functionA higher-order function is a function that takes a function as input or produces a function as output.1480150-00CH150-00CH.xmlDatatypesLesson: types guide structure and describe the shape of data.1470Concept150-005I150-005I.xmlComparison functionIn the implementation of the insert auxiliary function, we used Int.compare : int * int -> order. To sort a list of 'as, we need a function of type 'a * 'a -> order.1471Concept150-005G150-005G.xmlUnit typeThe type unit has a single value, () : unit, the empty tuple.1472Example150-005E150-005E.xmlPolymorphic treesGeneralizing binary tree with ints at the nodes, we may have a tree storing any element type we wish:datatype 'a tree
= Empty
| Node of 'a tree * 'a * 'a tree
To recover the trees of integers, we use int tree. Now, though, we may have string tree, int option tree, int list tree, int tree tree, and more!1473Example150-005D150-005D.xmlBuilt-in polymorphic datatypesGeneralizing existing types as datatype declarations, we may include parameters:datatype 'a option
= NONE
| SOME of 'a
datatype 'a list
= nil
| :: of 'a * 'a list
1474Concept150-0057150-0057.xmlMost general typeThe most general type of an expression e is the type t such that all other types t' that could be assigned to e can be achieved by plugging in for type variables in t.We say that these other types t' are instances of type t.When we say that "e has type t", we implicitly mean that e has most general type t.1475Concept150-004V150-004V.xmlContradiction in type inferenceIf a variable is used in such a way that it has two incompatible types, a type error will be produced.1476Concept150-004E150-004E.xmlorder datatypeThe following datatype is built into the standard library of Standard ML:datatype order = LESS | EQUAL | GREATER
As the constructor names indicate, these constructors indicate the result of a comparison of elements in a trichotomous relation.1477Concept150-0031150-0031.xmlCost analysisGoal: understand the cost of programs. Some choices:Time each execution. However, this is machine-dependent.
Count a given metric (recursive calls; additions; evaluation steps; etc.). This is abstract enough to prove claims about, and it corresponds to real time.First, we choose a cost metric and size metrics for inputs. Then, we:Write a recurrence following the structure of the code, computing cost from input sizes.
Solve for a closed form.
Give a simple asymptotic (big-\mathcal {O}) solution.1478Concept150-002T150-002T.xmlStructural induction on treeTo prove that a property holds on all tree values t : tree:Base Case: Prove that the property holds on Empty.
Inductive Case: Prove that for all x : int and l, r : tree, if the property holds on both l and r (inductive hypotheses), then the property holds on Node (l, x, r).1479Concept150-002N150-002N.xmlBinary tree with ints at the nodesWe define the following datatype declaration to represent binary trees:datatype tree
= Empty
| Node of tree * int * tree
Note that tree is used recursively.1486150-00CI150-00CI.xmlInductionLesson: big problems can be broken down into smaller ones.1482Concept150-0027150-0027.xmlStructural induction on int listTo prove that a property holds on all list values l : int list:Base Case: Prove that the property holds on nil.
Inductive Case: Prove that for all x : int and xs : int list, if the property holds on xs, then the property holds on x :: xs.1483Concept150-0022150-0022.xmlListsFor all types t, the type t list represents ordered lists of values of type t.The values of type t list are:nil, the empty list
v1 :: v2 (pronounced "cons"), where v1 : t is an element and v2 : t list is the remainder of the listSyntactic sugar [v1, v2, ..., vn] is equivalent to v1 :: v2 :: ... :: vn, i.e. v1 :: (v2 :: (... :: (vn :: nil))).There are corresponding expressions that evaluate left-to-right.1484Principle150-001V150-001V.xmlProof structure mirrors program structureThe structure of a proof should mirror the structure of the program.If the program uses recursion on a natural number n, the proof should use induction on n.
If the program uses recursion with cases 0, 1, and n, the proof should use induction with base cases for 0 and 1 and an inductive case for n.
If the program cases on b : bool, the proof should case in the same way.1485Concept150-001T150-001T.xmlSimple induction on natural numbersTo prove that a property holds on all natural numbers n \in \{0, 1, 2, 3, \cdots \}:Base Case: Prove that the property holds on 0.
Inductive Case: Prove that if the property holds on n, then the property holds on n + 1.Then:The property holds on 0.
The property holds on 1 = 0 + 1, since the property holds on 0.
The property holds on 2 = 1 + 1, since the property holds on 1.
The property holds on 3 = 2 + 1, since the property holds on 2.
...and so on.1504150-00CJ150-00CJ.xmlBasicsLesson: choose a foundation that scales, and carefully distinguish ideas.1488Concept150-001J150-001J.xmlcase expressionscase e of
pat1 => e1
| pat2 => e2
...
| patn => en
To evaluate a case expression:Evaluate e to a value.
Then, evaluate the first branch matching the value.1489Concept150-001E150-001E.xmlPattern inputs in fun declarationsfun f <pattern> : t2 = e
1490Example150-001A150-001A.xmlTuple pattern matchingval name_and_age : string * int = ("Polly", 5)
val (name, age) : string * int = name_and_age
(* OR: *)
val (name : string, age : int) = name_and_age
(* OR: *)
val (name, age) = name_and_age
val age' : int =
let
val (_, age) = name_and_age
in
age + 1
end
val ((a : string, b : int), (c : string, d : int)) =
(name_and_age, name_and_age)
1491Example150-001C150-001C.xmlWildcard patternfun onefifty (_ : int) : int = 150
λ> fun onefifty (x : int) : int = 150;
stdIn:2.5-2.35 Warning: variable x is defined but not used
val onefifty = fn : int -> int
λ> fun onefifty (_ : int) : int = 150;
val onefifty = fn : int -> int
1492Concept150-0013150-0013.xmlFunction specifications(* f : t1 -> t2
* REQUIRES: ...some assumptions about x...
* ENSURES: ...some guarantees about (f x)...
*)
fun f (x : t1) : t2 = e
1493Definition150-000T150-000T.xmlExtensional equivalence at function typesSuppose f and f' are both of type t1 -> t2. Then, \texttt {f} \cong \texttt {f'} when for all values x and x' of type t1, \texttt {x} \cong \texttt {x'} implies \texttt {f x} \cong \texttt {f' x'}.When t1 is a base type, this is equivalent to: for all values x : t1, \texttt {f x} \cong \texttt {f' x}.1494Concept150-001L150-001L.xmlFunctions are valuesFunctions are values: they do not evaluate further.1495Concept150-000L150-000L.xmlFunction typesIn math, we talk about functions f : X \to Y between sets X and Y. In SML, we do the same, but where X and Y are types.If t1 and t2 are types, then t1 -> t2 is the type of functions that take a value of type t1 as input and produce a value of type t2 as an output.
Type
Values
t1 -> t2
fn (x : t1) => e
If assuming that x : t1 makes e : t2, then (fn (x : t1) => e) : t1 -> t2.1496Example150-000P150-000P.xmllocal declarationslocal
val b : int = 15
val c : int = b + 150
in
val a : int = b * c + 1
end
(* ERROR: b not in scope *)
val d : int = a + b
1497Example150-000O150-000O.xmllet expressionsval a : int =
let
val b : int = 15
val c : int = b + 150
in
b * c
end + 1
(* ERROR: b not in scope *)
val d : int = a + b
1498Concept150-000M150-000M.xmlval declarationsA val declaration gives a variable name to the result of an expression evaluation.val x : t = e
1499Example150-000H150-000H.xmlExample pairs(3 + 4, true) : int * bool
(1.0, ~6.28) : real * real
(1, 50, false, "hi") : int * int * bool * string
(1, (50, false), "hi") : int * (int * bool) * stringNotice in the last example that parentheses matter!1500Definition150-000Q150-000Q.xmlExtensional equivalence at base typesTwo expressions e and e' (that evaluate to values) are extensionally equivalent, written e \cong e', when they evaluate to the same value.1501Concept150-0004150-0004.xmlTypeA type is a prediction about the kind of value an expression will evaluate to. When an expression e has type t, we write e : t.An expression is well-typed if it has a type and ill-typed otherwise.Type-checking happens prior to evaluation: only well-typed programs are evaluated.1502Concept150-0005150-0005.xmlExpressionAn expression e is a program that can be evaluated.Every value is also an expression.
Until the end of the course, we make the blanket assumption that all expressions e evaluate to some value v.1503Concept150-0006150-0006.xmlValueA value v is a final answer that cannot be simplified further.1510150-00CK150-00CK.xmlClosing thoughts1505Principle150-000V150-000V.xmlPrinciples of functional programmingSimplicity: pure, functional code is easy to reason about, test, and parallelize.
Compositionality: build bigger programs out of smaller ones, taking advantage of patterns.
Abstraction: use types/specification to guide program development.1506Principle150-000W150-000W.xmlProgramming as a linguistic processImperative programming is telling a computer how to compute a result. \begin {aligned} x &\leftarrow 2; \\ y &\leftarrow x + x \end {aligned} Functional programming is explaining what you want to compute. 2 + 2Functional programming is applicable in all "high-level" programming languages.1509Perspective150-00CL150-00CL.xmlOn codeCode is math: it transforms data and is subject to precise analysis.Code is art: it can communicate ideas and help you think beautiful thoughts.
1823150-bonus150-bonus.xmlBonus LecturesHarrison GrodinThis content is entirely optional and should be ignored when completing assignments or exams.1644Lecture150-bonus01150-bonus01.xmlCost analysis and phases2024531Harrison Grodin
1626150-003Q150-003Q.xmlCost annotations1612Concept150-003T150-003T.xmlCost annotationsWe extend Standard ML with a new primitive for cost tracking.If e : t, then $c(e) : t, as well, when c is a natural number. $c(e) means "increase the cost of e by c units".The cost primitive has the following properties, allowing trivial zero cost to be deleted and multiple cost to be consolidated: \begin {aligned} {\hspace {-2pt}\text {\textdollar }{0}}(e) &= e \\ {\hspace {-2pt}\text {\textdollar }{c_1}}({\hspace {-2pt}\text {\textdollar }{c_2}}(e)) &= {\hspace {-2pt}\text {\textdollar }{(c_1 + c_2)}}(e) \end {aligned} Moreover, cost can always be pulled out to the front: \begin {aligned} e~({\hspace {-2pt}\text {\textdollar }{c}}(e_1)) &= {\hspace {-2pt}\text {\textdollar }{c}}(e~e_1) \\ ({\hspace {-2pt}\text {\textdollar }{c_1}}(e_1), {\hspace {-2pt}\text {\textdollar }{c_2}}(e_2)) &= {\hspace {-2pt}\text {\textdollar }{(c_1 + c_2)}}((e_1, e_2)) \end {aligned} 1614Example150-003W150-003W.xmlCost-annotated tree sumfun x ++ y = $1(x + y)
fun sum Empty = 0
| sum (Node (l, x, r)) = sum l ++ x ++ sum r
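The tree datatype and its size function are not restated in this section. A minimal sketch, assuming the course's usual binary trees of ints, is below; since the $ primitive is not standard SML, the cost-annotated sum is simulated here by a hypothetical sumCost that threads an explicit (result, cost) pair:

```sml
(* Assumed datatype, matching the course's usual binary trees of ints. *)
datatype tree = Empty | Node of tree * int * tree

fun size Empty = 0
  | size (Node (l, _, r)) = size l + 1 + size r

(* sumCost simulates the cost-annotated sum in plain SML: each ++
   charges one unit, so each Node contributes 2 units of cost. *)
fun sumCost Empty = (0, 0)
  | sumCost (Node (l, x, r)) =
      let
        val (sl, cl) = sumCost l
        val (sr, cr) = sumCost r
      in
        (sl + x + sr, cl + cr + 2)
      end
```

On any tree t, the cost component of sumCost t is 2 * size t, matching the theorem proved below.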
1615Concern150-003X150-003X.xmlMissing information for general recurrence for cost-annotated tree sumTo write a recurrence for cost-annotated tree sum without any tree shape assumptions (like spine or balanced), sometimes it is said that: \begin {aligned} W(0) &= 0 \\ W(n) &= W(n_1) + W(n_2) + 2 \end {aligned} where n_1 and n_2 are the sizes of the two subtrees. But, this does not define a function: the only input to W is the tree size n, which does not include information about how the nodes are split between the subtrees!1617Example150-003V150-003V.xmlCost-annotated slow list reversefun revSlow nil = nil
| revSlow (x :: xs) = $1(revSlow xs) @ [x]
1618Concern150-003U150-003U.xmlUse of lemma to write recurrence for cost-annotated slow list reverseThe recurrence is: \begin {aligned} W(0) &= 0 \\ W(n) &= W(n - 1) + W_\texttt {@}(n - 1, 1) + 1 \end {aligned} To define this recurrence, we need the fact that n - 1 = \texttt {length xs} = \texttt {length (revSlow xs)}, which is a separate lemma. How come we need a lemma to state the recurrence in the first place?1619Innovation150-003Y150-003Y.xmlA program is its own cost recurrenceRather than take a program like revSlow and extract a recurrence W_\texttt {revSlow}, we treat the cost-annotated revSlow as a "2-in-1" solution, expressing both the program data and its cost simultaneously.
By taking in the full data structure rather than a number, we address the missing-information concern: now, we have access to the shape of the input, rather than only the size.
By producing the full data as output rather than just the cost, we address the lemma concern: now, we can pass the recursive result as an input to another "recurrence" without proving anything about its size relative to the inputs.
1620Concept150-0040150-0040.xmlSolving generalized recurrencesTo solve a program-as-recurrence f, we prove that \texttt {f x} = {\hspace {-2pt}\text {\textdollar }{c}}(\texttt {F x}), where cost c is in terms of x and F is a (zero-cost) specification-level implementation of f.1622Theorem150-0043150-0043.xmlCost of cost-annotated tree sumFor all t : tree, we have \texttt {sum t} = {\hspace {-2pt}\text {\textdollar }{(\texttt {2 * size t})}}(\texttt {SUM t}).
1621Proof#195unstable-195.xml150-0043
By structural induction on t.
Case Empty:
\begin {aligned} &\texttt {sum Empty} \\ &= \texttt {0} \\ &= {\hspace {-2pt}\text {\textdollar }{0}}(\texttt {0}) \\ &= {\hspace {-2pt}\text {\textdollar }{(\texttt {2 * size Empty})}}(\texttt {SUM Empty}) \end {aligned}
Case Node (l, x, r):
\begin {aligned} &\texttt {sum (Node (l, x, r))} \\ &= \texttt {sum l ++ x ++ sum r} \\ &= {\hspace {-2pt}\text {\textdollar }{2}}(\texttt {sum l + x + sum r}) \\ &= {\hspace {-2pt}\text {\textdollar }{2}}({\hspace {-2pt}\text {\textdollar }{(\texttt {2 * size l})}}(\texttt {SUM l})\texttt { + x + } {\hspace {-2pt}\text {\textdollar }{(\texttt {2 * size r})}}(\texttt {SUM r})) &&\text {(IHs)} \\ &= {\hspace {-2pt}\text {\textdollar }{(\texttt {2 + 2 * size l + 2 * size r})}}(\texttt {SUM l + x + SUM r}) \\ &= {\hspace {-2pt}\text {\textdollar }{(\texttt {2 * size (Node (l, x, r))})}}(\texttt {SUM (Node (l, x, r))}) \end {aligned}
1623Lemma150-0041150-0041.xmlCost of appendFor all l1, l2 : int list, we have \texttt {l1 @ l2} = {\hspace {-2pt}\text {\textdollar }{(\texttt {length l1})}}(\texttt {APP (l1, l2)}).1625Theorem150-0042150-0042.xmlCost of cost-annotated slow list reverseFor all l : int list, we have \texttt {revSlow l} = {\hspace {-2pt}\text {\textdollar }{(\texttt {(length l + 1) * (length l)} / 2)}}(\texttt {REV l}).
1624Proof#196unstable-196.xml150-0042
By structural induction on l.
Case nil:
\begin {aligned} &\texttt {revSlow nil} \\ &= \texttt {nil} \\ &= {\hspace {-2pt}\text {\textdollar }{0}}(\texttt {nil}) \\ &= {\hspace {-2pt}\text {\textdollar }{(\texttt {1 * 0} / 2)}}(\texttt {REV nil}) \end {aligned}
Case x :: xs:
Let \texttt {n} = \texttt {length xs}.
\begin {aligned} &\texttt {revSlow (x :: xs)} \\ &= {\hspace {-2pt}\text {\textdollar }{1}}(\texttt {revSlow xs})\texttt { @ [x]} \\ &= {\hspace {-2pt}\text {\textdollar }{1}}({\hspace {-2pt}\text {\textdollar }{((n+1)n/2)}}(\texttt {REV xs}))\texttt { @ [x]} \\ &= {\hspace {-2pt}\text {\textdollar }{(1 + (n+1)n/2)}}(\texttt {REV xs @ [x]}) \\ &= {\hspace {-2pt}\text {\textdollar }{(1 + (n+1)n/2)}}({\hspace {-2pt}\text {\textdollar }{(\texttt {length (REV xs)})}}(\texttt {APP (REV xs, [x])})) \\ &= {\hspace {-2pt}\text {\textdollar }{(1 + (n+1)n/2)}}({\hspace {-2pt}\text {\textdollar }{(\texttt {length (REV xs)})}}(\texttt {REV (x :: xs)})) \\ &= {\hspace {-2pt}\text {\textdollar }{(1 + (n+1)n/2)}}({\hspace {-2pt}\text {\textdollar }{(\texttt {length xs})}}(\texttt {REV (x :: xs)})) &&\text {(lemma)} \\ &= {\hspace {-2pt}\text {\textdollar }{(1 + (n+1)n/2 + n)}}(\texttt {REV (x :: xs)}) \end {aligned}
As desired, this is {\hspace {-2pt}\text {\textdollar }{(\texttt {(length (x :: xs) + 1) * (length (x :: xs))} / 2)}}(\texttt {REV (x :: xs)}).
Here, while solving, we use the lemma that REV preserves length to turn length (REV xs) into length xs.
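The final arithmetic step, matching the accumulated cost against the claimed closed form, can be checked directly; writing n for length xs, so that length (x :: xs) = n + 1:

```latex
\begin{aligned}
1 + \frac{(n+1)n}{2} + n
  &= \frac{2 + (n+1)n + 2n}{2}
   = \frac{n^2 + 3n + 2}{2}
   = \frac{(n+2)(n+1)}{2} \\
  &= \frac{(\texttt{length (x :: xs)} + 1) \cdot \texttt{length (x :: xs)}}{2}
\end{aligned}
```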
1636150-003R150-003R.xmlProgram inequality1628Example150-0044150-0044.xmlCost-annotated zero chompThe following function removes leading zeros from a list:fun chomp nil = nil
| chomp (0 :: xs) = $1(chomp xs)
| chomp (x :: xs) = x :: xs
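For intuition, the cost behavior of chomp can be simulated in plain SML (the $ primitive is not standard) by a hypothetical chompCost that returns a (result, cost) pair, charging one unit per leading zero removed:

```sml
(* chompCost simulates the cost-annotated chomp: one unit of cost
   per leading zero removed; the cost is the number of leading zeros. *)
fun chompCost nil = (nil, 0)
  | chompCost (0 :: xs) =
      let val (r, c) = chompCost xs in (r, c + 1) end
  | chompCost (x :: xs) = (x :: xs, 0)
```

The cost component is the number of leading zeros, which is at most the length of the list, in line with the bound stated below.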
(Note that using traditional recurrences, we again run into , since the cost depends on the value of the first element of the list.)The cost of chomp l is at most length l; however, if the zeros end before we reach nil, the cost will be less than length l.1629Innovation150-0045150-0045.xmlProgram inequalityWe add a new notion, e \le e', which means that e and e' behave the same way, but e may incur less cost.Program inequality satisfies the following properties:
Inequality is a preorder:
For all e, we have e \le e.
For all e, e', e'', if e \le e' and e' \le e'', then e \le e''.
If c \le _\mathbb {N} c', then {\hspace {-2pt}\text {\textdollar }{c}}(e) \le {\hspace {-2pt}\text {\textdollar }{c'}}(e).
For functions f and f', to show f \le f', we must show that "for all x, we have f~x \le f'~x".1631Theorem150-0046150-0046.xmlCost of cost-annotated zero chompFor all l : int list, we have \texttt {chomp l} \le {\hspace {-2pt}\text {\textdollar }{(\texttt {length l})}}(\texttt {CHOMP l}).
1630Proof#194unstable-194.xml150-0046
By structural induction on l.
Case nil:
\begin {aligned} &\texttt {chomp nil} \\ &= \texttt {nil} \\ &= {\hspace {-2pt}\text {\textdollar }{0}}(\texttt {nil}) \\ &= {\hspace {-2pt}\text {\textdollar }{(\texttt {length nil})}}(\texttt {CHOMP nil}) \end {aligned}
Case x :: xs:
By cases on x.
Case 0:
\begin {aligned} &\texttt {chomp (0 :: xs)} \\ &= {\hspace {-2pt}\text {\textdollar }{1}}(\texttt {chomp xs}) \\ &\le {\hspace {-2pt}\text {\textdollar }{1}}({\hspace {-2pt}\text {\textdollar }{(\texttt {length xs})}}(\texttt {CHOMP xs})) &&\text {(IH)} \\ &= {\hspace {-2pt}\text {\textdollar }{(1 + \texttt {length xs})}}(\texttt {CHOMP xs}) \\ &= {\hspace {-2pt}\text {\textdollar }{(\texttt {length (0 :: xs)})}}(\texttt {CHOMP (0 :: xs)}) \\ \end {aligned}
Case otherwise:
\begin {aligned} &\texttt {chomp (x :: xs)} \\ &= \texttt {x :: xs} &&\text {(case assumption)} \\ &= {\hspace {-2pt}\text {\textdollar }{0}}(\texttt {x :: xs}) \\ &\le {\hspace {-2pt}\text {\textdollar }{(\texttt {length (x :: xs)})}}(\texttt {x :: xs}) \\ &= {\hspace {-2pt}\text {\textdollar }{(\texttt {length (x :: xs)})}}(\texttt {CHOMP (x :: xs)}) \\ \end {aligned}
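The next results mention revApp and rev, which are not restated in this section. Reconstructing from the proof steps below (which use $1(revApp (xs, x :: acc)) and rev l = revApp (l, nil)), the definitions are presumably as follows; this is a sketch in the course's extended notation, not standard SML, since it uses the $ primitive:

```sml
(* Reconstructed from the proof steps below; uses the course's $ cost
   primitive, so this is extended SML rather than runnable code. *)
fun revApp (nil, acc) = acc
  | revApp (x :: xs, acc) = $1(revApp (xs, x :: acc))

fun rev l = revApp (l, nil)
```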
1633Theorem150-0047150-0047.xmlInequality of reverse-and-append with slow reverse and appendFor all l, acc : int list, we have \texttt {revApp (l, acc)} \le \texttt {APP (revSlow l, acc)}.
1632Proof#193unstable-193.xml150-0047
By structural induction on l.
Case nil:
\begin {aligned} &\texttt {revApp (nil, acc)} \\ &= \texttt {acc} \\ &= \texttt {APP (nil, acc)} \\ &= \texttt {APP (revSlow nil, acc)} \end {aligned}
Case x :: xs:
\begin {aligned} &\texttt {revApp (x :: xs, acc)} \\ &= {\hspace {-2pt}\text {\textdollar }{1}}(\texttt {revApp (xs, x :: acc)}) \\ &\le {\hspace {-2pt}\text {\textdollar }{1}}(\texttt {APP (revSlow xs, x :: acc)}) &&\text {(IH)} \\ &\le {\hspace {-2pt}\text {\textdollar }{(1 + \texttt {length xs})}}(\texttt {APP (revSlow xs, x :: acc)}) \\ &= {\hspace {-2pt}\text {\textdollar }{(1 + \texttt {length xs})}}(\texttt {APP (APP (revSlow xs, [x]), acc)}) \\ &= {\hspace {-2pt}\text {\textdollar }{1}}(\texttt {APP (revSlow xs @ [x], acc)}) \\ &= \texttt {APP (revSlow (x :: xs), acc)} \end {aligned}
1635Corollary150-0048150-0048.xmlInequality of fast and slow reverseWe have \texttt {rev} \le \texttt {revSlow}.
1634Proof#192unstable-192.xml150-0048
Let l : int list be arbitrary.
Using the preceding theorem:
\begin {aligned} &\texttt {rev l} \\ &= \texttt {revApp (l, nil)} \\ &\le \texttt {APP (revSlow l, nil)} &&\text {(theorem)} \\ &= \texttt {revSlow l} \end {aligned}
1642150-003S150-003S.xmlExtensional phase1637Innovation150-0049150-0049.xmlExtensional phaseIn intuitionistic/modal logic, we sometimes have propositions beyond "true" and "false".We add a third proposition (like "sometimes"), \mathbf {ext}, meaning "true if you don't care about cost". We call this proposition "the extensional phase". It comes with the following axioms:If \mathbf {ext}, then {\hspace {-2pt}\text {\textdollar }{c}}(e) = e. (If we don't care about cost, then we might as well delete the cost annotations.)
If \mathbf {ext}, then e \le e' implies e = e'. (Since program inequality requires programs behave the same way except for cost, any two programs related by inequality are equal if we don't care about cost.)1638Remark150-004B150-004B.xmlAsymmetry between cost and dataCost and data are asymmetric. We ran into trouble (in the earlier concerns) when trying to delete data and only reason about cost, but there's no issue with deleting cost and only reasoning about data.1639Definition150-004A150-004A.xmlExtensional equivalenceDefine e \cong e' as "if \mathbf {ext}, then e = e'". Informally, this means that e and e' are equal, as long as we don't care about cost.1641Corollary150-004C150-004C.xmlExtensional equivalence of fast and slow reverseWe have \texttt {rev} \cong \texttt {revSlow}.
1640Proof#191unstable-191.xml150-004C
Assume \mathbf {ext}; it remains to show \texttt {rev} = \texttt {revSlow}.
By the corollary above, we have that \texttt {rev} \le \texttt {revSlow}. So, by the property of the extensional phase about inequality, we have \texttt {rev} = \texttt {revSlow}.
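Concretely, the extensional-phase principle lets us erase every cost annotation (since $c(e) = e under ext) and compare the two reverse implementations as ordinary SML; the annotation-free bodies below are a sketch obtained by deleting the $1(...) wrappers from the definitions in this section:

```sml
(* Cost annotations erased: under ext, $c(e) = e, so each $1(...)
   wrapper is simply deleted.  Note: rev shadows the Basis rev. *)
fun revSlow nil = nil
  | revSlow (x :: xs) = revSlow xs @ [x]

fun revApp (nil, acc) = acc
  | revApp (x :: xs, acc) = revApp (xs, x :: acc)

fun rev l = revApp (l, nil)
```

Both functions produce the same list on every input; only their (now-erased) costs differed, which is exactly what rev ≅ revSlow asserts.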
1643150-003Z150-003Z.xmlSummaryTo address subtleties with numbers-only recurrences, we justified the stance that a program is its own cost recurrence when cost annotations are included in the program itself. This allowed us to give exact solutions to cost recurrences; to give upper bounds, we introduced program inequality, comparing the cost of two programs when they behave the same way. Finally, to recover extensional equivalence, we introduced the extensional phase, a logical proposition that justifies deleting cost and unifying inequality and equality.1684Personbjwubjwu.xmlBrandon Wuhttps://brandonspark.github.io/1685Persondilsunkdilsunk.xmlDilsun Kaynarhttps://www.cs.cmu.edu/~dilsun/1686Personhgrodinhgrodin.xmlHarrison Grodinhttps://www.harrisongrodin.com/1687Personjacobneujacobneu.xmlJacob Neumannhttps://jacobneu.github.io/1688Personme51me51.xmlMichael Erdmannhttps://www.cs.cmu.edu/~me/whois-me.html1689Personsb21sb21.xmlStephen Brookeshttp://www.cs.cmu.edu/~brookes/1683Referencestandardmlstandardml.xmlStandard MLhttps://smlfamily.github.io/