# TypeScript, programs as proofs, and continuation passing style

The Curry-Howard isomorphism means that we can encode mathematical proofs as programs. This post introduces the Curry-Howard isomorphism and demonstrates some basic proofs in TypeScript. Motivation is then developed for the correspondence between continuation passing style and classical logic, and some simple classical logic propositions are proved in TypeScript.

## Types as Sets

It's useful to think of types as sets of values. A boolean is a set containing two elements, true and false. The type boolean | string is the set containing all the string values, as well as true and false. We then say that a given value *inhabits* a type if it's an element of that type's set, so true inhabits boolean | string, but 42 doesn't.

From the types-as-sets perspective, it's reasonably natural to ask if there exists a type that corresponds to the empty set. That is, is there a type which is uninhabited? This depends on the language, but for TypeScript, the answer is yes: the prime example is never. One place never crops up is when TypeScript's narrowing algorithm determines that a given case is impossible:

```
function doSomething(flag: boolean) {
  if (flag === true) {
    console.log('Yes!');
  } else if (!flag) {
    console.log('No!');
  } else {
    // Unreachable
    const thisNeverHappens: never = flag;
  }
}
}
```

Indeed, if you ever have a variable of type never, you know that the code you're writing should never actually be executed^{[1]}.

But never isn't the only uninhabited type: the type { message: string, value: never } is also uninhabited. This is because if we ever had a value which had a property of type never, we'd be able to access that property and get a value of type never. This should be impossible, so it must be that we can never get a value of type { message: string, value: never }. In general, if we have a type which we can reliably extract a never value from, that type is also necessarily uninhabited.
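As a quick illustration (the type alias and function name here are mine, not from anywhere standard), the fact that a never-extracting function typechecks is exactly what tells us the type can have no values:

```typescript
// A hypothetical type whose `value` property is uninhabited.
type ImpossibleMessage = { message: string, value: never };

// This definition typechecks: if we ever *had* an ImpossibleMessage, we
// could produce a never. Since producing a never should be impossible,
// no value of ImpossibleMessage can exist, and this function can never
// actually be called.
function extractNever(m: ImpossibleMessage): never {
  return m.value;
}
```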

## Function Types

What about () => never, a function which takes no arguments and returns never? It's tempting to think that this type is also uninhabited, since we could always just call the function and get a never, which should be impossible. But there's a tricky way around this:

```
function f0(): never {
  while (true) {
    console.log("I'm never gonna return!!!");
  }
}
```

The TypeScript type checker detects that f0 never returns, and so it's satisfied with whatever return type is specified. The problem is that we can't be sure that the functions we call are actually going to produce a return value for the caller. They could loop forever, throw an exception, or do something else nasty. So to make things work out nicely, from here on in we'll have to insist that all the functions we write are total: whenever they're called, they have to eventually return a value.

How about (x: never) => never, a function that is given an argument of type never, and returns a never? That type *is* inhabited: just return the given argument unchanged:

```
function f1(x: never): never {
  return x;
}
```

We can never actually run the code in f1, but that doesn't really matter. What matters is that we can give a definition which passes the type checker, so we can get a value that inhabits the type (x: never) => never.

In general, the only way a function can return an uninhabited type is to have been given one as an argument. This has an important corollary: *if* all of the arguments to a total function are inhabited, *then* the return type must also be inhabited.

## Product Types

Let A and B be two arbitrary types, and consider the type { a: A, b: B }. If either A or B (or both) are uninhabited, then { a: A, b: B } must also be uninhabited (otherwise we'd be able to access the a or b property of the object to get an uninhabited type from an inhabited one). So { a: A, b: B } is inhabited precisely when *both* A *and* B are inhabited.

I will now pose a question: is the type (ab: { a: A, b: B }) => A always inhabited, regardless of what A and B are? There are two ways we could go about answering the question. The most natural, I think, would be to just write a function with the right signature:

```
function f2(ab: { a: A, b: B }): A {
  return ab.a;
}
```

Regardless of what A and B are (more on this later), the function f2 will typecheck, and so f2 will inhabit the type we're interested in. But there's another way we could answer the question: using logic. The type (ab: { a: A, b: B }) => A is always inhabited so long as, whenever { a: A, b: B } is inhabited, A is also inhabited. Since { a: A, b: B } is inhabited only when both A and B are inhabited, we can be sure that A is going to be inhabited if { a: A, b: B } is.

## Types as Propositions

Using just the second line of reasoning, without needing to write a program, we proved that the type we're interested in is inhabited. But the converse is also true: by writing out an implementation of f2, we've proved something. What have we proved? In words, "if A and B are true, then A is true", or using logic symbols, \(A \land B \Rightarrow A\).

So the structure of types and the structure of propositions match each other: a type being inhabited corresponds to a proposition being true. A type like { a: A, b: B } corresponds to the logical assertion "A and B", or \(A \land B\). And when we have a function with an argument A and a return type B, this corresponds to the proposition "given A, then B", or \(A \Rightarrow B\). Programs which manage to create a value of a certain type act as a proof that that type is inhabited, and so in turn prove that the corresponding proposition is true. This relationship is the Curry-Howard isomorphism.

Before we move on to proving stuff, we need a few more type-level tools. So far we've got conjunction (and) as object types and implication as function types, but we're missing a way of representing disjunction (or) as well as negation.

## Sum Types and Negation

Given types A and B, we wish to construct a type which is inhabited if either A *or* B is inhabited. For that we can use a discriminated union: { side: 'left', a: A } | { side: 'right', b: B }. For example, if A is never and B is number, we can construct the inhabitant { side: 'right', b: 42 }.
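To make that concrete (a small sketch using the union shape above, with the type alias name my own), once we have such a value we can inspect the side tag to recover whichever of A or B it actually holds:

```typescript
// With A = never, the left branch is uninhabited, so the only way to
// build a value of this union is via the right branch.
type NeverOrNumber = { side: 'left', a: never } | { side: 'right', b: number };

const value: NeverOrNumber = { side: 'right', b: 42 };

// Narrowing on the tag tells us which branch we're in:
if (value.side === 'right') {
  console.log(value.b); // 42
}
```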

To find a type which is inhabited if and only if A is *un*inhabited we don't need any more type machinery, we can just use function types: (a: A) => never. If A is uninhabited, then this "negation" type is inhabited, and vice versa.
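Concretely (a sketch, with an alias name of my own choosing): with A = never, the negation type is inhabited by the identity function, while for an inhabited A like number, any total function body we tried to write would have to conjure a never from a number, which is impossible:

```typescript
type Not<A> = (a: A) => never;

// never is uninhabited, so Not<never> should be inhabited -- and it is:
const notNever: Not<never> = (a) => a;

// number is inhabited, so Not<number> should be uninhabited: there is
// no total function body that turns a number into a never.
```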

## \(\forall\) and Parametric Polymorphism

Let us return to the question I posed before, about finding an inhabitant for (ab: { a: A, b: B }) => A. When I provided a program containing the function f2, which inhabited that type, I hand-waved away the fact that this inhabitant actually "worked" for all possible types we might choose for A and B. What we want is for the type system to be able to check that the programs we write create values which inhabit the type we're interested in, for all choices of A, B, or whatever other type variable we care about.

To get TypeScript to check that for us, we can use generics:

```
function f4<A, B>(ab: { a: A, b: B }): A {
  return ab.a;
}
```

We can also "invoke" these "theorems" by specialising these generics with particular types. For example, to prove \(C \land C \Rightarrow C\), we can just write f4<C, C>.
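Concretely, specialising f4 with C = number gives a value whose type corresponds to \(C \land C \Rightarrow C\) (this uses TypeScript's instantiation expressions, available since TypeScript 4.7):

```typescript
function f4<A, B>(ab: { a: A, b: B }): A {
  return ab.a;
}

// The specialised "theorem": number ∧ number ⇒ number
const proofNumAndNum: (ab: { a: number, b: number }) => number = f4<number, number>;

console.log(proofNumAndNum({ a: 7, b: 7 })); // 7
```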

## So far

| Logic side | Programming side | TypeScript implementation |
|---|---|---|
| \(\perp\) (false) | Empty type | `type False = never;` |
| \(\Rightarrow\) (implication) | Function types | `type Implies<A, B> = (a: A) => B;` |
| \(\lnot\) (negation) | Functions to the empty type | `type Not<A> = Implies<A, False>;` |
| \(\land\) (conjunction, and) | Product types | `type And<A, B> = { left: A, right: B };` |
| \(\lor\) (disjunction, or) | Sum types | `type Or<A, B> = { side: 'left', left: A } \| { side: 'right', right: B };` |
| \(\forall\) (for all) | Parametric polymorphism | Generics |

It's also useful to define

```
type Iff<A, B> = And<Implies<A, B>, Implies<B, A>>;
```

as well as some convenience functions for using And and Or:

```
function constructAnd<A, B>(a: A, b: B): And<A, B> {
  return { left: a, right: b };
}

function constructOrLeft<A, B>(a: A): Or<A, B> {
  return { side: 'left', left: a };
}

function constructOrRight<A, B>(b: B): Or<A, B> {
  return { side: 'right', right: b };
}

function caseAnalysis<A, B, C>(or: Or<A, B>, ifLeft: (a: A) => C, ifRight: (b: B) => C): C {
  if (or.side === 'left') {
    return ifLeft(or.left);
  } else {
    return ifRight(or.right);
  }
}
```

The fact that we can write the caseAnalysis function corresponds roughly with the fact that we can prove \(((A \lor B) \land (A \Rightarrow C) \land (B \Rightarrow C)) \Rightarrow C\).
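To see that correspondence in action, here's caseAnalysis used at runtime (repeating the definitions so the snippet stands alone):

```typescript
type Or<A, B> = { side: 'left', left: A } | { side: 'right', right: B };

function caseAnalysis<A, B, C>(or: Or<A, B>, ifLeft: (a: A) => C, ifRight: (b: B) => C): C {
  if (or.side === 'left') {
    return ifLeft(or.left);
  } else {
    return ifRight(or.right);
  }
}

// A ∨ B together with A ⇒ C and B ⇒ C gives us C:
const result = caseAnalysis<string, number, string>(
  { side: 'right', right: 42 },
  s => `got the string ${s}`,
  n => `got the number ${n}`
);
console.log(result); // "got the number 42"
```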

## De Morgan's Laws

Let's prove something now! How about De Morgan's laws:

- \(\lnot(A \lor B) \iff (\lnot A) \land (\lnot B)\)
- \(\lnot(A \land B) \iff (\lnot A) \lor (\lnot B)\)

We can split this up into four proof goals:

- \((\lnot A) \land (\lnot B) \Rightarrow \lnot(A \lor B)\)
- \(\lnot(A \lor B) \Rightarrow (\lnot A) \land (\lnot B)\)
- \((\lnot A) \lor (\lnot B) \Rightarrow \lnot(A \land B)\)
- \(\lnot(A \land B) \Rightarrow (\lnot A) \lor (\lnot B)\)

The type for \((\lnot A) \land (\lnot B) \Rightarrow \lnot(A \lor B)\) is Implies<And<Not<A>, Not<B>>, Not<Or<A, B>>>, so we need a function which has an argument of type And<Not<A>, Not<B>>, and then returns a Not<Or<A, B>>. Returning a Not<Or<A, B>> means returning a function which is given a Or<A, B> and must then return False. We can use caseAnalysis on the Or<A, B>, and get either an A or B, and then use the left or right side of the And<Not<A>, Not<B>> together with that A or B to produce a False.

```
function proofDeMorgan1<A, B>(and: And<Not<A>, Not<B>>): Not<Or<A, B>> {
  return (or: Or<A, B>) => {
    return caseAnalysis(
      or,
      (a: A) => and.left(a),
      (b: B) => and.right(b)
    );
  };
}
```

In the other direction, we're given a Not<Or<A, B>>, and need to provide an And<Not<A>, Not<B>>. To make a Not<A>, we need to write a function which is given an A and returns False, which we can do by constructing a Or<A, B> and then calling the Not<Or<A, B>>. A similar strategy works for Not<B>.

```
function proofDeMorgan2<A, B>(notOr: Not<Or<A, B>>): And<Not<A>, Not<B>> {
  return constructAnd(
    (a: A) => notOr(constructOrLeft(a)),
    (b: B) => notOr(constructOrRight(b)),
  );
}
```

For the third proof goal, we're given an Or<Not<A>, Not<B>>, and need to return a Not<And<A, B>>. We return a function which is given an And<A, B> and returns False. To do that, we perform case analysis on what the Or<Not<A>, Not<B>> contains, after which we're given either a Not<A> or Not<B>, which we can use, together with the And<A, B>, to return False.

```
function proofDeMorgan3<A, B>(orNot: Or<Not<A>, Not<B>>): Not<And<A, B>> {
  return (and: And<A, B>) => {
    return caseAnalysis(
      orNot,
      (notA: Not<A>) => notA(and.left),
      (notB: Not<B>) => notB(and.right)
    );
  };
}
```

The final proposition poses some difficulty. As a type, we need to find an inhabitant of Implies<Not<And<A, B>>, Or<Not<A>, Not<B>>>. We're given Not<And<A, B>>, and need to produce an Or<Not<A>, Not<B>>. The only useful thing we can do with the Not<And<A, B>> is call it with an argument of type And<A, B>. So ultimately, to get a False, we need to get both a value of A and a value of B. The problem is that we can return (wrapped in an Or) *either* Not<A> *or* Not<B>. If we return Not<A>, we'll write a function which is given an A, and must produce a False. To do this we need to use the Not<And<A, B>>, which means that we need A *and* B. But we only have A, with no means to get a B! The problem is that once we've "locked in" returning Not<A>, there's no opportunity to go back and return a Not<B> to get access to a B. A symmetric situation applies for returning Not<B>.

## Intuitionistic Logic

The problem we've run into in proving \(\lnot(A \land B) \Rightarrow (\lnot A) \lor (\lnot B)\) isn't because we haven't tried hard enough. It turns out the logic that we've been dealing with is intuitionistic, which is weaker than classical logic. In intuitionistic logic we can't prove \(\forall A, B. \lnot(A \land B) \Rightarrow (\lnot A) \lor (\lnot B)\), and this is reflected in the fact that we can't write a function body for <A, B>(notAnd: Not<And<A, B>>): Or<Not<A>, Not<B>>. A statement is true in intuitionistic logic only if it can be proved constructively.
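A well-known example of the gap: the law of excluded middle \(A \lor \lnot A\) can't be proved intuitionistically, but its double negation \(\lnot\lnot(A \lor \lnot A)\) can. Here's a sketch of that proof (my own, not from the post), repeating the earlier type definitions so it stands alone:

```typescript
type False = never;
type Not<A> = (a: A) => False;
type Or<A, B> = { side: 'left', left: A } | { side: 'right', right: B };

// ¬¬(A ∨ ¬A): given a refutation of excluded middle, derive False.
function notNotExcludedMiddle<A>(notLem: Not<Or<A, Not<A>>>): False {
  // Feed notLem the right branch (¬A); if that ¬A is ever handed an
  // actual A, feed notLem the left branch instead.
  return notLem({
    side: 'right',
    right: (a: A) => notLem({ side: 'left', left: a }),
  });
}
```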

## Continuations

And now for something completely different: suppose we want to find the first triangular number bigger than 1000. We might write something like:

```
function findBigTriangularNumber() {
  for (let n = 0; ; n++) {
    let triangularNumber = n * (n + 1) / 2;
    if (triangularNumber > 1000) {
      return triangularNumber;
    }
  }
}
console.log(findBigTriangularNumber()); // 1035
```

If we wanted to be a bit modular, we might abstract away the calculation of triangular numbers, as well as the threshold of 1000:

```
function findOutputAboveThreshold(threshold: number, func: (n: number) => number) {
  for (let n = 0; ; n++) {
    let result = func(n);
    if (result > threshold) {
      return result;
    }
  }
}

const calculateTriangleNum = (n: number) => n * (n + 1) / 2;
console.log(findOutputAboveThreshold(1000, calculateTriangleNum)); // 1035
```

Here, we can call func a continuation: when findOutputAboveThreshold wants to carry on the computation rather than directly returning a value, it instead calls func, which performs the rest of the computation and hands back its result.

So from that perspective, findOutputAboveThreshold first "returns" 0, by calling the continuation with 0. The continuation does some work, and once it's done, findOutputAboveThreshold can inspect the result of the computation, and choose to go back and "return" 1 by calling the continuation with 1.

When we think of calling the continuation as returning, allowing findOutputAboveThreshold to call func multiple times effectively gives findOutputAboveThreshold the power of time travel: if it wishes, it can "return" many times, each time rewinding the computation.
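To make the multiple-"returns" idea concrete, here's a toy variant (my own sketch, not from the post) where the continuation reports back whether it was happy with the value, and the caller keeps "returning" until it is:

```typescript
// The continuation is handed a candidate result and says whether the
// rest of the computation is satisfied with it.
function searchWithContinuation(cont: (n: number) => boolean): number {
  for (let n = 0; ; n++) {
    if (cont(n)) {
      return n; // the last "return" is the one that sticks
    }
  }
}

// The computation is "rewound" for n = 0..7 before settling on 8:
const firstSquareAbove50 = searchWithContinuation(n => n * n > 50);
console.log(firstSquareAbove50); // 8
```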

## Continuations as Classical Logic

So what does this have to do with the Curry-Howard isomorphism? Well, it turns out that if you write all your functions using this style of programming, where you pass a continuation through to every function, you strengthen the power of the logic that your type system is in correspondence with. Before we move on to examples, we'll need to rewrite some of our tools.

The Implies type becomes

```
type Implies<A, B, R> = (a: A, cont: (b: B) => R) => R;
```

The R represents the ultimate result of the continuation. To ensure that the function we write can't just short-circuit the computation by constructing a return value without calling the continuation, we'll keep R generic. The type constructor for negation also needs to be changed to type Not<A, R> = Implies<A, False, R>.
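As a small warm-up with the new Implies (an illustrative sketch of my own), here's the CPS version of the earlier projection proof \(A \land B \Rightarrow A\): instead of returning the A, we pass it to the continuation and return whatever the continuation produces.

```typescript
type And<A, B> = { left: A, right: B };
type Implies<A, B, R> = (a: A, cont: (b: B) => R) => R;

// A ∧ B ⇒ A, in continuation passing style.
function projectLeft<A, B, R>(ab: And<A, B>, cont: (a: A) => R): R {
  return cont(ab.left);
}

console.log(projectLeft({ left: 1, right: 'two' }, a => a + 10)); // 11
```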

With that out of the way, we can now go about proving the fourth part of De Morgan's law: \(\forall A, B. \lnot(A \land B) \Rightarrow (\lnot A) \lor (\lnot B)\). Translating that formula to a type, it looks like finding an inhabitant of <A, B, R>(notAnd: Not<And<A, B>, R>, cont: (or: Or<Not<A, R>, Not<B, R>>) => R) => R. Notice that the actual return-type of this big function is R, but in order to get a value of type R it has to call a continuation.

Like before, we can construct an Or<Not<A, R>, Not<B, R>> by trying to make a Not<A, R>. Doing this, we'll get an A, but we'll then need to provide a False. Like before, the only way to proceed is by getting access to a B. But this time is different: we now have the power of time travel. We can call the continuation again, this time giving a Not<B, R>. Now we'll have both A and B, and can then obtain a False.

```
function proofDeMorgan4<A, B, R>(
  notAnd: Not<And<A, B>, R>,
  cont: (or: Or<Not<A, R>, Not<B, R>>) => R
): R {
  return cont(constructOrLeft((a: A, contFalse1: (f: False) => R) => {
    return cont(constructOrRight((b: B, contFalse2: (f: False) => R) => {
      return notAnd(constructAnd(a, b), contFalse1);
    }));
  }));
}
```

## Double Negation

As a final example, I'll show that \(A \iff \lnot \lnot A\). First the forwards direction: we're given an A, and "return" a Not<Not<A, R>, R>. This means that we get an argument of type Not<A, R> and need to provide a False, which we can do easily, since we have access to A, which is exactly what we need to use Not<A, R>.

```
function doubleNegationProofForward<A, R>(a: A, cont1: (notNotA: Not<Not<A, R>, R>) => R): R {
  return cont1((notA: Not<A, R>, cont2: (f: False) => R) => {
    return notA(a, cont2);
  });
}
```

The opposite direction will be more tricky, since \(\forall A.\,\lnot \lnot A \Rightarrow A\) is not a theorem in intuitionistic logic^{[2]}, so we'll need to take advantage of continuation magic. We have an argument of type Not<Not<A, R>, R>, and need to come up with an A. To get an R we can call the Not<Not<A, R>, R> function, and we need to give two things.

The first thing we need to give is something of the type Not<A, R>. This means that we get an A, and a continuation of the type (f: False) => R. We seem stuck, since we can't construct a False in this context. So what do we do? Rather than running the continuation which requires us to give False, we can instead run the outer continuation which merely requires an A, which we have. By using continuations, we have been granted the ability to alter control flow, and if we're in a situation which we don't like (needing to "return" False) we can just choose a different continuation.

The final thing we need to do is to supply a continuation which is given False, and needs to return R. For this, all we need to give is the identity function (f: False) => f. Since the return type is False which is never, and never is the bottom type (a subtype of every other type), (f: False) => False is a subtype of (f: False) => R (the return-type position is covariant). This leaves us with

```
function doubleNegationProofBackward<A, R>(notNotA: Not<Not<A, R>, R>, cont1: (a: A) => R): R {
  return notNotA((a: A, cont2: (f: False) => R) => {
    return cont1(a);
  }, f => f);
}
```
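As a sanity check (my own instantiation, with A and R both number), we can hand-build an inhabitant of Not<Not<A, R>, R> around a concrete value and watch the proof extract it at runtime:

```typescript
type False = never;
type Implies<A, B, R> = (a: A, cont: (b: B) => R) => R;
type Not<A, R> = Implies<A, False, R>;

function doubleNegationProofBackward<A, R>(notNotA: Not<Not<A, R>, R>, cont1: (a: A) => R): R {
  return notNotA((a: A, cont2: (f: False) => R) => {
    return cont1(a);
  }, f => f);
}

// A Not<Not<number, number>, number> wrapping the value 42: it calls
// the given Not<number, number> with 42.
const notNot42: Not<Not<number, number>, number> =
  (notA, cont) => notA(42, cont);

console.log(doubleNegationProofBackward(notNot42, a => a)); // 42
```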

## Footnotes

[1]: Throughout this post I make the assumption that TypeScript is sound, which is alas simply not true:

```
let a: never[] = [];
let b: (never | number)[] = a;
b.push(42);
let x = a[0];
// check for undefined (necessary when --noUncheckedIndexedAccess is set)
if (x !== undefined) {
  let thisShouldNeverHappen: never = x;
  console.log('Whoops!');
}
```

So we'll just have to try to avoid triggering any soundness issues. ↩

[2]: Indeed, if \(A\) is classically provable, then \(\lnot\lnot A\) is intuitionistically provable, and using continuation-passing style corresponds to double-negation translation. ↩