Foundations of Type Systems and Programming Language Classification
What Is a Type System?

A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.
— Benjamin C. Pierce, *Types and Programming Languages* (2002)
Before We Begin
Whenever people talk about programming languages, you'll often hear them bring up "Static Typing" and "Dynamic Typing."
Typing is generally one of the topics developers think about most when choosing a language.
Let's start with a brief look at what Static Typing and Dynamic Typing actually are.
Background Knowledge Worth Knowing Before We Dive In
Values and Types
A value is the concrete data a program computes (`32`, `"hello"`, `true`).
A type is the name given to a set of those values, together with the specification of the operations permitted on them (`int`, `string`, `bool`).
Without types, there is no basis for judging whether an operation like `32 + "hello"` is meaningful.
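For instance, in Python (a minimal sketch; any language with runtime type checks behaves similarly):

```python
# Values and their types: every value belongs to a type.
print(type(32))        # <class 'int'>
print(type("hello"))   # <class 'str'>
print(type(True))      # <class 'bool'>

# An operation that mixes incompatible types has no defined meaning:
try:
    32 + "hello"
except TypeError as e:
    print("rejected:", e)
```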
Compile and Runtime
Source code → Compiler → Machine code / Bytecode → Execution
Compile time: the phase before execution, when code is translated. Only structural rules, not actual runtime values, can be checked at this stage.
Runtime: the phase where the program actually executes. Values are loaded into memory.
Which phase type checking happens in is the key fork in the road between static and dynamic typing.
Relationship with Interpreter Languages
Languages whose implementations execute source code directly without a separate compilation step (Python, Ruby) have little to no distinct compile time.
That is why Dynamic Typing and interpreter languages often come up together, but the two are independent concepts.
TypeScript uses Static Typing, yet the final execution is carried out by the JavaScript interpreter.

(Illustration of Dynamic Typing and Static Typing by Kolja Wilcke https://x.com/01k)
A Summary of Basic Typing Concepts
A Type System is the set of rules that defines how the types of variables, functions, and data are managed.
The main purposes of a Type System are as follows:
- Error prevention: proactively prevents the use of data with incorrect types.
- Code stability: makes program behavior more predictable.
- Optimization potential: allows compilers to optimize by leveraging type information.
In particular, the difference between Static Typing and Dynamic Typing comes down to when type checking occurs.
Static Typing
- Types are checked at Compile Time.
- The types of variables and functions are declared in advance, and type errors can be caught at compile time.
- Representative languages: C, C++, Java, C#, TypeScript, Kotlin
int age = "32"; // Compile error. Cannot assign a string to an integer variable
An advantage is that type errors are caught early, resulting in higher stability.
A disadvantage is that types must be declared up front when writing code, which can slow down initial development speed.
Dynamic Typing
- Types are checked at Runtime.
- There is no need to declare a variable's type in advance; the type is determined during execution.
- Representative languages: Python, JavaScript, Ruby, PHP
age = 32            # Integer
age = "Thirty-two"  # Rebound to a string; allowed, since the type follows the value
An advantage is flexible coding and faster development speed.
A disadvantage is that type errors can occur during execution, making debugging potentially difficult.
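A minimal Python sketch of this trade-off: the faulty call is perfectly legal to write and load, and the error only surfaces when that code path actually runs.

```python
def describe(age):
    # No declared types: `age` may be anything at call time.
    return "age + 1 = " + str(age + 1)

print(describe(32))          # works: 'age + 1 = 33'
try:
    print(describe("32"))    # legal to write, fails only when executed
except TypeError as e:
    print("runtime type error:", e)
```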
To put it in one sentence: Static Typing proves the absence of certain classes of errors, while Dynamic Typing detects those errors at runtime.
So where did all this begin?
First, we need to understand Lambda Calculus (λ-calculus).
Useful background knowledge before we continue
Variable: $x$ - a slot that holds a value (Parameter)
Abstraction: $λx.M$ - function definition (Function Definition / Lambda Expression)
Application: $MN$ - function call (Function Call)

There is a class of problems of elementary number theory which can be stated in the form that it is required to find an effectively calculable function *f* of *n* positive integers, such that *f*(x₁, x₂, ⋯, xₙ) = 2 is a necessary and sufficient condition for the truth of a certain proposition of elementary number theory involving x₁, x₂, ⋯, xₙ as free variables.
— Alonzo Church, *An Unsolvable Problem of Elementary Number Theory* (1936)
Lambda Calculus (λ-calculus)
Lambda Calculus (λ-calculus) stands at the very starting point of programming language theory.
In his 1936 paper, Alonzo Church introduced Lambda Calculus as a new formal system,
and used it to give a mathematically rigorous definition of the intuitive notion of Computability;
it is recognized, alongside the Turing Machine, as one of the most influential formal systems for defining computability.
The central result of this paper demonstrates that the Entscheidungsproblem, a difficult open problem in the mathematics of the time, is unsolvable.
The paper contains concepts foundational to functional programming, such as Anonymous Functions, Higher-Order Functions, and Function Application.
Lambda Calculus is quite difficult, but to summarize what many mathematicians and authors of computer science texts have said:
- Functions are treated as the most fundamental unit of computation.
- Turing Completeness is expressed using just three rules: variables, abstraction, and application.
However, Lambda Calculus at this stage had no concept of types.
The Untyped Lambda Calculus went on to become one of the foundations of later programming language theory, especially functional languages and computational models, and it was a model that could accept "anything" as input.
In other words, the original form of Lambda Calculus was flexible precisely because it had no types,
and this paper would later serve as one of the theoretical underpinnings of Dynamic Typing.
*Turing Completeness: a system with Turing Completeness can solve every computational problem that a Turing Machine can solve.
**Turing Machine: an abstract machine conceived by Alan Turing; modern computers follow this same structure.

A FORMULATION OF THE SIMPLE THEORY OF TYPES
(1940, Alonzo Church)
Four years later, Alonzo Church proposed a system that added "Types" to Lambda Calculus.
Through this work, the concept of a "Type System" that assigns input and output types to each variable and function made its first appearance,
and this research would become the skeleton of what we now call Static Typing.
So now we have seen the historical origins of both Static Typing and Dynamic Typing.
Let's now look at what a Lambda Calculus expression actually is.
But before that, let's use some middle-school mathematics to make it a little easier to grasp.
Understanding Lambda Expressions Through Middle-School Math
f(x) = x + 2 (a named function)
Let's convert this into a lambda expression.
λx. x + 2 (an unnamed "anonymous" function)
λx. x + 2 means "a function that takes x as input and adds 2 to it."
f(3) = 3 + 2 = 5
(λx. x + 2) 3 = 3 + 2 = 5 -- the result of applying it is the same
In the math you learned in school, every function must be given a name like f or g, but in Lambda Calculus, a function itself is a value, so it can participate in computation without a name.
Think of it the way you first learned about substituting values for x and y back in middle school.
Similarities
Mathematical variables like x and y play the role of receiving input. The x in λx. x + 2 also acts like an input-receiving variable. In other words, when a value is given, it is substituted in and the function is evaluated.
Differences
x and y are simple variables, but lambda (λ) also serves the role of "defining a function."
λx. x + 2 itself can function as a single "Anonymous Function."
Sharp-eyed readers will have noticed that lambda (λ) is simply the notation for declaring x as an input variable, and λx means "from here on, I'll use $x$ as the input."
To be a bit more specific, λ is the symbol for expressing an 'Anonymous Function,'
and a lambda expression is simply a function that takes some value as input, performs a specific computation, and returns the result!
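The same correspondence can be run directly in Python, where the lambda syntax differs but the idea is identical (the name `add_two` is mine, for illustration):

```python
# Named function, as in school math: f(x) = x + 2
def f(x):
    return x + 2

# Anonymous function, as in lambda calculus: λx. x + 2
add_two = lambda x: x + 2

print(f(3))                    # 5
print(add_two(3))              # 5
print((lambda x: x + 2)(3))    # applying an unnamed function directly: 5
```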
(A playground example to help with basic Lambda Calculus)
Now that we have a decent grasp of the basics, let's work through some actual Lambda Calculus.

Untyped Lambda Calculus (Untyped-λ-Calculus)
λx. x (identity function)
λx. λy. x + y (a function that adds two values)
→ Since there is no type system, any value can be passed, but type errors cannot be caught!
Here, x + y is pseudonotation used to aid understanding.
Pure Lambda Calculus has neither numbers nor addition operators.
In practice, even the number 2 is represented as a function of the form λf. λx. f (f x), which is known as Church Encoding.
Readers who plan to look up the original paper should keep this in mind.
Because there are no types, any value can be passed in, but by the same token, type errors cannot be caught in advance.
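The Church Encoding mentioned above can be sketched in Python, where functions are first-class values just as in the untyped calculus (helper names such as `to_int` are my own, for illustration):

```python
# Church numerals: the numeral n means "apply f to x, n times"
zero = lambda f: lambda x: x
one  = lambda f: lambda x: f(x)
two  = lambda f: lambda x: f(f(x))

# Successor: succ n = λf. λx. f (n f x)
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))

# Convert a Church numeral to a Python int by counting applications
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(two))        # 2
print(to_int(succ(two)))  # 3
```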
Typed Lambda Calculus (typed-λ-Calculus)
λx: Int. x + 1 // A function that takes an integer and adds 1
λx: Bool. if x then 1 else 0 // Takes a boolean value and converts it to an integer
→ By specifying types on variables, if a value of the wrong type is passed in, the error can be detected before the computation even runs!
Actual Notation vs. Pseudonotation
-- Pseudonotation
λx: Int. x + 1
-- Actual notation 1: STLC style with primitive operations
λx: Nat. succ x
-- succ : Nat → Nat (successor function)
-- succ x corresponds to x + 1
-- Pure lambda calculus has no + operator, so succ is declared as a primitive function
-- Actual notation 2: Church Encoding (most pure form)
λf: Nat → Nat. λx: Nat. f x
-- The number 1 is represented as λf. λx. f x
-- Even succ is removed, and numbers are represented by the number of function applications
-- Pseudonotation
λx: Bool. if x then 1 else 0
-- Actual notation: Church Boolean
true = λt: T. λf: T. t -- true returns the first of two inputs
false = λt: T. λf: T. f -- false returns the second of two inputs
λx: (T → T → T). x (λf: Nat → Nat. λn: Nat. f n) (λn: Nat. n)
-- If x is true, it returns a function equivalent to succ; if false, it returns the identity function
| Level | Description |
|---|---|
| Pseudonotation | Uses familiar operators such as `+` and `if` for readability. |
| STLC + Primitive Functions | Declares and uses primitive operations like `succ`. |
| Pure Church Encoding | Numbers, booleans, and operations are all expressed as functions. Paper-level formalism. |
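The Church Booleans above can likewise be run in Python. This is an untyped sketch: the type annotations from the STLC version are dropped, since Python checks nothing ahead of time, and the name `church_if` is my own.

```python
# Church booleans: a boolean is a function that selects between two inputs
true  = lambda t: lambda f: t    # returns the first of two inputs
false = lambda t: lambda f: f    # returns the second of two inputs

# if-then-else is just application: apply b to the two branches
church_if = lambda b: lambda then: lambda else_: b(then)(else_)

print(church_if(true)(1)(0))   # 1
print(church_if(false)(1)(0))  # 0
```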

You can learn Lisp in one day. But if you know Fortran, it will take three days.
— Marvin Minsky
And these concepts were realized in the first high-level languages to evolve from assembly: FORTRAN and Lisp.
| Language | Year | Typing | Characteristics |
|---|---|---|---|
| FORTRAN | 1957 | Static Typing | Type checking at compile time, fast execution speed |
| Lisp | 1958 | Dynamic Typing | Type determined at runtime, flexible data structures |
1) FORTRAN: The Beginning of Static Typing
The first high-level language, invented in 1957 for scientific computing.
Characteristics
- Adopts a static type system.
- Performs type checking at compile time.
2) Lisp: The Beginning of Dynamic Typing
1958: A functional programming language developed by John McCarthy.
Characteristics
- Adopts a dynamic type system.
- Performs type checking at runtime.
The history of programming languages is endless, so to put it simply:
Static Typing was implemented in FORTRAN as a compile-time type checking mechanism,
while Dynamic Typing was implemented by determining types dynamically at runtime and allowing types to change during execution.
As a side note, the type system introduced in FORTRAN was not a strict type system,
and strict static typing appeared in the ALGOL family of languages; the type system itself continued to evolve endlessly over the following decades.
Even so, the broad framework of Static Typing and Dynamic Typing remained intact.
After that, Static Typing and Dynamic Typing were further subdivided,
and subdivided again. Let's look at the different ways they can be categorized.

(Washington CSE583)
Classification by Type Inference
public class TypeInferenceExample1
{
public static void main(String[] args)
{
// Explicit type declaration (traditional approach)
int number = 10;
double pi = 3.14;
String message = "Hello, Type Inference!";
// Type inference (using the `var` keyword)
var inferredNumber = 20; // Inferred as `int`
var inferredPi = 3.14159; // Inferred as `double`
var inferredMessage = "Welcome to Java!"; // Inferred as `String`
System.out.println("Number: " + number + ", Inferred Number: " + inferredNumber);
System.out.println("Pi: " + pi + ", Inferred Pi: " + inferredPi);
System.out.println("Message: " + message + ", Inferred Message: " + inferredMessage);
}
}
(The example shown is Java's var keyword as an illustration of type inference.)

Type Inference
Type Inference refers to the ability of a compiler (or interpreter) to automatically infer the types of variables, functions, and expressions by analyzing the context or code structure, even when the programmer has not explicitly declared them.
In Types and Programming Languages, it appears as a section called "Type Reconstruction," which is noted as being used interchangeably with the term "Type Inference." The distinction between the two is as follows.
Type Inference: the case where the compiler can fully determine a type from context alone.
Type Reconstruction: the case where some type information is already given and the rest must be inferred.
In academia, however, the two are mostly treated as synonyms, and TAPL explicitly notes this.
Languages are further divided into two categories depending on how types are declared.
Explicit Typing: the developer must declare types manually. Code readability is good, but it can become verbose.
Implicit Typing: the compiler infers types automatically. Code becomes more concise, but debugging can be difficult when inference errors occur.
To understand this more deeply, let's also explain the Hindley-Milner Type System.

(Rules expressing the Hindley-Milner Type System)
The expressions above represent the declarative type system of the Hindley-Milner Type System and depict the rules for type judgment and type inference.
| Rule | Description | Relationship to the Hindley-Milner Type System |
|---|---|---|
| [Var] Variable Rule | A variable `x` has the type recorded for it in the context `Γ`. | The basic rule for inferring the type of a variable. |
| [App] Function Application Rule | If `e₁ : τ₁ → τ₂` and `e₂ : τ₁`, then `e₁ e₂ : τ₂`. | Defines how types are inferred when a function is applied. |
| [Abs] Lambda Abstraction Rule | If `e : τ₂` under the assumption `x : τ₁`, then `λx. e : τ₁ → τ₂`. | The approach to function type inference based on Lambda Calculus. |
| [Let] Let-Binding Rule | In `let x = e₁ in e₂`, the generalized type of `e₁` is assumed for `x` when typing `e₂`. | Type inference via `let`-bindings. |
| [Inst] Instantiation Rule | If an expression has a polymorphic type `σ`, it also has every type that is an instance of `σ`. | Related to Polymorphism type inference. |
| [Gen] Polymorphic Generalization Rule | When a type variable does not occur free in the context `Γ`, the type can be generalized with `∀`. | Explains Let-Polymorphism in Hindley-Milner. |
The Hindley-Milner system can be understood as one that provides powerful type inference.

(Constraint-Based Typing)
A Constraint-Based Type System is an approach that collects constraints during the type inference process and then resolves them all at once.
In other words, it gathers all the required relationships (constraint conditions) and solves them to determine the final types.
Rather than matching types immediately at the point of function application, it first collects type constraints and resolves them later through Unification.
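A minimal sketch of constraint solving by unification. The function names (`unify`, `resolve`) and the tuple-based type representation are my own illustration, not a specific library, and the occurs check is omitted for brevity.

```python
# Types are encoded as: a lowercase string is a type variable ('a'),
# a tuple ('->', t1, t2) is a function type, and ('int',) is a base type.

def resolve(t, subst):
    """Follow the substitution until t is no longer a bound variable."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Extend subst so that t1 and t2 become equal, or raise TypeError."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):                # t1 is an unbound type variable
        return {**subst, t1: t2}
    if isinstance(t2, str):                # t2 is an unbound type variable
        return {**subst, t2: t1}
    if t1[0] == '->' and t2[0] == '->':    # unify parameter and result types
        subst = unify(t1[1], t2[1], subst)
        return unify(t1[2], t2[2], subst)
    raise TypeError(f"cannot unify {t1} with {t2}")

# Constraint collected from applying a function of type a -> a to an int,
# where the result has the yet-unknown type b:
s = unify(('->', 'a', 'a'), ('->', ('int',), 'b'), {})
print(resolve('a', s))   # ('int',)
print(resolve('b', s))   # ('int',)
```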
Depending on whether types are explicitly declared in code by the developer (Explicit) or automatically inferred by the compiler or interpreter (Implicit), languages are then categorized further.
Strong Typing / Weak Typing
If static vs. dynamic is a question of "when type checking is performed," then strong vs. weak typing is a question of "how strictly type conversions are controlled." These two axes are independent of each other.
Strong Typing: Implicit Casting is strictly restricted. A type mismatch produces an error.
Weak Typing: Type conversions are permissive, and unexpected behavior can occur at runtime.
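Python sits on the strong side of this axis: implicit conversion between unrelated types is refused, where a weakly typed language such as JavaScript would silently coerce (`"1" + 1` evaluates to `"11"` there).

```python
# Strong typing: mixing str and int is an error, not a silent coercion
try:
    result = "1" + 1
except TypeError as e:
    print("no implicit conversion:", e)

# The conversion must be requested explicitly
print("1" + str(1))   # '11'
print(int("1") + 1)   # 2
```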
Type System Classification by Language
Note: the classification criteria follow the conventions commonly accepted in the community and may vary depending on language version and context!
| Language | Static / Dynamic | Strong / Weak | Explicit / Implicit | Hindley-Milner Applied | Notes |
|---|---|---|---|---|---|
| C | Static | Weak | Explicit | Not applied | Implicit type conversions are permissive. Type reinterpretation via pointers is possible. |
| C++ | Static | Weak | Explicit / Implicit | Not applied | Similar to C, but partial type inference is supported via the `auto` keyword (C++11 and later). |
| Java | Static | Strong | Explicit / Implicit | Not applied | Local type inference supported via the `var` keyword (Java 10 and later). |
| C# | Static | Strong | Explicit / Implicit | Not applied | Partial dynamic typing supported via `dynamic`; local type inference via `var`. |
| Go | Static | Strong | Implicit | Not applied | Type inference supported via the `:=` short declaration. |
| Rust | Static | Strong | Implicit | HM-based, with extended concepts such as lifetimes | Uses the `let` keyword with strong local type inference. |
| Swift | Static | Strong | Implicit | Influenced by HM | Strong type inference using `let` and `var`. |
| Kotlin | Static | Strong | Implicit | Influenced by HM | Stronger type inference than Java. Built-in null-safety type system. |
| Haskell | Static | Strong | Implicit | Fully applied | Complete type inference based on HM. Explicit type declarations are conventional, not required. |
| OCaml | Static | Strong | Implicit | Fully applied | HM type system similar to Haskell. The prototype of the ML family. |
| TypeScript | Static | Strong | Explicit / Implicit | Not applied | The statically typed version of JavaScript. Supports context-based type inference. |
| Python | Dynamic | Strong | Implicit | Not applied | Runtime type checking. Partial static type hints supported via the `typing` module. |
| JavaScript | Dynamic | Weak | Implicit | Not applied | Strong implicit type conversions. A distinction exists between the `==` and `===` operators. |
| Ruby | Dynamic | Strong | Implicit | Not applied | Implicit type conversion is not allowed. Evaluating `1 + "1"` raises a TypeError. |
| Lisp | Dynamic | Strong | Implicit | Not applied | Types are checked strictly at runtime. Implicit conversions between types are restricted. |

A language that doesn't affect the way you think about programming, is not worth knowing.
— Alan J. Perlis
Closing Thoughts...
The type system of a programming language is not merely a technical tool; it is the crystallization of ideas and philosophy.
Beginning with Alonzo Church's Lambda Calculus and passing through the rivalry between FORTRAN and Lisp,
two poles emerged: Static Typing in pursuit of stability, and Dynamic Typing in pursuit of flexibility.
These two currents competed and converged over decades to form the foundation of modern languages, and they have since moved toward a new synthesizing philosophy known as gradual typing.
In practice, we tend to pick a language based on the libraries and frameworks most commonly used in the industry we want to enter.
But if you understand the type system, you can gain a deeper understanding of the libraries and frameworks that derive from a language's design intent,
and you will acquire the wisdom to choose the right tool (programming language) for the problem at hand.