
Write your own RPN calculator from scratch in Haskell

2025-07-11

This tutorial shows you the thought process of creating a complete application in Haskell. I won't explain concepts like recursion or higher-order functions in great detail, but I will give some examples and explain how and why we use them. To better understand these concepts, I will give links to other tutorials that cover them.

Still, if you have never written a line of Haskell, fear not. I will go slowly and introduce every concept I use (though briefly for some).

The beginning

This chapter is very theoretical, but important nonetheless. It lays out the foundations of Haskell and will help you, time and again, to understand complex functions. Haskell is a functional and declarative language. These 2 aspects of the language are very important. It is deeply inspired by lambda calculus, which we will look at in a bit.

Declarative languages, and lambda calculus

Declarative means that a program is not seen as a list of things to do, but as a description of what the thing we want to compute is. So when you say a = someExpression, you are saying exactly that "a is someExpression". This is not "put the value of someExpression inside a memory cell labeled a"; that last sentence is how you read the = sign in imperative languages.

This means there is some kind of inherent "immutability". You never put values in memory places; you always say what things are, and that stays the same forever. This does not mean a function will always return the same result, only that for the same input it will give the same output. This concept of "only changing depending on the input" is why declarative languages are interesting: they are very predictable.

Functional language

Functional means we treat functions as ordinary values that we can create, modify, take as parameters, or return as results. This concept is really interesting, as it gives us built-in command, strategy, or even builder patterns (for those of you who like object-oriented design patterns hehe), and, with a bit of syntactic sugar, a very powerful way of writing complex programs.

The most famous example of a function taken as a parameter is the function map. map takes 2 parameters:

  • a function that transforms objects
  • a list of those objects

and it returns a list containing the objects transformed by the given function.
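For example, here is a quick sketch of map in action (the names double and doubledList are ours, just for illustration):

```haskell
-- double is the transformation; map applies it to every element of the list
double :: Int -> Int
double n = n * 2

doubledList :: [Int]
doubledList = map double [1, 2, 3]

main :: IO ()
main = print doubledList  -- [2,4,6]
```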

Lambda calculus

One of the first (if not the first) declarative programming languages is the lambda calculus. In lambda calculus, there are only 3 constructs:

  • Variables (names), like a, b, x, f, jeanclaude...
  • Lambda abstractions, like (λx.E). You can interpret these as "a function that takes an input called x and returns something of the form E" (some expression, with x possibly used in it). Lambda abstractions gave their name to the "anonymous functions" of modern languages: when you define a "lambda", the name comes from here.
  • Applications, like (f E). You can interpret them as "function calls". In this example, we call the function f with the expression E as its first parameter.
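Haskell has these constructs built in: a lambda abstraction is written with a backslash (chosen to look like λ), and application is just putting the argument after the function, exactly as in lambda calculus. A small sketch (addSelf is our own illustrative name):

```haskell
-- the lambda abstraction (λx. x+x), written in Haskell:
addSelf :: Int -> Int
addSelf = \x -> x + x

-- the application (addSelf 10): argument placed after the function
result :: Int
result = addSelf 10

main :: IO ()
main = print result  -- 20
```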

A bit of semantics

The semantics of a language is the set of rules used to compute a program. For example, in a language like JavaScript, there is a precise definition of what a variable declaration does, and even of how a whole program must be computed.

let x = 0;

This line will roughly do this:

  • allocate a place in memory
  • give it the name x so that we can access this place again
  • put the numerical value 0 in it

And if there were multiple lines, we would execute the actions of each line in order (provided we don't do any branching). So there is a concept of a "reading head" that moves forward after each action. The semantics of JavaScript is really complex, there are whole papers and theses about it, but every language has one, and knowing it is important to understand why things work.

Lambda calculus (and Haskell too) also has a semantics. There are 2 rules (yes, only 3 constructs and 2 rules and you have a Turing-complete language!), but we will focus on only one rule, as the other is there to disambiguate things when multiple expressions have the same name.

This rule is called β-reduction (beta reduction), but don't fear its name: it is a very simple "search and replace" rule. For example, let's use the β-reduction rule to compute this lambda calculus expression (extended with arithmetic operations):

(λx. x+x) 10

we would take the expression x+x, and replace every occurrence of x with 10, resulting in the new expression 10+10. Some expressions need multiple β-reduction steps to be computed. For example this one:

((λf. λx. f (f x)) (λy. y×2)) 1

For convenience we will reduce it from the innermost application to the outermost. The first β-reduction will take the expression (λx. f (f x)) and replace every f with (λy. y×2), leaving us with the expression:

(λx. (λy. y×2) ((λy. y×2) x)) 1

Then, the second reduction we perform is reducing ((λy. y×2) x), resulting in x×2. Once this reduction is done, the full expression is:

(λx. (λy. y×2) (x×2)) 1

Then do the same with the expression (λy. y×2) (x×2):

(λx. (x×2)×2) 1

And finally reduce the last application:

((1×2)×2)

And we have 4: we successfully computed "double the value 1, twice". Everything is computable using this technique; you just need a lot of β-reductions. Knowing this is not that useful when you program in the real world, but it is helpful when you need to compute part of a program by hand to understand how it behaves. The same applies to imperative languages: sometimes you just have to execute your lines in your head to understand why there is a bug, for example.
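As a side note, the "double twice" expression above translates directly to Haskell, and GHC performs essentially these reduction steps for us (the name twice is ours, just for illustration):

```haskell
-- (λf. λx. f (f x)): apply a function twice
twice :: (a -> a) -> a -> a
twice f x = f (f x)

main :: IO ()
main = print (twice (\y -> y * 2) 1)  -- ((1×2)×2) = 4
```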

Exercise

Try reducing these expressions:

E1 = ((λx.(λy.x)) 0) 1
E2 = (λx. x x) (λx. x x)
E3 = ((λx.(λy.x)) 0) ((λx. x x) (λx. x x))

Solution

E1 = ((λx.(λy.x)) 0) 1
E1 = (λy.0) 1
E1 = 0

In E1, the value 1 is bound to the name y, but y never appears in the lambda's body, so it is never used.

E2 = (λx. x x) (λx. x x)
E2 = (λx. x x) (λx. x x)
E2 = (λx. x x) (λx. x x)
E2 = (λx. x x) (λx. x x)
...

Yes, E2 never reduces to a value, it loops infinitely!

E3 = ((λx.(λy.x)) 0) ((λx. x x) (λx. x x))
E3 = (λy.0) ((λx. x x) (λx. x x))
E3 = 0

You may think E3 would loop infinitely too because it contains E2, but since E2 disappears we never need to reduce it, so we never loop.
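This is exactly how Haskell's lazy evaluation behaves: an argument that is never used is never reduced. A sketch (constZero and loopForever are our own illustrative names):

```haskell
-- (λx. λy. x): returns its first argument, ignores the second
constZero :: a -> b -> a
constZero x _ = x

-- the Haskell analogue of E2: reducing this never terminates
loopForever :: Int
loopForever = loopForever

main :: IO ()
main = print (constZero (0 :: Int) loopForever)  -- prints 0, the loop is never forced
```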

Functions and simple code structures

In Haskell, everything is an expression, and every expression has a type (the type of the object it reduces to). For example:

10 -- this is a number
True -- this is a boolean (True or False)
10 < 5 -- this is also a boolean, just not yet computed
"why am i here" -- there are strings
'!' -- and also characters. note the " vs ' , they really are different types
    -- in fact strings are encoded as lists of characters
[1, 2, 3, 4] -- lists
(10, "help me", [1, 2]) -- and there are tuples, collections of objects of different types
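You can also write the type of an expression explicitly with the :: sign. A sketch annotating the examples above (the names are ours, just for illustration):

```haskell
aNumber :: Int
aNumber = 10

aBool :: Bool
aBool = 10 < 5          -- computes to False

aString :: String       -- String is just a synonym for [Char]
aString = "why am i here"

aChar :: Char
aChar = '!'

aTuple :: (Int, String, [Int])
aTuple = (10, "help me", [1, 2])

main :: IO ()
main = print aBool  -- False
```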

Types in details

Pattern matching

Higher order functions

Truth about Pyramids functions
