In the past month, I have been learning and writing in Haskell. My journey with Haskell began way back in my junior year of high school. I began reading the book “Get Programming With Haskell” by Will Kurt. But once I reached Chapter 17, on Semigroups and Monoids, I found it too abstract and decided to pause there, perhaps returning in the future... Recently, I decided to pick it up again, and I want to reflect on what I’ve learned.
Note that this article glosses over many important functional programming concepts and syntax details. Instead, I chose to focus on the ideas that interested me most.
Lambda Calculus
Haskell is largely inspired by ideas from lambda calculus, a model of computation equivalent in power to a Turing machine. A lambda function is written like λx. x + 1, where the λ denotes the declaration of a function, and the period separates its parameter from its body.
I’ll share an example of how lambda calculus parallels Haskell. In lambda calculus, to define functions with multiple parameters, we nest lambdas together: λx. λy. x + y.
Now, here is how we write the signature of a function that adds two integers:
add :: Int -> (Int -> Int)
In practice, the parentheses are omitted. As we can see, functional programmers treat functions with multiple parameters as syntactic sugar for a chain of single-argument functions. Concepts like partial application may seem alien on first impression, but they feel natural to someone with a background in lambda calculus.
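A small sketch of how this plays out in practice (the names `add` and `addFive` are just illustrations):

```haskell
-- A curried addition function: its type is really Int -> (Int -> Int)
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: supplying only the first argument
-- yields a new function of type Int -> Int
addFive :: Int -> Int
addFive = add 5

main :: IO ()
main = print (addFive 10)  -- prints 15
```

Because every function is really single-argument, partially applying `add` requires no special syntax at all.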
Types
The type system in Haskell is one of its most interesting and expressive features. First, Haskell uses algebraic data types (ADTs), which are extremely expressive. The code snippet below illustrates how a linked list may be defined.
data List a = Nil | Cons a (List a)
Here is how you define a binary tree:
data BinaryTree a = Nil | Node (BinaryTree a) a (BinaryTree a)
ADTs make recursive data types elegantly succinct to define. That’s why they are especially useful in programs that have complex, recursive structures, such as compilers.
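To show how an ADT pairs with pattern matching, here is a minimal sketch of a binary search tree built on the definition above (I rename `Nil` to `Leaf` so it does not clash with the list definition, and the functions `insert` and `toList` are my own illustrations):

```haskell
data BinaryTree a = Leaf | Node (BinaryTree a) a (BinaryTree a)

-- Insert into a binary search tree: one equation per constructor
insert :: Ord a => a -> BinaryTree a -> BinaryTree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r
  | x > v     = Node l v (insert x r)
  | otherwise = t

-- Flatten the tree with an in-order traversal
toList :: BinaryTree a -> [a]
toList Leaf         = []
toList (Node l v r) = toList l ++ [v] ++ toList r

main :: IO ()
main = print (toList (foldr insert Leaf [5, 3, 8, 1]))  -- prints [1,3,5,8]
```

Each function is just one case per constructor; the recursive structure of the type dictates the structure of the code.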
Type classes represent another cornerstone of Haskell's design, offering an elegant approach to polymorphism. They are similar to abstract classes in traditional object-oriented languages.
class Eq a where
(==) :: a -> a -> Bool
In this example, any datatype can become an `Eq` type by supplying an implementation of the (==) operator in an `instance` declaration. In addition, functions can specify constraints that require their parameters to be instances of some particular type class.
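A quick sketch of both ideas, using a made-up `Color` type and a hand-rolled `elem'` function:

```haskell
data Color = Red | Green | Blue

-- Making Color an instance of Eq by implementing (==)
instance Eq Color where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False

-- A constraint: elem' works for any type a that is an instance of Eq
elem' :: Eq a => a -> [a] -> Bool
elem' _ []     = False
elem' x (y:ys) = x == y || elem' x ys

main :: IO ()
main = print (elem' Green [Red, Blue, Green])  -- prints True
```

The `Eq a =>` constraint is what lets `elem'` stay polymorphic while still being allowed to call (==).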
Monads
Monads are actually a straightforward concept when no category theory is involved: they are essentially a pipeline design pattern. The most essential function that monads define is the bind operator (>>=), which is analogous to something like `Promise.then` in JavaScript.
readFile "in.txt" >>= print
`readFile` reads the contents of a file, but since IO operations sometimes fail, it returns a string wrapped in an IO monadic context rather than a bare string. With the bind operator, if `readFile` succeeds, its output is passed to `print`; if it does not, an error propagates. This pipeline design pattern makes it easier to deal with real-world data systems where operations may fail.
Monads are also the abstraction behind Haskell’s syntactic sugar for imperative-style code: `do` notation.
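A minimal sketch of the same pipeline idea in a pure monad, `Maybe`, where failure short-circuits the chain (the functions `safeDiv`, `calcBind`, and `calcDo` are my own illustrations):

```haskell
-- safeDiv fails (Nothing) on division by zero
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chaining with explicit binds: any Nothing short-circuits the pipeline
calcBind :: Int -> Maybe Int
calcBind x = safeDiv 100 x >>= \a -> safeDiv a 2

-- The same pipeline in do notation, which desugars to the binds above
calcDo :: Int -> Maybe Int
calcDo x = do
  a <- safeDiv 100 x
  safeDiv a 2

main :: IO ()
main = do
  print (calcDo 10)  -- prints Just 5
  print (calcDo 0)   -- prints Nothing
```

The `do` block reads like imperative code, but it is just sugar over (>>=).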
Lazy evaluation
fibs = 0 : 1 : [ a + b | (a, b) <- zip fibs (tail fibs)]
Perhaps Haskell's most distinctive feature is its lazy evaluation. The code above defines the Fibonacci sequence. Notice two things: first, `fibs` uses itself in its own definition, which is only possible because of lazy evaluation; second, the list is infinite. No computation is done unless there is an operation that forces evaluation, such as indexing.
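To make this concrete, here is the definition above with two forcing operations added:

```haskell
-- An infinite list: only the elements we actually demand get computed
fibs :: [Integer]
fibs = 0 : 1 : [a + b | (a, b) <- zip fibs (tail fibs)]

main :: IO ()
main = do
  print (take 10 fibs)  -- prints [0,1,1,2,3,5,8,13,21,34]
  print (fibs !! 30)    -- indexing forces evaluation up to element 30
```

Neither line computes more of the list than it needs; the rest of `fibs` remains an unevaluated thunk.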
Practicality
Although Haskell is an amazing language that expanded my horizons, using it forced me to question its practicality. Its core benefit is that functions can be easily reused and composed, thanks to features like partial application.
Haskell programs are divided into two layers. The outer layer, which contains network requests, user inputs, and IO requests, is unpredictable and error-prone, and requires error handling. The inner layer is composed of pure functions that do not have side effects, and contain code for business logic and data manipulation.
In the real world, however, many aspects of the language make it impractical.
- Complexity: Despite what Haskellers preach, excessive complexity impedes productivity. Haskell encourages separating functions into many composable parts and then recombining them. Although this strategy can occasionally be beneficial, Haskell takes it to the extreme, which can make a codebase difficult to navigate and harder to manage.
- Design: In practice, business logic naturally gravitates toward imperative patterns, and embracing mutable state can often eliminate convoluted recursive structures. Haskell does support imperative style and mutable variables via monads, but it is unnecessarily complicated compared to a language like Python.
- Efficiency: Declarative languages are double-edged. The declarative style is widely used in query languages because it hides complexity and allows the compiler to optimize commonly used operations. Nevertheless, for a general-purpose language, there are two significant drawbacks. First, users are forced to trust that the compiler can optimize their code. Second, runtime complexity is ambiguous, which is something most software engineers cannot compromise on.
- Debugging: This was the deal breaker for me. It is too difficult to debug a Haskell program because of the lack of support for a proper error system.
As much as I loved learning and writing Haskell, I have to admit that its place in the industry is limited due to its uncompromising insistence on purity. Nonetheless, it has inspired many of the languages we use today and solidified its throne in academia.