Towards Faster Iteration in Industrial Haskell
2020-04-14
I’ve been writing Haskell in industry for the last eight years, at four different employers. I’ve been a full-time Haskell programmer since 2014. Though I never anticipated this turn of events, I’m certainly not complaining: I truly like writing Haskell, and I’m experienced enough in it to write at or above the speed at which I can write the imperative languages I used before Haskell. However, I’ll be frank: Haskell hasn’t always succeeded when I’ve used it in practice. As my deployed Haskell code has aged, some of it has been rewritten in other languages (the adage about things being replaced with shell scripts is as true as it is hackneyed), some of it has been discarded entirely, and some of it was written at companies that no longer exist. This would be true of any other language, of course, but Haskell isn’t like other languages, both in its successes and its failures.
The realm of technical experience that is ‘Haskell in production’ admits too many perspectives and nuances to discuss in full within a single blog entry. This particular post concerns one industry perspective: the speed at which a team of programmers can iteratively improve and extend a given codebase. Throughout my Haskell career, I’ve heard a consistent refrain from team leads and management: Haskell codebases don’t iterate quickly enough, especially at early-stage startups where fast iteration is expected in the face of tight deadlines. Their observation is rooted in fact. Haskell is a compiled, strongly-typed language, and these features mean that development time is biased towards the initial implementation of the system, in exchange for the time saved by eliminating many classes of errors at compile time. (It’s worth mentioning that Haskell programs, as a rule, are well-behaved in production; they’re more performant and reliable than services written in the dynamically-typed languages that I have used: Ruby, Python, JS, and the other usual suspects.) It must be said that iteration speed is neither exclusively nor axiomatically good, for the simple reason that projects and systems built in haste often end up as liabilities. Yet at the same time, no one with industrial experience would deny that there exist products for which iteration speed is a primary concern. Haskell doesn’t make fast iteration impossible, but neither does it make it easy without paying sustained attention to developer workflows and practices.
This post is a collection of the things I’ve learned, recorded in the hope that it’ll shed light on the Haskell experience and, just maybe, help someone else to introduce Haskell into their workplace. Your experiences and mileage may, of course, vary.
Decision fatigue is the enemy: choose one way to do it.
Few languages are more syntactically and semantically flexible than Haskell; the sheer number of ways you can write Haskell code borders on the pathological. Declarative or imperative, simple or fancy, with `cabal` or `stack`, built with `mtl` or `fused-effects` or `rio`: there exists no one definitive way to approach a given problem. This isn’t necessarily bad—no language designer, no matter how prescient, can foresee all the ways in which users will put a language to work, and Haskell’s flexibility plays a large part in its versatility in the face of real-world problems. However, the burden of making those decisions rests on you, the programmer, and that makes you particularly vulnerable to decision fatigue, the state of poor judgment induced by the mental depletion that comes after sustained and repeated decision-making.
I advise you in the strongest possible terms to define and stick to an agreed-upon subset of Haskell and the Haskell library ecosystem. Stick to this subset stringently and fervently, and remain consistent across repositories and projects: if you switch effect systems every project, you’re just digging yourself into a hole. Should the fell clutch of circumstance compel you to depart from or modify the bounds of this subset, consult your team, document your changes extensively, and ensure that said changes are applied consistently and immediately throughout all applicable projects. The more questions you answer up front, the fewer decisions you’ll have to make when the chips are down, and the more guidance you’ll provide to future engineering efforts. If you’re interested, I use `cabal`, `fused-effects`, Emacs `dante-mode` and `lsp-haskell` set up to display errors inline, `optics` for lenses, the `streaming` ecosystem for producer-consumer problems, `ormolu` for formatting (more on that later), and pretty much all of the language extensions save the Lovecraftian monstrosities like `IncoherentInstances`.
Don’t waste time hand-formatting your code—ever.
This is a special case of the above, but I thought it was worth pointing out, perhaps because my time spent writing Haskell has given me a specific and fervent ideal of what Haskell should look like. This is a bad thing: in industry, there are few things less important than syntactic minutiae, given that little code survives long enough for it to matter. My day-to-day Haskell experience became substantially simpler and more pleasant when I swallowed my aesthetic objections and delegated my code’s formatting entirely to an external formatter; I recommend that you choose such a formatter and ensure that everyone writing Haskell has this formatter integrated into their workflow. No matter what formatter you use—`ormolu`, `brittany`, `hindent`, `stylish-haskell`—it should be entirely responsible for how your code looks; you should never spend time spacing or reflowing code on the page. If you’ve set up a formatter and are still compelled to format your code by hand, you should switch to a more opinionated or aggressively-configured formatter.
As far as which formatter you use, I recommend `ormolu`, precisely because it admits almost no configuration and by default optimizes for minimal, readable diffs. (Horizontal alignment of things like `case` branches and `LANGUAGE` pragmas looks beautiful, but it entails reformatting the entire sequence of branches or pragmas whenever one entry changes; it is simply not worth it.) My experience is that the most successful and widely-adopted code formatters (and here I am thinking of things like `rustfmt`, `gofmt`, and `black`) admit little to no configuration: the resulting code looks the same no matter who runs it, even as the underlying stylistic conventions change. A configurable formatter runs the risk of concentrating stylistic debates in the configuration file, which redirects, rather than eliminates, decision fatigue (though a configurable formatter is, of course, better than no formatter at all). The aforementioned formatters for Rust, Go, and Python have an easier task than a Haskell formatter, as those language communities have official style guides agreed upon by the community as a whole. No such document exists in the Haskell community, and as such there’s a certain resistance to one-size-fits-all stylistic dicta. As regards `ormolu`’s syntactic choices, I trust in the judgment of my friends at Tweag I/O; I may not love the results 100% of the time, but eliminating formatting-related decision fatigue is worth the occasional grumble. Don’t believe me? Try it.
Identify your complexity budget and stick to it.
The Simple Haskell movement asserts that the benefits associated with sticking to standard Haskell 2010 are, in industry, worth the downsides. This is a worthy and meritorious perspective: even without GHC’s many wonderful extensions, Haskell 2010 is significantly more powerful than most other languages out there. (The exceptions are perhaps Rust and Swift, whose support for associated types provides for extremely rich standard libraries and idioms; I’m curious as to how the popularity of associated types will manifest itself in the lazy functional languages of the future, and in their standard libraries.) Consider that most languages don’t even provide algebraic data types. Even if you’re not a Haskell programmer, you should find that appalling. This out-of-the-box power is an opportunity for you, the programmer, to identify the minimum set of extensions you need to accomplish your goals in your provided timeframe. If you’re building, say, a Haskell CLI tool to automate parts of the code review process, there’s no reason to reach immediately for a library like `lens` or a language extension like `TypeFamilies`.
However, for large codebases in the domains where Haskell really shines (compilers, pipeline processing, domain-specific languages), Simple Haskell may not be enough to deliver maintainable code on schedule, because certain Haskell techniques require both fancy language extensions and a sufficient depth of knowledge on the part of the programmer. Take the case of datatype-generic programming atop `GHC.Generics`. The `GHC.Generics` idioms are, by a long shot, the best way to eliminate boilerplate without compromising type safety, using runtime casts, or requiring Template Haskell. However, effective use of `GHC.Generics` requires a grasp of the `TypeFamilies` extension, an understanding of associated types as type-level functions, sufficient Haskell familiarity to understand the structures generated by `Generic` instances, higher-kinded types (`Generic1`), and `PolyKinds` (to take advantage of `Generic1`’s polymorphic kinds). Make no mistake about it: this is hard, even for people who’ve spent considerable time learning Haskell, much less those coming to it from imperative backgrounds, where datatype-generic programming is usually as simple as taking advantage of runtime reflection to access and modify objects’ instance variables.
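To give a flavor of what that machinery buys you, here is a minimal sketch of the `GHC.Generics` idiom (the class and the `Expr` type are invented for illustration, not taken from any production codebase): the traversal is written once against the representation types, and each new node type opts in with `deriving Generic` plus an empty instance. The sketch sticks to plain `Generic`; the `Generic1`/`PolyKinds` layer mentioned above sits atop the same ideas.

```haskell
{-# LANGUAGE DefaultSignatures #-}
{-# LANGUAGE DeriveGeneric     #-}
{-# LANGUAGE FlexibleContexts  #-}
{-# LANGUAGE TypeOperators     #-}

import GHC.Generics

-- A class with a generic default: new types need only `deriving Generic`
-- and an empty instance to get a working implementation.
class LeafCount a where
  leafCount :: a -> Int
  default leafCount :: (Generic a, GLeafCount (Rep a)) => a -> Int
  leafCount = gleafCount . from

-- The worker, defined once over the representation types.
class GLeafCount f where
  gleafCount :: f p -> Int

instance GLeafCount U1 where                            -- nullary constructors
  gleafCount _ = 1

instance GLeafCount f => GLeafCount (M1 i c f) where    -- metadata wrappers
  gleafCount = gleafCount . unM1

instance (GLeafCount f, GLeafCount g) => GLeafCount (f :+: g) where
  gleafCount (L1 x) = gleafCount x
  gleafCount (R1 y) = gleafCount y

instance (GLeafCount f, GLeafCount g) => GLeafCount (f :*: g) where
  gleafCount (x :*: y) = gleafCount x + gleafCount y

instance LeafCount a => GLeafCount (K1 i a) where       -- recurse into fields
  gleafCount = leafCount . unK1

data Expr = Lit Int | Add Expr Expr | Neg Expr
  deriving (Generic)

instance LeafCount Int where leafCount _ = 1
instance LeafCount Expr                                 -- generic default: no boilerplate
```

Adding a new constructor or node type costs one derived `Generic` and one empty instance, rather than another hand-written traversal.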
I’ve worked on Haskell codebases that, because they took insufficient advantage of expressive capabilities such as the above, crumbled under their own weight: something as trivial as adding a new data type to a syntax tree entailed extensive modification of dozens of hand-written traversal functions. Iteration was slow because the code was simple, not in spite of it. If your use case truly merits that complexity—it is, as of this writing, the case for me, but it was certainly not the case for my first production Haskell systems, which were little more than JSON-serving endpoints—then you bear the additional responsibility of identifying the minimum viable set of extensions you need to use (see the first point above). For example, before reaching for the type-level escapade that is `servant`, consider reaching for `scotty`; rather than pulling in an SQL layer like `beam` or `opaleye`, try getting away with `postgresql-simple`; rather than pulling out `lens`, consider using the substantially simpler `optics` or `microlens`, or no lens library at all.
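To make the “simpler tool first” advice concrete, here is a hedged sketch of what the low end of that spectrum looks like: a couple of `scotty` routes (the port and paths are invented), with none of `servant`’s type-level routing.

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- A tiny HTTP service in scotty: no type-level routing, no generated
-- client code, just handlers.
import Web.Scotty

main :: IO ()
main = scotty 8080 $ do
  get "/health" $
    text "ok"
  get "/hello/:name" $ do
    name <- param "name"        -- path capture, parsed on demand
    text ("hello, " <> name)
```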
Type system tricks come with trade-offs.
Beyond the simplest of syntactic extensions (`LambdaCase`, etc.), there are few GHC extensions that don’t involve some sort of trade-off. The more you ask GHC to do for you, the longer it’ll take, and the more likely you are to encounter cryptic type errors and/or mysterious slowdowns. I’ve encountered this in practice with the technique known as advanced overlap. In `semantic`, we deal with syntax trees, and complicated languages can have many different constituent parts: TypeScript has more than a hundred distinct syntax nodes. Though the typeclass under discussion used `GHC.Generics` to derive the “boring” instances that simply traversed children, even opting into these generic instances required a line of code per type.
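For readers who haven’t met it, “advanced overlap” roughly means dispatching instances through a closed type family rather than writing one instance per type. A minimal sketch follows; the class and the `HasCustom` family are invented for illustration and are not the actual code in `semantic`.

```haskell
{-# LANGUAGE AllowAmbiguousTypes   #-}
{-# LANGUAGE DataKinds             #-}
{-# LANGUAGE FlexibleContexts      #-}
{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE ScopedTypeVariables   #-}
{-# LANGUAGE TypeApplications      #-}
{-# LANGUAGE TypeFamilies          #-}
{-# LANGUAGE UndecidableInstances  #-}

-- A closed type family decides, per type, whether a hand-written case exists.
type family HasCustom a :: Bool where
  HasCustom Int = 'True
  HasCustom a   = 'False

-- A helper class indexed by that flag.
class Describe' (flag :: Bool) a where
  describe' :: a -> String

instance Describe' 'True Int where
  describe' n = "special case: " ++ show n

instance Show a => Describe' 'False a where
  describe' x = "generic case: " ++ show x

-- The public class has a single catch-all instance that dispatches on the
-- flag, so no per-type instances need to be written.
class Describe a where
  describe :: a -> String

instance Describe' (HasCustom a) a => Describe a where
  describe = describe' @(HasCustom a)
```

With this in place, `describe (3 :: Int)` hits the special case and every other `Show`-able type falls through to the default, with no per-type instances; the catch is that GHC must reduce `HasCustom` at every use site, which is where the compile-time cost shows up.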
Advanced overlap saved us from the irritation of O(n) lines of boilerplate instance declarations. Everything seemed hunky-dory until, during an investigation into slow compile times, we discovered that the repeated lookups associated with advanced overlap were not at all comparable in speed to standard `instance` declarations: the module that used advanced overlap on the TypeScript+XML grammar took twelve seconds to compile, which adds up when that module is recompiled hundreds of times per week. We were able to reduce that time by a factor of 4 by eschewing advanced overlap entirely, and simply writing out the instances in question instead. This leads me nicely into my subsequent point:
Optimize for fast compiles, fast deploys, and cached builds.
GHC is a magnificent piece of software; given the considerable effort required to translate Haskell source into efficient machine code, it’s spectacular that it can work as quickly as it does. But there’s no way around it: the time spent repeatedly recompiling a large Haskell application (where “large” is anything with, say, more than 25,000 SLoC and/or more than a hundred transitive dependencies) adds up quickly, especially given that `cabal` recompiles an entire project if the `.cabal` file changes, even for changes that shouldn’t need to invalidate a great deal of the existing cache, such as adding a new module to a library. There are several things you can do to ameliorate this situation:
- Pursue a REPL-focused workflow. `ghci` is perfectly capable of successfully `:load`-ing a newly created file, without having to quit and restart. Should you pursue this strategy, I highly recommend writing some scripts that invoke the REPL such that its build products directory is something other than the default one used by `cabal`; this ensures that an errant `cabal clean` won’t blow the REPL’s cache. I also recommend configuring said script so that it includes all source files associated with all built code, even the tests and benchmarks. This greatly lowers the time associated with a write-compile-test cycle when compared to a `cabal test` approach. (You can find an example of a script that generates all relevant `ghci` flags here.)
- Use Nix. Though Nix takes time to learn and is often difficult to integrate into existing development/deployment setups, its reproducible builds are, almost by definition, trivial to cache, and services like cachix will take care of hosting a cache for you. I’ve seen (though, sadly, never worked at) Haskell shops in which every contributor used Nix, and a locally-hosted cache server ensured that every contributor had access to a blazingly-fast cache of build artifacts. A nice setup, if you can pull it off.
- Alternatively, use Bazel and the haskell-rules toolset. Like Nix, Bazel takes a significant amount of time to configure, and indeed replaces the `cabal` workflow entirely (in contrast with Nix, which has official support in `cabal` 3.0). Most people won’t need solutions as comprehensive as Bazel, but those that do can build systems comprising hundreds of thousands of lines of Haskell in a matter of minutes.
- Simplify your code. The above example regarding advanced overlap is a classic example of an avoidable slowdown: sure, it’s not tremendously fun to have to write out a hundred or so lines of boilerplate, but a factor-of-four reduction in module compilation time is more than worth it. If at all possible, choose libraries and build systems that don’t rely overmuch on Template Haskell, that avoid heavy-duty type-level tricks, and that keep their dependency footprint small. (Template Haskell is worth discussing in isolation: it’s a battle-tested and mature extension to Haskell, and it’s truly necessary in the cases where the burden of manually-written boilerplate would be too much to bear. Yet TH is not without its downsides: a given TH splice is loaded and interpreted every time a file is compiled, as are all the packages upon which that splice depends. Template Haskell is not an inherently bad thing, but if you’re using it to save, oh, ten or twelve lines of boilerplate, you’re probably doing yourself a disservice. Many features provided by TH splices can be achieved just as effectively in the type system, as with the `generic-lens` package compared to the `makeLenses` splice contained in `lens`; a short sketch follows this list. If you’re writing Template Haskell yourself, I wish you the best of luck: getting TH right is nontrivial, given the size and relative lack of documentation in the `template-haskell` library itself.)
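To make that Template Haskell comparison concrete, here is a hedged sketch (the record type is invented) of trading a splice for a type-level alternative: `generic-lens` derives field lenses from a `Generic` instance, where `lens`’s `makeLenses` would generate them with Template Haskell.

```haskell
{-# LANGUAGE DataKinds        #-}
{-# LANGUAGE DeriveGeneric    #-}
{-# LANGUAGE TypeApplications #-}

-- Field lenses derived from Generic at the type level: no splice is loaded
-- or interpreted when this module is compiled.
import Control.Lens ((&), (.~), (^.))
import Data.Generics.Product (field)
import GHC.Generics (Generic)

data User = User { name :: String, age :: Int }
  deriving (Show, Generic)

main :: IO ()
main = do
  let u = User "Ada" 36
  print (u ^. field @"name")        -- view via a generically derived lens
  print (u & field @"age" .~ 37)    -- set, without any makeLenses splice
```

The result behaves like any other van Laarhoven lens; the point is simply that no splice is required.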
It’s worth noting that Haskell’s unsurpassed ease of refactoring shines brightest when small, low-risk refactoring patches can be deployed early and often to running services. Time invested in ensuring you can deploy small changes incrementally will reap rewards.
Editor integration matters more than you think.
It’s common knowledge that there exists no definitive, top-tier IDE for Haskell—we have no equivalent of something as advanced or as helpful as IntelliJ IDEA. Though this is not a good thing, it is not a showstopper for Haskell in industry: hacking on Haskell entails much less typing than other languages do. This is reflected in the editor setups of the most august Haskell programmers: SPJ uses an unmodified Emacs, and Ed Kmett uses an unmodified vim. However, you are probably neither of those cats (unless you are; hey Simon and Ed!). As such, it behooves you, and everyone on your team, to spend time ensuring that you have the most modern Haskell development experience possible. And make no mistake, it certainly is possible to have a feature-laden Haskell editor, but it takes some elbow grease. Until someone comes along and sells a modern Haskell IDE, you’re gonna have to construct your own development environment, keep it working (a nontrivial task at times), and provide documentation as to how to configure an editor to integrate with your project.
Though editor and Haskell integration capabilities vary widely, here are a few features without which my write-compile-test cycle is significantly impaired:
- In-buffer error detection. Sure, it’s easy to communicate with GHC entirely through a terminal emulator, but I can’t emphasize enough the benefits of eliminating that context switch. There is no substitute for as-you-type feedback. You should use an LSP/daemon server like `ghcide` as well as a linter like `hlint`. The UI for displaying these errors matters, too: there’s a visceral difference between diagnosing type errors from, say, an Emacs modeline versus a modern popup-based interface.
- Quick-fix suggestions. A great many Haskell errors, especially those concerned with language extensions, have a straightforward resolution. If GHC complains at you that, say, you don’t have the `MultiParamTypeClasses` extension on, your editor should be able to insert that pragma for you automatically.
- Robust completion/snippet features. You’re going to type `import qualified Blah as B` dozens and dozens of times; there’s a real difference when you can hit a key or type a snippet to expand constructs like these. In addition, your more featureful editor integrations will complete things like `LANGUAGE` pragmas, which is immensely helpful and will prevent many trips back to the GHC manual.
- In-editor type information. No matter how much of the Prelude you have paged into your head, the benefits that in-editor type lookup provides are substantial, especially when you have to consult information that Haddock does not display by default, such as the order of type variables associated with a given declaration, which is relevant when `TypeApplications` is enabled (a tiny example follows this list).
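As promised above, a tiny example of why type-variable order becomes visible information once `TypeApplications` is in play (`parsePort` is just an illustrative name):

```haskell
{-# LANGUAGE TypeApplications #-}

-- readMaybe :: Read a => String -> Maybe a, so the first (and only) type
-- argument is the result type. For signatures with several variables,
-- in-editor type information beats a trip to the Haddocks.
import Text.Read (readMaybe)

parsePort :: String -> Maybe Int
parsePort = readMaybe @Int
```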
As an aside, I strongly recommend against using the `default-extensions` feature provided by `cabal`, except for the extensions that have no effect on type checking or syntax (`-XStrict` or `-XStrictData`). Embedding every relevant `LANGUAGE` pragma required for a given source file may entail a lot of lines of code, but it means that third-party tooling will always be able to parse your code, even if it doesn’t read your `.cabal` file.
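As a small, hedged illustration (module and names invented), a file that carries its own pragmas is self-describing to any tool that reads it in isolation:

```haskell
{-# LANGUAGE DeriveGeneric     #-}
{-# LANGUAGE OverloadedStrings #-}

-- Every extension this module relies on is declared right here, so a
-- formatter, linter, or LSP server can parse it without consulting cabal.
module Example.User (User (..), greet) where

import Data.Text (Text)
import GHC.Generics (Generic)

data User = User { userName :: Text } deriving (Show, Generic)

greet :: User -> Text
greet u = "hello, " <> userName u
```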
Prepare for partial functions; you’ll encounter them.
On a long enough timeline, a long-running Haskell process will crash. This is inevitable; anything can come down a network connection, and indeed it often does. Be sure that your operational setup anticipates this and takes appropriate action, whether that’s an immediate restart or some deeper error reporting.
Haskell’s support for stack traces in production is shaky, unless you’re willing to take the performance hit associated with `-prof -xc`. Until the Haskell ecosystem gets a more compelling answer, I’d recommend that any long-running Haskell service log extensively to whatever third-party logging system you use. Libraries like `fused-effects-profile` can provide many of the benefits of a `-prof` compilation without the laptop-warming work of recompiling every dependency. Judicious use of the `GHC.Stack` module can often recover stack traces even without profiling builds.
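As a hedged sketch of that last point, `HasCallStack` threads the caller’s source location into an ordinary function (the `requireEnv` helper below is invented), which is often enough to localize a failure without a profiling build:

```haskell
-- Recovering call-site information without -prof: prettyCallStack reports
-- the file and line of every HasCallStack-annotated caller.
import GHC.Stack (HasCallStack, callStack, prettyCallStack)
import System.Environment (lookupEnv)

requireEnv :: HasCallStack => String -> IO String
requireEnv key = do
  mval <- lookupEnv key
  case mval of
    Just val -> pure val
    Nothing  -> do
      putStrLn ("missing environment variable: " ++ key)
      putStrLn (prettyCallStack callStack)   -- who asked for the missing key
      ioError (userError ("missing environment variable: " ++ key))
```

Any caller that itself carries a `HasCallStack` constraint extends the recorded stack, so the log line names the call site that asked for the missing key.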
Build polyglot systems. Not everything has to be in Haskell.
Of all the above, this is perhaps the most important.
If you’re writing a program in which behavioral composition is important, and you have the infrastructure and knowledge to commit to a Haskell project, then I heartily endorse doing so. But let’s not deceive ourselves: the vast majority of code, especially at early-stage startups, doesn’t ever need to compose. If this is the situation in which you find yourself, and assuming you’re not employed by one of the few companies that use Haskell across all aspects of their engineering, then you probably shouldn’t use Haskell for the product in question, especially if Haskell novices need to hack on the project. “Use the right tool for the job” is a truism, especially when it comes to programming languages: what ‘right’ means is a function of your environment and circumstances much more than your technical requirements. Many systems, especially compilers and pipeline processors, do require a great deal of compositionality; if you’re building one of those, then a Haskell solution is appropriate, as long as that solution can communicate with other languages (via binary serialization protocols, JSON, or sending text over the humble Unix pipe).
A few last reflections
At its worst, Haskell can be frustrating, obtuse, and difficult to learn—and I don’t regret using it in production one bit. Every language has its frustrations, and Haskell on a middling day still trounces what most languages are capable of at their best. I couldn’t hope to count the errors, from the trivial to the subtle, from which Haskell has saved me over the years: without confidence in my work, I cannot function as an engineer, and Haskell provides degrees of confidence that I cannot achieve in other languages, even the ones I know well. I believe that the act of engineering, of putting my efforts forth to improve or ease the human experience, is a serious, maybe even sacred, thing, and for that reason I consider it an imperative (ha!) to use tools like Haskell, ones that enrich my vocabulary and perspective rather than simply occupying space in my brain, as so many APIs and tools do. (As Stanislav Datskovskiy said: “Learning where the permanent bugs and workarounds are inside a phonebook-length API teaches you nothing. It is anti-knowledge.”) Will Haskell still be used in industry in 2040? I don’t know. Will the languages of the future draw from Haskell’s wealth of features? I hope so. Is some occasional annoyance worth it to use a language in which I am confident and which brings me joy? Without a doubt.
Thank you to Alexis King, who pointed out that running with `-prof` in production destroys optimizations, and to Joe Kachmar, who corrected some faulty assumptions about `rustfmt`.