I feel differently. I would rather sit by the pond on a summer day rather than build stuff
> If I come to an existing OCaml project, the worst thing previous developers could do to it is have poor variable names, minimal documentation, and 200+ LOC functions. That’s fine, nothing extraordinary, I can handle that.
>
> If I come to an existing Haskell project, the worst thing previous developers could do… Well, my previous 8 years of Haskell experience can’t prepare me for that
This is kind of like Go vs C++, or <anything> vs Common Lisp. The former is a rather unsophisticated and limited language, not particularly educational or enlightening but good when you need N developers churning code and onboard M new ones while you're at it. The latter is like tripping on LSD; it's one hell of a trip and education, but unless you adopt specific guidelines, it's going to be harder to get your friends on board. See, for example: https://www.parsonsmatt.org/2019/12/26/write_junior_code.htm...
The people who do it can’t stop talking about how great it was, but also can’t really explain why it was so great, and when they try it just sounds ridiculous, maybe even to them. And then they finish by saying that you should drop acid too and then you’ll understand.
For anyone else reading - you don't need to make a copy if you know your data isn't going to change under your feet.
https://dev.to/kylec32/effective-java-make-defensive-copies-...
But I didn't mean purity in that formal sense. I meant that Haskell is plenty pragmatic in its design.
A better example of impurity in Haskell for pragmatism's sake is the trace function, which can be used to print debugging information from pure functions.
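For readers unfamiliar with it, Debug.Trace's trace takes a message and a value, prints the message (to stderr) when the value is forced, and returns the value unchanged. A minimal sketch, with `double` as a made-up example function:

```haskell
import Debug.Trace (trace)

-- A pure function that emits a debug message as a side channel when
-- evaluated; the result is unaffected by the trace call.
double :: Int -> Int
double x = trace ("double called with " ++ show x) (2 * x)

main :: IO ()
main = print (double 21)  -- prints 42 (the trace message goes to stderr)
```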
Many typed lambda calculi do normalise. You can also have a look at https://dhall-lang.org/ for a pragmatic language that normalises.
> A better example of impurity in Haskell for pragmatism's sake is the trace function, which can be used to print debugging information from pure functions.
Well, but that's just unsafePerformIO (or unsafePerformIO-like) stuff under the hood; that was already mentioned.
You can still have total functions that don't finish in a humanly/business-reasonable amount of time.
Just like pure functions can use more memory than your system has. Or computing them can cause your CPU to heat up, which is surely a side-effect.
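To make that concrete, here is a hedged sketch of a total function whose naive exponential recursion makes it useless for large inputs:

```haskell
import Numeric.Natural (Natural)

-- Total on Natural (terminates for every input), yet the naive double
-- recursion is exponential: totality says nothing about wall-clock time.
fib :: Natural -> Natural
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (fib 25)  -- 75025, quickly; fib 200 would outlive the business
```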
I like it when
assertTrue (f x) -- passes in test
means that assertTrue (f x) -- passes in prod
Also, there is a better story for compilation to the web.
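Going back to the assertTrue point: a pure function's result depends only on its arguments, so a property checked in one environment holds in any other. A tiny sketch with a made-up f:

```haskell
-- f is a hypothetical pure function: same input, same output, in tests
-- and in prod alike, because nothing else can influence the result.
f :: Int -> Int
f x = x * x + 1

main :: IO ()
main = print (f 3 == 10)  -- the "assertTrue" holds wherever it runs
```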
Anyway, the issue has nothing to do with relative powerfulness. The issue is that the Haskell community encourages practices which lead to unreadable code: lot of new operators, point-free, fancy abstraction. Meanwhile, the Ocaml community was always very different with a general dislike of overly fancy things when they were not unavoidable.
There's a reason Google is migrating Go services to Rust:
https://www.theregister.com/2024/03/31/rust_google_c/
> "When we've rewritten systems from Go into Rust, we've found that it takes about the same size team about the same amount of time to build it," said Bergstrom. "That is, there's no loss in productivity when moving from Go to Rust. And the interesting thing is we do see some benefits from it.
> "So we see reduced memory usage in the services that we've moved from Go ... and we see a decreased defect rate over time in those services that have been rewritten in Rust – so increasing correctness."
That matches my experience: Go services tend to be tire fires, and churn developers on and off teams pretty fast.
This is very much how I felt when using Rust as a previous production Haskell user. I enjoyed aspects of Haskell, and I'm still very impressed with it, but it seems to call up a kind of perfectionism that I don't experience nearly as much with other languages.
I am sure it's possible to write Haskell and just not fall into that trap, but I found that Haskell was especially prone to nerd-sniping the micro-optimizing part of my brain, in a way that other languages don't.
The only parts I'm really interested in optimizing are the bits that matter: factoring, naming, datastructures, algorithms, queries (for database), and minimizing abstractions that don't pay their weight.
I spent some time using F# out of curiosity of both it and OCaml and found that it was very easy to use with the exception of (mutable) arrays.
Do we ever use TypeFamilies and DataKinds? Sure, but it's very rare.
https://www.simplehaskell.org/ is a pretty reasonable set of guidelines (though we opt for a more conservative approach ourselves)
So it's not that they're not the best way, it's just that not everyone knows how to do it that well.
Edit: Link to docs: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/cont...
XMonad is a bit bigger: https://github.com/xmonad/xmonad
https://github.com/jl2/Compiler-Algorithm-Code/tree/master/h...
I tried it out and after renaming fold -> foldr, it still builds and seems to work. The main function takes a regex as a command line argument and creates a finite-automaton graph using GraphViz's dot.
In the Compiler-Algorithm-Code/haskell directory:
make
./test "(foo)+(bar)*(foo+)" | dot -Tpdf -ofoobarfoo.pdf && xdg-open foobarfoo.pdf
No idea how it holds up, it was my first try at a compiler, but it’s quite small. I was following the Crafting Interpreters book.
https://github.com/sdiehl/kaleidoscope https://github.com/arbipher/llvm-ocaml-tutorial
The Haskell one is a nice one. Can say nothing about the OCaml one since I found it using a google search.
I've had a try at implementing a Kaleidoscope compiler in OCaml but did not finish it. But it was fun to write.
The Haskell standard library was split off into smaller parts. Map used to be part of the standard library. To date the containers package (which contains Map) is still pre-installed alongside the GHC compiler. So it should be considered part of the standard library.
Check out the documentation for GHC 3.02 https://downloads.haskell.org/~ghc/3.02/docs/users_guide/use... it clearly shows a FiniteMap type being provided.
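As a quick check that this still holds, Data.Map from containers compiles with a stock GHC install and no extra tooling (a minimal sketch):

```haskell
import qualified Data.Map.Strict as Map

-- containers ships alongside GHC, so this compiles without installing
-- anything beyond the compiler itself.
main :: IO ()
main = do
  let ages = Map.fromList [("alice", 30), ("bob", 25 :: Int)]
  print (Map.lookup "alice" ages)  -- Just 30
```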
There was a time when the fastest way to resolve circular dependencies in the library chain was to simply add -lthing multiple times in sequence, so that a subsequent library could be sure the symbols of the prior ones were loaded, including the dependencies down "right" of it in the list.
Taking something as fundamental to what FP is like map/reduce and saying "this can live in an adjunct library" feels like somebody took divide and conquer a little too far.
What are you talking about?
See those words: "to date" and "considered". Not "is"; "is considered", "to date". That's what I am talking about.
The Map they are talking about here is a key-value-store datatype. It has nothing to do with the 'map' function.
The big difference here is that the OCaml compiler has a lot less work to do. It's not that the Haskell error messages are inadequate (they are actually pretty good), but that the amount of compiler features and type gymnastics make the errors deeper and more complex. For example, if you get the parens wrong in a >> or >>=, you'll get some rather cryptic error that only hits home once you've seen it a few times, as opposed to "did you mean to put parens over there?"
There's been research on modular implicits for OCaml to solve this more generally, but that's not landing upstream anytime soon.
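As an aside, a small illustration of the parenthesization point above: operator precedence often turns a misplaced-parens mistake into a type-class error rather than a syntax hint.

```haskell
-- Function application binds tighter than operators, so `print 1 + 2`
-- parses as (print 1) + 2, and GHC complains about a missing
-- Num (IO ()) instance instead of suggesting parentheses.
--   bad = print 1 + 2
good :: IO ()
good = print (1 + 2)

main :: IO ()
main = good  -- prints 3
```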
aka let's mix two jars of jam and shit via this funnel and see what happens.
That didn't bother me so much because I speak Spanish and can read French. OCaml is of French origin. `string_of_int` is a bad English translation; it should have been `string_from_int`.
I like F# where I can use the `int` or `string` functions:
let myString = "2024"
let myInt = int myString
let myStringAgain = string myInt
Ocaml obviously supports polymorphism and an extremely expressive type system. The fact that operators are not polymorphic is purely a choice (and a good one).
Then I gave up. Have things improved?
sudo apt install dotnet9 # or dotnet-sdk-9.0 if the repo doesn't have dotnet9 metapackage
dotnet new console --language F#
There is also a package for Avalonia that lets you write GUI applications in F#: https://funcui.avaloniaui.net

Microsoft seems to be prioritizing "cloud" on all their developer products (rather than just Windows). I don't feel disadvantaged by NOT using dotnet on Windows.
F# includes an optimizer that performs e.g. lambda inlining. Then, .NET compiles the assemblies themselves to a final application, be it native executable or otherwise, so I feel like relative compiler slowness is not a dealbreaker. It is also relatively quick at starting for executing F# script files with `dotnet fsi` (I'm happy that F# is included in standard .NET SDK, much different to e.g. Clojure or Scala), it's slower to start than Python or Elixir shell but I can live with that.
This was also a good opportunity to write down a small workflow to use F# interactive to quickly build small native console applications thanks to FSharpPacker:
https://gist.github.com/neon-sunset/028937d82f2adaa6c1b93899...
In the example, the sample program calculates SHA256 hashes for all files matching the pattern at a specific path. On my M1 Pro MBP it does so at up to 4GB/s while consuming 5-10 MiB of RAM.
F# is indeed fast, thanks to the work MS has put in. But so is OCaml; it is close to C when written in perf-first mode. Having said that, I rarely need the speed of C for what I'm building, as bottlenecks tend to be IO anyway.
Finally, OCaml 5+ got multicore (domains) and effects, which really are a better abstraction than monads will ever be (imho).
Isn't this just .NET?
Think this was a feature. F# has access to all of the existing libraries, plus those made for F#.
Something as simple as calling a .NET function that doesn't have F# bindings forces a change (e.g. `someLibraryFunc(arg1, arg2)` instead of `f arg1 arg2`).
This gets worse as libraries force you to instantiate objects just to call methods which could have been static, or use ref parameters, etc.
I say this as somebody who loves F# - you do absolutely have to know .NET style (which really means C# style) in addition to F# style. It's extremely pragmatic if you're already familiar with C#, but I'm not sure how wonderful it is if you come in cold.
I actually like the way F# does refs more! byref<'T> aligns more closely with internal compiler terminology and makes it more clear that this is something like a pointer.
Having to perform tupled calls over curried is indeed an annoyance, and even a bigger one is the prevalent fluent style which results in
thing
|> _.InstanceMethod1()
|> _.InstanceMethod2()
|> _.Build()
Ugh, I usually try to shuffle around what I find annoying into bespoke functions (which are thankfully very easy to define) or even declaring operators. It also helps that while selection is not vast, F# tends to have various libraries that either provide alternatives or wrap existing widely adopted packages into a more functional F#-native API, e.g. Oxpecker, Avalonia FuncUI, FSharp.Data, etc.

Yep. I love F# too, wish I could stay in blissful F# land.
Wish MS would just release a .NET re-done in F#? Huge task, with no payback. But would be nice.
I have regularly run into code that can only be compiled by a rather narrow range of ghc versions; too old or too new and it just won't.
> Haskell probably has the most elegant syntax across all languages I’ve seen (maybe Idris is better because dependently typed code can become ugly in Haskell really quickly).
I have a general dislike for ML-type syntaxes, and Haskell is my least favorite of the ML-type syntaxes I have encountered. Presumably there is some underlying set of principles that the author would like maximized that I assuredly do not.
> There’s utter joy in expressing your ideas by typing as few characters as possible.
Disagree; if I did agree, then I would probably use something from the APL family or maybe Factor.
Also, Haskell isn't really ML-syntax. I love MLs but find Haskell syntax pretty ugly.
That might be true for academics. But most engineers don't consider ML syntax to be the cleanest, since most don't know any ML language.
This is like saying "most uncontacted Amazonian tribes don't like Shakespeare, because they've never read it". Sure, but why would we care about their opinion on this topic?
https://law.ubalt.edu/downloads/law_downloads/IRC_Shakespear...
The same idea is probably true with programmers who have grown used to C-like syntax or even Python-like or Ruby-like syntax. Syntax is at least in great part a cultural thing and your "cultural background" can affect your judgement in many cases:
1. Are braces good? Some programmers find them noisy and distracting and prefer end keywords or significant whitespace, but other programmers like the regularity and simplicity of marking all code blocks with braces.
2. Should the syntax strive for terseness or verbosity? Or perhaps try to keep a middle ground? At one end of the spectrum, Java is extremely verbose, but a lot of Java engineers (who have generally been exposed to at least one less-verbose language) are perfectly OK with it. The trick is that the main way that code gets typed in Java used to be copy-paste or IDE code generation (and nowadays with LLMs typing verbose code is even easier) and reading and navigating the code is done with the help of an IDE, so a lot of the effects of having a verbose language are mitigated. Diffs are harder to review, but in the Enterprise app world, which is Java's bread and butter, code reviews are more of a rubber stamp (if they are done at all).
3. Lisp-like S-expression syntax is also highly controversial. Many people who are introduced with it hate it with passion, mostly because the most important syntax feature (the parentheses) is repeated so often that it can be hard to read, but advocates extol the amazing expressive power, where the same syntax can be use to express code and data and "a DSL" is basically just normal code.
What syntaxes do engineers find clean? I don't understand the distinction you're making.
* Python, Ruby, C#, Java, Go-style languages?
I imagine most developers operate in neither ML languages nor Lisp-style languages.

The most advanced "FP" trick that I imagine most developers use is Fluent-style programming with a whole bunch of their language's equivalent of:
variable
.map do ... end
.map do ... end
.flatten
Addendum: or maybe Linq in C#
Addendum 2: And even the fluent-style trick in those languages tends to follow a similar pattern. Using Kotlin and Ruby as examples, since those are my (edit: main) languages,
variable
.map do |i| something_with(i) end
.map { |i| something_else(i) }
.flatten
shows familiar tricks. The dot operator implies that the thing previous to it has an action being done; and curly braces or the "do" operation both imply a block of some sort, and so a quick glance is easy to follow.

In Kotlin, this gets a little bit more confusing (yes, really) because it's common to see:
variable
.map { somethingWith(it) }
.map { somethingElse(it) }
.flatten()
And now there's this magic "it" variable, but that's easy enough to guess from, especially with syntax highlighting.

Anything more advanced than that and the cognitive load for these languages starts to rise for people who aren't deep in them.
When you're starting to work with a new language, that does increase difficulty and may even be so much of a barrier that developers may not want to hop over.
Having familiar constructs not only makes it easier to code-switch between languages (people that work on multi-language projects know that pain pretty well), but also decreases the barrier to entry to the language in the first place.
I find Python to be quite readable as well.
I've said this before but if you remove all punctuation and capitalisation from English it will make it look "cleaner", but it will also make it awful to read.
Look how "clean" text is if you get rid of all the spaces!
https://en.wikipedia.org/wiki/Scriptio_continua#/media/File%...
Clearly brilliant.
Let's look at three implementations of a doubling function in modern languages where at least two of them are considered "clean" syntaxes.
F#:
let double x =
2 * x
Python:

def double(x: int) -> int:
    return 2 * x
Rust:

fn double(x: i32) -> i32 {
return 2 * x;
}
Tying back to my original point, are the brackets, annotations, semicolons, and return really needed? From my perspective, the answer is no. These are the simplest functions you can have as well. The differences get multiplied over a large codebase.

Are people here really making the argument that F# and MLs don't have clean syntax? All three of these functions have the same IDE behavior in terms of types being showcased in the IDE and also in the type-checking processes, with F# and Rust's type-checking happening at the compiler level and Python's happening with MyPy. People might argue that I left off type annotations for F#, and that's true, but I did so because they aren't required for F# to compile and have the same IDE features as the other languages. Even if I did, it would look like:
F# with type annotations:
let double (x: int) : int =
2 * x
which is still cleaner.

fn double(x: i32) -> i32 {
2 * x
}
The reason Rust requires types here is because global inference is computationally expensive and causes spooky action at a distance. The only real difference between your F# and this is the braces, which I do personally think make code easier to read.

Of course there’s more overhead in smaller functions. I’d never write this code in Rust. I’d either inline it, or use a closure, which does allow for inference, and dropping the braces given that this is a one-liner:
|x| 2 * x
Different needs for different things.

// Rust
let double = |x| 2 * x;
double(3);
// F#
let double = ( * ) 2  // F# needs the spaces: (* starts a comment
double 3
-- Haskell
let double = (*2)
double 3
# Python
double = lambda x: 2 * x
double(3)
This also works:
let zeroCount = numbers |> Seq.filter ((=) 0) |> Seq.length
The answer is no for this trivial function. As your code goes beyond this it gets harder for humans to parse and so the syntax becomes more necessary.
My earlier analogy is pretty perfect here actually. Do you really need spaces in "thank you"? No clearly not. Does that mean you shouldn't use spaces at all?
> Are people here really making the argument that F# and MLs don't have clean syntax?
No. You are literally replying to a comment where I agreed that it is "clean".
> The effect of layout is specified in this section by describing how to add braces and semicolons to a laid-out program.
I wasn't going to actually read that in detail to see if it makes any sense, though, but it looks very detailed :).
Anecdotally, in my years of Haskell at separate places, I had to debug/REPL maybe once or twice. I wish I could say that of my current Java gig.
And then there’s “do” notation. I had three people tell me “it’s just a simple mechanical transformation from lambdas”. Oh yeah? How? They couldn’t explain that either.
But do notation really is just syntactic sugar for a chain of >> and >>= applications (plus "fail" for invalid pattern matches). It's not usually pretty or understandable to write it that way, but it's a very simple translation. If the people you talked to couldn't explain it to you, I think they maybe didn't understand it well themselves.
Scala's for/yield (which is pretty similar) you can literally do an IDE action in IntelliJ to desugar it.
I forget the precise Haskell syntax, but do { a <- x; b <- y; c <- z; w } is sugar for something like x >>= \a -> y >>= \b -> z >>= \c -> w. Hopefully the pattern is clear. Were there cases where you actually couldn't see how to desugar something, or are you just nitpicking about them calling it a mechanical transformation without being able to explain the precise rules?
mySubroutine = do
foo <- mkFoo
bar <- mkBar foo
mkBaz bar
mySubroutineV2 =
mkFoo >>= \foo ->
mkBar foo >>= \bar ->
mkBaz bar
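A runnable variant of the same mechanical translation, using Maybe so it is self-contained (names here are illustrative):

```haskell
-- Do-notation and its hand-desugared >>= chain compute the same value.
viaDo :: Maybe Int
viaDo = do
  a <- Just 2
  b <- Just 3
  pure (a * b)

viaBind :: Maybe Int
viaBind = Just 2 >>= \a -> Just 3 >>= \b -> pure (a * b)

main :: IO ()
main = print (viaDo == viaBind)  -- True: both are Just 6
```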
let x = y in F <===> (\x.F)(y)
just replacing function application with a different function (>>=).

If you can use >>= and return to do it, then "do" is indeed a mechanical transformation.
It is like await if you understand JS promises.
The indentation rules are definitely a mess though :/
Weird. Haskell's my preferred language and I thought there was only one indentation rule - if something's further to the right it's part of the same expression, and if something's down it's a new expression.
f
x
y
g z
... With a slight addendum that you don't need to write 'let' on consecutive lines, e.g. let x = ...
y = ...
People usually check it once, see that it is correct and forget about it.
I'm glad I'm not the only one. The white-space as syntax in some cases is very confusing. It took me a while to figure it out.
I also don't see the issue, when there is tooling that complains about broken indentation.
There are the actual puzzles that have been minified beyond recognition, and one has to work backwards to figure out what clearer code the author started from. I don't think anyone likes this code, but it gets written by people who are more used to writing code than reading it.
There is also the incredibly clear, straight-forward code that could have come from any other language. If you manage a sane production Haskell codebase, you'll have a lot of this.
In between those two, there is clear code but which you wouldn't find in any other language. This is code that uses well-known Haskell idioms that anyone who works with a lot of Haskell code will recognise, but which look like puzzles to someone who has not worked with a lot of Haskell. These are things like
guard (age person >= 18) $> beer
to return Just beer to someone of age, but Nothing to someone underage;

    Uuid.fromWords <$> getRandom <*> getRandom <*> getRandom <*> getRandom

to return a randomly generated UUID;

    maybe (compute_fresh seed) pure cached_value

to use a cached value if it exists, computing it from scratch if it did not; or

    traverse_ $ \elem -> do
      error "stub"

to create a function that loops through a collection and executes a side effect for each element.

These look like nonsense to people not familiar with the idioms, but they appear frequently enough that they are no longer puzzles.
Sure, the statements in the function will have side-effects on local variables, but as long as the function isn't too long, and you can comprehend all of it at a time, that's not a problem.
Your examples of "clean code" aren't any cleaner than what can be found in any imperative language.
Using C++ (with std::optional), for reasons of familiarity.
person.getAge() >= 18 ? std::optional<std::string>("beer") : std::nullopt;
UUID::fromWords(getRandom(), getRandom(), getRandom(), getRandom());
cache.has_value() ? cache : (cache = computeFresh(seed));
for (auto& elem : collection) {
modify(elem);
}
> UUID::fromWords(getRandom(), getRandom(), getRandom(), getRandom())
This ignores that getRandom() is a side-effecting function (it has to be, because it returns something different every time). And that's not just a "local variable". UUID.fromWords val1 val2 val3 val4 works in Haskell too; the "extra syntax" is to be able to do all that in IO (i.e. with side effects). If you want it in a form that is more recognisable, you could write it as
getUuid :: IO UUID
getUuid = do
val1 <- getRandom
val2 <- getRandom
val3 <- getRandom
val4 <- getRandom
return (UUID.fromWords val1 val2 val3 val4)
but that's unnecessarily verbose if you know the idiom.

Your cache example is similarly side-effecting.
I now find that using some combination of Claude, Gemini, and ChatGPT makes using Haskell more pleasant for processing compiler error and warning messages, suggesting changes to my cabal or stack configuration files, etc.
Combination of both the flaws of currying as a concept, as well as the complexity of Haskell's type system.
More about it here: https://www.janestreet.com/what-we-do/overview/
'curious people': people who got jaded by academia and were attracted by the six- to seven-figure salaries at Jane Street.
'deep problems': Buy X units of Y instrument at A exchange, and sell Z units of said instrument at B exchange, and do this often enough that said company makes a pile of money for itself and its employees (mostly itself, given it can afford to pay its employees six to seven figures).
I mentioned Jane Street because it uses OCaml for high-frequency trading, and because they are huge contributors to OCaml.
strSum = sum . map read . words
I much prefer this over 10 levels of class indirections or procedural style.
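For anyone who wants to try it, the one-liner runs as-is once given a type; a minimal sketch:

```haskell
-- Point-free pipeline: split on whitespace, parse each word, sum.
strSum :: String -> Int
strSum = sum . map read . words

main :: IO ()
main = print (strSum "1 2 3 4")  -- 10
```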
Don't get me wrong, I like FP and have been trying to get into it for a long time. But currently I strongly believe FP as commonly done in Haskell is just too far from what we expect even before we start writing code. Combining functions and chaining Monads just seems to me to be extremely hard to do and understand, and I don't need to do any of that in "lesser" languages. However, I am finally "getting it" with newer languages like Flix and Unison - they let me just use `let` and stuff like that which makes the code trivial again, while being basically purely functional.
Who's "we"?
I spent years writing JavaScript, PHP, and Ruby. I thought Haskell was weird and hard, and probably not practical in the real world.
As it turned out, I was just being a fool. Once you actually learn it, you realise how silly the opinions are that you had of it before you learned it.
Advent of Code is running right now. Why don't you just try learning the language?
Everyone is familiar with "I don't know how exactly, but generally it would be this way..., we can discuss specifics later", which is the same as reading the above pointfree notation (sum . map read . words) verbatim instead of imperatively inside-out: something is a sum of all parsed values of space-separated words.
I’m guessing you do use languages that are very similar to JS. Like a Spanish speaker saying “I don’t speak Italian or Chinese but Italian is way easier.” If you wrote F# every day you would probably find Haskell syntax quite intuitive
I know a dozen languages well. Everyone here thinking it's just ignorance, but that's not the case. There's just no way that, for me, Haskell and similar languages are readable in any sense just because they're more concise. If that was the case Haskell still wouldn't be close to the most readable, but something like APL or Forth would. I've tried for more than 10 years to be like you guys and read a bunch of function compositions without any variable names to be seen, a few monadic operators and think "wow so easy to read"... but no, it's still completely unreadable to me. I guess I am much more a Go person than a Haskell person, and I am happy about that.
There's no such thing as a "Go person" or a "Haskell person"; all programming languages are made up. Nobody has "more natural inclination" for coding one way than another. Just get your ass out of the comfort zone, try learning a new (to you) language - give it a heartfelt attempt to use it - that may change your life.
Just to be clear - I'm not saying Haskell is easy, oh no, not at all. I'm just saying that it stops being so intimidating and weird after a while.
In what sense?
Haskell's: strSum = sum . map read . words
in JS would be: const strSum = str => str.split(' ').map(Number).reduce((a, b) => a + b, 0);
for a person who's not already a JS programmer, the first one would be more readable (without a doubt), it literally reads like plain English: "sum of mapped read of words".
Haskell's version is more "mathematical" and straightforward. Each function has one clear purpose. The composition operator clearly shows data transformation flow. No hidden behavior or type coercion surprises.
Whereas JS version requires knowledge of:
- How Number works as a function vs constructor
- Implicit type coercion rules
- Method chaining syntax
- reduce()'s callback syntax and initial value
- How split() handles edge cases
So while the JS code might look familiar, it actually requires more background knowledge and consideration of implementation details to fully understand its behavior. Javascript is far more complex language than most programmers realize. btw, I myself don't write Haskell, but deal with Javascript almost daily and I just can't agree that JS is "more readable" than many other PLs. With Typescript it gets even more "eye-hurting".
It's funny to me that you quote the FP-like version of that in JS.
The more traditional version would be more like this:
function strSum(str) {
let words = str.split(' ');
let sum = 0;
for (word of words) {
sum += new Number(word);
}
return sum;
}
I do sincerely think this is more readable, no matter your background. It splits the steps more clearly. Doesn't require you to keep almost anything in your head as you read. It looks stupid, which is great! Anyone, no matter how stupid, can read this as long as they've had any programming experience, in any language. I would bet someone who only ever learned Haskell would understand this without ever seeing a procedural language before.

- The assumption that "verbose = readable" and "explicit loops = clearer"? Seriously?
- The suggestion that "looking stupid" is somehow a virtue in code? "Simple" I can still buy, but "stupid"... really?
- You're using new Number() - which is actually wrong - it creates a Number object, not a primitive;
- Your `sum +=` is doing not a single op but multiple things implicitly: addition, assignment, potential type coercion, mutation of state;
- for loops are another layer of complexity - iterator protocol implementation, mutable loop counter management, scoping issues, potential off-by-one errors, break/continue possibilities, possible loop var shadowing, etc. Even though for..of is Javascript's attempt at a more FP-style iteration pattern and is safer than the indexed loop.
You clearly underestimate how natural functional concepts can be - composition is a fundamental concept we use daily (like "wash then dry" vs "first get a towel, then turn on water, then...").
Your "simple" imperative version actually requires understanding more concepts and implicit behaviors than the functional version! The irony is that while you're trying to argue for simplicity, you're choosing an approach with more hidden complexity.
Again, I'm not huge fan of Haskell, yet, the Haskell version has:
- No hidden operations
- No mutation
- Clear, single-purpose functions
- Explicit data flow
You have just demonstrated several key benefits of functional programming and why anyone who writes code should try learning languages like Haskell, Clojure, Elixir, etc., even though practical benefits may not be obvious at first.
On the contrary French is much more readable than English to a Spanish speaker. Because French is much more similar to Spanish than English is.
Same with your JS example, I would guess it is much more similar to what you are used to
The more dense the code, the more there is to unpack until you are deep in the language.
It's OK for languages to be more verbose and offer structural cues (braces) as this often helps in human parsing of logic.
You don't do programming with chalk on a blackboard, for crying out loud. Ideally, you are using a good IDE with syntax completion. Therefore, readability matters more than the ability to bang out commands in as few keystrokes as possible.
It's about phase transitions. When you understand the system, shorter symbols are easier/faster to reason with. If your primitives are well thought out for the domain, this notation will be the optimal way of understanding it!
On the other hand, longer names help onboard new people. Theoretically, you could avoid this issue by transforming back and forth. Uiua, e.g., lets you enter symbols by typing out their names: typing "sum = reduce add" becomes "sum ← /+". If you transform it back...
Imagine if you could encode the std lib with aliases!
You've made a common mistake. You're wiring your listener's thinking with the imperative inside-out approach that you're used to. Instead, it should be explained as this: "strSum = sum . map read . words" is "a sum of all parsed values of the original input of space-separated words". The reason you should avoid inside-out explanations is because in Haskell you're allowed to move from general ideas to specifics, and you can sprinkle `undefined` and `_` for specific details whilst thinking about general ideas and interfaces.
words :: String -> [String]
So that words "foo bar baz"
-- Produces: ["foo","bar","baz"]
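The general-to-specific style described above can be put into code. A hedged sketch, where `parse` and `split` are placeholder names of my own invention:

```haskell
-- General shape first; specifics can stay undefined while you think
-- about the interfaces (the helper names here are illustrative only).
strSumSketch :: String -> Int
strSumSketch = sum . parse . split
  where
    parse = undefined :: [String] -> [Int]
    split = undefined :: String -> [String]

-- Filled in, it collapses to the one-liner from the thread:
strSum :: String -> Int
strSum = sum . map read . words

main :: IO ()
main = print (strSum "10 20 30")  -- 60
```

The sketch type-checks even though the details are still `undefined`, which is the point: the interfaces are nailed down before the implementations.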
In my experience, both the blessing and the curse of Haskell's incredibly succinct expressiveness is that, like other specialised languages - for example, Latin for succinctly-expressed legal terms - you need a strong understanding of the language and its standard library - similar to the library of commonly used "legal terms" in law circles - to participate meaningfully. Haskell, and languages like Go (which anybody with a bit of programming experience can easily follow), have very different goals.
Like many in this discussion, I too have developed a love/hate thing with Haskell. But boy, are the good parts good ...
So maybe there's a difference where Haskell has an advantage? As I mentioned in my previous comment, I don't know Haskell at all, but if this is "the way" to split by words then you'll know both how to read and write it. That would be a strength on its own, since I imagine it would be kind of hard to do wrong, given you'd need that Haskell understanding in the first place.
Someone thought "words" was the perfect name, and it wasn't me!
As for "words"... yes, possibly not the best name. But also so common that everyone that has ever written any Haskell code knows it. Such as Java's System.out.println
Yes, that's definitely the case.
If you know what each function above does, including the function composition dot (.), then this is like reading English — assuming you know how to read English.
There are other languages which are functional as well, like the one in the article and like Elixir, where readability isn't sacrificed.
I still think readability is atrocious in this language. Sure I can get used to it, but I’d never want to subject myself to that
strSum = sum . parsed . words
where
parsed = map read
wordsPerLine = filter (>0) . map (length . words) . lines
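A quick runnable check of the `wordsPerLine` snippet above (the sample input string is my own):

```haskell
-- counts the words on each line, dropping blank lines
wordsPerLine :: String -> [Int]
wordsPerLine = filter (> 0) . map (length . words) . lines

main :: IO ()
main = print (wordsPerLine "one two\n\nthree")  -- [2,1]
```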
Funnily enough, parentheses are actually optional in Elixir, although it's a warning to use pipe syntax without them. The following is valid in both Haskell and Elixir: length [1,2,3]
apply foo bar (lol wat)
in Haskell, and it confuses you, simply mentally replace it with apply(foo, bar, lol(wat))
1. Translate into a more popular syntax (e.g. JavaScript).
2. Reverse.
Now you have:
words | map read | sum
Or..
$ cat words | map -e read | sum
I do like pipelines though!
words |> map read |> sum
IHP uses it a lot.

find . -name '*.py' | sed 's/.*/"&"/' | xargs wc -l
But instead of using | to tie the different functions together you're using . and the order is reversed.

strSum = words >>> map read >>> sum
For most people this then becomes "Apply `words` to the input argument, pass the result to `map read` and then `sum` the results of that". I don't think `.` is super complex to read and parse, but we had people new to Haskell so I thought it prudent to start them off just with `>>>` and keep it that way. Most things are read left-to-right and top-to-bottom in a codebase otherwise, so I don't see why not.
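For completeness, a self-contained version of the left-to-right pipeline; `>>>` comes from Control.Arrow in base:

```haskell
import Control.Arrow ((>>>))

-- same pipeline as the composition version, but read left to right
strSum :: String -> Int
strSum = words >>> map read >>> sum

main :: IO ()
main = print (strSum "1 2 3")  -- 6
```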
Edit:
I also told everyone it's fine to just spell out your arguments:
stringSum sentence =
sentence
& words
& map read
& sum
In the example above `&` is just your average pipe-operator. Not currying when you don't have to is also fine, and will actually improve performance in certain scenarios.

Edit 2:
The truth is that there are way more important things to talk about in a production code base than currying, and people not using currying very much wouldn't be an issue; but they'll have to understand different pointer/reference types, the `ReaderT` monad (transformer), etc., and how a `ReaderT env IO` stack works and why it's going to be better than whatever nonsense theoretical stack with many layers and transformers that can be thought up. Once you've taught them `ReaderT env IO` and pointer types (maybe including `TVar`s) you're up and running and can write pretty sophisticated multi-threaded, safe production code.
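A minimal sketch of what a `ReaderT env IO` stack looks like in practice, using the transformers package that ships with GHC; the `Env` type and its field are made up for illustration:

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

-- a hypothetical environment; real ones hold config, connection
-- pools, TVars, loggers, etc.
newtype Env = Env { envGreeting :: String }

-- an action that reads from the shared environment and does IO
greet :: String -> ReaderT Env IO ()
greet name = do
  env <- ask
  liftIO (putStrLn (envGreeting env ++ ", " ++ name))

main :: IO ()
main = runReaderT (greet "world") (Env "hello")
```

The appeal of this stack is exactly that it stays flat: one environment type, plain IO underneath, and no tower of transformers to explain to newcomers.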
The first example could be more idiomatically written as:
strSum :: String -> Maybe Int
strSum = fmap sum . sequence . fmap readMaybe . words
(You'd also probably want to avoid String and use Text instead.)

For more complex parsing scenarios, the various parser combinator libraries can take a while to get used to (and I wish the community would standardise on one or two of them), but they're extremely powerful.
import Data.Either.Combinators -- from the either package
strSum :: String -> Either String Int
strSum = fmap sum . sequence . fmap tryReadInt . words
where
tryReadInt w = maybeToRight ("not an integer " ++ w) (readMaybe w)
This keeps the bulk of the method the same. Being able to go e.g. from Maybe to Either with only few changes (as long as your code is sufficiently general) is one of the nice things you get from all the Haskell abstraction. You can't really do that if you start with exceptions (unless you're in IO).

strSum :: String -> Either String Int
strSum = fmap sum . mapM readEither . words
> strSum "10 20 x"
Left "Prelude.read: no parse"
But yeah, I didn't remember "mapM". mapM f = sequence . fmap f, which is what I used.

{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import Data.Void (Void)
import qualified Data.Text as T
import Text.Megaparsec
import Text.Megaparsec.Char
import qualified Text.Megaparsec.Char.Lexer as L
type Parser = Parsec Void Text
numParser :: Parser [Integer]
numParser = L.decimal `sepBy` space1
-- just for demonstration purposes - this will print an ugly debug string, it can be customised
main = putStrLn $ show $ parse numParser "some-source-file.txt" "10 20 30 x"
-- this would print "Right [10, 20, 30]"
-- main = putStrLn $ show $ parse numParser "some-source-file.txt" "10 20 30"
(…on Stripe API…)
OCaml: 1 (last change was 8 years ago, so it’s more like zero)
I’ve been using OCaml for some time, primarily for my pet project, but also for some minor at-work utilities. One such thing was interacting with Datadog which, unsurprisingly, doesn’t provide an OCaml SDK.

In short: the experience was great. Not only was implementing the provided specs fast with OCaml tooling, but when I got to the rate limiting, I was able to refactor the code in around 20 minutes, and it felt like a magic transformation.
My takeaway from that experience is that I wouldn’t use library availability as a hard argument. Every external library is a liability, and if the language provides comfortable tooling and helpers, then having your own code at a use-adequate level of abstraction (instead of a high-level kitchen sink) might be preferred.
For a high number of external integrations I would use something like Go instead, which has exactly that, but as an API implementer I’d prefer OCaml.
So you may want that GitHub SDK, but in truth you’re only going to use 2 to 5 functions. Why not get an HTTP package and add the few lines of code for these?
Author here
Happy to answer any questions! At this point, I have 18 months of OCaml experience in production.
Wait, why doesn't the standard library have docs at all for this version I use?
Instead of:
> Wait, why the standard library doesn't have docs at all for this version I use?
And
How can Haskellers live like that without these usability essentials?
Instead of:
> How Haskellers can live like that without these usability essentials?
Thanks for the suggestions! English is not my first language, so mistakes can happen. I hope it's not too off-putting, and people can still learn something interesting from the articles.
I'll incorporate your suggestions :)
It's also a bummer that you need the containers package for basic data structures. So, batteries not included, unlike Python. This also means that you need to use a package manager (stack or cabal) even for non-trivial programs, where a one-line ghci command would have been more ergonomic.
That said, learning Haskell has had a very positive effect on how I think about and structure the programs I write in other languages ("functional core, imperative shell"). I don't think I'll ever program in Haskell in any semi-serious capacity. But these days, I get my daily functional fix from using jq on the command line.
Is this problem unique to me?
To be fair, it could be almost a bash script (using curl and jq or something like that), but I'm more comfortable with Python.
So for one-off (or almost one-off) scripts, sticking to the standard library is worth it to avoid dealing with dependencies.
Batteries are included, because containers ships with every distribution of GHC, and you don't need to use stack or cabal to expose it:
% ghci
GHCi, version 9.4.8: https://www.haskell.org/ghc/ :? for help
ghci> import Data.Map
ghci> singleton 5 "Hello"
fromList [(5,"Hello")]
Please try the same at your end, and if it doesn't work for you, let me know and I'll try to help you resolve the issue.

Unfortunately, F# was very much a second-class citizen in the VS.NET space, so it was only useful for banging out tight C# classes in F# first, then translating them over to C#. (My fundamental db access classes were developed in F# then translated to C#; the crux was a parallel pair of isomorphic classes that did design-time swappable DB2 and SqlServer access.)
Beyond my refusal to use Microsoft for my OS anymore, it looks like F# has transitioned away from the original Don Syme authored minimal OCaml-based v2, into much more automagical v3+ stuff that I'm not interested in. They're a brilliant design team, but well beyond what I need or want for my needs.
At the end of the day, it's hard to keep things simple, yet effective, when the tool designers we depend on are always one-upping themselves. Somehow, it seems like a natural divergence as design tooling expands to meet our expanding needs as devs. What's good is that our perspective gets levelled-up, even if the tools either out-evolve our needs/desires or just plain fail to meet them.
"more automagical v3+ stuff"
What is going on with this? Any examples?
Or perhaps Computation Expressions? But those are an integral part of F# and one of the key reasons why it's so flexible and able to easily keep up with changes in C# in many areas - something that requires bespoke compiler modification in C# is just another CE* in F#, using its type system naturally.
* with notable exception being "resumable code" which is the basis of task { } CE and similar: https://github.com/fsharp/fslang-design/blob/main/FSharp-6.0...
If you want to get back to F#, it has gotten really easy to do it because .NET SDK is available in almost every distro repository (except Debian, sigh) - i.e. `brew install dotnet`/`sudo apt install dotnet9`.
I have also posted a quick instruction on compiling F# scripts to native executables in a sibling comment: https://gist.github.com/neon-sunset/028937d82f2adaa6c1b93899...
Thanks for the links and help. That's really excellent.
Still, I'm old enough to remember "embrace and extend and extinguish" so I keep Microsoft off my linux boxen. I reject their corporate mandate and the directions their profit-motive has taken them. Besides, I wouldn't ever want my Unix systems to depend on their software's security.
While you're at it, please ask the Golang team to change YouTube policies.
You may also want to avoid Rust, Java, TypeScript, C and C++. For they too are """tainted""". For good measure, VS Code must be avoided at all cost and, which is terrible news, probably the Linux kernel itself.
No Rust here.
Never cared for Java, its generics were crap, then .NET v2 was here with F# and, welp, Java's boxed ints were crap, too, so nope.
No TypeScript. Javascript was built in a week (or something); ubiquity is not to be confused with good design. So, no node, too, in case you were curious.
C and C++?!? I used C in 1990 in OS class recompiling the Minix kernel for assignments. No Microsoft there, bro.
VS Code is not welcome in my home, either.
No Linux kernel? No biggie, I prefer OpenBSD anyway. Does it run vi, C/C++, and Python3? Of course it does. I'm good to go, dude.
I hope you enjoy your adware version of Windows, which is going to be ALL versions of Windows.
I use macOS as a main device with Linux and Windows for verifying and measuring behavior specific to their environment, while doing so from VS Code. I have a friend who has his home server farm hosted on FreeBSD, still using .NET. Oh, and luckily most of the tools I use are not subject to ideological and political infighting.
I like when the technology can be relied on being supported in the future and be developed in a stable manner without sudden changes of project governance or development teams. The added bonus is not having to suffer from the extremely backwards and equally sad state that Python and C/C++ find themselves in tooling-wise. Python is somewhat fixed by uv, C/C++ - not so much.
But Rust, Java, Node.js, and Chrome are nowhere to be found (IIUC).
I don't even care what Mesa is.
But Microsoft products are not present.
Many (most?) for-profit corps are not my friends. They have the right to exist; I'll leave them at that.
Even Scheme and Go communities had to learn this.
While buried in our monorepo, so not very accessible, we just open-sourced our product, which is written in OCaml, and we have a GitHub client generated from the OpenAPI schema.
It is separated out from any I/O so it can be used in any I/O context.
https://github.com/terrateamio/terrateam/tree/main/code/src/...
> A great standard library is a cornerstone of your PL success.
1. Compiler speed: a clear win for OCaml
2. OCaml's module system is more explicit (always qualified Map.find). Haskell's type class system is more ergonomic (the instance is found through its type, so no explicit annotation is needed, e.g. fmap fmap fmap, where the second must be the Reader's fmap).
3. > If I come to an existing OCaml project, the worst thing previous developers could do to it is have poor variable names, minimal documentation, and 200+ LOC functions. That’s fine, nothing extraordinary, I can handle that.
Though it's not common, a functor-heavy codebase does give you a headache. On the other hand, unifying type class instances across packages is no fun either.
4. OCaml's mixture of side effects and pure code tends to encourage using that in libraries and codebase. So expect more speed and more crashes.
So that leaves OCaml in a spot where it instead competes with more bare runtimes, i.e. compiled languages like Odin, Zig and Rust.
In terms of straightforward control of behavior, OCaml loses handily to both Odin and Zig. Likewise with how much effort you have to put in to get good performance: despite what OCaml enthusiasts say, it's not magic fairy dust; you still have indirection in expressing your desired allocation patterns, etc., in OCaml that you wouldn't have in Odin/Zig, making it less appropriate for those situations.
So, OCaml's final upside: language simplicity... It's not remotely small or simple enough to be compared to Odin. Zig (despite their best efforts) is still a simpler and leaner language than OCaml as well. Zig also has the interesting quirk that it can express functors just with its `comptime` keyword (returning structs with parameterized functions, all done at compile time) in a much more straightforward way than OCaml can, making it almost a better OCaml than OCaml in that particular respect.
Given the above it seems to me that there's always a [obviously] better language than OCaml for a task; it can't really win because whatever you're doing there's probably something that's better at your most important axis, unless the axis is "Being like OCaml".
I liked writing OCaml with BuckleScript, though, compiling it to JS and using OCaml as a JS alternative.
Again, the main issue with OCaml is that it really doesn't have an axis it's even in the top 5 of, except maybe compile times.
I understand people who like OCaml. There's a lot that's good about it, but it just doesn't have any edge. There's almost no way to pick OCaml and objectively have made a good choice.
Also, OCaml is mature enough for Jane Street, Cisco, Docker, among others.
Which will most likely never pick either Zig or Odin.
I do agree with the points about language extensions (and I have certainly cursed my fair share of operator heavy point free code), but until someone makes something better (maybe that thing is even lean4?) Haskell still brings me more joy than any other production ready programming language.
I'm pretty sad that OCaml isn't more popular than it is.
OCaml was the complete opposite. I made two OCaml tours in recent years, and both times it literally just worked (tm). Granted, I've been using it less than Haskell, but the starting-out experience is just heaven and earth. The only issue I have with OCaml tooling is that ideally I'd like to run the language server for real-time hints from the compiler, but also be able to invoke my program interactively. Unfortunately it seems you either have to run "dune build --watch", or you can build and run, but not both, as there is some locking happening.
As far as the languages themselves go, I'd say Haskell is more "fun", in that it has a lot of features and it reads a lot nicer (unless it's point-free code). Monads are pretty fun, although when I finally got through monad transformers I started feeling "I wish we had no monads, tbh". OCaml feels much more barebones, syntactically less appealing and somewhat clunky. On the other hand, there is a kind of spartan appeal to it.
Honestly, I like both of the languages a lot and wish for them to continue their development. I can certainly see myself using both in the future.
This is annoying, yes, but for some use cases you can use `dune exec --watch`, which builds and restarts the executable.
Will this eventually also become a problem with Rust?
> Everything is hard to read until you learn to read it.
I know this is not objectively true because there are many cases in which Haskell also forces me to write things a different way than I would have intended (e.g. due to behaviour around resource allocations) but they don't hurt as bad, for some reason. I don't know what it is!
This is more about syntax differences. Even then, I'd be curious how well both languages accommodate themselves to teams and long term projects. In both cases, you will have multiple people working on parts of the code base. Are people able to read and modify code they haven't written -- for example, when fixing bugs? When incorporating new sub components, how well did the type systems prevent errors due to refactoring? It would be interesting to know if Haskell prevents a number of practical problems that occurred with OCaml or if, in practice, there was no difference for the types of bugs they encountered.
This blog post feels more like someone is comparing basic language features found in reviews for new users rather than sharing deep experience and gotchas that only come from long-term use.