I found F# more pleasant for working with async than C# (which is already a breeze). It is true that you still have to write 'task' (or 'async' if you want to use Async CEs), but it is generally there for a reason. I don't think it's too much noise:
let printAfter (s: float<second>) = task {
    let time = TimeSpan.FromSeconds (float s)
    do! Task.Delay time
    printfn $"Hello from F# after {s} seconds"
}
let async printAfter (s: float<second>) =
    let time = TimeSpan.FromSeconds (float s)
    await Task.Delay time
    printfn $"Hello from F# after {s} seconds"
and then printAfter is called with `await` as well. I'm sure there's some FP kind of philosophy which prohibits this (code with potential side effects not being properly quarantined), but to me it just results in yet more purpose-specific syntax to learn for F#, which is already very heavy on the number of keywords and operators.

'async'-annotated methods in C# enable 'await'ing on task-shaped types. It is bespoke and async-specific. There is nothing wrong with it, but it's necessary to acknowledge this limitation.
let!, and!, return!, etc. keywords in F# are generic - you can build your own state machines/coroutines with resumable code, you can author completely custom logic with CEs. I'm not sure what led you to believe the opposite. `await Task.Delay` in C# is `do! Task.Delay` in F#. `let! response = http.SendAsync` is for asynchronous calls that return a value rather than unit.
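For example, a minimal sketch of how that reads inside a task CE (the HttpClient usage and names here are only illustrative, not taken from anyone's code above):

open System.Net.Http
open System.Threading.Tasks

let fetchStatus (http: HttpClient) (url: string) = task {
    // do! awaits a unit-returning task, like `await Task.Delay(100)` in C#
    do! Task.Delay 100
    // let! binds the awaited result, like `var response = await http.GetAsync(url)`
    let! response = http.GetAsync(url)
    return response.StatusCode
}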
In the same vein, seq is another CE that is more capable than iterator methods with yield return:
let values = seq {
    // yield individual values
    for i in 1..10 -> i
    // yield a range, merged into the sequence
    yield! [11..20] // note the exclamation mark
}
Adding support for this in C# would require explicit compiler changes. CEs are generic and very powerful for building execution blocks with fine control over behavior, DSLs, and more.

Reference: https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
I would disagree. If you need a bespoke set of syntax, then something is not integrated where it should be. The language design should not be such that you write things differently depending on the paradigm you're working in. That's not something that occurs in every language, so it isn't essential that it exists.
We can acknowledge the differences in a way that alerts the programmer, without forcing the programmer to switch syntaxes back and forth when moving between the paradigms. async/await is one method, Promises another, etc. A different syntax is a much, much higher cognitive load.
Support for asynchronous code and its composition is central to C#, which is why it is done via async/await and Task<T> (and other Task-shaped types). Many other languages considered this important enough to adopt a similar structure for their own rendition of concurrency primitives, inspired by C# either directly or indirectly. Feel free to take issue with the designers of those languages if you have to.
F#, where async originates from, just happens to be more "powerful", as befits an FP language, where resumable code and CEs enable expressing async in a more generalized fashion. I'm not sold on the idea that C# needs CEs. It already has sufficient complexity and a good balance of expressiveness.
Do-notation-like 'await' is not for calling functions, it is for acting on their return values - to suspend the execution flow until the task completes.
However, the compiler does not, and never has, required that it do things via a different syntax. In fact, in the early branches before that was adopted, it didn't! The same behaviour was seen in those branches. This behaviour you expect was never something that had to be. It was chosen to simplify the needs of the optimiser, and in fact cut the size of the code required in half. It was done to reduce the amount of code the core team needed to maintain. And so 1087 [1] was accepted.
So perhaps you might need to read more about why and how async was introduced into C# and F#. It was a maintenance problem for the core team. It was a pragmatic approach for them - not the only way this could have come about.
As said, in the original branch for using tasks...
> Having two different but similar ways of creating asynchronous computations would add some cognitive overhead (even now there are times when I am indecisive between using async or mailbox for certain parallelism/concurrency scenarios). [0]
[0] https://github.com/fsharp/fslang-suggestions/issues/581
[1] https://github.com/fsharp/fslang-design/blob/main/FSharp-6.0...
However, this is where our opinions differ. I like task CE (and taskSeq for that matter too). It serves as a good and performant default. It's great to be able to choose the exact behavior of asynchronous code when task CE does not fit.
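As a rough illustration of that choice (the helper below is made up for this sketch): the async CE builds cold computations that you compose first and only then decide how to run, which is occasionally a better fit than a hot Task started by the task CE.

let measure (label: string) = async {
    do! Async.Sleep 100   // non-blocking delay
    return label.Length
}

// Compose first, run later - here fanned out in parallel.
let lengths =
    [ "one"; "two"; "three" ]
    |> List.map measure
    |> Async.Parallel
    |> Async.RunSynchronously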
Simple things like the maybe and either monads are often clearer in this notation. Complex things like alternatives to async (such as CSP-derived message-passing concurrency), continuations, parser combinators, non-determinism, algebraic effects, and dependency-tracked incremental computations are naturally modeled with this same machinery, with CE notation acting as a super helpful DSL builder that makes certain complex computations easier to express in a sequenced manner.

If the custom syntax were only for async you'd have a point, but the general power of the framework makes it by far the preferable approach, in my opinion.
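To make the maybe example concrete, here is a minimal hand-rolled builder (a sketch with made-up names, not code from the thread) showing that let!/return are user-definable rather than async-specific:

type MaybeBuilder() =
    member _.Bind(x, f) = Option.bind f x
    member _.Return(x) = Some x

let maybe = MaybeBuilder()

let tryDivide x y = if y = 0 then None else Some (x / y)

// Short-circuits to None as soon as any step yields None,
// so result evaluates to None here.
let result = maybe {
    let! a = tryDivide 10 2
    let! b = tryDivide a 0
    return a + b
}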
We shoehorn things that feel like DSLs, but are structurally different from them, into config files and the like, using JSON/YAML/etc. in rough ways, because DSLs introduce a cognitive overhead that doesn't need to be there.

That the shoehorning happens does mean that DSLs are something natural to reach for; you're right there. But that we have moved away from them, as an industry, indicates that using any kind of DSL is a smell - that there probably is a better way to do it.

Having a core language feature rely on a DSL is a smell. It could be done better.
Rephrased: OCaml is so flexible that async can be implemented as a library with no special support from the language.
This is the beauty of OCaml (and of strongly typed functional languages more broadly).
I don't think that's anything specific to strongly typed functional languages. In e.g. Rust, even the standard library relies on third-party crates.
Though it is still somewhat amusing to me that loops in Haskell are delivered via a third party library, if you ever actually want them. See https://hackage.haskell.org/package/monad-loops-0.4.3/docs/C...
I do agree that it's good language design, if you can deliver what would be core functionality via a library.
Whether you want to integrate that library into the standard library or not is an independent question of culture and convenience.
(E.g. Python does quite well with its batteries-included approach, but if they had better dependency management, using third-party libraries wouldn't be so bad. It works well in e.g. Rust and Haskell.)
Clojure has core.async, which implements "goroutines" without any special support from the language. In fact, the `go` macro[1] is a compiler in disguise: transforming code into SSA form then constructing a state machine to deal with the "yield" points of async code. [2]
core.async runs on both Clojure and ClojureScript (i.e. both JVM and JavaScript). So in some sense, ClojureScript had something like Golang's concurrency well before ES6 was published.
[1] https://github.com/clojure/core.async/blob/master/src/main/c...
[2] https://github.com/clojure/core.async/blob/master/src/main/c...
That's wildly overselling it. Clojure core.async was completely incapable of the one extremely important innovation that made goroutines powerful: blocking.
(let [c (chan)]
  ;; creates a go block that is parked forever
  (go
    (<! c)))
The Go translation is as follows:

c := make(chan interface{})
// creates goroutine that is parked forever
go func() {
    <-c
}()
It's early days in that regard, with some folks doing really interesting things: Odersky himself, and the Ox project.
I'm pretty sure it's "working on a codebase" that kills your soul, not the minutiae of a particular language choice.
1. there’s no marker to indicate the end of let scopes
2. functions are bound with the same syntax as constants
He asserts that this is confusing. In practice - for the many issues I have with OCaml! - neither of these is an actual issue, in my experience, once code formatting is applied.

An actually serious problem with OCaml's syntax is that matches don't have a terminator, leading people to mess up nested matches frequently. Pair that with the parser's poor error reporting/recovery and things can become unpleasant quickly.
- Reason, a different syntactic frontend for regular OCaml: https://reasonml.github.io/
- ReScript, a language with OCaml semantics that compiles into JS: https://rescript-lang.org/ (I suppose it's a reincarnation of js-of-ocaml).
I had lots of fun playing with Reason a few years ago. I created an interactive real-time visualization tool to see a regexp transform into an NFA, then a DFA, then a minimal DFA graph: http://compiler.org/reason-re-nfa/src/index.html It only works for basic regexes though.
The match terminator/end thing made me sad when I first saw this in OCaml. So many languages (C, Bourne shell, etc.) have this exact same problem, and it completely sucks in all of them. It's more debilitating in a functional language specifically because matches are more useful than, say, C case statements, so you want to use them much more extensively.

I frequently want to do a pattern match to unpack something and then a further pattern match to unpack further - a nested match is a very intuitive thing to want. Yes, you can normally un-nest such a match into more complicated matches at the parent level, but this is usually much harder for humans to understand.

...and if you had a marker for ending match scopes, you could always just reuse that to end let scopes as well if you wanted to, although I've literally never once run into that as a practical problem (and although I haven't written that much OCaml, you'd think that if it were a real issue I would have banged into it at least once, because I found a fair few sharp edges in my time with the language).
However, my university has a mandatory class taught in OCaml, which I've TA'd for a few times; this is the _number one_ "the undergrad TA couldn't figure out my syntax error" issue students have.
I think you're being generous. The example the author gave is awful because any language can be made illegible if you cram complicated expressions with multiple levels of nesting into a single line. I'd say it's outright flamebait.
Apparently, the author hasn't come around to understanding that functions are just another constant.
> because a 0 arity function without side-effect is just constant.
http://xahlee.org/Periodic_dosage_dir/las_vegas/20031015_cop...
Found it via Google
1. It's whitespace insensitive, which means I can code something up really messy and the code formatter will automatically fix it up for me.
2. In general there aren’t a ton of punctuation characters that are very common, which is great for typing ergonomics. Don’t get me wrong, there are still a lot of symbols, but I feel compared to some languages such as Rust, they’re used a lot less.
Beyond the syntax, there are a couple of things I really like about the language itself:
1. Due to the way the language is scoped, whenever you encounter a variable you don’t recognize, you simply have to search in the up direction to find its definition, unless it’s explicitly marked as “rec”. This is helpful if you’re browsing code without any IDE tooling, there’s less guessing involved in finding where things are defined. Downside: if the “open” keyword is used to put all of a module’s values in scope, you’re usually gonna have a bad time.
2. The core language is very simple; in general there are three kinds of things that matter: values, types, and modules. All values have a type, and all values and types are defined in modules.
3. It’s very easy to nest let bindings in order to help localize the scope of intermediate values.
4. It has a very fast compiler with separate compilation. The dev cycle is usually very tight (oftentimes practically instantaneous).
5. Most of the language encourages good practice through sane defaults, but accessing escape hatches to do “dirty” things is very easy to do.
6. The compiler has some restrictions which may seem arcane, such as the value restriction and weak type variables, but they are valuable in preventing you from shooting yourself in the foot, and they enable some other useful features of the language such as local mutation.
I've never really seen someone put that into words. I always feel a certain kind of weird when I look at a language with tons of punctuation (TypeScript is a good example).
I'd be curious whether there's been work on a programming language optimized for minimal typing while iterating.
It's a checkbox at most web hosts, built into many reverse proxies, etc. There's no excuse for not offering https, particularly since it places users at risk if there's someone untrustworthy at any point along the path between them and you.
Help me out here: what's the threat model while reading troll programming blogs?
If one wanted to criticize OCaml syntax, the need for .mli-files (with different syntax for function signatures) and the rather clunky module/signature syntax would be better candidates.
Sometimes I wrote (haven't written OCaml for some time now..) functions like:
let foo: int -> int = fun x ->
  ..

just to make them more similar to the syntax

val foo: int -> int

in the module types.

It's like... when you mismatch brackets or braces in a C-style language, except to resolve the problem you can't just find the bracket that's highlighted in red and count; you have to read an essay.
I don't know why there are so many people here defending it. It's pretty clearly very elegant, but extremely inconvenient.
* An auto-formatter (ocamlformat integration in your editor, or ocaml-top) that shows what the actual nesting looks like
* You can add ;; at the end of a top-level function to get a syntax error at a better location
* Use the LSP integration of your editor which will show you where the error is as you type, so you catch the problem early
On the other hand, I like that there's little overloaded syntax, and the meaning of different characters is fairly consistent.
I've read other articles and there is other weird stuff, like the claim that Java has perfect syntax because you can do so little on one line. In the meantime, modern "functional" (or "monadic") style Java is a chained mess with ridiculously long lines.
I have seen many devs who prefer the imperative Java style over the more functional chained style. It could be an outcome of leetcode-style interviews, or of the fact that most CS programs start with C/Python/Java-style languages, but it's not uncommon to see that preference, especially among junior devs.
The worst thing so far is that you have to declare and define functions before using them. This results in unimportant utility functions being at the top, and the most important functions being at the bottom of a file.
I haven't had much issue with let bindings; however, that might be because my functions are fairly simple for now.
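On the declare-before-use point: the usual escape hatch when two functions genuinely need each other is mutual recursion with `and`. A minimal sketch in F# (OCaml's `let rec ... and` works the same way):

// Without `and`, isEven could not refer to isOdd, which is defined later.
let rec isEven n = if n = 0 then true else isOdd (n - 1)
and isOdd n = if n = 0 then false else isEven (n - 1)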