https://aclanthology.org/C08-5001.pdf
Also good was section 4 of
https://par.nsf.gov/servlets/purl/10237543
Both work with semirings, which are structures with two monoid operations (an "addition" and a "multiplication", with multiplication distributing over addition).
I found these papers fairly readable on a quick skim, but I have a background in closely related stuff. They might not be so readable if you're not used to the style of presentation.
One of the clearest, most relatable definitions of what a monoid actually is, why you should care (if at all), and how it relates to monads at the end. Great!
Might have been nice to add a footnote on what an "endofunctor" is as well, broadly speaking, given the whole "a monad is a monoid in the category of endofunctors" mantra that people first hear when they try to figure out monads.
A good example of a monoid is strings with the string concatenation operation! This comes up a lot in text editors (ropes, for instance). Homomorphisms are also of practical relevance here.
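To make the string example concrete, here's a minimal sketch (in Python, with Haskell's `mappend`/`mempty` names borrowed for illustration) checking the two monoid laws directly:

```python
# Strings form a monoid: concatenation is associative and "" is the identity.
def mappend(a: str, b: str) -> str:
    return a + b

mempty = ""

# Associativity: the grouping doesn't matter.
assert mappend(mappend("foo", "bar"), "baz") == mappend("foo", mappend("bar", "baz"))

# Identity: "" is neutral on both sides.
assert mappend(mempty, "hello") == "hello"
assert mappend("hello", mempty) == "hello"
```

Associativity is exactly what lets an editor store a document as a tree of chunks and concatenate them in whatever grouping is convenient.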
If × is a commutative group operation (i.e. inverse elements exist and a×b=b×a), that can even be done in constant time in a trivial way.
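As an illustration of the constant-time trick (using integer addition as the commutative group, with negation as the inverse), a cached aggregate can be patched in O(1) when one element changes, instead of refolding the whole collection:

```python
# With a commutative group (ints under +, inverse = negation), a cached
# aggregate can be updated in O(1): new_total = old_total - old + new.
values = [3, 1, 4, 1, 5]
total = sum(values)          # O(n), paid once up front

old, new = values[2], 9      # replace the 4 with a 9
total = total - old + new    # O(1) update via the inverse element
values[2] = new

assert total == sum(values)  # matches recomputing from scratch
```

Without inverses (a plain monoid), the best you can generally do is a tree of partial results with O(log n) updates.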
> A monad is a monoid over a set of curried functions.
Is that so? Sounds very wrong to me. If we want to go the monad-joke way, monads have to have an operation (a -> m b) that composes, but those are just normal functions; there's nothing curried about it. It's a statement one could bend enough that it's kind of right, but what it mostly does is raise eyebrows.
> Monads force sequential processing because you set up a pipeline and the earlier stages of the pipeline naturally must run first.
No, a counterexample is the (esoteric) reverse state monad, where values flow the normal way but state comes from the results of future computations.
A 'handwavy' association that somewhat makes sense, and that gives you some perspective when moving on to monads, is better than omitting the link to monads entirely just because one can "kind of maybe" find holes in the simplified explanation.
(fair enough, the words "this is somewhat oversimplified, but" could have been added, but personally I didn't care)
That's unfortunate. They should be bumping onto monoids much earlier, and much more often.
Yeah, IO and do notation shove monads in people's faces way before they have time to adapt. But monoids are the ones that are extremely valuable, simple, and easy to learn. They also make a nice step in a progressive adaptation to the "generalized thinking" one needs to understand why Haskell does the things it does.
More generically, named concepts like this give you a way to compress knowledge, which makes it easier to learn new things in the future. You get comfortable with the idea of a monoid, and when you meet a new one in the future, you immediately have an intuitive ground to build on to understand how your new thing behaves.
In some other contexts, it's useful to talk about transforming your problem while preserving its essential structure. e.g. in engineering a Fourier transform is a common isomorphism (invertible homomorphism) which lets you transform your problem into an easier one in the frequency domain, solve it, and then pull that solution back into the normal domain. But to understand what's going on with preserving structures, you need to first understand what structures are even present in your problems in the first place, and what it means to preserve them.
This stuff isn't strictly necessary to understand to get real work done, but without it, you get lots of engineers that feel like the techniques they learn in e.g. a differential equations class are essentially random magic tricks with no scaffold for them to organize the ideas.
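A Fourier transform is too heavy for a few lines, but a toy stand-in with the exact same "transform, solve, pull back" shape is the log/exp isomorphism between (positive reals, ×) and (reals, +):

```python
import math

# log is a homomorphism with an inverse (exp), i.e. an isomorphism:
# log(a * b) == log(a) + log(b). So we can multiply by transforming
# into "addition world", solving there, and pulling the answer back.
def multiply_via_logs(a: float, b: float) -> float:
    return math.exp(math.log(a) + math.log(b))

assert abs(multiply_via_logs(6.0, 7.0) - 42.0) < 1e-9
```

This is how slide rules worked, and it's the same structural move as solving a differential equation in the frequency domain.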
Another useful purpose of these concepts is to have the vocabulary to ask questions. A semigroup is a monoid without a unit. Given a semigroup, can you somehow add a unit to make a monoid without breaking the existing multiplication? A group is a monoid where the multiplication has inverses/division (so if your unit is called 1, then for any x there's a "1/x" where x/x = 1). Can you take a monoid and somehow add inverses to make it into a group? Etc. In a programming context, these are generic questions about how to make better APIs (e.g. see [0]).

It also turns out that groups exactly capture the notion of symmetry, so they're useful for things like geometry, physics, and chemistry. If the symmetries of the laws of physics include shifts, rotations, Lorentz boosts, and adding certain terms to the electromagnetic potential, can I somehow understand those things individually, and then understand the "group of the universe" as being made out of those pieces (plus some others) put together? Can we catalogue all of the possible symmetries of molecules (which can tell you something about the states they can be in and the corresponding energy levels), ideally in terms of some comprehensible set of building blocks? Etc.
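The "adjoin a unit to a semigroup" question has a standard answer: wrap the carrier in an optional type and let the empty case be the new identity (Haskell does this with `Maybe`/`Option`). A sketch in Python, using `None` as the adjoined unit:

```python
# Given ANY associative op on A, lift it to Optional[A], with None
# acting as a freshly adjoined identity. The original multiplication
# is untouched on the non-None values.
def lift(op):
    def lifted(x, y):
        if x is None:
            return y
        if y is None:
            return x
        return op(x, y)
    return lifted

# max on ints is associative but has no natural identity element in-type;
# None supplies one without disturbing existing results.
lifted_max = lift(max)
assert lifted_max(None, 3) == 3
assert lifted_max(3, None) == 3
assert lifted_max(lifted_max(1, 2), 3) == lifted_max(1, lifted_max(2, 3)) == 3
```

The group question is harder: the analogous construction (the Grothendieck group) only recovers the monoid faithfully when the multiplication is cancellative.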
[0] https://izbicki.me/blog/gausian-distributions-are-monoids
Would it help if you defined a monoid as a combination of 3 things?
1) a data type A
2) an associative operation on A
3) an identity (or empty element)
Then you can correctly say that the string data type admits an associative operation (concatenation of two strings) and has an empty element (the empty string).
I think too many people talking about functional programming really overblow how much mathematics you need to understand; they seriously do.
Think about my definition and you can quickly see that there are many monoid instances for numbers (take addition as the associative operation to get one monoid, or multiplication to get another).
There are infinitely many monoid instances for various data types.
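The three-part definition above can be written out directly as a little record, which makes the "many instances per type" point concrete (this mirrors what Haskell's `Sum` and `Product` newtypes accomplish):

```python
from dataclasses import dataclass
from functools import reduce
from typing import Any, Callable

@dataclass(frozen=True)
class Monoid:
    """The three ingredients: a carrier type (implicit here),
    an associative operation, and an identity element."""
    op: Callable[[Any, Any], Any]
    empty: Any

# Two different monoid instances over the SAME data type (numbers):
sum_monoid = Monoid(op=lambda a, b: a + b, empty=0)
product_monoid = Monoid(op=lambda a, b: a * b, empty=1)

def mconcat(m: Monoid, xs):
    """Fold any iterable using the chosen instance."""
    return reduce(m.op, xs, m.empty)

assert mconcat(sum_monoid, [1, 2, 3, 4]) == 10
assert mconcat(product_monoid, [1, 2, 3, 4]) == 24
```

Passing the instance as a value sidesteps the one-instance-per-type question entirely, at the cost of having to thread it around by hand.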
Haskell has several shortcomings from what I remember. Haskell developers incorrectly assume that you can only have one semigroup (or equality, monoid, etc.) instance per data type because they believe they're the closest to the math, but in reality their language has its own limitations.
They don't assume that. The devs bent the compiler backwards several times trying to support more than one instance, but they still couldn't design an implementation that is actually good to use.
If you know of any language where this works well, it would be nice to know. AFAIK, representing that kind of thing is an open problem, nobody has an answer.
I can see where they're coming from, but they certainly haven't set the stage for it to be something you could deduce without already knowing what they're referencing.
So to me, it seems they’re referencing the Free Monad, recursion schemes and a little of HomFunctor/Yoneda Lemma.
The free monad gives a coproduct, where a value is either a recursive call or a plain value (branch vs. node). To get from a set to a free monad, you need to define a functor over the set, and given most things are representable, this is trivial.
Given this free monad, an algebra can be formed over it by providing a catamorphism, where the binary function would indeed be composition.
I'm glad I'm not the only one struggling with this! Though I have started remembering it a different way: I pretend the 'r' in 'foldr' stands for recursive. Thus it's easier to remember that
foldr(º, [a, b, ...]) ~=
a º (b º ...)
where the right term for each operator is given by the recursive call. In contrast, then, foldl must be the iterative variant where foldl(º, z, [a, b, ...]) ~=
z º= a
z º= b
...
return z
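Both mnemonics can be written out directly. A sketch (argument order chosen to match the pseudocode above), including a non-associative operation to show where the two folds genuinely differ:

```python
def foldr(op, xs, z):
    # 'r' for recursive: the right operand of each op is the recursive call.
    if not xs:
        return z
    return op(xs[0], foldr(op, xs[1:], z))

def foldl(op, z, xs):
    # The iterative variant: z op= a; z op= b; ...; return z.
    for x in xs:
        z = op(z, x)
    return z

sub = lambda a, b: a - b
# With a non-associative op the two folds disagree...
assert foldr(sub, [1, 2, 3], 0) == (1 - (2 - (3 - 0)))   # == 2
assert foldl(sub, 0, [1, 2, 3]) == (((0 - 1) - 2) - 3)   # == -6
# ...but with an associative op and its identity, they agree:
add = lambda a, b: a + b
assert foldr(add, [1, 2, 3], 0) == foldl(add, 0, [1, 2, 3]) == 6
```

The last assertion is the monoid connection: associativity (plus the identity) is exactly the condition under which the grouping, and hence the choice of fold, stops mattering.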
> it's easier to remember that ...
Whoa, we have very different experiences remembering things.
For instance, if you're putting together an on-disk full-text search index, immutability techniques become very relevant (since you can't insert into the middle of files like you can with in-memory hashmaps). Just make small indexes and a (binary, associative) index-merge operation. Now you have an online, updatable search engine which doesn't need to wait for full reindexes over the data to include new documents.
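A hypothetical in-memory sketch of that pattern: each small segment is an inverted index mapping terms to sorted doc-id lists, and the merge operation is binary and associative, so segments can be combined in any grouping:

```python
# Hypothetical segment index: {term: sorted list of doc ids}.
# merge_indexes is binary and associative (with {} as identity), so small
# immutable segments can be combined in any order/grouping — the same idea
# behind log-structured merge in real search engines.
def merge_indexes(a, b):
    out = dict(a)
    for term, docs in b.items():
        out[term] = sorted(set(out.get(term, [])) | set(docs))
    return out

seg1 = {"cat": [1], "dog": [1, 2]}
seg2 = {"dog": [3], "fish": [3]}
seg3 = {"cat": [4]}

left = merge_indexes(merge_indexes(seg1, seg2), seg3)
right = merge_indexes(seg1, merge_indexes(seg2, seg3))
assert left == right == {"cat": [1, 4], "dog": [1, 2, 3], "fish": [3]}
```

Associativity is what buys the flexibility: a background process can compact old segments pairwise while queries keep merging results from whatever segments currently exist.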
I think this statement will have two kinds of readers. People who are not familiar with monads, for whom it'll fly right over their heads, and people who are familiar with monads, who will be annoyed at its inaccuracy.
It's best to not learn from wrong things, which is, unfortunately, statistically most of the things talking about monads.
Correction: function composition is not a monoid over unary functions, only over families of endofunctions (functions of type `a -> a` for some `a`). You can't compose a function of type `a -> b` with a function of type `a -> b` for example, when `a` /= `b`, because one gives back a value of type `b` which cannot be passed into a function expecting a value of type `a`.
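The corrected claim in a few lines (this is what Haskell packages up as the `Endo` monoid), using endofunctions on ints so every composition type-checks:

```python
# Endofunctions a -> a form a monoid under composition, with the identity
# function as the unit. Composition of arbitrary a -> b functions is only
# partially defined, so those don't form a monoid.
def compose(f, g):
    return lambda x: f(g(x))

identity = lambda x: x

double = lambda n: n * 2   # int -> int
succ = lambda n: n + 1     # int -> int

left = compose(compose(double, succ), double)
right = compose(double, compose(succ, double))
for n in range(5):
    assert left(n) == right(n)  # associativity
    assert compose(identity, double)(n) == double(n)  # left identity
    assert compose(double, identity)(n) == double(n)  # right identity
```

With mixed types, say `double` returning a string, `compose(succ, double)` would crash at the first call, which is the partiality the correction is pointing at.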
At that point you've lost associativity: ((state * transition) * transition) is meaningful, but (state * (transition * transition)) isn't well defined. Which means you're no longer talking about monoids.
Another way to look at it—by associativity, fold-left and fold-right should be equal. If they're not, or if one is defined and the other isn't, then you don't have associativity.
ps: nice punchline https://imgur.com/a/5g9wxPg
pps: thanks for the sub
Semigroups are not required to have any identity, and the monoidal identity needs to be both a left and right identity.
Here's one example I wrote up using trees as the data structure:
https://github.com/tmoertel/practice/blob/master/EPI/09/soln...
Here's another example, this one in Python: https://github.com/tmoertel/practice/blob/master/dailycoding...