Since all of my free time programming nowadays is in Gleam, I hope to have better examples for you in the future. :)
[0] https://git.ahlcode.fi/nicd/elektrofoni
An AI tour guide that presents the nearest geotagged Wikipedia pages for the user's location and then produces an entertaining summary of their chosen topic on request. It's just a simple mashup of the Wikipedia, MaxBox and OpenAI APIs.
A "leaving the house" dashboard for my wife. It's displayed on a tablet near her mirror, and shows weather and live public transport information to reduce the "I missed the bus and I didn't realize it's raining" disasters that get her day off to a bad start.
I've had a lot of fun making these with Gleam. It has the "if it compiles, it usually works" factor that people love about Rust, with none of the complicated borrow-checking rules. It's very simple - once you're up and running, there's not a lot more to learn about the language and you can just focus on modelling your problem.
If you write JavaScript like it’s Java and really lean into OO, then Elixir will be more alien and force you to learn some new patterns.
I'd say the only major difference between them is the typing and the syntax.
Consider what happens when you have a larger JSON object to decode (which is often the case). That would require a substantial amount of code that you'd later have to read through, staying wary of minor changes.
Contrast that with Rust's approach, where you have just a few annotations that you can quickly scan to spot anything unusual.
This is honestly a major turnoff for me with Gleam and doesn't make me want to use the language for anything where I need to handle JSON (despite all the other things I appreciate about the language).
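For readers who haven't used serde, the Rust side of that comparison looks roughly like this. It's just a sketch with made-up type and field names, but it shows why there's so little to scan: one derive per type, and nested structs come along for free.

```rust
use serde::Deserialize;

// Hypothetical payload shape, purely for illustration.
#[derive(Deserialize)]
struct Order {
    id: u64,
    customer: Customer,
    items: Vec<Item>,
}

#[derive(Deserialize)]
struct Customer {
    name: String,
    email: String,
}

#[derive(Deserialize)]
struct Item {
    sku: String,
    quantity: u32,
}

fn main() -> Result<(), serde_json::Error> {
    let json = r#"{
        "id": 1,
        "customer": {"name": "Ada", "email": "ada@example.com"},
        "items": [{"sku": "ABC-1", "quantity": 2}]
    }"#;

    // The only decoding "boilerplate" is the derive above; nested
    // structs and collections are handled automatically.
    let order: Order = serde_json::from_str(json)?;
    println!("{} item(s) for {}", order.items.len(), order.customer.name);
    Ok(())
}
```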
What strangeness are you expecting to encounter?
The type system should help you if you change any fields. It's hard for me to imagine this causing anything beyond slight annoyance at the small additional maintenance required as your types evolve.
For example:
* Rename keys to lowercase, uppercase, snake_case, etc.
* Deserialize incoming data with a "type" field that specifies what enum variant it should target.
* Default to a value if the key doesn't exist.
* Skip serializing if it's None (or include it).
* Skip a field completely.
And the list goes on.
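For reference, here's roughly what those behaviours look like as serde attributes; the event type below is invented for illustration:

```rust
use serde::{Deserialize, Serialize};

// The "type" field selects the variant, and variant names are
// written in snake_case ("page_view", "purchase").
#[derive(Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum Event {
    PageView {
        // Fall back to Default if the key is missing.
        #[serde(default)]
        referrer: String,
        // Omitted from the output when None.
        #[serde(skip_serializing_if = "Option::is_none")]
        campaign: Option<String>,
        // Never serialized or deserialized.
        #[serde(skip)]
        cached_score: f64,
    },
    Purchase {
        // Map a differently-cased key onto this field.
        #[serde(rename = "SKU")]
        sku: String,
        amount_cents: u64,
    },
}
```

Each behaviour in the list maps onto a single attribute, which is what makes the whole thing scannable at a glance.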
With Rust's declarative derive attributes, everything is immediately visible, so I know what to expect; with Gleam I'd have to carefully read through the decoder code every time to understand what it does. There's a ton of cognitive overhead here that doesn't need to exist.
Remember that it's user data we're parsing, so we'd have to maintain (update and debug) this code continually, which means reading through the boilerplate again and again.
Small nitpick:
> One drawback of this sound type system is that converting untyped input from the outside world into data of known types requires some additional code which would not be required in unsound systems.
It isn't really a consequence of the sound type system but of the runtime representation: if that representation requires type information to be safely constructed and manipulated, then you do need to generate code to do so. The compiler could instead choose a more dynamic representation, e.g. compiling to ordinary Erlang maps / JS objects.
The other problem is ensuring that such casting is safe, but that requires runtime checking even in dynamically typed languages.
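To make that last point concrete (sticking with Rust rather than Gleam for the example): even with a fully dynamic, map-like representation such as serde_json's Value, where no decoding code has to be generated, getting typed data back out still means a runtime check per field.

```rust
use serde_json::Value;

fn main() {
    // Dynamic representation: no generated decoding code required...
    let v: Value = serde_json::from_str(r#"{"name": "Lucy", "age": 8}"#)
        .expect("valid JSON");

    // ...but every typed access is still a runtime check that can fail.
    let name = v["name"].as_str().unwrap_or("unknown");
    let age = v["age"].as_u64().unwrap_or(0);

    println!("{name} is {age}");
}
```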