Is there a direct comparison of why someone should choose this over alternatives? 27 bytes down to 18 bytes (for their example) just doesn't seem like enough of a benefit. This clearly isn't targeted to me in either case, but for someone without much knowledge of the space, it seems like a solution in search of a problem.
Whatever your messaging format is going to be, the performance will mostly depend on the application developer and their understanding of the specifics of the format. So, the 20% figure seems arbitrary.
In practical terms, I'd say: if you feel confident dealing with binary formats and like fiddling with this side of your application, making your own is probably the best way to go. If you don't like doing that, or don't know how, then choosing the format with the most mature and robust tooling around it is probably the best option.
----
NB. It's also useful to remember that data transfer over the network is discrete, with the minimum chunk of information being one MTU. So, for example, if most of the messages exchanged by the application were already smaller than one MTU before attempting to optimize for size, then making those messages shorter will yield no tangible benefit: shrinking a 1,000-byte payload to 800 bytes still puts exactly one packet on the wire. It's really only worth starting to think about size optimizations when a significant portion of the messages measure in at least the low double digits of MTUs, if we believe the 20% figure.
It's a similar situation with storage, which is also discrete, with the minimum chunk being one block. Similar reasoning applies there as well.
https://www.w3.org/TR/webauthn-2/#sctn-conforming-all-classe...
Reading through this, it looks like they toss out indefinite length values, "canonicalization", and tags, making it essentially MP (MP does have extension types, I should say).
https://fidoalliance.org/specs/fido-v2.0-ps-20190130/fido-cl...
[0]: https://github.com/getml/reflect-cpp/tree/main/benchmarks
As a standard it's almost exactly the same as MsgPack; the difference is mostly just that CBOR filled out underspecified parts of MsgPack (things like how extensions for custom types work, etc.).
Performance is just one aspect, and using "poor" to describe it is very misleading. Say "not performant" if that is what you meant.
CBOR is MessagePack. The story is that Carsten Bormann wanted to create an IETF standardized MP version, the creators asked him not to (after he acted in pretty bad faith), he forked off a version, added some very ill-advised tweaks, named it after himself, and submitted it anyway.
I wrote this up years ago (https://news.ycombinator.com/item?id=14072598), and since then the only thing they've addressed is undefined behavior when a decoder encounters an unknown simple value.
Wouldn't it just decode to 1,type,value 2,type,value, with no names, since there's no schema?
Human-readable key names are a big part of what makes a self-describing format useful, but they also contribute to bloat; a format with an embedded schema in the header would help.
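To make the bloat concrete, here's a rough sketch (assuming the @msgpack/msgpack npm package; the record and its field names are made up) comparing the same record encoded as a map with string keys versus a positional array, which is roughly what a schema-in-the-header approach would buy you:

    import { encode } from "@msgpack/msgpack";

    // Hypothetical record; the field names are purely illustrative.
    const asMap = encode({ firstName: "Ada", lastName: "Lovelace", age: 36 });
    // Positional form: key names live in an out-of-band schema
    // ["firstName", "lastName", "age"] instead of in every message.
    const asTuple = encode(["Ada", "Lovelace", 36]);

    console.log(asMap.byteLength);   // key names repeated in every record
    console.log(asTuple.byteLength); // only the values are on the wire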
https://media.licdn.com/dms/image/v2/D5612AQF-nFt1cYZhKg/art...
Source: https://www.linkedin.com/pulse/json-vs-messagepack-battle-da...
Looking at the data, I'm inclined to agree that not much CPU is saved, but the point of MessagePack is to save bandwidth, and it seems to be doing a good job at that.
Significant with regard to what? Not doing anything? Flipping the toggle to compress the response?
To me it doesn't. There's compression for much bigger gains. Or, you know, just send less data?
I've worked at a place where our backend regularly sent humongous JSONs to all the connected clients. We were all pretty sure this could be reduced by 95%. But who would try to do that? There wasn't a business case. If someone tried and succeeded, no one would notice. If someone tried and broke something, it'd look bad. So, status quo...
I've tried MessagePack a few times, but to be honest the hassle of the debugging was never really worth it.
Thus, the only thing you can do after that to improve performance is to reduce bytes on the wire.
Encoding/decoding an array of strings in JavaScript is going to have a completely different performance profile than encoding/decoding an array of floats in a lower-level language like C.
If you have a lot of strings, or lots of objects where the total data in keys is similar to the total data in values, then msgpack doesn't help much.
But when you have arrays of floats (which some systems at my work have a lot of), and if you want to add a simple extension to make msgpack understand e.g. JavaScript's TypedArray family, you can get some very large speedups without much work.
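As an illustration of what such an extension can look like, here's a sketch assuming the @msgpack/msgpack package and an arbitrarily chosen application extension tag (this is not the exact extension from the comment above, just the general shape):

    import { ExtensionCodec, encode, decode } from "@msgpack/msgpack";

    const FLOAT64_ARRAY_EXT = 1; // application-specific tag (0..127), chosen arbitrarily
    const extensionCodec = new ExtensionCodec();

    extensionCodec.register({
      type: FLOAT64_ARRAY_EXT,
      encode: (value: unknown): Uint8Array | null => {
        if (value instanceof Float64Array) {
          // Copy the raw bytes wholesale instead of encoding element by element.
          return new Uint8Array(value.buffer, value.byteOffset, value.byteLength);
        }
        return null; // not ours; let the default encoders handle it
      },
      decode: (data: Uint8Array): Float64Array => {
        // Copy into a fresh, aligned buffer before viewing it as float64.
        return new Float64Array(data.slice().buffer);
      },
    });

    const payload = { samples: new Float64Array([1.5, 2.5, 3.5]) };
    const bytes = encode(payload, { extensionCodec });
    const roundTripped = decode(bytes, { extensionCodec });

The speedup comes from moving the raw buffer in one memcpy-like step rather than walking the array one float at a time.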
I don't think using a dictionary of key values is the way to go here. I think there should be a dedicated "table" type, where the column keys are only defined once, and not repeated for every single row.
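MessagePack doesn't have such a type today, but you can approximate the idea on top of plain arrays. A rough sketch, assuming the @msgpack/msgpack package (the records and column names are invented for illustration):

    import { encode, decode } from "@msgpack/msgpack";

    // Emulated "table": column names appear once, rows are positional arrays.
    type Table = { columns: string[]; rows: unknown[][] };

    function toTable(records: Record<string, unknown>[]): Table {
      const columns = Object.keys(records[0] ?? {});
      return { columns, rows: records.map((r) => columns.map((c) => r[c])) };
    }

    function fromTable(table: Table): Record<string, unknown>[] {
      return table.rows.map((row) =>
        Object.fromEntries(table.columns.map((c, i) => [c, row[i]] as [string, unknown]))
      );
    }

    // Hypothetical data: the keys are paid for once instead of once per row.
    const records = [
      { id: 1, name: "a", score: 0.5 },
      { id: 2, name: "b", score: 0.7 },
    ];
    const bytes = encode(toTable(records));
    const roundTripped = fromTable(decode(bytes) as Table);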
Unlike JSON, you can’t just open a MessagePack file in Notepad or vim and have it make sense. It’s often not human readable. So using MessagePack to store config files probably isn’t a good idea if you or your users will ever need to read them for debugging purposes.
But as a format for something like IPC or high-performance, low-latency communication in general, MessagePack brings serious improvements over JSON.
I recently had to build an inference server that needed to be able to communicate with an API server with minimal latency.
I started with gRPC and protobuf since it's what everyone recommends, yet after a lot of benchmarking, I found a much faster approach: serving MessagePack over HTTP with a Litestar Python server (it's much faster than FastAPI), using msgspec for very fast MessagePack encoding and ormsgpack for very fast decoding.
Not sure how this beat protobuf and gRPC, but it did. Perhaps the Python gRPC implementation is just slow. It was still faster than JSON over HTTP, however.
It's like JSON in that it's a serialisation format.
Is there a non-teleological manner in which to evaluate standards?
> The most useful aspect of JSON is undoubtedly wide support
This is a fantastic example of how widespread technology doesn't imply quality.
Don't get me wrong, I love JSON. It's a useful format with many implementations of varying quality. But it's also a major pain in the ass to deal with: encoding errors, syntax errors, no byte syntax, schemas are horribly implemented. It's used because it's popular, not because it has some particular benefit.
In fact, I'd argue JSON's largest benefit over competing serializers has been not giving a fuck about the quality of (de)serialization. Who gives a fuck about the semantics of parsing a number when that's your problem?!?
+ Object and Array values need to be entirely and deeply parsed. You cannot skip them.
+ Object and Array cannot be streamed when writing. They require a 'count' at the beginning, and since the 'count' size can vary in number of bytes, you can't even "walk back" and update it. It would have been MUCH, MUCH better to have a "begin" and "end" tag --- err pretty much like JSON has, really.
You can alleviate these problems by using extensions, storing a byte count to skip, etc., but really, if you have to start there, you might as well use another format altogether.
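For reference, the reason you can't just reserve space for the count and backfill it is that the marker byte and the header width themselves depend on the count (per the MessagePack spec; the helper below is only an illustration):

    // MessagePack array headers: the marker and its width depend on the element
    // count, so you can't reserve a fixed-size slot and patch the count in later.
    function arrayHeader(count: number): Uint8Array {
      if (count < 16) {
        // fixarray: 1 byte, count packed into the marker itself
        return Uint8Array.of(0x90 | count);
      } else if (count < 0x10000) {
        // array 16: marker + big-endian uint16, 3 bytes total
        return Uint8Array.of(0xdc, count >> 8, count & 0xff);
      } else {
        // array 32: marker + big-endian uint32, 5 bytes total
        return Uint8Array.of(
          0xdd,
          (count >>> 24) & 0xff, (count >>> 16) & 0xff,
          (count >>> 8) & 0xff, count & 0xff
        );
      }
    }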
Also, from my tests, it is not particularly more compact, unless again you spend some time and add a hash table for keys and embed that -- but then again, at the point where that becomes valuable, you might as well gzip the JSON!
So in the end, in my experience, it is a lot better to use some sort of 'extended' JSON format with the idiocies removed (no errors on trailing commas, no forcing double quotes for keys, etc.).
Most languages know exactly how many elements a collection has (to say nothing of the number of members in a struct).
For example, pseudo code in a sub-function:
if (that) write_field('that'); if (these) write_field('these');
With MessagePack you have to apply that logic once to count, then again to write, and keep that state for each nesting level, etc.
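A sketch of that two-pass dance, assuming the @msgpack/msgpack package for the individual values (the message shape and the hand-rolled writer are hypothetical, just to show the pattern):

    import { encode } from "@msgpack/msgpack";

    // Hypothetical message with optional fields.
    interface Msg { that?: string; these?: number; }

    function encodeMsg(msg: Msg): Uint8Array {
      // Pass 1: figure out how many fields will actually be written.
      const present: [string, unknown][] = [];
      if (msg.that !== undefined) present.push(["that", msg.that]);
      if (msg.these !== undefined) present.push(["these", msg.these]);

      // The map header (here a fixmap, enough for < 16 entries) must come first...
      const parts: Uint8Array[] = [Uint8Array.of(0x80 | present.length)];
      // ...then pass 2: write each key/value. Values are self-delimiting, so
      // concatenating individually encoded items forms a valid map body.
      for (const [k, v] of present) {
        parts.push(encode(k), encode(v));
      }

      // Stitch the pieces together.
      const out = new Uint8Array(parts.reduce((n, p) => n + p.byteLength, 0));
      let offset = 0;
      for (const p of parts) { out.set(p, offset); offset += p.byteLength; }
      return out;
    }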
For example, consider serializing something like [fetch(url1), join(fetch(url2), fetch(url3))]. The outer count is knowable, but the inner isn't. Even if the size of fetch(url2) and fetch(url3) are known, evaluating a join function may produce an unknown number of matches in its (streaming) output.
JSON, Protobuf, etc. can be very efficiently streamed, but it sounds like MessagePack is not designed for this. So processing the above would require pre-rendering the data in memory and then serializing it, which may require too much memory.
Protobuf yes, JSON no: you can't properly deserialize a JSON collection until it is fully consumed. The same issue you're highlighting for serializing MessagePack occurs when deserializing JSON. I think MessagePack is very much written with streaming in mind. It makes sense to trade write-efficiency for read-efficiency, especially as the entity primarily affected by the tradeoff is the one making the cut, in msgpack's case. It all depends on your workloads, but I've done benchmarks for past work where msgpack came out on top. It can often be a good fit when you need to do stuff in Redis.
(If anyone thinks to counter with JSONL, well, there's no reason you can't do the same with msgpack).
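For instance, here's a rough sketch of the msgpack equivalent of JSONL, assuming the @msgpack/msgpack package and its decodeMultiStream helper (the records are made up):

    import { encode, decodeMultiStream } from "@msgpack/msgpack";

    // Writer side: concatenate encoded records one after another.
    // No newline framing is needed because each value is self-delimiting.
    async function* records(): AsyncGenerator<Uint8Array> {
      yield encode({ event: "start", ts: 1 });
      yield encode({ event: "stop", ts: 2 });
    }

    // Reader side: decodeMultiStream yields one decoded value per record.
    async function main() {
      for await (const value of decodeMultiStream(records())) {
        console.log(value);
      }
    }

    main();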
How do these things matter in any use case where a binary protocol might be a viable alternative? These specific issues are problems for human-readability and -writability, right? But if msgpack was a viable technology for a particular use case, those concerns must already not exist.