If Microsoft would also ship the library in %system32%, we would have a truly cross-platform and stable, OS-provided and -patched high-level network protocol client.
(So that probably won't happen)
I never needed curl on Windows, because on OSes that provide a full-stack developer experience, such things are part of the platform SDK and the rich language runtimes.
It is only an issue with C and C++, and their reliance on POSIX to complement the slim standard library, effectively making UNIX their "runtime" for all practical purposes.
And now for a personal opinion: I'll take libcurl over .NET's IHttpClientFactory any day.
Additionally, writing little wrappers around OS APIs is something that every C programmer has known since K&R C went outside UNIX V6, which again is why POSIX became a thing.
Just `new HttpClient` and cache/dispose it. Or have DI inject it for you. It will do the right thing without further input.
The reason this factory exists is to pool the underlying HttpMessageHandlers, which hold an internal connection pool for efficient connection/stream reuse (in the case of HTTP/2 or QUIC) in large applications that use DI.
Edit: I had a recollection of seeing something like this before; this might be it: https://www.codeproject.com/articles/1045674/load-exe-as-dll...
It is possible to do that in the general sense though.
I'm not sure if this is accurate. Why do they include a default alias in Powershell for `curl` that points to the `Invoke-WebRequest` cmdlet then?
I've always installed curl myself and removed the alias on Windows. Maybe I've never noticed the default one because of that.
Guessing this is for backwards compatibility with scripts written for the days when it was just PowerShell lying to you.
$ 7z l /Volumes/CCCOMA_X64FRE_EN-US_DV9/sources/install.wim | egrep ' 1/Windows/Sys.*/curl.exe'
2024-09-06 00:02:14 ....A 672312 343918 1/Windows/System32/curl.exe
2024-09-06 00:02:14 ....A 585160 323710 1/Windows/SysWOW64/curl.exe
This is, in fact, curl: > C:\Windows\System32\curl.exe --version
curl 8.9.1 (Windows) libcurl/8.9.1 Schannel zlib/1.3 WinIDN
Release-Date: 2024-07-31
Protocols: dict file ftp ftps http https imap imaps ipfs ipns mqtt pop3 pop3s smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS HSTS HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM SPNEGO SSL SSPI threadsafe Unicode UnixSockets
My personal favorite *nix utility nobody knows ships with Windows is tar.exe, which is libarchive's bsdtar and therefore supports a wide variety of archive formats.

["1/…" is 7z syntax for the first image in the WIM file — Windows 11 Home, in this example — included here to avoid redundant output; curl.exe is included in all images. "…/SysWOW64/curl.exe" is a 32-bit curl build (32-bit Windows programs see "Windows\SysWOW64" as "Windows\System32").]
TIL. However, some caveats: https://github.com/libarchive/libarchive/issues/2092
I wish the Python core developers had even the level of commitment to stability that developers of JavaScript frameworks do. Instead they intentionally break API compatibility every single release, I suppose because they assume that only worthless ideas are ever expressed in the form of Python programs.
But then, I normally try to stay on the leading edge. I think it’s more difficult if you leave it 2+ years between updates and ignore deprecation warnings. But with a year between minor releases, that leaves almost a two year window for moving off deprecated things.
I think that’s reasonable. I don’t experience the pain you describe, and I don’t get the impression that the Python project treats Python programs as “worthless”. The people working on Python are Python users too, why would they make their own lives difficult?
Nobody has to worry about ignoring deprecation warnings in libcurl, or for that matter in C, in English, in Unicode, or in linear algebra. There's no point at which your linear algebra theorems stop working because the AMS has deprecated some postulates. Euclid's theorems still work just as well today as they did 2000 years ago. Better, in fact, because we now know of new things they apply to that Euclid couldn't have imagined. You can still read Mark Twain, Shakespeare, or even Cicero without having to "maintain" them first, though admittedly you have to be careful about interpreting them with the right version of language.
That's what it means for intellectual work to have lasting value: each generation can build on the work of previous generations rather than having to redo it.
Last night I watched a Primitive Technology video in which he explains why he wants to roof his new construction with fired clay tiles rather than palm-leaf thatch: in the rainy season, the leaves rot, and then the rain destroys his walls, so the construction only lasts a couple of years without maintenance.
Today I opened up a program I had written in Python not 2000 years ago, not 200 years ago, not even 20 years ago, but only 11 years ago, and not touched since then. I had to fix a bunch of errors the Python maintainers intentionally introduced into my program in the 2-to-3 transition. Moreover, the "fixed" version is less correct than the version I used 11 years ago, because previously it correctly handled filename command-line arguments even if they weren't UTF-8. Now it won't, and there's evidently no way to fix it.
I wish I had written it in Golang or JS. Although it wasn't the case when I started writing Python last millennium, a Python program today is a palm-leaf-thatched rainforest mud hut—intentionally so. Instead, like Euclid, I want to build my programs of something more lasting than mere masonry.
I'm not claiming that you should do the same thing. A palm-leaf-thatched roof is easier to build and useful for many purposes. But it is no substitute for something more lasting.
Today's Python is fine for keeping a service running as long as you have a staff of Python programmers. As a medium of expression of ideas, however, it's like writing in the sand at low tide.
I mean, that last part really unravels your point. Linguistic meanings definitely drift significantly over time in ways that are vitally important, and there are no deprecation warnings about them.
Take the second amendment to the USA constitution, for example. It seems very obviously scoped to “well-regulated militias”, but there are no end to the number of gun ownership proponents who will insist that this isn’t what was meant when it was written, and that the commas don’t introduce a dependent clause like they do today.
Take the Ten Commandments in the Bible. It seems very obvious that they prohibit killing people, but there are no end to the number of death penalty proponents who are Christian who will insist that what it really prohibits is murder, of which state killings are out of scope, and that “thou shalt not kill” isn’t really what was meant when it was written.
These are very clearly meaningful semantic changes. Compatibility was definitely broken.
If “you have to be careful about interpreting them with the right version of the language”, then how is that any different to saying “well just use the right version of the Python interpreter”?
> Today I opened up a program I had written in Python not 2000 years ago, not 200 years ago, not even 20 years ago, but only 11 years ago, and not touched since then. I had to fix a bunch of errors the Python maintainers intentionally introduced into my program in the 2-to-3 transition.
In your own words: You have to be careful about interpreting it with the right version of the language. Just use a Python 2 interpreter if that is your attitude.
I don’t believe software is something that you can write once and assume it will work in perpetuity with zero maintenance. Go doesn’t work that way, JavaScript doesn’t work that way, and Curl – the subject of this article – doesn’t work that way. They might’ve released v7.16.0 eighteen years ago, but they still needed to release new versions over and over and over again since then.
There is no software in the world that does not require maintenance – even TeX received an update a few years ago. Wanting to avoid maintenance altogether is not achievable, and in fact is harmful. This is like sysadmins who are proud of long uptimes. It just proves they haven’t installed any security patches. Regularly maintaining software is a requirement for it to be healthy. Write-once-maintain-never is unhealthy and should not be a goal.
Isn't fixing this the whole point of Python's "surrogateescape" handling? Certainly, if I put the filename straight from sys.argv into open(), Python will pass it through just fine:
$ printf 'Hello, world!' > $'\xFF.txt'
$ python3 -c 'import sys; print(open(sys.argv[1]).read())' $'\xFF.txt'
Hello, world!
Though I suppose it could still be problematic for logging filenames or otherwise displaying them.

If you're a mathematician in the nineteenth century (for example Peano) you know what a set is in some sense, and if pressed you'll admit that you don't really have a formal way to explain your intuition. Nevertheless you feel content to write about sets as if they're formally defined, even for the infinite sets.
Turns out you've been relying on an unstated assumption, the Axiom of Choice. When Ernst Zermelo and Abraham Fraenkel wrote down axioms for a coherent set theory, they discovered that, oops, the system works fine either way with regard to this axiom (many years later it was proved to be entirely independent), and yet mathematicians had been gaily assuming it was true without saying so.
So in a sense Peano and, say, Turing are working with different, slightly incompatible versions of mathematics. Euclidean geometry works fine... but today you'll be told that our universe's geometry isn't actually Euclidean, so if something big enough doesn't behave as predicted, that's to be expected. Euclid's model is neat, but it's not actually a scale model of our universe; our universe is much stranger.
But other than that botched transition, Python is very stable. My Python photo downloader from 2004 did not need any changes throughout Python 2's lifetime and still works today (using the 2.7 interpreter). The oldest Python 3 script I've found is from 2019, and it still works just fine without any changes.
Most notable is PEP 594 (“Removing dead batteries from the standard library”), which removed 19 obsolete modules. They were deprecated in Python 3.11 (2022) and removed in Python 3.13 (2024).
So everybody has had two years to update their obsolete code if they want to immediately use new versions of the Python interpreter, and if that isn’t long enough, Python 3.12 is officially supported until 2028. So if you use a module like sunau, which handles an audio format used by SPARC and NeXT workstations in the 80s, then you have six years to figure out what WAVs or MP3s are.
Basically, even though Daniel might say "I didn't change the ABI", if your code worked before and now it doesn't, as far as you're concerned that's an ABI break. This particularly shows up with changed defaults and with removing stuff that's "unused", except that you relied on it, and so now your code doesn't work. Daniel brings up NPN because that seems easy for the public Internet, but there have been other examples where a default changed and, well... too bad: you were relying on something and now it's changed, but you should have just known to set what you wanted explicitly and then you'd have been fine.
Ohh that takes me back, that feature was used heavily in the FXP warez scene (the one the proper warez people looked down on), you’d find vulnerable FTP servers to gain access to, and the best ones would support this. That way you could quickly spread releases over multiple mirrors without being slowed down by your home internet.
That's progress I believe.
To parent's downvoters: would you kindly cut him some slack? It's OK to ask if you don't know. https://xkcd.com/1053/
Now I'm wondering if/how managed code of e.g. dotnet solves this issue, but that might be too much of a tangent.
Jon Skeet has some good examples [2]
[1] https://learn.microsoft.com/en-us/dotnet/core/compatibility/... [2] https://codeblog.jonskeet.uk/2018/04/13/backward-compatibili...
That sounds like a super useful feature that would be great if more FTP servers supported it. I guess FTP itself is a dying protocol these days, but it's extremely simple and does what it says on the tin.
Well, Android anyway. I don't know how things work in the Apple world. It's bizarre that whatever the "official" method of file transfer is is so bad. Also, managing files on Android is, on its own, very bad. FTP allows connecting a decent file manager to the phone and doing the management externally.
I think it will survive as a protocol as a fallback mechanism. Ironically I used FTP on a smartphone here and there because Smartphone OS are abysmally useless. Don't get me started with your awesome proprietary sync app, I don't do trashy and they all are.
Otherwise I do everything today through scp and http, but it is less optimal technically. It just happens to be widely available. FTP theoretically would provide a cleaner way for transfers and permission management.
We have an ancient (in JavaScript years) app that webpacks stupid template systems and polyfills and all sorts of cruft that should never have been there in the first place, and we haven't had to touch any of that in years.
What dependency or feature forces you to chase versions?
I ask this because I'd like to know what practices I might want to avoid to guarantee that there is no ABI breakage in my C project.
If you have a struct which might grow, don't actually make it part of the ABI, don't give users any way to find its size, and write functions to create, destroy and query it.
Thanks! This is very insightful. What is a solution to this? If I cannot expose structs that might grow what do I expose then?
Or is the solution something like I can expose the structs that I need to expose but if I need to ever extend them in future, then I create a new struct for it?
Option 1: If allocating from the heap or somewhere otherwise fixed in place, then return a pointer-to-void (void *) and cast back to pointer-to-your-struct when the user gives it back to you.
Option 2: If allocating from a pool, just return the index.
Then for your internal stuff you define what's inside T and you can use T normally.
Also, even if you're returning an index, learn from Unix's mistake and don't say it's an integer. Give it some other type, even if that type is just an alias for a primitive integer type, because at least you are signalling that these are not integers and you might make a few programmers not muddle these with other integers they've got. A file descriptor is not, in fact, a process ID, a port number, or a retry count, and 5 is only any of those things if you specify which of them it is.
What's inside a T? How can we make one? We don't know, but that's fine since we have been provided with APIs which give us a pointer-to-T and which take a pointer-to-T so it delivers the opacity required.
This got a bit messy because Windows also included compatibility hacks for clients that didn't set the length correctly.
typedef struct {
    char name[50];
    int age;
} Person;

vs

typedef struct {
    int age;
    char name[50];
} Person;
Basically anything that moves bytes around in memory for data structures that are passed around. Of course any API breakage is also an ABI breakage.

I don't think this is true. You can change things on a superficial level in a source language that still compiles down to the same representation in the end.
If you add new signatures or data structures, software compiled against the previous version should still work with the new version.
In my opinion the whole issue is more important on Windows than on Linux. Just recompile the application against the new library or keep both the old and the new soversion around.
Some Linux distributions go into major contortions to make ABI stability work, and still compiled applications that are supposed to work with newer distros crash. It is a waste of resources.
Debian chose to do both: https://wiki.debian.org/ReleaseGoals/64bit-time . Wherever they could, they recompiled much of the stuff, changing package names from libsomething to libsomethingt64; where they couldn't recompile, the app still "works" (does not segfault) but links with a 32-bit library that just gets wrong values. Other distros had a flag day: they essentially recompiled everything and didn't bother with non-packaged stuff that was compiled against the old 32-bit libs, thus breaking ABI.
Some function in your library used to return 42 and now returns 43, and an app with 10,000 users asserted that you returned 42? That's an ABI break.
There are more obvious examples: renaming functions that someone linked to? ABI break. Change the size of a struct in a public header? ABI break, usually (there are mitigations for that one).
The list goes on and Hyrum's law very much applies.
Another way to define ABI compatibility is that folks who promise it are taking the same exact stance with respect to folks who link against them as the stance that Linus takes regarding not breaking userspace: https://lkml.org/lkml/2012/12/23/75
If I promise you ABI compat, and then I make a change and your app breaks, then I'm promising that it's my fault and not yours.
Plenty of people told WG21 (the C++ committee) that they need to fix this, P1863R0 ("ABI: Now or Never") is five years old on Friday. If you bet "Never" when that paper was written, you're ahead by five years, if you understand how this actually works you know that you have much longer. When Titus wrote that he estimated 5-10% perf left on the table. That's a thin margin, on its own it would not justify somebody to come in and pick that up. But it isn't on its own, there are a long list of problems with C++.
I get mad even reading about this. If I were in his shoes I'd demand they fork and rename their implementation. Too many times we hear about overzealous Debian patches being the source of issues.