What a time.
Does not look very cheap to me. Please note that $1 of 1973 is approximately $7 of 2024, so prices of usable configurations quickly reach the $100k to $200k territory, with a few grand of monthly upkeep.
The PDP-11 seems like a bargain for what was fairly close to cutting edge technology at the time.
[1] https://en.wikipedia.org/wiki/Bell%27s_law_of_computer_class...
The real breakout for Unix was that it was something you could grab for free, and later it became much easier to port software to other platforms.
That price list seems to be mostly for the PDP-11/40, which according to http://gunkies.org/wiki/PDP-11_Memory_Management does seem to have supported a kernel/user mode split and a base register for keeping multiple processes resident in core, without any extra options, so I infer it was suitable for Unix. But I'm not sure if you might have needed the KT11-D addon memory management unit.
The 01974 CACM paper https://dl.acm.org/doi/pdf/10.1145/361011.361061 says Unix could run on hardware costing as little as [US]$40 000, which would be about US$250 000 today.
I think it's common for each office employee today to receive a US$2500 computer, so US$250k is a 100-employee departmental computer budget, not counting things like network infrastructure and servers. In other workplaces such as machine shops, coal mines, construction sites, and cattle ranches, capital investment per employee is commonly several times that.
Unix was not designed to support that many users (the uid was only 8 bits), so the Programmer's Workbench users over the next few years took to sharing a single uid across a whole team of users. Later versions of Unix, of course, expanded the uid to 16 and later 32 bits.
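As a quick illustration of those limits, here's the arithmetic in shell (nothing Unix-version-specific, just the ranges the field widths allow):

```shell
# Number of distinct users each uid width can represent:
echo "8-bit uids:  $(( 1 << 8  ))"    # 256 -- hence whole PWB teams sharing one uid
echo "16-bit uids: $(( 1 << 16 ))"    # 65536
echo "32-bit uids: $(( 1 << 32 ))"    # 4294967296 (needs a 64-bit-arithmetic shell)
```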
https://ourworldindata.org/grapher/capital-intensity-vs-labo... seems to have a measure of capital intensity (capital stock per worker), but I don't understand how to interpret it. The US today is around 198, and 103 in 01973, and the units are supposed to be inflation-adjusted (02010) dollars per work hour. But work hours are a flow, not a stock, and we're supposed to be measuring capital stock. So does that mean dollars per work hour per year? If so, that works out to about US$200k of capital stock per full-time worker in 01973, as an average across the economy. That would be a US$700k PDP-11-equivalent for every 4–8 workers in 01973, or for every 2–4 workers today.
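If that reading of the units is right, the conversion is just dollars-per-work-hour times hours in a full-time year; assuming roughly 2000 hours:

```shell
# 103 inflation-adjusted dollars per annual work hour (figure quoted above),
# times ~2000 work hours in a full-time year (a rough assumption):
echo $(( 103 * 2000 ))   # 206000, i.e. roughly US$200k of capital stock per worker
```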
https://fred.stlouisfed.org/series/RKNANPUSA666NRUG puts the total capital stock of the US at 23 trillion inflation-adjusted dollars in 01973, and https://fred.stlouisfed.org/series/PAYEMS puts total nonfarm employment at about 78 million for the country. This works out to about US$290 000 per employee, which is in the same ballpark as the French data but a bit higher, a US$700k PDP-11 for every 2–3 workers. But maybe a lot of the capital stock was on farms, which are excluded from the denominator here.
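The division behind that ballpark figure, sketched with shell arithmetic (values as quoted above):

```shell
capital_stock=23000000000000   # US$23 trillion, 01973, inflation-adjusted
nonfarm_jobs=78000000          # ~78 million nonfarm employees
echo $(( capital_stock / nonfarm_jobs ))   # 294871 -- about US$290k per employee
```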
I remember a 10-foot-long book at my college for the Michigan Terminal System (MTS) because we didn't have UNIX running on the mainframe... I can't remember what UNIX ran on now; it was 1984–1988 at RPI. Anybody remember what UNIX ran on? It wasn't the VAX on the Voorhees building altar.
(Long live kremvax!!! <https://en.wikipedia.org/wiki/Kremvax>.)
My favorite book in this regard is the annotated source of Unix.
Nowadays there is no way to get such a grasp of the system.
Considering most users would probably have been reading this on a fanfold-paper printout, an index like this was quite good ergonomically.
> For each file in the given directory ("." if not specified) dsw types its name. If "y" is typed, the file is deleted; if "x", dsw exits; if anything else, the file is not removed.
Before looking for a builtin command to do that, I'd execute a find(1) in my editor to load a buffer up with potential files to delete, and then xargs(1) the edited buffer to rm(1) them; without xargs I'd probably just prepend "rm -f" to all lines and then execute the whole buffer; if I wished to do it the slow way I could pipe those names through a shell loop; etc. etc.
(with ed(1) you'd need to first write the buffer to the filesystem, then bang-execute it, but the same workflow would suffice even on a hard tty)
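The non-interactive core of that workflow can be sketched like this (directory and filenames are made up for illustration; note this breaks on names containing whitespace or newlines, exactly the kind of file dsw(1) existed for):

```shell
# Sketch of the find(1) -> edit -> xargs(1) -> rm(1) workflow described above.
mkdir -p /tmp/dsw-demo && cd /tmp/dsw-demo
touch keep.txt junk1.tmp junk2.tmp
find . -name '*.tmp' > candidates
# (at this point you would open "candidates" in your editor and delete the
#  lines for any files you want to spare)
xargs rm -f < candidates
ls   # keep.txt and candidates survive; the *.tmp files are gone
```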
EDIT: see sibling for real story: dsw(1) was meant to remove files with shell-inexpressible filenames, so none of the above would apply. Not being a unix-kernel hacker (and not having had my shell sessions corrupted by noise chars due to someone attempting to use the modem line for voice in decades), I've always managed to use wildcards to get a suitably-unique typeable match to the occasional binary-named junk file.
I have not used cp or rm with their stock behavior for decades, because I always use aliases that set the defaults I consider correct. Similarly for many other traditional UNIX utilities.
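The sort of aliases meant here, for example (these particular options are one common choice, not necessarily the commenter's):

```shell
# Prompt before clobbering or deleting; -v reports each action (GNU coreutils).
alias cp='cp -iv'
alias mv='mv -iv'
alias rm='rm -iv'   # per-file confirmation; GNU rm also offers -I to prompt once
```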