Riveting
This seems like another media opportunity about a nothing-burger event for an index that has lost relevance; other indices like the S&P 500 and QQQ incorporated NVIDIA a while ago. They’re just playing catch-up.
AFAIK its only real utility is for very long-term comparisons against old values of itself, where its long baseline may be valuable compared to other metrics that don't stretch back as far.
Everything else is just flim-flam for getting views/clicks or comforting very old viewers with something that is a familiar staple.
The one I know of off the top of my head is DIA.
In the end, how funds are marketed and presented is an important part to understand. It is also better for them to sell an actively managed fund with a bigger number on it than an index fund with a negative one.
It’s a crappy price-weighted benchmark of 30 stocks invented over 100 years ago; that’s why Vanguard doesn’t offer a fund.
The largest DJIA ETF is small potatoes: SPY’s average daily volume is higher than DIA’s total assets under management.
VOO tracks the S&P 500 and VTI tracks the US total market, both of these are much better, more diversified options for equity investing.
The DJIA is a price-weighted index and doesn’t track dividends, yet is somehow supposed to reflect investor sentiment.
The media wants to not go bankrupt. It’s the customers that decide whether and how that’s possible.
I’d say that depends. Many media organizations operate without profits, or even at a loss, in order to serve the public good, promote an ideology, engage in activism, spread misinformation, etc.
Yes, several state-run and private media organizations operate without profits or at a loss in order to spread misinformation.
> which major media operations are operating at a loss how for the public good (discounting state run) and how?
Sorry, I was unable to properly understand your question.
In any case, I never said the organizations that are not primarily concerned with chasing profits are “major” ones. I believe that applies best to lesser-known entities, like those that live on social media or in the blogosphere. Though there are some instances of major state-run companies, even on TV and radio, that operate in a similar fashion.
Also, I am surely not equating the spread of misinformation with serving the public good — these are just distinct objectives that may be sought by media organizations, rather than avoiding bankruptcy.
"Chicago Sun-Times becomes nonprofit newspaper with $61 million in backing as WBEZ merger closes"
<https://www.chicagotribune.com/2022/01/31/chicago-sun-times-...>
Of newspapers operating at a loss or as philanthropies, there are the Baltimore Banner,[2] The Guardian,[3] and ProPublica,[4] all of which operate as non-profits relying on a mix of advertising, subscriptions, and philanthropy. The privately held Washington Post is a for-profit paper that has operated at a loss for years, and that was before losing ~10% of its subscribers due to recent editorial decisions.[5][6]
There are numerous propaganda institutions (usually labeled as "think tanks") promulgating various ideologies or interests, with the Atlas Network being amongst the largest and most influential:
<https://www.sourcewatch.org/index.php?title=Atlas_Network>
________________________________
Notes:
1. See for example the WSJ's coverage: "Chicago Public Media to Acquire Chicago Sun-Times, Creating a Nonprofit Local-News Powerhouse" <https://www.wsj.com/amp/articles/chicago-public-media-to-acq...> archive/paywall: <https://archive.is/LP3Q6>.
2. A non-profit newspaper established in 2022: <https://en.wikipedia.org/wiki/The_Baltimore_Banner>.
3. A slightly dated take on the 2016 turmoil at The Guardian: "Everything you need to know about the Guardian’s giant bust-up" (2016-05-18) <https://www.standard.co.uk/lifestyle/london-life/fueding-and...> and Wikipedia's entry on the Scott Trust Limited, which underwrites the paper: <https://en.wikipedia.org/wiki/Scott_Trust_Limited>.
4. ProPublica was established as a 501(c)(3) in 2007, with funding from the Sandler, Knight, MacArthur, and Ford foundations, along with the Pew Charitable Trusts, Carnegie Corporation, and Atlantic Philanthropies: <https://en.wikipedia.org/wiki/ProPublica>.
5. "The Washington Post publisher disclosed the paper lost $77 million last year. Here’s his plan to turn it around" (2024-5-23) <https://www.cnn.com/2024/05/23/media/washington-post-will-le...>
6. "Washington Post cancellations hit 250,000 – 10% of subscribers" (2024-10-29) <https://www.theguardian.com/media/2024/oct/29/washington-pos...>
For most of the listed orgs, it's a different business organisation (not-for-profit rather than shareholder-based), the organisations aren't intended to run an operating profit, and the content is available to far more than just those who pay directly for access.
Mind that in the case of ad-supported print media, the principal customers (the advertisers) weren't identical to the set of readers. But access was largely limited to those who subscribed directly, bought a newsstand copy, or could access a copy obtained by either method. In either case the operation was generally intended to run a profit.
There have also been free papers, either supported entirely by advertising (frequently "entertainment weeklies"), or published as propaganda organs for a given organisation, frequently religious or political.
Free papers might be either for-profit (ad-supported) or not-for-profit (propaganda). Ultimately there's a rather blurry line between advertising and propaganda: both are messaging modes in which the publisher is more interested in distribution than the reader.
It’s a good example of the media being split into two populations. The free media, which is filler for ads. And media one pays for. The financial press isn’t really headlining this story; it’s on CNBC.
This argument has been around since time immemorial. The right way to think of it is more like a country club or a who's who, rather than a survey or a directory.
As for the news at hand, it's really more about Intel than Nvidia. Sic transit gloria mundi.
It has engineers. It might yet surprise. Nvidia hasn't proven it can turn a ridiculous amount of capital into a matrix-multiplication moat.
Those were the telecoms, transport, and infrastructure companies of their day. They connected the industrial and agricultural output of the United States in much the way Apple, Amazon, Boeing, Verizon, and Walmart (all current components of the DJIA) do today.
That said, yes, it's interesting to watch how the components and industrial sectors represented change over time.
The first DJIA proper (26 May 1896) featured cotton oil, sugar, tobacco, gas & coke (coal), cattle feed, an electrical utility, lead, railroads, leather, rubber, and a holding company (trust) largely engaged in utilities and transportation, which was dropped later the same year, along with US Rubber. Changes to the average have been a consistent feature since its origins.
Think of those which aren't directly comparable to modern concerns (e.g., oil & gas, electric utilities) as raw materials (mining and ag), transport and logistics, and food (or feed).
Many exist.
This is how it has always been.
It's not that useful now that we have computers, but in the early 1900s it was a reasonably good approximation of the market using fast math.
On which point, John K. Galbraith's The Great Crash: 1929 (1954) remains an excellent history of those events (and notes the DJIA's value frequently), as well as a general primer on equities and investments, and how they may go wrong.
1. sum 500 of the biggest companies by size (price * n shares).
or
2. have WSJ editors select 30 companies by any criteria they see fit, but you don't get to see the size of the companies, only the share price.
The way that the DJIA changes isn't the same as the way an index of, say, the n most highly capitalised equities might change (Fortune 5, 10, 20, S&P 500, etc.).
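To make the difference between options 1 and 2 concrete, here's a minimal toy sketch (Python, with made-up prices, share counts, and divisor; the real DJIA divisor is published and adjusted over time, the 3.0 here is purely illustrative):

    stocks = {
        # name: (share_price, shares_outstanding) -- hypothetical numbers
        "A": (500.0, 1e9),    # expensive shares, mid-sized company
        "B": (50.0, 20e9),    # cheap shares, huge company
        "C": (120.0, 2e9),
    }

    # Option 2, DJIA-style price weighting: only share prices matter,
    # scaled by a divisor that gets tweaked on splits and component changes.
    divisor = 3.0
    price_weighted_level = sum(p for p, _ in stocks.values()) / divisor

    # Option 1, cap weighting: each company's weight is its share of total market cap.
    caps = {k: p * n for k, (p, n) in stocks.items()}
    cap_weights = {k: c / sum(caps.values()) for k, c in caps.items()}

    print(price_weighted_level)   # driven mostly by "A"'s $500 share price
    print(cap_weights)            # "B" dominates despite its $50 share price

In the price-weighted case a 10% move in A moves the average far more than a 10% move in B, even though B is a much bigger company.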
Index funds are big business. Dropping Intel and replacing it with NVIDIA will cause a rebalancing of the DJIA index fund investments from Intel to NVIDIA. Yup, there is an index premium.
But the S&P 500 index funds have somewhere around ~$2.5T invested, and that is spread across 500 companies (~$40T total valuation), so it averages out to about $5B per company.
Thus you are correct that the S&P 500 is more influential in terms of index fund reallocations, but only roughly 5x more influential.
There is, as far as I can tell, zero point to the Dow; it's a completely useless tracker that is reported on because people talk about it because it's reported on.
I am not sure what point you’re making, but there is ~$38B invested in index funds (the ones I mentioned in the previous post) that track the DJIA.
Granted it is a fraction of the index funds which track the S&P 500.
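Back-of-the-envelope, using the rough figures quoted above (both are estimates, and neither index is equal-weighted, so per-company averages are only a crude gauge):

    sp500_indexed = 2.5e12        # ~$2.5T tracking the S&P 500
    djia_indexed  = 38e9          # ~$38B tracking the DJIA

    per_company_sp500 = sp500_indexed / 500    # ~$5B of passive money per constituent
    per_company_djia  = djia_indexed / 30      # ~$1.3B per constituent

    print(per_company_sp500 / per_company_djia)   # ~4x, i.e. the "roughly 5x" above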
Sounds about the same as:
> that is reported on because people talk about it because it's reported on.
The unique aspect is that a randomly weighted index is just outright stupid however you look at it.
Since Nvidia was already a part of the S&P 500 (and other similar indices) prior to its big run, those index investors generally profited from its rise. New flows into those funds do help prop it up, though.
The DJIA is a weird historical relic, and there's little reason for anyone to buy a fund tracking it. It's possible that those who did anyways will end up holding a tiny fraction of the bag due to this change, but it's not a big effect.
AI bubble then burst. Funds are “adjusted” and someone is not getting his pension.
They were a key cause for LLMs being a thing in the first place.
NVidia has continued to stay ahead because every alternative to CUDA is half-baked trash, even when the silicon makes sense. As a tech company, trading $$ for time not spent dealing with compatibility bugs and broken drivers pretty much always makes business sense.
> dealing with compatibility bugs
> broken drivers
Describes my experience trying to use CUDA perfectly.
We have a long way to go and we haven't even started yet.
If you need people to abandon an ecosystem that's been developed steadily over nearly 20 years for your shiny new thing in order to keep it around, you'll never compete.
But AMD and others could have done the same, had they been better at riding that wave. There’s a reason it was Nvidia who won out.
I was using it as soon as it came out in 2007, and my dinky desktop workstation was outperforming the mainframe in the basement.
Twenty years ago I was thinking we'd be speccing machines in kilocores by now.
> Number of SMs is a more appropriate equivalent to CPU core count.
What do you mean by this? Why should an SM be considered equivalent to a CPU core? An SM can do 128 simultaneous adds and/or multiplies in a single cycle, where a CPU core can do, what, 2 or maybe 4? Obviously it depends on the CPU / core / hyperthreading / # of math pipelines / etc., but the SM to CPU-core ratio of simultaneous calculations is in the double digits. It’s a tradeoff where the GPU has some restrictions in return for being able to do many multiples more at the same time.
If you consider an SM and a CPU core equivalent, then the SM’s perf can exceed the CPU core’s by ~2 orders of magnitude — is that the comparison you want? If you consider a GPU thread lane and a CPU thread lane equivalent, then the GPU thread lane is slower and more restricted. Neither comparison is apples to apples; CPUs and GPUs are made for different workloads. But arguing that an SM is equivalent to a CPU core seems equally or more “misleading” when you’re leaving out the tradeoff.
I’d argue that comparing SMs to cores is misleading, and that it makes more sense to compare chips by their thread counts. Or don’t compare cores at all and just look at the performance in, say, FLOPS.
That's using SIMD, but so is Nvidia for all intents and purposes. Those "cuda cores" aren't truly independent: when their execution diverges, masking is used pretty much like you'd do in CPU SIMD.
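A rough CPU-side picture of what that masking looks like, as a NumPy sketch (hardware warp divergence actually runs the two sides one after the other with inactive lanes switched off, but the effective cost is similar to computing both and selecting):

    import numpy as np

    x = np.array([1.0, -2.0, 3.0, -4.0])

    # Per-lane "branch": lanes where the condition holds take one path, the rest the other.
    mask = x > 0
    result = np.where(mask, np.sqrt(np.abs(x)), x * x)   # both sides evaluated, mask selects

    print(result)   # [ 1.  4.  1.7320508  16. ]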
A lot of the control logic is per-SM or perhaps per-SIMD unit -- there are multiple of those per SM. You could perhaps make a case that it's the individual SIMDs which correspond to CPU cores (that makes the flops line up even more closely). It depends on what the goal of the comparison is.
https://images.nvidia.com/aem-dam/Solutions/geforce/news/rtx...
An SM is split into four identical blocks, and I would say each block is roughly equivalent to a CPU core. It has a scheduler, registers, 32 ALUs or FPUs, and some other stuff.
A CPU core with two AVX-512 units can do several integer operations plus 32 single-precision operations (including FMA) per cycle. Not 2 or 4. An older CPU with 2-3 AVX2 units could fall slightly behind, but it's pretty close.
That doesn't factor in the tensor units, but they're less general purpose, and CPUs usually put such things outside the cores.
I would say an SM is roughly equivalent to four CPU cores.
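For a rough sense of the numbers being argued about, assuming illustrative clocks (not vendor specs) and counting an FMA as two FLOPs:

    # One SM: 128 FP32 FMA lanes per cycle, ~2 GHz assumed.
    sm_gflops = 128 * 2 * 2.0          # ~512 GFLOPS peak FP32 per SM

    # One CPU core with two AVX-512 FMA units: 32 FP32 FMA lanes per cycle, ~4 GHz assumed.
    core_gflops = 32 * 2 * 4.0         # ~256 GFLOPS peak FP32 per core

    print(sm_gflops / core_gflops)     # ~2x: an SM lands at "a few CPU cores", not 100x

The two-orders-of-magnitude figure only appears if you compare the SM against a core doing 2-4 scalar FLOPs per cycle.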
Yes, considering CPU SIMD, maybe comparing a CPU core to a CUDA warp makes some sense in some situations. The peak FLOPS rate is still so much higher on Nvidia though, that the comparison hardly makes sense. So yeah like I and the other commenter mentioned, it depends entirely on what comparison is being made.
Nvidia/CUDA process: Download package. Run the build. It works. Run your thing- it's GPU accelerated. Go get a beer/coffee/whatever while your net runs.
AMD process: Download package. Run the build. Debug failure. Read lots of articles about which patches to apply. Apply the patches. Run the build. It fails again. Shit. OK ok, now I know what to do. I need a special fork of the package. Go get that. Find it doesn't actually have the same API that the latest pytorch/tf relies on. OK, downgrade those to an earlier version. OK, now we're good. Run the build again. Aw shit. That failed again. More web searches. Oh ok, now I know - there are some other patches you need to apply to this branch. OK cool. Now it compiles. Install the package. Run the thing. Huh. That's weird. GPU acceleration isn't on.... sigh....
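For what it's worth, the sanity check I'd run at the end of either process (a minimal sketch assuming a PyTorch install; ROCm builds of PyTorch reuse the torch.cuda namespace, so the same calls apply):

    import torch

    print(torch.cuda.is_available())               # the "huh, acceleration isn't on" check
    print(torch.version.cuda)                      # CUDA toolkit version, or None
    print(getattr(torch.version, "hip", None))     # ROCm/HIP version, or None
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))       # which GPU PyTorch actually sees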
The first transformer models were developed at Google. NVIDIA was the card du jour for accelerating them in the years since and has contributed research too, but your statement goes way too far.
The first transformer models didn’t even use CUDA and CUDA didn’t have mass ecosystem inroads till years later.
I’m not trying to downplay NVIDIA but they specifically mentioned cause and effect, and then said it was because of NVIDIA.
NVIDIA absolutely contributed to the foundation but they are not the foundation alone.
AlexNet was great research, but they could have done the same on other vendors' hardware at the time too. The hardware didn’t exist in a vacuum.
To say it wouldn’t have been possible on AMD is ludicrous, and there is a pattern to your comments where you dismiss any other company’s efforts or capabilities but are quite happy to lay all the laurels on NVIDIA.
The reality is that multiple companies and individuals got us to where we are, and multiple products could have done the same. That's not to take away from NVIDIA's success, it's well earned, but if you took them out of the equation, there's nothing that would have prevented the tech existing.
> It was just what happened to be available and most straightforward at the time
AMD made better hardware for a while, and people wanted OpenCL to succeed. The reason nvidia became dominant was that their competitors simply weren’t good enough for general purpose parallel compute.
Would AI still have happened without CUDA? Almost certainly. However nvidia still had a massive role in shaping what it looks like today.
It’s one of the projects I think the Khronos group mishandled the most unfortunately.
If I were to categorize the successful ones:
glTF, KTX, SPIR-V, OpenGL and its variants, WebGL
People will say Vulkan but it has the same level of adoption as OpenCL, and has the same issue that it competes against vendor specific APIs (DX and Metal) that are just better to use. It’s still used though of course as a translation target but imho that doesn’t qualify it as a success.
OpenCL was and is a failure of grand magnitude. As was colada.
I graduated college in 2010 and I took a class taught in CUDA before graduating. CUDA was a primary driver of NN research at the time. Sure, other tools were available, but CUDA allowed people to build and distribute actually useful software which further encouraged the space.
Could things have happened without it? Yeah, for sure, but it would have taken a good deal longer.
Deep learning was only possible because you could do it on NVidia cards with Cuda without having to use the big machines in the basement.
Trying to convince anyone that neural networks could be useful in 2009 was impossible - I got a grant declined and my PhD supervisor told me to drop the useless tech and focus on something better like support vector machines.
The difference is AMD killed theirs to use OpenCL and NVIDIA kept CUDA around as well.
I tried using AMD Stream and it lacked documentation, debugging information, and most of the tools needed to get anything done without a large team of experts. NVidia, by comparison, could be - and was - used by single grad students on their franken-stations which we built out of gaming GPUs.
The less we talk about the disaster that the move to opencl was the better.
That said ROCm is quite a recent thing borne out of acknowledging that OpenCL 2 was a disaster.
OpenCL 1 had a reasonable shot and was gaining adoption but 2 scuppered it.
I did write quite a bit of OpenCL prior to that on Intel/AMD/NVIDIA, both for training and for general rendering, and did some work with Stream before then.
Cuda by comparison JustWorks^tm.
1 was definitely a lot easier to work with than 2. CUDA is easier than both but I don’t think I hit anything I could do in CUDA that I couldn’t do in OpenCL, though CUDA of course had a larger ecosystem of existing libraries.
Thanks for the trip down memory lane.
Management at the two could not be more opposite if they tried.
Nvidia's advantage is that they have by far the most complete programming ecosystem for them. (Also honestly… they're a meme stock.)
- CUDA conceived in 2006 to build supercomputers for scientific computing.
- CUDA influencing GPU designs to be dual purpose, with the major distinction being RAM amounts (for scientific compute you need a lot more RAM compared to gaming).
- Crypto craze driving extreme consumer GPU demand, which enabled them to invest heavily in R&D and scale up production.
- AI workload explosion arriving right as the crypto demand was dying down.
- Consistently great execution, or at least not making any major blunders, during all of the above.
It doesn’t mean they didn’t make a bunch of mistakes; it’s that when they did, there was no competition to realistically turn towards, and they fixed a lot of their mistakes.
It could have been AMD/ATI profiting from such random events, as they were the ones that financed AI development.
See also MD/Boeing
So yes, they would have tried to integrate it into the rest of Intel.
From 2019 to 2021, Rivera was Intel’s chief people officer, leading the company’s Human Resources organization worldwide.
I heard a rumor that Jensen wouldn't agree to the acquisition unless he became CEO of the combined entity.
Every day I look at stocks and wish I'd known to buy the ones that went up today.
Overall, pretty uninteresting and uninsightful, unless you have Doc Brown's DeLorean with the Mr. Fusion upgrade (BTTF II). Even then, would it actually be good if you built your own reality to such an extent? I'm sure life's weird for the UHNWs; unclear if it's actually better. We're all still stuck on the same planet, and our kids and kids' kids will all face the same struggles.
Even still, today is a pretty interesting temporal location to occupy!
https://youtube.com/watch?v=bKgf5PaBzyg
metadat slowly saunters off into the nearby foggy embankment, disappearing, Homer Simpson style
You can’t compare yourself to the board of the premier microchip manufacturing company (at the time). They should have more information than you, and they are paid to make more informed decisions (obviously they can be wrong too).
You'd think that such a multi-year shortage of a product would be used as an opportunity by other players to jump in and make great money. Yet that major machinery of capitalism is failing here.
I think the real takeaway here is that the right conditions within companies can lead to breakthroughs. If a company employs the smartest engineers and gets them to work 12-hour days, it's obvious that they are going to take the lead.
So, the ROI would have been much, much lower.
It makes sense when the growing company doesn't have a path forward (e.g. YouTube's bandwidth costs) or the price is truly crazy (e.g. WhatsApp).
It's not clear to me why Instagram sold out to Facebook for $1B.
Or maybe Intel would be partitioning off Nvidia like the memory unit.
My local gym has a tier of cable so low that they don't even get CNN. For music, there's K-pop.
It was made this way because 139 years ago we didn't have computers and someone had to manually calculate the average.
Two companies that were engineering-driven and future-forward back in the day, and today are run into the ground by a Quarterly Reports And Nothing Else Matters culture and meandering around.
They'd be a lovely cultural fit.
I wonder how badly "worked as a manager at Intel" poisons a resume.
Observing Intel, my feeling has been that "best you can" has been "bail out" since just after the PPro. The company has worked hard to waste a whole generation of talent and possibility. It took immense effort.
Anyone who looks at that performance and says "I was proud to be part of that" is not someone who will be happy working beside me, anyway.
Some companies get results, and some specialize in Powerpoint and politics. It all comes down to the managers in charge.
Would you hire a VP/manager that spent too much time at IBM or similar? How long can you work there without getting infected?
If Google is the new Microsoft, is AMD the new Intel?
Was a fun ride.
But worse, how we talk about the index, even today on radio and TV, in terms of nominal points up or down is completely ridiculous. Just yesterday "the Dow Jones was up 100 points". Ok - no reference value, no percentage change: means nothing to me. And we wonder why we are financially illiterate in this country.
SPY and VTI are 80-85% the same thing
VT includes a ton of non-US stocks and is significantly different
The DJIA is not meant to be viewed in isolation but in the context of other indexes like DJTA, the idea being that if industrial companies are doing well but the railroads or shipping aren't, it will be reflected in those respective averages diverging, hinting at deeper economic problems.
Also they adjust for stock splits now.
For example, UNH and especially GS are the largest individual components despite not being amongst the largest companies, giving their share movements disproportionate weight in the “index”.
Perhaps it made sense 140 years ago but today it belongs in the rubbish.
Except it might be very hard to tell that because it’s effectively randomly weighted.
Caterpillar has almost the same weight as MS and double that of Amazon.
Personally I prefer the %
With a market cap weighted index you have to make more frequent trades every time a company does a buyback or issues new shares.
Not true for a price-weighted index. If there is a stock split, the price will change, as will the weighting. AAPL split 4-for-1 in August 2020. It was 11% of the DJIA before the split - with a share price close to $500. Post-split, all DJIA-tracking funds had to sell AAPL as it probably went down to a 2.5% weight; consequently, all the other stocks in the DJIA had to be bought. Luckily, there aren't too many dollars tracking the DJIA.
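A toy sketch of those mechanics (made-up prices and divisor, only roughly scaled to the August 2020 situation):

    prices = {"AAPL": 500.0, **{f"STK{i}": 130.0 for i in range(29)}}   # 30 hypothetical components

    def weights(p):
        total = sum(p.values())
        return {k: v / total for k, v in p.items()}

    divisor = 0.152                                  # hypothetical, roughly DJIA-scale
    level_before = sum(prices.values()) / divisor
    print(round(weights(prices)["AAPL"], 3))         # ~0.117, i.e. about 11% of the average

    prices["AAPL"] /= 4                              # 4-for-1 split quarters the share price
    divisor = sum(prices.values()) / level_before    # divisor rescaled so the level doesn't jump
    print(round(sum(prices.values()) / divisor))     # same index level as before the split
    print(round(weights(prices)["AAPL"], 3))         # ~0.032: trackers sell AAPL, buy the other 29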
> With a market cap weighted index you have to make more frequent trades every time a company does a buyback or issues new shares.
It depends. If the index is free-float market-cap weighting then yes, there will typically be a free-float adjustment at each rebalance (typically quarterly). But if there's no free-float adjustment then you need not do anything. Though managing a fund that tracks a free-float weighted index is not really an issue - there's some operational work to do on each rebalance.
Why would you want to though? The ratios are entirely random.
This always really bugged me
Their new Core Ultra chips have pretty okay performance and good energy efficiency, their P/E core design seems to make sense, even their entry into the dedicated graphics segment seemed like a good thing for the average consumer.
But the 13th/14th gen issues were a pretty major hit. I do have an Intel Arc card that I got for a really nice price (cheaper than similar AMD/Nvidia GPUs), but it's not without its own share of issues even with pretty decently developed drivers. People seem to have taken the Core Ultra being a bit of a sidegrade pretty badly, and the pricing doesn't always seem all that competitive when it comes to CPUs (even the motherboards seem more expensive when compared to AMD).
What a bummer.
That said, when it comes to the consumer segment, even AMD doesn't seem to be doing all that well, for example their net revenue for gaming is down 69%: https://ir.amd.com/news-events/press-releases/detail/1224/am...
They're taking a beating, but I feel like every few years we hear an "intel has fallen behind", and then they dust themselves off and are at the top again.
Though Nvidia can charge a huge premium for GPUs, those machines still need a CPU (if I'm wrong, please correct me).
Is anyone building ARM-based servers? It looks like Intel has fought back in the Arm-on-Windows battle, and possibly won, with their newest power-saving Ultra 2s.
Do they really need to beat Nvidia?
And yet Intel still had 3/4 of the datacenter market by volume at least a few months ago…
Of course they are struggling with maintaining their margins.