So he's now the CEO of Anthropic, a company selling AI services?
Claude is amazing, and we use its Teams plan here at the office extensively (having switched from ChatGPT since Claude is vastly better at technical material and ad copy writing).
But, Anthropic definitely has a commercial motive... no?
I'm not saying a commercial motive is a bad thing - hardly... but this quote seems to be odd given the circumstances.
We used ChatGPT's Teams plan too with GPT-4, but were sold on Claude almost immediately. Admittedly we have not used GPT-4o recently, so we can't compare.
With technical information, Claude is vastly better at providing accurate information, even about lesser-known languages/stacks. For example, its ability to discuss and review code written in Gleam, Svelte, TypeSpec and others is impressive. It is also, in our experience, vastly better at "guided/exploratory learning" — where you probe with questions as you go down a rabbit hole.
Is it always accurate? Of course not, but we've found it to be on average better at those tasks than ChatGPT.
For example, here's their research mission: https://www.anthropic.com/research
And an example of one of their early research focuses, Constitutional AI: https://arxiv.org/abs/2212.08073
It sounds like they are at least trying to build on the notion of being a public benefit corporation, and create a business that won't devolve into "the chart must go up and to the right each quarter."
Time will tell of course, OpenAI was putatively started with good, non-profit intentions.
https://web.archive.org/web/20230714043611/https://openai.co...
Isn't the former already a red flag?
The only downside for me is having been involved in all these projects and knowing enough to innovate. At least I do try to warn people before we proceed.
* Acting in accordance with declared motivations is a demonstration of integrity.
* Acting towards hidden motivations that oppose your declared motivations is deceptive action.
Honest people don't want to lead and be responsible for deceptive action, even if the action is desirable.
For these types of people, it is often better to leave a place that requires them to act deceptively in favor of one that will let them operate with integrity.
Even if the end goal is the same, e.g., to make money.
It only seems odd to you because you are reading much more into it than he's ever said, like "AI should never be commercialized under any circumstances and it is impossible to do so correctly". Then yes, it would be hypocritical. But he didn't say that and never has; and Anthropic thinks they are doing it right.
Godspeed to Anthropic! Hopefully they can be a force for good, despite the various deals with the devil that they've taken. They've lost so many safety and e/acc people that I was getting dubious, but they certainly are staying in the fight.
Shame they're already for-profit... But don't worry, they Pinky Promise to be For The Public Benefit :)
Anthropic is legally a Delaware Public Benefit corporation so it's written into their corporate governance.
How effective that governance will be at balancing the public benefit with profit remains to be seen, but it's a lot more than just a pinky promise.
I'd stand by the general assertion that it's little more than a pinky promise because they merely have to "balance" the concerns according to "any reasonable person"--an extremely weak-seeming obligation to this non-lawyer--but it's certainly much more impactful than I thought, namely:
> Sections 365(b) and (c) provide broad protection to directors of public benefit corporations against claims based on interests other than those of stockholders
https://www.legis.delaware.gov/BillDetail?LegislationId=2235...

Good on you, Anthropic! In this specific case I believe in the director(s) a lot more than I believe in the shareholders ethics-wise, so it seems like a perfect choice. They can always fire him/them I suppose, but truly catastrophic AI risks would move faster than that, anyway.
By law and precedent a C-Corp’s only obligation is to shareholders, thanks to a case from almost a century ago: https://en.m.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.#:~:t....
A B-Corp was the first and somewhat successful attempt to create a legal framework where company executives are allowed to work on behalf of all their stakeholders without it creating an automatic basis for a suit.
Generally, people who care very deeply about a thing bring a higher ethical standard than any regulatory body can impose.
This shouldn't be news.
I think that is the crux of the matter.
People set up and fund a public bus system that has coverage for all neighborhoods, rich or poor, distant or close.
And then after the bus system is up and running, the bus system manager decides transportation is important! He IPOs the bus system, and changes all the routes to money-making routes with cost optimized (higher) fares.
But also, a money-making bus system would mean people actually use it. Nobody uses most American bus systems because they don't go anywhere, and they're slower when they do because they aren't popular enough to replace car traffic.
There's an arms race. OpenAI was ahead. Then Anthropic was ahead. Now GPT-4o and o1 are better again. This may change in a few months.
I'll miss the projects feature though.
I'm porting a medium sized project (40k loc) from iOS to Flutter and I couldn't be more happy with my setup with Claude. Every time I hit the Pro plan limit and I have to resort to ChatGPT the work that I have to put in to manually fix the code easily triples.
Turns out that Anthropic’s signup flow has been silently broken for months for Firefox users: https://old.reddit.com/r/ClaudeAI/comments/1bq06yz/phone_ver.... You get the SMS verification code, and you can enter it, but you get a barely visible “Invalid verification code” error message followed near-instantly by a refresh of the page. I reached out to support, but like many others, heard nothing back.
This barely-disguised contempt for what should likely be their most valuable power-user base suggests to me that a lot of the recent departures from OpenAI are being driven by push instead of pull, and I’m not convinced that Anthropic will remain a competent competitor in the LLM arms race long-term.
The new customer reality, courtesy of Google et al.
I signed up and paid for credits to access their API last weekend.
All requests still get rejected saying I don't have sufficient credit. This is despite their dashboard saying that I do indeed have the requisite credits.
No response despite reaching out to support.
Don't think I have been treated this indifferently by any other service in recent times.
I.e. - I think Anthropic is seeing a boon right now not because they’re doing things right, but because the competition is doing them worse.
Yikes. I am a long time Mozilla supporter, active user of Firefox since before it was Firefox, and former Mozilla employee, but this comment is pretty crazy.
Firefox is well below 3% market share, and is essentially a niche browser at this point - it sucks when I run into sites and services that aren't supported by Firefox, but I don't assume that it's contempt for me as a Firefox user. I simply assume that I, as a power user, have opted to use an alternate tool that has features that are compelling to me, and I certainly don't expect every business out there to prioritize my use of a niche tool.
I learned a long time ago that while power users can be an effective avenue for building a market for niche products, they also end up being some of the most problematic users, because of the assumptions that power user needs should be placed above the regular users. It's fine to want to be catered to, but it's not really great to assume malice when you aren't - it shows contempt for the prioritization of the limited resources they have available.
I’m not salty just because they don’t support the browser; you’re totally right that that’d be an unreasonable take, but it’s not the one I’m trying to make.
Source for this claim?
I use Open WebUI for when I want a website with some more features than a terminal provides.
They routinely log me out, and always make me wait for an email confirmation code just to log in every time, and it's sickening.
They also promise API credits but then don't actually give any.
Guess what: enterprises are made of people. People like to try things out. If people are not happy with something for their personal use, they most definitely are never going to recommend it to their employer. This is why OpenAI wins. It is in fact one of the factors that sets apart a hyper-successful product from a wannabe.
The AI emperor will not be the one who has the most consumers logging into product.com to use the chatbot.
Compound this with OpenAI’s continuous shedding of, as far as I can tell, every credible researcher… I find your position quite hard to believe, even accounting for the hysterical tone.
I don't believe this at all. I am not here to argue that Claude is a worse model, only that Anthropic is a worse company.
> The AI emperor will not be the one who has the most consumers logging into product.com
Your point only goes to show how much Anthropic hates its end users.
> OpenAI’s continuous shedding of, as far as I can tell, every credible researcher
OpenAI has zero trouble hiring great talent. As I see it, they lost a lot of dead weight that had no interest in bringing AI to the masses, but had an agenda of their own instead.
Huh? How so? Sorry not even clear what your complaint is… is it the Firefox (3% market share) login bug? The Claude chat experience has been superior for a while now, and Projects and Artifacts make it 100x so.
Good at hiring and bad at retaining is much worse than the reverse, especially for long-lived R&D projects.
It helps to read. It's noted in the original comment. It has nothing whatsoever to do with Firefox, as it manifests only on the Anthropic website.
I remain astonished that Adam continues to be the most widely used optimizer in AI 10 years later. So many contenders have failed to replace it.
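For anyone curious why Adam is so hard to displace: the update rule itself is remarkably compact. Here's a minimal sketch of one Adam step in plain Python/NumPy (this is my own illustration of the standard algorithm, not anyone's production code; hyperparameter defaults follow common convention):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: EMAs of the gradient and squared gradient,
    bias-corrected, then a per-parameter scaled step."""
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy demo: minimize f(x) = x^2 starting from x = 5.0
x = np.array(5.0)
m = v = np.array(0.0)
for t in range(1, 5001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
```

Ten lines of math, roughly scale-invariant steps out of the box, and defaults that work on most problems — that combination is a big part of why contenders keep failing to unseat it.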