Codebuff is different because we simplified the input to one step: you type what you want done in your terminal and hit enter. Then Codebuff looks through your whole codebase and makes the edits it sees fit, to existing source files or new ones. It can also run your tests, run the type checker, or install packages to fulfill your request.
Demo video: https://www.youtube.com/watch?v=dQ0NOMsu0dA
It all started at a hackathon. I was trying out Sonnet 3.5, which had recently come out, to see if I could use it to write code. The script I cobbled together that day pulled codebase context in one step and used it to rewrite files with changes in a second step. This two-step process still exists today. Incidentally, my hackathon script worked rather poorly and my demo failed to produce any useful code.
But that weekend I thought about the kind of errors it made, and realized that with more context on our codebase, it might have been able to get the change right. For example, it tried to create an endpoint on our server (at my previous startup), but it didn't know that you needed to edit 3 specific files to do this (yeah... our backend was not that clean). So I hand-wrote a guide to our codebase, like I was instructing a new hire. I put it in a markdown file and passed it into Sonnet 3.5's system prompt. And the crazy thing is that it started producing wayyy better code. So, I started getting excited. In fact, this code guide idea still exists in Codebuff today as knowledge.md files which are automatically read on every request.
I didn't think of this project as a startup idea at first. I thought it was just a simple script anyone could write. But after another week, I could see there were more problems to solve and it should be a product.
In the week between applying to YC and the interview, I could not get Codebuff to edit files consistently. I tried many prompting strategies to get it to replace strings in the original file, but nothing worked reliably. How could I face my interviewer if I could not get something basic like this to work? On the day before my interview, in a Hail Mary attempt, I fine-tuned GPT-4o to turn Claude's sketch of changes into a git patch, which would add and remove lines to make the edits. I only finished generating the training data late at night, and the fine-tuning job ran as I slept.
And, holy hell, the next morning it worked! I pushed it to production just in time for my YC interview with Dalton. Soon after, Brandon joined and we were off to the races.
So, how does Codebuff work exactly? You invoke it in your terminal, and it starts by running through the source files in that directory and subdirectories and parsing out all the function and class names (or equivalents in 11 languages). We use the tree-sitter library to do this. It builds out a codebase map that includes these symbols and the file tree.
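In code, that parsing step looks roughly like the sketch below. This is a minimal illustration using the tree-sitter Node bindings, not our actual implementation, which handles ~11 languages and also records the file tree.

    // Sketch: extract top-level function and class names with tree-sitter.
    // Assumes the npm packages `tree-sitter` and `tree-sitter-typescript`;
    // the real implementation covers ~11 languages and builds the file tree too.
    import Parser from "tree-sitter";
    import TypeScript from "tree-sitter-typescript";
    import { readFileSync } from "fs";

    function extractSymbols(path: string): string[] {
      const parser = new Parser();
      parser.setLanguage(TypeScript.typescript);
      const tree = parser.parse(readFileSync(path, "utf8"));
      const symbols: string[] = [];
      const walk = (node: Parser.SyntaxNode) => {
        if (node.type === "function_declaration" || node.type === "class_declaration") {
          const name = node.childForFieldName("name");
          if (name) symbols.push(name.text);
        }
        node.children.forEach(walk);
      };
      walk(tree.rootNode);
      return symbols; // e.g. ["handleRequest", "ApiClient", ...]
    }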
Then, it fires off a request to Claude Haiku 3.5 to cache this codebase context so user inputs can be responded to with lower latency. (Prompt caching is OP!) We have a stateless server that passes messages along to Anthropic or OpenAI. We use websockets to ferry data back and forth to clients. We didn't have authentication or even a database for the first three months. Codebuff was free to install and used our API keys for all requests. Luckily, no one exploited us for too much free Claude usage haha. Major thanks to Brandon for saving this situation by building out our database (Postgres + Drizzle), server (Bun, hosted on Render), auth (using the free Auth.js), website (NextJS, also hosted on Render), billing (Stripe), logging (BetterStack), and dashboard (Retool). This is the best tech stack I’ve ever had.
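For the caching step, here is roughly what a cache-warming call can look like with the Anthropic TypeScript SDK. The exact model id and payload we send are assumptions here; codebaseMap stands in for the file tree plus parsed symbols built above.

    // Sketch: warm Anthropic's prompt cache with the codebase map so later user
    // messages reuse it. The model id and payload shape are assumptions.
    import Anthropic from "@anthropic-ai/sdk";

    const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    async function warmCache(codebaseMap: string) {
      await anthropic.messages.create({
        model: "claude-3-5-haiku-latest",
        max_tokens: 16,
        system: [
          {
            type: "text",
            text: codebaseMap, // file tree + parsed symbols from the step above
            cache_control: { type: "ephemeral" }, // marks this prefix as cacheable
          },
        ],
        messages: [{ role: "user", content: "ack" }],
      });
    }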
When the user sends an input message, we prompt Claude to pick files that would be relevant (step 1). After picking files, we load them into context and the agent responds. It invokes tools using xml tags that we parse. It literally writes out <edit_file path="src/app.ts">…</edit_file> to edit a particular file, and has other tags to run terminal commands or to ask to read more files. This is all we really need, since Anthropic has already trained Claude with very similar tools to reach state of the art on the SWE-bench benchmark.
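To make that concrete, here is a sketch of how those tags could be parsed out of the model's output. Only the edit_file tag is confirmed above; the other tag names are hypothetical.

    // Sketch: pull tool invocations out of the model's output with a regex.
    // Only <edit_file> is confirmed above; the other tag names are hypothetical.
    type ToolCall = { tool: string; attrs: Record<string, string>; body: string };

    function parseToolCalls(output: string): ToolCall[] {
      const calls: ToolCall[] = [];
      const tagRe = /<(edit_file|run_terminal_command|read_files)([^>]*)>([\s\S]*?)<\/\1>/g;
      for (const m of output.matchAll(tagRe)) {
        const attrs: Record<string, string> = {};
        for (const a of m[2].matchAll(/(\w+)="([^"]*)"/g)) attrs[a[1]] = a[2];
        calls.push({ tool: m[1], attrs, body: m[3] });
      }
      return calls;
    }

    // parseToolCalls('<edit_file path="src/app.ts">...</edit_file>')
    //   -> [{ tool: "edit_file", attrs: { path: "src/app.ts" }, body: "..." }]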
Codebuff has limited free usage, but if you like it you can pay $99/mo to get more credits. We realize this is a lot more than competitors, but that’s because we do more expensive LLM calls with more context.
We’re already seeing Codebuff used in surprising ways. One user racked up a $500 bill by building out two Flutter apps in parallel. He never even looked at the code it generated. Instead, he had long conversations with Codebuff to make progress and fix errors, until the apps were built to his satisfaction. Many users built real apps over a weekend for their teams and personal use.
Of course, those aren't the typical use cases. Users also frequently use Codebuff to write unit tests. They build a feature in parallel with its unit tests and have Codebuff loop on the code until the tests pass. They also ask it to do drudge work like setting up OAuth flows or API scaffolding.
What's really exciting with all of these examples is that we're seeing people's creativity becoming unbridled. They're spending more of their time thinking about architecture and design, instead of implementation details. It's so cool that we're just at the beginning, and the technology is only going to improve from here.
If you want to use Codebuff inside your own systems, we have an alpha SDK that exposes the same natural language interface for your apps to call and receive code edits! You can sign up here for early access: https://codebuff.retool.com/form/c8b15919-52d0-4572-aca5-533....
Thank you for reading! We’re excited for you to try out Codebuff and let us know what you think!
we've seen our own productivity increase tenfold – using codebuff to buff our own code hah
let us know what you think!
In Codebuff you don't have to manually specify any files. It finds the right ones for you! It also pulls more files to get you a better result. I think this makes a huge difference in the ergonomics of just chatting to get results.
Codebuff will also run commands directly, so you can ask it to write unit tests and run them as it goes to make sure they are working.
Alright, I'm in.
Nice work!
Aider has extensive code for computing a "repository map", with specialized handling for many programming languages; that map is sent to the LLM to give it an overview of the project structure and a summary of files it might be interested in. It is indeed a very convenient feature.
I never tried writing and launching unit tests via Aider, but from what I remember from the docs, it should work out of the box too.
Another aspect is simplicity. I think Aider and other CLI tools tend to err on the side of more configuration and options, and we've been very intentional with Codebuff to avoid that. Not everyone values this, surely, but our users really appreciate how simple Codebuff is in comparison.
This is a good read: https://aider.chat/2023/10/22/repomap.html
Any specific reason to choose the terminal as the interface? Do you plan to make it more extensible in the future? (sounds like this could be wrapped with an extension for any IDE, which is exciting)
Also, do you see it being a problem that you can't point it to specific lines of code? In Cursor you can select some lines and CMD+K to instruct an edit. This takes away that fidelity, is it because you suspect models will get good enough to not require that level of handholding?
Do you plan to benchmark this with swe-bench etc.?
The terminal is actually a great interface because it is so simple. It keeps the product focused to not have complex UI options. But also, we rarely thought we needed any options. It's enough to let the user say what they want in chat.
You can't point to specific lines, but Codebuff is really good at finding the right spot.
I actually still use Cursor to edit individual files because I feel it is better when you are manually coding and want to change just one thing there.
We do plan to run SWE-bench. It's mostly the new Sonnet 3.5 under the hood making the edits, so it should do about as well as Anthropic's benchmark for that, which is really high, 49%: https://www.anthropic.com/news/3-5-models-and-computer-use
Fun fact: the new Sonnet was given two tools, to edit code and to run terminal commands, to reach this high score. That's pretty much what Codebuff does.
Hah. If you encounter people that think like this, run away because as soon as they finish telling you that terminals are stupid they inevitably want help configuring their GUI for k8s or git. After that, with or without a GUI, it turns out they also don’t understand version control / containers.
It's become my go-to tool for handling fiddly refactors. Here’s an example session from a Rust project where I used it to break a single file into a module directory.
https://gist.github.com/cablehead/f235d61d3b646f2ec1794f656e...
Notice how it can run tests, see the compile error, and then iterate until the task is done? Really impressive.
For reference, this task used ~100 credits
Thanks for sharing! haxton was asking about practical use cases, I'll link them here!
/codebuff/dist/manifold-api.js
Codebuff was originally called Manicode. We just renamed it this week actually.
There was meant to be a universe of "Mani" products. My other cofounder made Manifund, and there's a conference we made called Manifest!
I thought it would be fun if, when you asked it about the odds of the election or maybe something about AI capabilities, it could back up the answer by citing a prediction market.
Is that through the Enterprise plan?
We actually ended up not charging this guy since there was a bug where we told him he got 50,000 credits instead of 10,000. Oops!
However, we also try to leverage prompt-caching as much as possible to lower costs and improve latency.
So we basically only add files over time. Once context gets too large, it will purge them all and start again.
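A rough sketch of that append-then-purge policy is below; the token budget and the tokens-per-character estimate are made up, not our real numbers.

    // Sketch of the append-only file context with a hard reset. The budget and
    // the token estimate are illustrative, not Codebuff's actual values.
    const TOKEN_BUDGET = 150_000;
    const estimateTokens = (s: string) => Math.ceil(s.length / 4); // rough heuristic

    const contextFiles = new Map<string, string>(); // path -> contents

    function addFile(path: string, contents: string) {
      contextFiles.set(path, contents);
      const total = [...contextFiles.values()].reduce((n, c) => n + estimateTokens(c), 0);
      if (total > TOKEN_BUDGET) contextFiles.clear(); // purge everything and start again
    }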
Interesting! That has a 5-minute expiry on Claude, though, and your users can use Codebuff in a suboptimal way. Do you have plans to align your users toward using the tool in a way that makes the most of prompt caches?
I have noticed some small oddities, like every now and then it will remove the existing contents of a module when adding a new function, but between a quick glance over the changes using the diff command and our standard CI suite, it's always pretty easy to catch and fix.
The real problem I want someone to solve is helping me with the real niche/challenging portion of a PR, ex: new tiptap extension that can do notebook code eval, migrate legacy auth service off auth0, record and replay API GET requests and replay a % of them as unit tests, etc.
So many of these tools get stuck trying to help me "start" rather than help me "finish" or unblock the current problem I'm at.
I want the demos to be of real work, but somehow they never seem as cool unless it's a neat front end toy example.
Here is the demo video I sent in my application to YC, which shows it doing real stuff: https://www.loom.com/share/fd4bced4eff94095a09c6a19b7f7f45c?...
Historically, Pepsi won taste tests and people chose Coke. Pepsi is sweeter, so that first sip tastes better. But it's less satisfying (too sweet) to drink a whole can.
The sexy demos don't, in my opinion and experience, win over the engineers and leaders you need. Lil startups, maybe, and engineers that love the flavor of the week. But for solving real, unsexy problems—that's where you'll pull in organizations.
Great point, we're in talks with a company and this exact issue came up. An engineer used Codebuff over a weekend to build a demo app, but the CEO wasn't particularly interested even after he enthusiastically explained what he made. It was only when the engineer later used Codebuff to connect the demo app to their systems that the CEO saw the potential. Figuring out how to help these two stakeholders align with one another will be a key challenge for us as we grow. Thanks for the thought!
The long tail of niche engineering problems is the time consuming bit now. That's not being solved at all, IMHO.
Any links on this topic you rate/could share?
Takes a lot longer to write than just diving into the code. I think that's what they meant.
@Codebuff team, does it make sense to provide a documentation.md with exposition on the systems?
Codebuff natively reads any files ending in "knowledge.md", so you can add any extra info you want it to know to these files.
For example, to make sure Codebuff creates new endpoints properly, I wrote a short guide with an example of the three files you need to update, and put it in backend/api/knowledge.md. After that, Codebuff always creates new endpoints correctly!
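For illustration, such a knowledge.md might look something like this (the file names here are hypothetical, not our actual backend layout):

    # backend/api/knowledge.md  (hypothetical example)
    To add a new endpoint, update all three of these files:
    1. routes.ts          - register the route path and HTTP method
    2. handlers/<name>.ts - implement the handler
    3. schema.ts          - add the request/response types
    Follow the pattern of the existing get-user endpoint.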
Cursor Composer doesn't handle that and seems geared towards a small handful of handpicked files.
Would codebuff be able to handle a proper sized codebase? Or do the models fundamentally not handle that much context?
But Codebuff has a whole preliminary step where it searches your codebase to find relevant files to your query, and only those get added to the coding agent's context.
That's why I think it should work up to medium-large codebases. If the codebase is too large, then our file-finding step will also start to fail.
I would give it a shot on your codebase. I think it should work.
The code extruded from the LLM is still synthetic code, and likely to contain errors both in the form of extra tokens motivated by the pre-training data for the LLM rather than the input texts AND in the form of omission. It's difficult to detect when the summary you are relying on is actually missing critical information.
Even if the set up includes the links to the retrieved documents, the presence of the generated code discourages users from actually drilling down and reading them.
This is still a framing that says: Your question has an answer, and the computer can give it to you.
1 https://buttondown.com/maiht3k/archive/information-literacy-...
We build a description of the codebase including the file tree and parsed function names and class names, and then just ask Haiku which files are relevant!
This works much better and doesn't require slowly creating an index. You can just run Codebuff in any directory and it works.
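A minimal sketch of what that file-picking call could look like with the Anthropic SDK; the prompt wording, model id, and output format are assumptions, not our actual prompt.

    // Sketch of the file-picking step: send the codebase map plus the user's
    // request and ask for relevant paths. Prompt and model id are assumptions.
    import Anthropic from "@anthropic-ai/sdk";

    async function pickRelevantFiles(codebaseMap: string, userRequest: string): Promise<string[]> {
      const anthropic = new Anthropic();
      const res = await anthropic.messages.create({
        model: "claude-3-5-haiku-latest",
        max_tokens: 512,
        system: `Codebase map (file tree plus function/class names):\n${codebaseMap}`,
        messages: [{
          role: "user",
          content: `Which files are most relevant to this request? Reply with one path per line.\n\n${userRequest}`,
        }],
      });
      const block = res.content[0];
      const text = block.type === "text" ? block.text : "";
      return text.split("\n").map((l) => l.trim()).filter(Boolean);
    }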
RAG: providing the LLM with contextual data you’ve pulled from outside its weights that you believe relate to a query
Things that aren’t RAG, but are also ways to get a LLM to “know” things that it didn’t know prior:
1. Fine-tuning with your custom training data, since it modifies the model weights instead of adding context.
2. LoRA with your custom training data, since it adds a small set of low-rank adapter weights on top of a foundation model.
3. Stuffing all your context into the prompt, since there is no search step being performed.
This sounds like RAG and also that you’re building an index? Did you just mean that you’re not using vector search over embeddings for the retrieval part, or have I missed something fundamental here?
Either way, we do the search step a little differently and it works well.
I'm currently working on a demonstration/POC system using my ElasticSearch as my content source, generating embeddings from that content, and passing them to my local LLM.
Forgive my naivety, I don't know anything about LLMs.
For anyone interested:
- here's the Codebuff session: https://gist.github.com/craigds/b51bbd1aa19f2725c8276c5ad36947e2
- The result was this PR: https://github.com/koordinates/kart/pull/1011
It required a bit of back and forth to produce a relatively small change, and I think it was a bit too narrow with the files it selected (it missed updating the implementations of a method in some subclasses, since it didn't look at those files). So I'm not sure if this saved me time, but it's nevertheless promising! I'm looking forward to what it will be capable of in 6mo.
Hopefully the demo on our homepage shows a little bit more of your day-to-day workflows than other codegen tools show, but we're all ears on ways to improve this!
To give a concrete example of usefulness, I was implementing a referrals feature in Drizzle a few weeks ago, and Codebuff was able to build out the cli app, frontend, backend, and set up db schema (under my supervision, of course!) because of its deep understanding of our codebase. Building the feature properly requires knowing how our systems intersect with one another and the right abstraction at each point. I was able to bounce back and forth with it to build this out. It felt akin to working with a great junior engineer, tbh!
EDIT: another user shared their use cases here! https://news.ycombinator.com/item?id=42079914
> To give a concrete example of usefulness, I was implementing a referrals feature in Drizzle a few weeks ago, and Codebuff was able to build out the cli app, frontend, backend, and set up db schema
Record this!
Better yet, stream it on Twitch and/or YouTube and/or Discord and build a small community of followers.
People would love to watch you.
Yup, I had the same thought. I just ran into an issue during today's launch and used Codebuff to help me resolve it: https://www.tella.tv/video/solving-website-slowdown-with-ai-.... Next time, I'll try to record before I start working, but it's hard to remember sometimes.
I will admit, however, that my context switching has increased a ton, and that's probably not great. I often tell Codebuff to do something, inevitably get distracted with something else, and then come back later barely remembering the original task.
Claude wrote me a prosemirror extension doing a bunch of stuff that I couldn’t figure out how to do myself. It was very convenient.
Where LLMs shine is in being a personal Stack Overflow: asking a question and having a personalized, specific answer immediately, that uses one's data.
But solving actual, real problems still seems out of reach. And letting them touch my files sounds crazy.
(And yes, ok, maybe I just suck at prompting. But I would need detailed examples to be convinced this approach can work.)
> produce large amounts of convoluted code that in the end prove not only unnecessary but quite toxic.
What does that say about your prompting?
The reason we don't ask for human review is simply: we've found that it works fine to not ask.
We've had a few hundred users so far, and usually people are skeptical of this at first, but as they use it they find that they don't want it to ask for every command. It enables cool use cases where Codebuff can iterate by running tests, seeing the error, attempting a fix, and running them again.
If you use source control like git, I also think that it's very hard for things to go wrong. Even if it ran rm -rf from your project directory, you should be able to undo that.
But here's the other thing: it won't do that. Claude is trained to be careful about this stuff and we've further prompted it to be careful.
I think not asking to run commands is the future of coding agents, so I hope you will at least entertain this idea. It's ok if you don't want to trust it, we're not asking you to do anything you are uncomfortable with.
Could you please explain a bit how you are sure about it?
I think we have many other users who are similar. To be fair, sometimes after watching it install packages with npm, people are surprised and say that they would have preferred that it asked. But usually this is just the initial reaction. I'm pretty confident this is the way forward.
But we're open to adding more restrictions so that it can't for example run `cd /usr && rm -rf .`
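One way such a guardrail could look is a deny-list check before any command runs. This is purely illustrative, not our actual policy, and the patterns are intentionally crude.

    // Sketch of a pre-execution guardrail: refuse obviously destructive commands.
    // The deny-list is illustrative and intentionally crude, not Codebuff's policy.
    const DENY_PATTERNS = [
      /\brm\s+-[rRf]+\s+(\/|~)/, // rm -rf on absolute or home paths
      /\bcd\s+\//,               // wandering into system directories (overly broad on purpose)
      /\bmkfs\b|\bdd\s+if=/,     // disk-level operations
    ];

    function isCommandAllowed(cmd: string): boolean {
      return !DENY_PATTERNS.some((re) => re.test(cmd));
    }

    // isCommandAllowed("cd /usr && rm -rf .") -> false
    // isCommandAllowed("npm test")            -> true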
Assuming you trust it with the files in your codebase, and them being shared with third parties. Which is a hard pill to swallow for a proprietary program.
I can add it if tree sitter adds support for Svelte. I haven't checked, maybe it already is supported?
On the project itself, I don't really find it exciting at all, I'm sorry. It's just another wrapper for a 3rd party model, and the fact that you can 1) describe the entire workflow in 3 paragraphs and 2) build and launch it in around 4 months emphasizes that.
Congrats on launch I guess.
No worries if this isn't a good fit for you. You're welcome to try it out for free anytime if you change your mind!
FWIW I wasn't super excited when James first showed me the project. I had tried so many AI code editors before, but never found them to be _actually usable_. So when James asked me to try, I just thought I'd be humoring him. Once I gave it a real shot, I found Codebuff to be great because of its form factor and deep context awareness: CLI allows for portability and system integration that plugins or extensions really can't do. And when AI actually understands my codebase, I just get a lot more done.
Not trying to convince you to change your mind, just sharing that I was in your shoes not too long ago!
> CLI allows for portability and system integration that plugins or extensions really can't do
In the past 6 or 7 years I haven't written a single line of code outside of a JetBrains IDE. Same thing for all of my team (whether they use JetBrains IDEs or VS Code), and I imagine for the vast majority of developers.
This is not a convincing argument for the vast majority of people. If anything, the fact that it requires a tool OUTSIDE of where they write code is an inconvenience.
> And when AI actually understands my codebase, I just get a lot more done.
But Amazon Q does this without me needing to type anything to instruct it, or to tell it which files to look at. And, again, without needing to go out of my IDE.
Having to switch to a new tool to write code using AI is a huge deterrent and asking for it is a reckless choice for any company offering those tools. Integrating AI in tools already used to write code is how you win over the market.
I was thinking the same. My (admittedly old-ish) 2070 Super runs at 25-30% just looking at the landing page. Seems a bit crazy for a basic web page. I'm guessing it's the background animation.
We might have a bit of an advantage because we pull more files as context so the edit can be more in the style of your existing code.
One downside to pulling more context is that we burn more tokens. That's partly why we have to charge $99 whereas Cursor is $20 per month.
With Codebuff, you just chat from the terminal. After trying it, I think you might not want to go back to Cursor haha.
If you have multiple repos, you could create a directory that contains them all, and that should work pretty well!
It's cool to have this natively on the remote system though. I think a safer approach would be to compile a small binary locally that is multi-platform, and which has the command plus the capture of output to relay back, and transmit that over ssh for execution (like how MGMT config management compiles golang to a static binary and sends it over to the remote node vs having to have mgmt and all its deps installed on every system it's managing).
Could be low lift vs having a package, all its dependencies, and credentials running on the target system.
It’s a weird catch-22 giving praise like that to LLMs.
If you are, then you might be able to intuit and fill in the gaps left by the LLM and not even know it.
And if you’re not, then how could you judge?
Not really much to do with what you were saying, really, just a thought I had.
> It’s a weird catch-22 giving praise like that to LLMs.
It's a bit asymmetrical though isn't it -- judging quality is in fact much easier than producing it.
> you might be able to intuit and fill in the gaps left by the LLM and not even know it
Just because you are able to fill gaps with it doesn't mean it's not good. With all of these tools you basically have to fill gaps. There are still differences between Cline vs Cursor vs Aider vs Codebuff.
Personally I've found Cline to be the best to date, followed by Cursor.
There’s still a skill floor required to accurately judge something.
A layman can’t accurately judge the work of a surgeon.
> Just because you are able to fill gaps with it doesn't mean it's not good.
If I had to fill in my sysadmin’s knowledge gaps I wouldn’t call them a good sysadmin.
Not saying the tool isn’t useful, mind you, just playing semantics with calling a tool a “good sysadmin” or whatever.
Sure but it's not high at all.
Your typical sysadmin is doing a lot of Googling. If Perplexity can tell you exactly what to do 90% of the time without error, that's a pretty good sysadmin.
Your typical programmer is doing a lot of googling and write-eval loops. If you are doing many flawless write-eval loops with the help of cline, cline is a pretty good programmer.
A lot of things AI is helping with also have good, easy to observe / generate, real-time metrics you can use to judge excellence.
I see Codebuff as a premium version of Cline, assuming that we are in fact more expensive. We do a lot of work to find more relevant files to include in context.
Admittedly the last time I used manicode was a while back but I even preferred Cursor to it, and Cursor hallucinates like a mf'er. What I liked about cursor is that I can just tell composer what files I want it to look at in the UI. But I just use Cline now because I find its performance to be the best.
Other datapoints: backend / ML engineer. Maybe other kinds of engineers have different experiences.
If you're nervous about this, I'd suggest throwing Codebuff in a Docker container or even a separate instance with just your codebase.
Fundamentally, I think codegen is a pretty new space and lots of people are jumping in because they see the promise. Remains to be seen what the consolidation looks like. With the rate of advancement in LLMs and codegen in particular, I wouldn't be surprised to see even more tools than we do now...
It is also a true agent. It can run terminal commands to aid the request. For one request it could: (1) write a unit test, (2) run the test, (3) edit code to fix the error, and (4) run it again and see it pass.
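Roughly, that loop could look like the sketch below. The model call and edit application are passed in as functions since those pieces are Codebuff-internal; only the overall shape of the loop comes from the description above.

    // Sketch of the write/run/fix loop. callModel and applyEdits are stand-ins
    // for Codebuff-internal pieces; the test command is an assumption.
    import { execSync } from "child_process";

    async function testFixLoop(
      request: string,
      callModel: (transcript: string) => Promise<string>,
      applyEdits: (modelOutput: string) => Promise<void>,
      maxIters = 5,
    ): Promise<boolean> {
      let transcript = request;
      for (let i = 0; i < maxIters; i++) {
        const reply = await callModel(transcript); // model writes code and/or a test
        await applyEdits(reply);                   // apply its edit_file blocks
        try {
          execSync("npm test", { stdio: "pipe" }); // run the suite
          return true;                             // tests pass: done
        } catch (err: any) {
          // feed the failure output back for the next attempt
          transcript += `\nTest output:\n${err.stdout?.toString() ?? err.message}`;
        }
      }
      return false;
    }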
If you try out Codebuff, I think you'll see why it's unique!
We are doing some tricks so it should be able to edit the file without rewriting it, but occasionally that fails and we fall back to rewriting it all, which may time out on such a file.
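One common way to do that (not necessarily our exact trick) is a search/replace edit with a full-file rewrite as the fallback:

    // Sketch of a search/replace edit with a full-rewrite fallback. This is a
    // common technique, not necessarily the exact trick Codebuff uses.
    import { readFileSync, writeFileSync } from "fs";

    async function applyEdit(
      path: string,
      search: string,                          // exact snippet the model wants replaced
      replacement: string,
      rewriteWholeFile: () => Promise<string>, // expensive fallback: regenerate the file
    ) {
      const original = readFileSync(path, "utf8");
      if (original.includes(search)) {
        writeFileSync(path, original.replace(search, replacement)); // cheap targeted edit
      } else {
        writeFileSync(path, await rewriteWholeFile()); // snippet didn't match: rewrite it all
      }
    }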
where the problems start: cost of inference vs quality, latency, multimodality (vision + image gen), AI service provider issues (morning hours in US time zones = poor quality results)
the best part is being able to adjust it to my work style
If you want to make a multi-file edit in cursor, you open composer, probably have to click to start a new composer session, type what you want, tell it which files it needs to include, watch it run through the change (seeing only an abbreviated version of the changes it makes), click apply all, then have to go and actually look at the real diff.
With Codebuff, you open codebuff in the terminal and just type what you want, and it will scan the whole directory to figure out which files to include. Then you can see the whole diff. It's way cleaner and faster for making large changes. Because it can run terminal commands, it's also really good at cleaning up after itself, e.g., removing files, renaming files, installing dependencies, etc.
Both tools need work in terms of reliability, but the workflow with Codebuff is 10x better.
Could this tool get a command from the LLM which would result in file-loss? How would you prevent that?
One is that I think it is simpler for the end user to not have to add their own keys. It allows them to start for free and is less friction overall.
Another reason is that it allows us to use whichever models we think are best. Right now we just use Anthropic and OpenAI, but we are in talks with another startup to use their rewriting model. Previously, we have used our own fine-tuned model for one step, and that would be hard to do with just API keys.
The last reason, which might be unpopular, is that keeping it closed source and not letting you bring your own keys means we can charge more money. Charging money for your product is good because then we can invest more energy and effort into making it even better, which ultimately benefits you, the end user. Capitalism works, cheers.
Per your last question, I do advise you use git so that you can always revert to your old file state! Codebuff does have a native "undo" command as well.
I've seen people say "you don't have to add files to Codebuff", but Aider tells me when the LLM has requested to see files. I just have to approve it. If that bothers you, it's open source, so you could probably just add a config to always add files when requested.
Aider can also run commands for you.
What am I missing?
Have you used Aider extensively? How are you finding it for your coding needs vs IDE-based chats?
It actually got the line numbers not too wrong, and so they might have been helpful. (I included the line numbers for the original file in context.)
Ultimately though, this approach was still error prone enough that we recently switched away.
I've been using Zed editor as my primary workhorse, and I can see codebuff as a helper CLI when I need to work. I'm not sure if a CLI-only interface outside my editor is the right UX for me to generate/edit code — but this is perfect for refactors.
Totally understand where you're coming from, I personally use it in a terminal tab (amongst many) in any IDE I'm using. But I've been surprised to see how different many developers' workflows are from one another. Some people use it in a dedicated terminal window, others have a vim-based setup, etc.
> Codebuff has limited free usage, but if you like it you can pay $99/mo to get more credits...
> One user racked up a $500 bill...
Those two statements are kind of confusing together. Past the free tier, what does $99/month get you? It sounds like there's some sort of credit, but that's not discussed at all here. How much did this customer do to get to that kind of bill? I get that they built a Flutter app, but did it take an hour to run up a $500 bill? 6 hours? A whole weekend? Is there a way to set a limit?
The ability to rack up an unreasonable bill by accident, even just conceptually, is a non-starter for many. This is interactive so it's not as bad as accidentally leaving a GPU EC2 instance on overnight, but I'll note that Aider shows per query and session costs.
The user had spent the entire weekend developing the app, and admitted that he would have been more careful to manage his Codebuff usage had it not been for this bug.
We're open to adding hard limits to accounts, so you're never charged beyond the credits you paid for. We just wanted to make sure people could pay a bit more to get to a good stopping point once they passed their limits.
Have you considered a bring your own api key model?
Would however pay for actual software that I can just buy instead of rent to do the task of inline shell assistance, without making network calls behind my back that I'm not in complete perfectionist one hundred point zero zero percent control of.
Sorry, just my opinion in general with these types of products. If you don't have the skills to make a fully self-contained language model type of product or something to do this, then you are not a skilled enough team for me to trust with my work shell.
So do you want to buy tens of thousands of dollars in GPUs or do you want to rent them second-by-second? Most people will choose the latter. I understand you don't trust the infrastructure and that's reasonable. If self-hosting was viable it would be more popular.
It’s a crowded space and I don’t know how it’ll play out, but in a space that hasn’t always brought out the best in the community, this Launch HN is a winner in my book.
I hope it goes great. Congratulations on the launch.
Ultimately, I think a future where the limit to good software is good ideas and agency to realize them, as opposed to engineering black boxes, mucking with mysterious runtime errors, holy wars on coding styles, etc. is where all the builders in this space are striving towards. We just want to see that happen sooner than later!
Wasn't there a recent startup in F24 that stole code from another YC company and fire was quickly put out by everyone?
I think you just need to try it to see the difference. You can feel how much easier it is haha.
We don't store your codebase, and have a similar policy to Cursor, in that our server is mostly a thin wrapper that forwards requests to LLM providers.
The PearAI debacle is another story, but mostly they copied the open source project Continue.dev without giving proper attribution.
I’m curious what exactly people say causes them to make the switch from Cursor to Codebuff? Or do people just use both?