1. If a breakpoint debugger exists for the stack, it should still be convenient and configured, and the programmer should have some experience using it. It's a skill/capability that needs to be in reserve.
2. The project has automatic protections against leftover statements being inadvertently merged into a major branch.
3. The dev environment allows loading in new code modules without restarting the whole application. Without that, someone can easily get stuck in rather long test iterations, especially if #1 is not satisfied and "it's too much work" to use another approach.
Then one evening you discover they've been staying late manually reformatting and reindenting all of their code using notepad before each commit. They explain this is because it's what they know will work reliably, and those other tools gave odd errors on their computer or had too many confusing options or needed some kind of bridge or dependency.
I might be impressed with their work ethic, but I can't just not-care about the problem that has risen into view. (Unless I'm literally counting the days until I move elsewhere.)
But formatting is trivial whereas debugging is not.
Best practices and tricks for debugging a given project will impact things like:
* The overall productivity of the team.
* Writing guides and documentation.
* CI/CD scripts, such as preventing accidental leakage of print statements.
* How development environments are set up and configured, especially if there are containers or port forwarding or remote services involved.
* How team members collaborate when there's more than one person trying to diagnose a problem.
* Generally how you teach new juniors or experienced developers arriving with a learning curve.
I pretty much only use print debugging. I know how to use a real debugger but adding print/console.log etc. keeps me from breaking context and flow.
It's an absolutely damning indictment of the developer experience for the web that this is the case. Why aren't our IDEs and browsers beautifully integrated like every other development environment I use integrates the runtime and the IDE?
Why hasn't some startup, somewhere, fixed this and the rest of the web dev hellscape? I don't know.
0: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
JS: debugger;
C#: System.Diagnostics.Debugger.Break();
Rust: std::intrinsics::breakpoint();
Go: runtime.Breakpoint();
Zig: @breakpoint();
Spend a few days debugging in PyCharm and you'll scream when you open developer tools.
Some IDEs do have integrated JS runtimes, so you can use a debugger in the IDE. However since JS runs on browsers and devices out of your control that only works up to a point.
To see the nature of the race condition, just put some print statements in some strategic locations and then see the interleaving, out of order, duplicate invocations etc that are causing the trouble. It's hard to see this type of stuff with a debugger.
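As a toy Python sketch of that idea (the shared counter and thread names are made up for illustration), the strategic prints expose the interleaving that single-stepping tends to hide:

import threading

counter = 0

def worker(name):
    global counter
    for _ in range(3):
        old = counter
        print(f"{name}: read {old}")        # strategic print #1
        counter = old + 1
        print(f"{name}: wrote {counter}")   # strategic print #2

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final:", counter)   # may be less than 6; the interleaved read/wrote lines show which updates were lost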
Still agree that print debugging is more useful in such situations (and I prefer it in general).
I've had a bug like that and the intuitive way to handle it turned out to be entirely sufficient.
The bug (deep in networking stack, linux kernel on embedded device) was timing sensitive enough that printk() introduced unsuitable shifts. Instead I appended single-character traces into pre-allocated ring buffer memory. The overhead was down to one memory read and two memory writes, plus associated TLB misses if any; not even a function call. Very little infra was needed, and the naive, intuitive implementation sufficed.
An unrelated process would read the ring buffer (exposed as a /proc/ file) at an opportune time and hand it over to the developer.
tl;dr: know which steps introduce significant processing, timing delays, or synchronization events, and push them out of the critical path.
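The same pattern carries over to higher-level settings: append cheap markers to a pre-allocated in-memory buffer on the hot path and dump it out-of-band. A rough Python sketch of the idea (obviously not the kernel implementation described above, and Python adds overhead the original carefully avoided):

TRACE_SIZE = 4096
trace = bytearray(TRACE_SIZE)      # pre-allocated buffer: no allocation, no I/O on the hot path
trace_pos = 0

def trace_event(marker):
    # Hot path: one index bump and one byte store.
    global trace_pos
    trace[trace_pos % TRACE_SIZE] = ord(marker)
    trace_pos += 1

def dump_trace():
    # Read out-of-band at an opportune time (the kernel version used a /proc file);
    # wrap-around ordering is ignored here for brevity.
    return trace[:min(trace_pos, TRACE_SIZE)].decode("ascii", errors="replace")

trace_event("R")   # e.g. packet received
trace_event("Q")   # e.g. queued for transmit
print(dump_trace())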
I did something similar to debug concurrent processing in Java: it accumulates log statements in thread-local or instance-local collections and then publishes them, possibly with just a lazySet():
https://github.com/jeffhain/jolikit/blob/master/src/main/jav...
Stepping through with a debugger will take you at least a minute per cycle, won't turn up the concurrency issue, and will spend a great deal of your daily concentration budget.
Best of both worlds.
I personally think if you can’t use a debugger in a multithreaded codebase, the architecture is bad or one doesn’t understand the code. So yeah, full circle, if print debugging helps one learn the code better, that is only a positive.
I’m so amused about how debuggers have become a debate around here. “Printf vs debugger” is like “emacs vs vi” right now, and it really shouldn’t be. Sometimes I put a breakpoint AT my printf statement.
Just like printing, a debugger would be a suboptimal tool to use for that usecase.
If you have a time travel debugger then you can record concurrency issues without pausing the program then debug the whole history offline, so you get a similar benefit without having to choose what to log up front.
E.g. use Microsoft's WinDbg time travel integration: https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
Or on Linux use rr (https://rr-project.org/) or Undo (https://undo.io - disclaimer: I work on this).
These have the advantage that you only need to repro the bug once (just record it in a loop until the bug happens) then debug at your leisure. So even rare bugs are susceptible.
rr and Undo also both have modes for provoking concurrency bugs (Chaos Mode from rr - https://robert.ocallahan.org/2016/02/introducing-rr-chaos-mo..., Thread Fuzzing from Undo - https://undo.io/resources/thread-fuzzing-wild/)
I've never satisfied myself that you can't just make a legacy codebase work that way, given enough effort, but I am not fully convinced it's always a good idea.
Which in some cases I see as related to a sort of macho attitude in programming where people are oddly proud of forgoing using good tooling (or anything from the 21st century really).
However, if I know I'm going to be working on a project for a long time, I usually try to pay the upfront cost of setting up a debugger for common scenarios (ideally I try to make it as easy as hitting a button). When I run into debugging scenarios later, the cost/benefit analysis looks a lot better - set a breakpoint, hit the "debug" button, and boom, I can see all values in scope and step through code.
This isn't just an assumption I'm making: years of being in developer leadership roles, and then watching a couple of my own sons learning the practice, have shown me in hundreds of cases that when print-type debugging is spotted, a session demonstrating how to use the debugger to its fullest is a very rewarding effort. Even experienced developers from great CS programs are sometimes shocked to see what a debugger can do.
Walk the call stack! See the parameters and values, add watches, set conditional breakpoints to catch that infrequent situation? What! It remains eye opening again and again for people.
Not far behind is finding a peer trying to eyeball complexity to optimize, to show them the magic of profilers...
Is it real mind share? Is it bullshit?
Print debugging is the literal pocket knife of debugging.
And to be clear, print debugging and pervasive, configurable logging are very different things, and the latter is hugely encouraged (even with logging levels), while the former is almost always suboptimal. Being able to have your client turn on "DEBUG" logging and send you the logs after some abnormal behaviour is supremely useful. Doing printf("Here!") in one's project is not, or at least not remotely as useful as better approaches.
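A minimal Python sketch of that distinction (the logger name and the LOG_LEVEL knob are just examples): the debug line stays in the code permanently but is only emitted when the client turns the level up.

import logging
import os

# LOG_LEVEL is an illustrative knob; any config mechanism works.
logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
log = logging.getLogger("payments")

log.info("Service startup")
log.debug("token=%s expires=%s", "abc123", "2024-01-01")   # only visible with LOG_LEVEL=DEBUG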
When debugging parsers for my toy programming languages print debugging is less helpful and I make heavy use of all the debug tools you mention. The same goes for most types of business logic—writing a test and stepping through it in the debugger is usually the way to go.
But when troubleshooting odd behavior in a complex web app, the inverse is true—there are usually many possible points where the failure could occur, and many layers of function calls and API calls to check, which means that sticking a debug statement prematurely slows down your troubleshooting a lot. It's better to sprinkle logs everywhere, trigger the unexpected behavior, and then skim the logs to see where things stop making sense.
In general I think there are two conditions that make the difference between the debugger or print being better:
* Do you already know which unit is failing?
* Is there concurrency involved?
If you don't yet know the failing unit and/or the failing part of the code is concurrent, the debugger will not help you as much as logs will. You could use logs to narrow down the surface area until you know where the failure is and you've eliminated concurrency, but you shouldn't jump straight to the debugger.
Last time I tried it you were able to add logging statements "after the fact" (i.e. after reproducing the bug) and see what they would have printed. I believe they also have the ability to act like a conventional debugger.
I think they're changing some aspects of their business model but the core record / replay tech is really cool.
1 - If your parsers are not pure, you either have a very weird application or should change that.
e.g. LOG(INFO_LEVEL, "Service startup") and printf("Here11") are completely different situations.
Indeed, the very submission is arguing for printf style debugging instead of logging. Like it uses it as the alternative.
Real-world projects should have logging, with configurable logging levels, such that a failing deployment in the wild can be switched to a higher logging level and you can gather up the myriad logs from a massive, heterogeneous, cross-runtime, cross-platform project and trace through to figure out where things went awry. But that isn't print debugging or the subject of this discussion.
Yeah, this is a bogus distinction they're drawing. Logging and printf style debugging are the same thing at different phases of the software lifecycle, which means they can't be alternatives to each other because they can't exist in the same space at the same time.
As soon as your printfs are deployed to prod, they become (bad) logs, and conversely your "printf debugging" may very well actually use your log library, not printf itself.
The scenario I gave is when there are performance problems with a developed project (you know -- where a profiler is actually usable, after they already decided on an approach and implemented code) and the developer is effectively guessing at the issues, doing iterative optimize-this-part then run and see if it's fixed pattern. This is folly 100% of the time. Yet it's a common pattern.
I like the solid approach described in Understanding Software Dynamics by Richard L. Sites
Also suspending a thread to peek around is more likely to hide timing bugs than the extra time spent doing IO to print.
Also, what's "modern" about "Walk the call stack! See the parameters and values, add watches, set conditional breakpoints"? Those are all things we had many decades ago (for some languages, at least). If anything, many modern debugging environments are fat and clunky, compared with some of the ones from way back when. What has greatly improved though are time travellers, because we didn't use to have the necessary amounts of memory lots of the time.
So please refrain from calling people with different preferences uneducated. [Ed. I retract this bit, though I think it is not unreasonable to associate lack of knowledge with lack of education (not necessarily formal education!) I don't want to quibble over semantics.]
OP said:
> … people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools.
I am educated. I have a metric fuckton of incomplete knowledge in all areas of life.
You’re poking at something that wasn’t said.
(Well, other than my own conditional breakpoint features built into the code, doing things like programmatically trigger a breakpoint whenever an object with a specific address (that being settable in the debugger interactively) passes certain points in the garbage collector.)
We have successfully sorted out how to manage both loops, and set effective breakpoints to debug efficiently. We also log extensively.
I know it’s possible because I do it every day.
"Those are all things we had many decades ago"
I didn't claim this is some new invention, though. But as someone who has been a heavy user of debuggers for DECADES, I can say they have dramatically improved in usability and in the scenarios where they are useful.
"So please refrain from calling people with different preferences uneducated."
But...I didn't. In fact I specifically noted that graduates of excellent CS programs often haven't experienced how great the debuggers in the platforms they target are.
We all have incomplete knowledge about a lot of things.
It minimises the mental effort to get to the next potential clue. And programmers are naturally drawn to that because:
1. True focus is a limited resource, so it's usually a good strategy to do the mentally laziest thing at each stage if you're facing a hard problem.
2. It always feels like the next time might be it - the final clue.
But these can lead to a trap when you don't quickly converge on an answer and end up in a cycle of waiting for compilation repeatedly whilst not making progress.
While perhaps this is true of some sort of junior developer, I have both written my own debuggers and still lean heaviest on print debugging. It's trivially reproducible, incurs basically zero mental overhead, and can be reviewed by another person. Any day I break out a debugger is a bleak day indeed.
Profilers are much easier to argue for as it is very difficult for one to produce equivalent results without also producing something that looks an awful lot like a profiler. But in most cases the mechanisms you mention are just straight unnecessary and are mostly a distraction from a successful debugging session.
Edit: in addition to agreeing with a sibling comment that suggests different problems naturally lend themselves more to debugging (eg when writing low-level code a debugger is difficult to replace), I'd also like to suggest a third option languages can take: excellent runtime debugging ala lisp conditions. If you don't have to unwind a stack to catch an exception, if in fact you can modify the runtime context at runtime and resume execution, you quickly realize the best of both worlds without having to maintain an often extremely complex tool that replicates an astonishing amount of the language itself, often imperfectly.
I still print stuff plenty, but when the source of an issue is not immediately obvious I’m reaching for the debugger asap.
This does sound painful, but this is not what most people who advocate for print debugging are advocating for.
If I'm only going to add one print statement, that's obviously a place where a breakpoint would serve. When I do print debugging, it's precisely because I haven't narrowed down the problem that far yet—I may have ten theories, not one, so I need ten log statements to test all ten theories at the same time.
Print debugging is most useful when the incorrect behavior could be in one of many pieces of a complex system, and you can use it to rapidly narrow down which of those pieces is actually the culprit.
However, at least personally, I've also felt that there was a lot of truth to that Ken Thompson quote. Something along the lines of: "when your program has a bug, the first thing you should do is turn off the computer and think deeply."
Basically, a bug is where your mental model of the code has diverged from what you've actually written. I think about the symptoms I'm observing, and I try to reason about where in the code it could happen and what it could be.
The suggestion in the parent comment that I'm just too stupid to look into or learn about debuggers is so condescending and just plain wrong. I've looked into them, I know how to use them, I can use them when I want to. I simply tend not to, because they don't solve any problem that I have.
Also, the implication that I don't use completely unrelated tools like profilers is equally asinine. Debuggers and profilers are two completely different tools that solve completely different problems. I use profilers almost every day of my career because it solves an actual problem that I have.
If your insecurity leads you to misrepresent what someone actually said so disgustingly, maybe Hacker News isn't for you.
Something is broken in prod, you cannot reproduce it in your test environment because you think it may be due to a config (some signing keys maybe) you can't check. And it looks like someone forgot to put logs around whatever is the problem.
You can either spend multiple hours trying to reproduce and maybe find the cause, or take 5 minutes, bash into one of your nodes, add some logging live, and have a result right now: either you have your culprit or your hunch is false.
Remote debugging is a thing that exists.
It's particularly annoying on projects that are set up without considering proper debuggers, because often it's impossible or difficult to use them, e.g. if your program is started via a complicated bash script or makefile rather than directly.
All the processes end up being debugged simultaneously in the same instance of the debugger, which I've found makes light work of certain types of annoying bug. You might need a mode where the timeouts are disabled, though!
Not always; sometimes print debugging is much more time-efficient due to the very slow runtime required to run compute-intensive programs in debug mode. I'll sometimes forego the debugger capabilities in order to get a quick answer.
That said, on one project I did have a semi-decent experience with a debugger for PHP (couple of decades back) and when it worked - it was great. PHP didn't have much of a REPL then, though.
I use PyCharm for my Python projects, for instance, and it has absolutely fantastic debugging facilities. I wouldn't want to use an IDE that lacked this ability, and my time and the projects are too valuable to go without. Similar debugging facilities are there for Lua, PHP, TypeScript/JavaScript, and on and on. Debuggers can cross processes and even machines. Debuggers can walk through your stored procedures or queries executing on massive database systems.
Several times in this thread, and in the submission, people have referenced Brian Kernighan's preference for print versus debugging. He said it in 1979 (when there was basically an absence of automated debugging facilities), and he repeated it in an interview in 1999. This is used as an appeal to authority and I think it's just massively obsolete.
As someone who fought with debuggers in the year 2000, they were absolute dogshit. Resource limitations meant that using a debugger meant absolutely glacial runtimes and a high probability that everything would just crash into a heap dump. They were only usable for the tiniest toy projects and the simplest scenarios. As things got bigger it was back to printf("Here1111!").
That isn't the case anymore. My IDEs are awesomely comprehensive and capable. My machine has seemingly infinite processor headroom where even a 1000x slowdown in the runtime of something is entirely workable. And it has enough memory to effortlessly trace everything with ease. It's a new world, baby.
Of course, I pair this with a modern emulator for the target platform where I can at least see my disassembly, set breakpoints, watch memory values. But when I'm working on some issue that I can only reproduce on hardware, we get to bust out all the fun manual toys, because I just don't have anything else available. On the very worst days, it's "run this routine to crash on purpose and paint the screen pink. Okay, how far into the code do we get before that stops happening? Move the crash handler forward and search. (Each time we do this we are flashing an eeprom and socketing that into the board again.)"
If you add a bunch of print statements every few lines, it's easier to run the code and see that you got to the checkpoints at lines 1580 and 1587 but not 1601, than to have to manually click through a dozen breakpoints and note the last one you passed before the problem occurs.
If you have a "hard crash" where you can give a stack trace, that's less of a need, but often it's something like "this value was 125 when it entered module Foo and came back as 267". Monitoring the expression can sometimes help, but it might also be a red herring (we trap that it got set at the end of function Bar, but then we have to dig into function Bar to find the trace). Printfs can include whatever combination of values are worth reporting at any time.
Yes, any debugger can do all of that, but when trying to spin it up ad-hoc, printfs can be less hassle than trying to pull up the debugging tools and wire it up inside the IDE.
That said, I use print debugging all of the time. It is simply more practical in many cases.
Linters make this a non-issue, and most modern development environments support them.
https://eslint.org/docs/latest/rules/no-console
https://docs.astral.sh/ruff/rules/print/
In situations where for whatever reason it's infeasible to use a linter, a basic grep can surface prints in a codebase reliably enough.
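For instance, ruff's print rules come from flake8-print; a pyproject.toml snippet along these lines should surface stray prints in CI (rule codes as documented at the ruff link above; recent ruff versions nest these keys under [tool.ruff.lint], older ones put them directly under [tool.ruff]):

# pyproject.toml
[tool.ruff.lint]
extend-select = ["T201", "T203"]   # T201 flags print(), T203 flags pprint()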
https://news.ycombinator.com/item?id=42146864
from "Seer: A GUI front end to GDB for Linux" (15.11.2024)
I remember the good old days when I was first learning programming with Applesoft BASIC where print debugging was all there was, and then again in my early days of 8051 programming when I didn't yet have the sophisticated 8051 ICE equipment to do more in depth debugging. Now with the ARM Cortex chips I most often program and their nice SWD interface, print debugging isn't usually necessary. But I still use it occasionally over a serial line because it is simple and why not?
It's often faster and easier to set things up to run test cases fast and drop some prints around. Then if there's too much unimportant stuff or something else you want to check on, just switch around the prints and run it again.
I wish there were something like rerun but for code: you record the whole program running, capture all the snapshots, then stop it. Now you can analyze the entire execution and all the variables offline, run arbitrary queries over the data, modify prints without re-executing, and even feed it to AI as extra context. I guess RAM would be the big obstacle to making it work, since you'd either have to capture a snapshot at every state modification, or capture fewer snapshots plus diffs of what changed.
More like regular debuggers, it seems to me like it's something to set up only when print debugging just isn't getting the job done and you think you need something extra to help solve the problem.
How can you do this using print debugging? For every print statement I add, I can add a breakpoint. Even more importantly, I can see the stack frame and know which functions led to the current one. I can inspect any and all variables in scope, and even change their values if I want to pretend that the code before was fine and proceed further.
Granted, there's nothing really stopping you from using an interactive debugger with frequent short executions, but using print debugging seems to encourage it and interactive debuggers kind of discourage it.
For example, I rarely used a debugger in my career as an Android driver developer (mostly C), for several reasons.
1. My first step when debugging is looking at the code to build working hypotheses of what sort of issues could be causing the incorrect behavior that is observed.
2. I find assertions to be a great debugging tool. Simply add extra assertions in various places to have my expectations checked automatically by the computer. They can typically unwind the stack to see the whole call trace, which is very useful.
3. Often, the only choice was command-line GDB, which I found much slower than GUI debuggers.
4. Print statements can be placed inside if statements, so that you only print out data when particular conditions occur. Debuggers didn't have as much fine control.
5. Debugging multithreaded code: prints were somewhat less likely to interfere with race conditions. I sometimes embedded sleep() calls to trigger different orderings.
It makes command history not work by default but IIRC "focus cmd" fixes that.
Don't care about the tool; care about the performance.
Anecdotally, debuggers are faster than print statements in most cases for me. I've been able to find bugs significantly faster using a debugger than with print statements. I still do use print statements on occasion when I'm developing something where a debugger is very complicated to set up, or in cases where I'm dealing with things happening in parallel/async, where a debugger is less suited. I'm not going to shame you for using print statements, but I do hope that you've tried both and are familiar/comfortable with both approaches and can recognise their strengths/weaknesses -- something I'm not convinced of by this author, who only outlines the strengths of one approach.
Also not a fan of the manufactured outrage of saying people are being "shamed" for using print statements. Coupled with listing a bunch of hyperbolic articles -- many of which don't even seem to be about debugging but about logging libraries.
(Also as a side note: don't forget if you are using print statements for debugging to check if your language buffers the print output!! You'll likely want to have it be unbuffered if you're using print for debugging)
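In Python, for example, that means running with python -u (or PYTHONUNBUFFERED=1), or flushing explicitly:

print("reached checkpoint A", flush=True)   # push the clue out now, not when the buffer fills (or never, if it crashes first)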
Person was a wizard at finding bugs. And they didn’t write many. My goal in code reviews was to find them from this person. I found 2 in 5 years.
I've seen dozens.
Particularly in Rust, the type system is very complex and debuggers often fail to show enough information and dbg! becomes superior. I mostly use debuggers when I try to understand someone else’s code.
Delete your debug cruft!
Adding a `..` to the end of a variable triggers a macro that changes `val` into something like `print("val:", val) // FIXME: REMOVE`. Then a pre-commit hook makes sure I am unable to commit lines matching this pattern.
Not all projects use git, but the ones that do:
It is possible to set up a global pre-commit hook in git, but it requires some manual configuration because git does not natively support global hooks out of the box. By default, hooks are local to each repository and reside in the .git/hooks directory.
You could do:
git config --global core.hooksPath ~/.git-hooks
But... setting `core.hooksPath` overrides the use of local hooks in .git/hooks. At least you can combine global and local hooks by modifying your global hook scripts to manually invoke the local hooks.
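A rough sketch of what that global hook could look like (a hypothetical Python script saved as ~/.git-hooks/pre-commit and made executable; the "FIXME: REMOVE" marker matches the pattern mentioned above):

#!/usr/bin/env python3
# Hypothetical global pre-commit hook: block commits whose staged changes still
# contain the debug-print marker, then chain to the repo's local hook if any.
import os
import subprocess
import sys

diff = subprocess.run(
    ["git", "diff", "--cached", "-U0"],
    capture_output=True, text=True, check=True,
).stdout

if any(line.startswith("+") and "FIXME: REMOVE" in line for line in diff.splitlines()):
    print("pre-commit: leftover debug prints detected; commit aborted.")
    sys.exit(1)

# core.hooksPath overrides .git/hooks, so invoke the local hook manually.
local_hook = os.path.join(".git", "hooks", "pre-commit")
if os.access(local_hook, os.X_OK):
    sys.exit(subprocess.call([local_hook] + sys.argv[1:]))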
I do think that it’s worth learning your debugger well for programming environments that you use frequently.
In particular, I think that the debugger is exceptionally important vs print debugging for C++. Part of this is the kinds of C++ programs that exist (large, legacy programs). Part of this is that it is annoying to e.g. print a std::vector, but the debugger will pretty-print it for you.
I wrote up a list of tips on how to use gdb effectively on C++ projects awhile back, that got some discussion here: https://news.ycombinator.com/item?id=41074703
It is tricky. I understand why people have a bad experience with gdb. But there are ways to make it better.
What print debugging and debuggers have in common, in contrast to other tools, is that they can extract data specific to your program (e.g values of variables and data structures) that your program was not instrumented to export. It's really a shame that we generally don't have this capability for production software running at scale.
That's why I'm working on Side-Eye [1], a debugger that does work in production. With Side-Eye, you can do something analogous to print debugging, but without changing code or restarting anything. It uses a combination of debug information and dynamic instrumentation.
And then there are more technically superficial, but crucial, aspects related to specific programming language support. Side-Eye understands Go maps and such, and the Go runtime. It can do stuff like enumerate all the goroutines and give you a snapshot of all their stacks. We're also working on integrating with the Go execution traces collected by the Go scheduler, etc.
Print debugging is how you make software talk back to you. Having software do that is an obvious asset when trying to understand what it does.
There are many debugging tools (debuggers, sanitisers, prints to name a few), all of them have their place and could be the most efficient route to fixing any particular bug.
Complicated setup, slow startup, separate custom UI for adding watches and breakpoints.
Make a debugger integrated with the language and people will use it.
You can then pile subsequent useful features on top of it, but you have to get the basic UI right first. Because half of programmers now are willing to give up stepping, tree inspection, even breakpoints, just to avoid dealing with the crappy UI of debuggers.
It's always so weird to switch to another language which DOES have a debugger...
A debugger gives you insight into the context of a particular code entity - expression, function, whatever.
Seems silly to be dogmatic about this. Both techniques are useful!
There's a section of an interview with John Carmack (https://youtu.be/tzr7hRXcwkw) where he laments the same thing. It's what the Windows/game development corner of the programming world actually got right, people generally use effective tools for software development.
It also ties into the importance of logging. If you know how to do print debugging well you'll know how to do logging well. And while a crash dump is very useful and allows you to inspect the crash with a debugger, only a good log can give you the necessary context to determine what led up to the crash.
Then, when I code myself, I use print debugging like 99.9% of the time :D I have the feeling that, for me, the debugger tends to be not worth the effort. If the bug is very simple, print debugging will do the job fast so the debugger would make me waste time. If the bug is very complex, it can be difficult to know where to set the breakpoints, etc. in the debugger (let alone if there's concurrency involved). There is a middle ground where it can be worth it but for me, it's infrequent enough that it doesn't seem worth the effort to spend time making the decision on whether to use the debugger or not. So I just don't use it except once in a blue moon.
I'm aware this can be very personal, though, hence my tries to have my students get some practice with the debugger.
Often I have to debug bugs I can't reproduce. If method 1 - staring at the code - doesn't work, then it's add print/log statements and send it to the user to test. Repeat until you can reproduce the bug yourself or you fixed it.
Create a debugger that has easily accessible history of execution and we can talk.
The data streams can of course be simulated, and then "true" debugging with breakpoints and watches becomes practical, but the simulation is never 100%, and getting it close to 100% is sometimes harder than debugging the app with print debugging. So with most of the code, I only use the debugger to analyse crash dumps.
(defmethod move ((a-ship object) (a-place port))
; do something
)
(defmethod move :around (what where)
(print `(start moving object ,what to ,where))
(call-next-method))
Above prints the list to the REPL. The REPL prints the list as data, with the objects WHAT and WHERE included. It remembers that a specific printed output is caused by some object. Later these objects can be inspected or one can call functions on them... This combines print debug statements with introspection in a read-eval-print-loop (REPL).
Writing the output as :before/:around/:after methods or as advise statements, makes it later easier to remove all print output code, without changing the rest of the code. -> methods and advises can be removed from the code at runtime.
* Not all languages have good debuggers.
* It's not always possible to connect a debugger in the environment where the code runs.
* Builds don't always include debug symbols, and this can be very high-friction to change.
* Compilers sometimes optimize out the variable I'm interested in, making it impossible to see in a debugger. (Haskell is particularly bad about this)
* As another commenter mentioned, the delay introduced by a debugger can change the behavior in a way that prevents the bug. (E.g. a connection times out)
* In interpreted languages, debuggers can make the code painfully slow to run (think multiple minutes before the first breakpoint is hit).
One technique that is easier to do in printf debugging is comparing two implementations. If you have (or create) one known-good implementation and have a buggy implementation, you can change the code to run both implementations and print when there's a difference in the result (possibly with some logic to determine if results are equivalent, e.g. if the resulting lists are the same up to ordering).
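A minimal Python sketch of that pattern (the function names and the planted bug are invented for illustration):

import random

def sort_reference(xs):
    # Known-good implementation.
    return sorted(xs)

def sort_fast(xs):
    # Suspect implementation; the dropped duplicates stand in for the real bug.
    return sorted(set(xs))

for _ in range(10_000):
    data = [random.randint(0, 9) for _ in range(random.randint(0, 8))]
    good, fast = sort_reference(data), sort_fast(data)
    if good != fast:   # add an equivalence check here if, say, ordering doesn't matter
        print(f"mismatch on {data!r}: expected {good!r}, got {fast!r}")
        break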
> import IPython; IPython.embed()
That'll drop you into an interactive shell in whatever context you place the line (e.g. a nested loop inside a `with` inside a class inside a function etc).
You can print the value, change it, run whatever functions are visible there... And once you're done, the code will keep running with your changes (unless you `sys.exit()` manually)
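A small sketch of how that looks in practice (assumes IPython is installed; the loop and condition are made up):

import IPython

def process(items):
    for i, item in enumerate(items):
        result = item * 2
        if result > 4:          # whatever condition you want to poke at
            IPython.embed()     # opens a shell here with i, item, result in scope
    return "done"

process([1, 2, 3])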
Print debugging is my main go-to. Plus I have no shame :)
I have this in buffer 'd' on vim and Emacs ready for use:
fprintf(stderr, "DEBUG %s %d -- \n", __FILE__, __LINE__); fflush(stderr);
I do print debugging most of the times, together with reasoning and some understanding what the code does (!), and I'm usually successful and quick enough with it.
The point here is: today's Internet, with all the social media stuff, is an attention economy. And some software developers try to get their piece of the cake with extreme statements. They then exaggerate and maximally praise or demonize something because it generates better numbers on Twitter. It's as simple as that. You shouldn't take everything too seriously. It's people crying for more attention.
This is similar to the "debug f-strings" introduced in python 3.8: print(f"{foo=}"). But it's much easier to type dump(foo) and you get prettier output for complex types.
x = 3
foo = dict(bar=1, baz=dict(hello="world"))
dump(x)
dump(foo)
# prints...
x: 3
foo:
{
    "bar": 1,
    "baz": {
        "hello": "world"
    }
}
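Roughly, such a helper can recover the label from the caller's source line; a Python sketch of the idea (this is not the aider implementation linked below, just an illustration):

import inspect
import json

def dump(value):
    # Grab the caller's source line so the output is labelled with the argument expression.
    frame = inspect.currentframe().f_back
    context = inspect.getframeinfo(frame).code_context or ["value"]
    label = context[0].strip()
    if label.startswith("dump(") and label.endswith(")"):
        label = label[5:-1]
    try:
        rendered = json.dumps(value, indent=4, default=repr)
    except (TypeError, ValueError):
        rendered = repr(value)
    print(f"{label}: {rendered}" if "\n" not in rendered else f"{label}:\n{rendered}")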
https://github.com/Aider-AI/aider/blob/main/aider/dump.py

I mostly write Zig these days (love it) and the main thing I'm working on is an interactive program. So the natural way to test features and debug problems is to spin the demo program up and provide it with input, and see what it's doing.
The key is that Zig has a lazy compilation model, which is completely pervasive. If a branch is comptime-known to be false, it gets dropped very early, it has to parse but that's almost it. You don't need dead-code elimination if there's no dead code going in to that phase of compilation.
So I can be very generous in setting up logging, since if the debug level isn't active, that logic is just gone with no trace. When a module starts getting noisy in the logs, I add a flag at the top `const extra = false;`, and drop `if (extra)` in front of log statements which I don't need to have printing. That way I can easily flip the switch to get more detail on any module I'm investigating. And again, since that's a comptime-known dead branch, it barely impacts compiling, and doesn't impact runtime at all.
I do delete log statements where the information is trivial outside of the context of a specific thing I'm debugging, but the gist of what I'm saying is that logging and print debugging blend together in a very nice way here. This approach is a natural fit for this kind of program, I have some stubs for replacing live interaction with reading and writing to different handles, but I haven't gotten around to setting it up, or, as a consequence, firing up lldb at any point.
With the custom debug printers found in the Zig repo, 'proper' debugging is a fairly nice experience for Zig code as well; I use it heavily on other projects. But sometimes trace debugging / print debugging is the natural fit for the program, and I like that the language makes it basically free to use. Horses for courses.
The closest between the two is a logging breakpoint, but the UI for them is generally worse than the UI of the main editor and the logging breakpoint has the same weakness as regular print calls, i.e. you've turned the data into a string and can therefore no longer inspect the objects in the trace.
What I would expect from a debugger in IntelliJ is that when you set a logging breakpoint, then the editor inserts the breakpoint logic source code directly inline with the code itself, so that you can pretend that you are writing a print call with all the IDE features, but the compiler never gets to see that line of code.
Print debugging was pretty useless back then because compilation took minutes (a full compile took over an hour) rather than milliseconds. If your strategy was "try something, add a print, compile, try something else, add a print, compile" then you were going to have a very bad time.
People working on modern, fast-dev-cycle, interpreted languages today have it easy. You don't know the terror of looking at your code, making sure you have thought of "everything that you're going to need to debug that problem" and hitting compile, knowing that you'll know after lunch whether you have enough debugging information included. I'm sure it was even worse in the punch card era!
Looking at the comments here, I'm going to have to try to figure out how to use a debugger in pycharm on Monday!
Any tips or good videos on this?
There is nothing bad about print debugging, there is no reason to avoid it if that's what works with your workflow and tools. The real question is why you are using print and not something else. In particular, what print does better than your purpose-built debugger? If the debugger doesn't get used, maybe one should look down on that particular tool and think of ways of addressing the problem.
I see many comments against print debugging that go around the lines of "if you learn to use a proper debugger, that's so much better". But in many modern languages that's actually the problem, you have to invest a lot of time and effort on something that should be intuitive. I remember when I started learning programming, with QBasic, Turbo Pascal, etc... using the debugger was the default, and so intuitive I used a debugger before even knowing what a debugger was! And it was 90s tech, now we have time travel debugging, hot reloading, and way more capable UIs, but for some reason, things got worse, not better. Though I don't know much about it, it seems the only ones who get it right are in the video game industry. The rest tend to be stuck with primitive print debugging.
And I say "primitive" not because print debugging is bad in general, but because if print debugging was really to be embraced, it could be made better. For example by having dedicated debug print functions, an easy way to access and print the stack trace, generic object print, pretty printers, overrides for accessing internal data, etc... Some languages already have some of that, but often stopping short of making print debugging first class. Also, it requires fast compilation times.
Also on the FE it's often much easier to just console.log than to set breakpoints in the sources in your browser.
An in-circuit emulator was unavailable, so stepping through with a debugger was also not an option.
I ended up figuring out a way to be able to poke values into a few unused registers in an ancillary board within the system, where I could then read the values via the debug port on that board.
So I would figure out what parts of the serial comms code I wanted to test and insert calls that would increment register addresses on the ancillary board. I would compile the code onto a pair of floppy disks, load up the main CPU boards and spend between five and ninety minutes triggering redundancy changeovers until one of the serial ports shat itself.
After which I would probe the registers of the corresponding ancillary board to see which register locations were still incrementing and which were not, telling me which parts of the code were still being passed through. Study the code, make theories, add potential fixes, remove register increments and put in new ones, rinse and repeat for two weeks.
We are working on a system that could have nearly total visibility, down to showing us a simulation of individual electrons moving through wires, yet we're programming basically blind. The default is that I write code and run it, without any visual/intuitive feedback about what it's doing besides the result. So much of my visual system goes completely unused. Also, debuggers can be a pain to set up, way more reading and typing than "print()".