"A discard communicates intent to the compiler and others that read your code: You intended to ignore it.
You indicate that a variable is a discard by assigning it the underscore (_) as its name."
https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...
2020: http://dontcodetired.com/blog/post/Variables-We-Dont-Need-No...
Underscores and dashes are not that different - especially as this is not just pre-computer but pre-typewriter.
And now, underscores are a logical choice when the dash is already in use as a minus sign.
1) https://literature.stackexchange.com/questions/1944/why-are-...
https://forums.welltrainedmind.com/topic/141704-jane-austens...
Replacing hyphen/minus with underscore has been done precisely to remove the ambiguity with the minus operator (in COBOL the ambiguity was less annoying, because the arithmetic operations were usually written with words, e.g. ADD, SUBTRACT, MULTIPLY and so on).
I'll guess it's probably Prolog, but maybe Planner (Prolog's predecessor) had it too.
The 1965 draft had _ https://dl.acm.org/doi/10.1145/363831.363839
The first standard edition with _ was 1968 https://www.rfc-editor.org/info/rfc20
The 1977 version is also available https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub1-2-197...
At this point, it feels like a matter of time before Rust replaces C/C++.
Multiply that code base size by like, 78.3, and you're possibly in the same galaxy as all the C++ codebases out there that will be maintained for the next 50 years.
Rust may eat C++'s lunch moving forward, but the language will never go away.
Just like COBOL! Seriously, _just like COBOL_. The language will fade in importance over time. C++ will be relegated to increasingly narrow niches. It will eventually only be found in "legacy" systems that no one wants to spend to rewrite. No one young will bother to learn C++ at all. Then, in a few decades, they'll be pulling folks like you and me out of retirement to maintain old systems for big bucks.
"In Rust that's just pushing the problem to the borrow checker and codegen which has plenty of memory corruption issues too but just calls it "bugs". In C++ you avoid those issues by reducing dynamic memory allocations to a minimum, and using checked APIs as 'safety boundaries' instead of directly poking around random arrays. Different approach but not any less safe than C++".
Both statements are pretty ridiculous. It's pretty clear that moving up in terms of the safe abstractions that are possible makes your code safer, because the safety is there when you reach for that kind of programming paradigm, and the paradigms you need are typically a property of the problem domain rather than the language (really, it's the intersection of "how is this problem solved" and "what tools does the language give you"). C gives you few tools, so you either twist into a pretzel trying to stick to them, or you write your own tools from scratch every time and make the same-but-different mistakes over and over. No language is perfect, and each will guide you to a different take on the same problem that better suits its paradigm, but there are intractable parts that will force your hand (e.g. heap allocation and more involved ownership semantics are not uncommon). Moreover, C has very limited ability to define API boundaries within your own codebase - the only tool is opaque types with a defined set of functions, declared in two different files.
The fact that it's mostly backwards compatible means you can reproduce almost all issues of C in C++ as well, but the average case fares much better. Real world C++ does not have double-frees, for example. (As real world C++ does not have free() calls).
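A minimal sketch of that style, for what it's worth (not from the original comment): ownership lives in RAII types, so there's simply no free() call to get wrong.

#include <memory>
#include <string>
#include <vector>

struct Widget { std::string name; };

int main() {
    auto w = std::make_unique<Widget>();       // heap object owned by unique_ptr
    std::vector<Widget> pool(16);              // container owns its elements

    std::unique_ptr<Widget> w2 = std::move(w); // ownership moves; w is now null
    // No delete/free anywhere: each owner frees exactly once on scope exit,
    // so a double-free can't be written by accident.
}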
> it's safer and higher productivity than C or assembly
Debatable - in C, and even more so in assembly, the unsafety is at least directly in your face, while in C++ it's cleverly disguised under C++ stdlib interfaces (like dangling std::string_views, all the iterator invalidation scenarios, etc.), meaning that memory corruption bugs may be much harder to investigate in C++ than in C (or even assembly) - which in turn affects productivity (e.g. what's the point in writing higher-level buggy code faster when debugging takes much longer?).
> it feels like a matter of time before Rust replaces C/C++
It may replace C++, but C will most likely outlive at least C++, and I wouldn't be surprised if it outlives Rust too - even if just as C API wrappers allowing code written in different modern languages to talk to each other ;)
I've never seen any users who avoid using Rust's stdlib on principle. The closest thing is the use of the parking_lot crate over the standard Mutex types, but that's getting better over time, not worse, as the stdlib moves closer to parking_lot in both implementation and API. Other than that, there's only been one "wrong" API that needed to be deprecated with a vengeance (mem::uninitialized, replaced by mem::MaybeUninit). Especially compared to C++, the fact that Rust doesn't have a frozen ABI means it's free to improve the stdlib in ways that C++ can only dream of. While I do wish that the Rust stdlib contained some more things (e.g. a standard interface to system entropy, rather than making me pull in the getrandom crate), for what it provides the Rust stdlib is quite good.
Care to provide examples?
I think they generally do a very good job of curating the APIs to feel consistent. And with editions, they can fix almost any mistakes they make, whereas C++ doesn't have that. In fact, the situation in C++ is so bad that they can't fix any number of issues in basic containers, and they refuse to introduce updated replacements because "it's confusing". The things they keep adding to the stdlib aren't even the core useful things that should come out of the box - like missing a networking API in 2025. The reason is that they have to get it perfect, because otherwise they can't fix issues, and they can't get it perfect because networking is a continuously moving target. Indeed, we managed to get a new TCP standard before C++ got networking, even though TCP had even worse ossification issues. Or not caring about the ecosystem enough to define a single dependency management system (because the vendors can't reach agreement), or a usable module system. Or macros still being a hot mess in C++ some 20 years after they were already known to really suck.
Now, it's possible that given enough time Rust will acquire inconsistent warts similar to C++'s, but I just don't think it'll ever be as bad in the standard library, because editions and automated migration tools are easier to apply to Rust. Similarly, I think editions give them the flexibility to address even many language-level issues. Inertia behind established ways of doing things is the harder challenge, and that one is shared between languages (e.g. adoption of Unsend might prevent the simpler Move trait from being explored and migrated to), but C++ is getting buried under its own weight because it's not a maintainable language in terms of the standard: the stewards refuse to realize they don't have the necessary editorial tools and keep not building themselves such tools (even namespace versioning ended up aborted and largely unused).
I expect Rust's successor might have a shot at replacing C++.
By the time C++ was as old as Rust, it had conquered the world. If Rust coulda, it woulda.
As soon as you start writing big(ger) software in Rust, its lacking ergonomics really become apparent.
.as_mut().unwrap().unwrap().as_mut().do_stuff() gets really old after a while.
And I am not convinced that borrow checking is the panacea that it's made out to be. Region-based memory management can accomplish the same safety goals (provided there's a sane concurrency model), without forcing people into manually opting into boxing and/or reference counting on a per-reference basis.
Throw in, on top of that, the pain of manual lifetime management (it's not always elided, and when it needs to change, it's painful), and I honestly believe it's far more reasonable to ask programmers to throw shit into two or three arenas and then go hog-wild with references than it is to expect them to deal with the tediousness of the way Rust does things.
We are just cargo-culters (no pun intended).
...
"This solution is also similar to other languages’ features or conventions"
As far as I know, in Rust you can't use "_" for that, as the value will be dropped right away, so the mutex/resource/etc. won't live for the scope.
https://rust.godbolt.org/z/P1z7YeP4Y
As you see, Rust specifically rejects this code because it's never what you meant; either write explicitly "I want to take and then immediately give away the lock" via drop(LOCK.lock()) (this really might be what you wanted; the Linux kernel does this in a few places), or write an actual named placeholder variable as in my third example function.
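For comparison, C++ has the mirror-image pitfall (a small sketch, not from the linked example): an unnamed guard is a temporary that unlocks at the end of the full expression, not at the end of the scope.

#include <mutex>

std::mutex m;
int shared_counter = 0;

void broken() {
    std::lock_guard<std::mutex>{m}; // unnamed temporary: locks, then unlocks immediately
    ++shared_counter;               // NOT protected by the mutex
}

void fixed() {
    std::lock_guard<std::mutex> guard{m}; // named: held until end of scope
    ++shared_counter;                     // protected
}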
Yeah, it's the same in C#. This is noticeable in that, in the same scope, you can have multiple "_" vars. If these were actual names, there would be a name clash.
One of the uses is to take some parts of a tuple return and ignore the others.
e.g.
var (thingIWant, _, _, _) = FunctionThatReturnsATupleOfFourThings();
There are three "_" in that expression, and they are different vars with different types. They don't really have the same name, they don't have names at all.
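For what it's worth, a rough C++ analogue (illustrative only): std::tie with std::ignore gives the same "several discards, no names, no clash" behavior, and C++26 is slated to add a real _ placeholder with the same rule.

#include <string>
#include <tuple>

std::tuple<int, double, std::string, bool> fourThings() {
    return {1, 2.0, "three", true};
}

int main() {
    int thingIWant = 0;
    // std::ignore is a reusable sink rather than a variable, so it can
    // appear several times in one std::tie without any name clash.
    std::tie(thingIWant, std::ignore, std::ignore, std::ignore) = fourThings();
}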
> rebind a variable in the same scope
But only re-assigning values of the same type. Otherwise:
int x = foo();
int x = bar();
error: redefinition of 'x'
and,
int x = foo();
long x = bar();
error: redefinition of 'x'
What are you talking about? At this point, only LLMs will be able to decipher every intricacy of C++.
As for your particular issue, using an IDE is essential, and the typedef keyword is almost obsolete, so I guess you stumbled upon a strange project. I would be curious to know what it is if it's open-source.
typedef struct ShaderReflection ProgramLayout;
typedef enum SlangReflectionGenericArgType GenericArgType;
https://github.com/shader-slang/slang/blob/master/include/sl...
> years of collaboration between researchers at NVIDIA, Carnegie Mellon University, Stanford, MIT, UCSD and the University of Washington
Now I understand why, it's the kind of project that you can't upgrade easily.
Typedef'ing common types to your own type names is absolutely fine as long as it is unlikely to collide with the typedefs of other libraries in the same project.
I read the parent post as indicating that "this is C++; we spell this as `using Foo = Bar;` now." Type aliases (or namespace aliases, or using-declarations) are not dead, but the typedef keyword in C++ is largely only retained for backwards compatibility.
The core issue here is that type aliases add a layer of indirection. That can be useful if the user shouldn't need to know the implementation details and can interact with said type purely through the API -- do I care if a returned Handle is a pointer, an int, or some weird custom class? Everyone is used to file descriptors being ints, but you aren't going to do math on them.
And there’s absolutely no reason to typedef _standard_ int types anymore. Not in C, and definitely not in C++. That’s just crusty old practices. Maybe if you want to have nice short types like u8, i8, etc, I can understand that. But SlangUint32 is just ugly.
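Still, a small sketch of the legitimate aliasing case from upthread (hypothetical API, names made up): the caller treats Handle as opaque, so the alias can change without breaking anyone.

#include <cstddef>

using Handle = int; // today a file-descriptor-style int; free to change later

// Callers only ever pass Handle back into this API, never peek inside it:
Handle open_resource(const char* name);
void   write_resource(Handle h, const void* data, std::size_t n);
void   close_resource(Handle h);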
And honestly, anyone who relies on the scenarios pragma once fails in (compiling off network shares and symlinks) should really fix those monstrous issues instead. The places pragma once trips up are likely to be issues for detecting modifications in incremental builds anyway.
"Use pragma once where it works in internal code, but never in headers you ship to someone else" is a simple rule that should work well enough.
As I said in my previous comment, those cases are very, very often cases where the compilation model is broken, and it's held together by luck.
> Every attempt to fix those last ones breaks some other rare case
My experience (and I have experience of this) has been that in the cases where pragma once fails, other tools (source control, build tools) cause "random" problems that usually are hand-waved away with "oh, that problem - just run this other makefile target that someone has put together to clear that file and rebuild".
> or profiling shows significant compile time increases.
Again, my experience here has been that the compile time change is usually a result of a broken project model, and that there are other gotchas like "oh you can only include files a.h and c.h but if you include b.h then you'll break the internal build tool". Also, that taking the hit to make your build correct unlocks optimisations that weren't possible before.
Also, the projects that have these kinds of warts are usually allergic to doing anything new whatsoever, making any changes or improvements, upgrading compilers and libraries. I suspect using C++17 is enough to scare them off in many cases.
If it's good enough for Qt or LLVM, it's good enough for me.
I also know why I had to do that awful abuse of what anyone sane would call wholesome. I don't like it, but there are other things going on and I don't want to talk about it anymore.
> the same include file is accessible under different path names (granted, that's a very esoteric scenario)
And most likely a build system problem anyway if, say, different versions of the same library get included in your build.
Editing PNG with a text editor is also much harder than editing PPM. But there is no reason to consider this use case when defining an image format.
This is true for any language, but it's especially true for C++, where most large codebases have tons of invisible code flying around - implicit casts, weird overloads, destructors, all of these possibly virtual calls, possibly over type-erased objects accessed via smart pointers, possibly over many threads. If you want to stand any chance of even beginning to reason about all that, you NEED to see the actual, concrete types of things.
with Rust that ship has sailed
> few implicit casts
Just because it doesn't (often) implicitly convert/pun raw types doesn't mean it has "few implicit casts". Rust has large amounts of implicit conversion behavior (e.g. deref coercion, implicit Into) and semi-implicit behavior (e.g. even a regular explicit ".into()" distances the conversion behavior from the target type in the code). The affordances offered by these features are significant - I like using them in many cases - but it's not exactly turning over a new leaf re: explicitness.
Without good editor support for e.g. figuring out which "into" implementation is being called by a "return x.into()" statement, working in large and unfamiliar Rust codebases can be just as much of a chore as rawdogging C++ in no-plugins vim.
Like so many Rust features, it's not breaking with specific semantics available in prior languages in its niche (C++); rather, it's providing the same or similar semantics in a much more consciously designed and user focused way.
> lifetimes
How do lifetimes help (or interact with) IDE-less coding friendliness? These seem orthogonal to me.
Lastly, I think Rust macros are the best pro-IDE argument here. Compared to C/C++, the lower effort required (and higher quality of tooling available) to quickly expand or parse Rust macros means that IDE support for macro-heavy code tends to be much better, and much better out of the box without editor customization, in Rust. That's not an endorsement of macro-everything-all-the-time, just an observation re: IDE support.
As for how lifetimes help? One of the more annoying parts of coding C is to constantly have to look up who owns a returned pointer. Should it be freed or not?
And I do not find into() to be an issue in practice.
Implicit casts are the only reason for the existence of the object-oriented programming languages, where any object can be implicitly cast to any type from which it inherits, so it can be passed as an argument to any function that expects an argument of that type, including member functions.
The whole purpose of inheritance is to allow the programmer to use implicit casts. Otherwise, one would just declare a structure member of the class from which one would inherit in the OOP style, plus a virtual function table pointer, and one could write a program identical to the OOP program, but in a much more verbose way.
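A minimal sketch of that equivalence (illustrative types): the OOP version gets the implicit derived-to-base cast, while the hand-rolled version spells out the vtable and the dispatch.

#include <cstdio>

// OOP version: the derived-to-base conversion is implicit.
struct Shape  { virtual void draw() const { std::puts("shape"); }
                virtual ~Shape() = default; };
struct Circle : Shape { void draw() const override { std::puts("circle"); } };
void render(const Shape& s) { s.draw(); } // accepts any derived type

// Hand-rolled equivalent: explicit vtable pointer, explicit dispatch.
struct VTable  { void (*draw)(const void* self); };
struct CircleC { const VTable* vt; };
void draw_circle(const void*) { std::puts("circle"); }
const VTable circle_vt{ &draw_circle };
void render_c(const CircleC* c) { c->vt->draw(c); }

int main() {
    Circle c;
    render(c);                 // the implicit upcast does the work
    CircleC cc{ &circle_vt };
    render_c(&cc);             // same behavior, spelled out by hand
}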
(In the C language, not only the implicit mixed signed-unsigned casts are bad, but also any implicit unsigned-unsigned casts are bad, because there are 2 interpretations of "unsigned" frequently used in programs, as either non-negative numbers or as modular numbers, and the direction of the casts that do not lose information is reversed for the 2 interpretations, i.e. for non-negative numbers it is safe to cast only to a wider type, but for modular numbers it is safe to cast only to a narrower type. Moreover, there are also other interpretations of "unsigned", i.e. as binary polynomials or as binary polynomial residues, which cannot be inter-converted with numbers. For all these 4 interpretations, there are distinct machine instructions in the instruction sets of popular CPUs, e.g. in the x86-64 and Aarch64 ISAs, which may be used in C programs through compiler intrinsics. Even worse is that the latest C standards specify that the overflow behavior of "unsigned" is that of modular numbers, while the implicit casts of "unsigned" are those of non-negative numbers. This inconsistency guarantees the existence of perfectly legal C programs, without any undefined behavior, but which nonetheless compute incorrect "unsigned" values, regardless which interpretation was intended for "unsigned".)
No, you don't have to do that. Once you start thinking about memory and manually managing it, you'll figure out there are simpler, better ways to structure your program than having a deep class hierarchy with a gazillion heap-allocated objects, each with a distinct lifetime, all pointing at each other.
Here's a trivial example. Say you're writing a JSON parser - if you approach it with an OOP mindset, you would probably make a JSONValue class, maybe subclass it with JSONNumber/String/Object/Array. You would walk over the input string and heap allocate JSONValues as you go. The problems with this are:
1. Each allocation can be very slow as it can enter the kernel
2. Each allocation is a possible failure point, so the number of failure points scales linearly with input size.
3. When you free the structure, you must walk over the entire tree and free each object one by one.
4. The output of this function is suboptimal, as the memory allocator can return values that are far apart in memory.
There's an alternate approach that solves all these problems. If you're thinking about the lifetimes of your data, you'll notice that this entire data structure is used and discarded at once, so you allocate a single big buffer for all the nodes. You keep a pointer to the head of that buffer, and when you need a new node, you stick it in there and advance the pointer by its size. When you're done, you return the first node, which also happens to be the start of the buffer.

Now you have a single point of failure - the buffer allocation - your program is way faster, you only need to free one thing when you're done, and your values are tightly packed in memory, so whatever consumes the output will be faster as well. You've spent just a little time thinking about memory, and now you have a vastly superior program in every single aspect, and you're happy.
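A minimal sketch of that bump-allocation idea (simplified: fixed capacity, one node type, no growth or error recovery):

#include <cstdlib>
#include <new>

// One big buffer; nodes are carved out by advancing a pointer.
struct Arena {
    char* base; std::size_t used = 0, cap;
    explicit Arena(std::size_t n)
        : base(static_cast<char*>(std::malloc(n))), cap(n) {}
    ~Arena() { std::free(base); } // one free for the whole tree

    void* alloc(std::size_t n) {
        if (base == nullptr || used + n > cap) return nullptr; // the single failure point
        void* p = base + used;
        used += n;
        return p;
    }
};

struct JsonValue { int tag; double num; /* ... */ };

JsonValue* makeNumber(Arena& a, double v) {
    void* p = a.alloc(sizeof(JsonValue));
    return p ? new (p) JsonValue{0, v} : nullptr; // placement-new into the arena
}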
adding values to a dict via add() and removing them via remove() should not expose to the caller if the underlying implementation is an array of hash indexed linked lists or what. the implementation can be changed safely.
inheritance is orthogonal to object orientation. or rather, inheritance requires oop, but oop does not require inheritance.
golang lacks inheritance while remaining oop, for instance, instead using interfaces, which allow any type implicitly implementing the specified interface to be used.
Using different words does not necessarily designate different things. Most things that are promoted at a certain time by fashions, like OOP, abuse terminology by giving new names to old things in the attempt of appearing more revolutionary than they really are.
Most classic works about OOP define OOP by the use of inheritance and of virtual functions a.k.a. dynamic polymorphism. Both features have been introduced by SIMULA 67 and popularized by Smalltalk, the grandparents of all OOP languages.
When these 2 features are removed, what remains from OOP are the so-called abstract data types, like in CLU or Alphard, where you have data types that are defined by the list of functions that can process values of that type, but without inheritance and with only static polymorphism (a.k.a. overloading).
The example given by you for hiding an implementation is not OOP, but it is just the plain use of modules, like in the early versions of Ada, Mesa or Modula, which did not have any OOP features, but they had modules, which can export types or functions whose implementations are hidden.
Because all 3 programming language concepts, modules, abstract data types and OOP have as an important goal preventing the access to implementation details, there is some overlap between them, but they are nonetheless distinct enough so that they should not be confused.
Modules are the most general mechanism for hiding implementation details, so they should have been included in any programming language, but the authors of most OOP languages, especially in the past, have believed that the hiding provided by granting access to private structure a.k.a. class members only to member functions is good enough for this purpose. However this leads sometimes to awkward programs where some classes are defined only for the purpose of hiding things, for which real modules would have been more convenient, so many more recent versions of OOP languages have added modules in some form or another.
Dynamic dispatch can be accomplished in any language with a function type by using a structure full of functions to dispatch the incoming invocations, as Linux does in C to implement its file systems.
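A sketch of that pattern (loosely modeled on the kernel's file_operations idea; these names are made up):

#include <cstdio>

// A "structure full of functions": each filesystem fills in its own table.
struct FileOps {
    int  (*open)(const char* path);
    long (*read)(int fd, void* buf, long n);
};

static int  ext_open(const char* path) { std::printf("ext: open %s\n", path); return 3; }
static long ext_read(int, void*, long n) { return n; }
static const FileOps ext_ops = { ext_open, ext_read };

// Generic code dispatches through the table, never on a concrete type.
int open_file(const FileOps* ops, const char* path) { return ops->open(path); }

int main() { return open_file(&ext_ops, "/tmp/x") == 3 ? 0 : 1; }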
In recent C standards, it has been defined that unsigned numbers behave with respect to the arithmetic operations as modular numbers, which never overflow.
The implicit casts of C unsigned numbers are from narrower to wider types, e.g. from "unsigned short" to "unsigned" or from "unsigned" to "unsigned long".
These implicit casts are correct for non-negative numbers, because all values that can be represented as e.g. "unsigned short" are included among those represented by "unsigned" and they are preserved by the implicit casts.
However, these implicit casts are incorrect for modular numbers, because they attempt to compute the inverse of a non-invertible function.
For instance, if you have an "unsigned char" that is a modular number with the value "3", it is incorrect to convert it to an "unsigned short" modular number with the value "3", because the same "unsigned char" "3" corresponds also to 255 other "unsigned short" values, i.e. to 259, 515, 771, 1027 and so on.
If you have some very weird reason when you want to convert a number modulo 256 to a number modulo 65536 by choosing a certain number among those with the same residue modulo 256, then you must do this explicitly, because it is not an information-preserving conversion.
If on the other hand you interpret a C "unsigned" as a non-negative number, then the implicit casts are OK, but you must add everywhere explicit checks for unsigned overflow around the arithmetic operations, otherwise you will obtain erroneous results.
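A small example to make the two readings concrete (not from the original comment):

#include <cstdio>

int main() {
    unsigned char a = 250, b = 10;
    unsigned char sum = a + b;   // arithmetic wraps mod 256: sum == 4

    // The implicit widening picks 4 out of the whole residue class
    // {4, 260, 516, ...}: right if sum means a small count, arbitrary
    // if sum was "a number modulo 256" all along.
    unsigned short wide = sum;   // wide == 4
    std::printf("%u %u\n", (unsigned)sum, (unsigned)wide);
}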
Mathematically, there is no clearly defined way one would have to map from one residue system in modular arithmetic to the next, so there is no "correct" or "incorrect" way. Mapping to the smallest integer in the equivalence class makes a lot of sense though, as it maps corresponding integers to themselves when going to a larger type, the reverse operation is then the inverse, and this is exactly what C does.
val/var/let/auto declarations destroy the locality of understanding of a variable declaration for a naive code reader without an IDE and its jump-to-definition. A corollary of this problem also exists: if you don't have an explicit type hint in a variable declaration, even readers who are using an IDE have to do TWO jump-to-definition actions to reach the source of the variable's type.
eg.
val foo = generateFoo()
Where generateFoo() has the signature fun generateFoo(): Foo
With the above code one would have to jump to definition on generateFoo, then jump to definition on Foo to understand what Foo is. In a language that requires the explicit type hint at declaration, this is only one step.
There’s a tradeoff here between pleasantries while writing the code vs less immediate local understanding of future readers / maintainers. It really bothers me when a ktlint plugin actually fails a compilation because a code author threw in an “unnecessary” type hint for clarity.
Related (but not directly addressing auto declarations): “Greppability is an underrated code metric”: https://morizbuesing.com/blog/greppability-code-metric/
I want to use a text editor => This is the wrong tool => Yes, but I want to use a text editor.
These people do use the wrong tooling. The only way to cure this grievance is to use proper tooling.
The GitHub web UI has some IDE features, such as symbol search. I don't see any reason not to use a proper IDE. github.dev is a simple click away in the UI. When you use Gerrit, do a local checkout; that's one git command.
If you refuse to use the correct tools for the job, your experience is degraded. I don't see a reason to consider this case when writing code.
These were not some SWE wonderlands either. The code was truly awful at times.
The Joel Test is 25 years old. It's an industry standard. I, and many other people, consider it a minimum requirement for software engineering. If code meets requirement "2. Can you make a build in one step?", it should be IDE-browsable in one step.
If it takes weeks to replicate a setup the whole environment is deeply flawed. The one-step build is the second point on the list because Joel considered it the second most important thing, out of 12.
In those cases, I’m grateful for mildly less concise languages that are more explicit at call and declaration sites.
If you cannot recognize the type of an expression that is assigned to a variable, you do not understand the program you are reading, so you must search its symbols anyway.
Writing the type redundantly when declaring the variable is of no help when you do not know whether the right-hand-side expression has the same type.
When reading any code base with which you are not familiar, you must not use a bad text editor, but either a good text editor designed for programmers or any other tool that allows fast searching for the definitions of any symbols encountered in the source text.
Adding useless redundancy to the source text only bloats it, making reading more difficult, not easier.
I never use an IDE, but I always use good programming language aware text editors.
This isn’t necessarily the case. “Go to Definition” on the `val` goes to the definition of the deduced type in the IDEs and IDE-alikes I’ve ever used.
I'd like auto functions.
for (auto member : set_of_members)
and for other constructs similar in nature, auto is a godsend. Use types when they are needed, and use the tools at your disposal (IDEs, but every text editor has clangd language-server integration nowadays).
> with light documentation which means reading the code,
You have to read the code anyway; documentation is impossible to trust. There isn't one big library for which I didn't have to go read the code at some point. Two weeks ago I had to go read the internals of msvcrt, Microsoft's C runtime, to understand undocumented features of process setup on Windows. I had to go read standard library code thousands of times, and let's not talk about UI libraries.
While I agree that auto is helpful, the amount of time I've had to wait for clangd (or whatever the IDE is using) to parse a .cpp file and deduce the underlying type is frustrating. It happens too often with every IDE (Qt Creator, CLion, Visual Studio, VS Code, etc.) I've tried whenever I'm programming on a non-desktop machine that's not super beefy.
Plus, I often use GitHub to search for code when I'm trying out a new lib, so having the type spelled out is extremely helpful.
You have to weigh up the cost of going through the code and changing all the type declarations:
auto x = foo();
If you change the return type of foo here, you don't have to change 300 call sites. Personally, I'd rather change it in one place than 300.

For that matter, the same goes for anyone reading the code from a web browser, or in any other context.
return foo().bar();
No `auto` and you still don't know the return type of foo.
And knowing the type might not be the only reason you'd want an IDE anyway. What is `foo()` doing? I want to be able to easily jump to the definition of that function to verify that the assumptions made by the calling function are correct.

I like Rust's approach in that it allows a mixture of explicit types and type inference using placeholders.
For example: "let x : Result<Foo<int, _>, _> = make_foo();"
I agree with you! But:
> I can not think of a worse thing to do to your code in terms of readability and future maintainability.
Well, I definitely can. Using macros is one ;)
> I am trying to learn a new library right now with light documentation which means reading the code, and between typedefs, auto
I disagree with you on the typedefs. They're much better than auto. Auto doesn't provide any type checking; it works whatever the type is. Typedefs tell you what the expected type actually is.
However, in hindsight, I think I was being overly conservative, and it worked out well, and adoption didn't cause any obvious problems.
Your concerns about learning a new library are valid, but the problem is the library if it's not clear, or well documented. To lay responsibility for this at the door of auto is a stretch. You can write great and terrible code with any number of language features (dubious use of goto is the classic example), and it sounds like you are tackling a library which could do with some love, either in documentation, or to clarify its conventions.
I use `auto` when the type is obvious or doesn't really matter, and I seldom create aliases for types.
I feel like having verbose type names at function boundaries and using `auto` for dependent types is the sweet spot. I'll often avoid `auto` when referring to a class data member, so the reader doesn't have to refer to the definition.
void foo(const std::multimap<double, Order>& sells) {
for (const auto& [price, order] : sells) {
// ...
}
}
but also
void foo(const OrderBook& book) {
const std::multimap<double, Order>& sells = book.sells;
for (const auto& [price, order] : sells) {
// ...
}
}
`auto` is convenient for iterators. Which of the following is better?
auto iter = sells.begin();
std::multimap<double, Order>::const_iterator iter = sells.begin();
Having said this, I usually only use type inference when the types are obvious from context.
auto x = func(); // no idea about func return type
auto x = new Widget(); // DRY
auto sum (auto a, auto b); // template function without boilerplate
Use the same principle in other contexts.
std::shared_ptr<T> p = std::make_shared<T>();
Then replace T with a very long type. And that's not the most verbose example, just an early one that popped into mind.

Then you have lambdas. Imagine assigning a lambda to a stack variable without auto, also keeping in mind that std::function adds overhead.
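A quick sketch of that lambda point (illustrative): the closure's type is unnameable, so without auto you're pushed into std::function and its type-erasure overhead.

#include <functional>

int main() {
    int base = 10;
    auto add = [base](int x) { return base + x; }; // unnameable closure type

    // Without auto, the general-purpose spelling is std::function, which
    // adds type erasure (and possibly an allocation) on top of the closure.
    std::function<int(int)> add2 = [base](int x) { return base + x; };

    return add(1) + add2(2) - 23; // 11 + 12 - 23 == 0
}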
auto p = std::make_shared<T>();
whereas the following isn't clear and isn't necessarily correct without looking up what the return type of foo() actually is:
auto p = foo();
This could be mitigated with the name of foo() being more descriptive.
If the return type is particularly wordy, auto could still be appropriate.
welcome back Hungarian notation
What does this have to do with hungarian notation?
This works better:
auto uasStudents = getClassList()
In this case, “uas” prefix standing for an unsafe array of strings.
Then say you validate the list
auto sasStudents = validate(uasStudents)
(Now it’s a safe array of students!)
The "Hungarian notation" comment is correct - it's not strictly Hungarian notation, but annotating function (or variable) names when the language has a type system representing this same information is the same idea with the same problems as Hungarian notation.
In seriousness, no, that's not what I'm suggesting, and I find it an unusual thing to read from my comment. I'm saying a descriptive name for foo() can give you a hint about what the type is, even if it doesn't literally and directly tell you what the type is.
Every editor I use has tools that will provide this information in a single keystroke, macro, or click. If you actively choose to avoid using tools to read code, I shouldn't suffer for it.
I have no doubt you read and write code without any tools.
It’s like saying “knives are bad because you can kill someone” vs “knives are good because they can help make food”… nobody thinks of knives as being an exclusively good or exclusively bad thing; we all understand that context is key, and without context it’s meaningless.
Instead I feel it would be a lot more illuminating if the discussion centered around rules of thumb… which contexts auto is good, vs when it’s bad. There’s probably no complete list, but a few heuristics would be great.
My 2¢:
Explicit type declaration is a form of documentation, used to tell the casual reader (ie. Often in a web browser, code review, or someone seeing it copy/pasted as a snippet[0]) the meaning of a piece of code. It’s even better than comments: it’s guaranteed not to be a lie, or the code wouldn’t compile.
I’ve seen this all the time working in Rust, Swift, typescript, etc… sometimes an expression is super complicated, and the type checker can infer the type just fine, and my IDE even shows the type in an inlay… but I still worry that if these weren’t available, the expression would look super confusing to a reader. So I make the type explicit to aid in overall readability.
When to do this or not is a judgement call… different people will come to different conclusions. But it’s like any other form of line-level documentation. Sometimes the code is self explanatory, and sometimes it’s not. But be kind to the casual reader and use explicit types when it’s very non-obvious what the type would be.
[0] ie. Anyone without immediate access to an IDE or something else that would show the type.
Unfortunately, I think either this isn't actually the case for many people, or too often they just never even stop to consider that other perspectives might be possible, better, or even more common than their own.
In chatting with technical people online for the last 30 years, the biggest issue I have always had is their attitude. IRC seems the worst for it but every platform has this problem in my experience.
God complexes visible from space run rampant, people always speaking in absolutes and seeing things as black and white, complete lack of empathy and humility etc.
I think most arguments in the world, and even other things like crime, might actually just stem from people's inability to communicate with each other effectively.
int wtf = omgtype(); // and read the compiler error
Auto in a function signature is syntactic sugar for introducing a template parameter. It needs to be monomorphized at some point to generate code.
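Roughly, the desugaring looks like this (sketch, C++20):

// Abbreviated function template (C++20):
auto sum(auto a, auto b) { return a + b; }

// ...is roughly shorthand for:
template <typename A, typename B>
auto sum_desugared(A a, B b) { return a + b; }

int main() {
    // Each distinct argument-type combination monomorphizes its own instantiation:
    int    i = sum(1, 2);      // sum<int, int>
    double d = sum(1.0, 2.5f); // sum<double, float>
    return (i == 3 && d == 3.5) ? 0 : 1;
}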
https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...
auto it = data.begin();
Is a lot more readable than
std::vector<std::pair<std::vector<foo>, int>>::iterator it = ....
Well I blame C++ for calling it "auto" in the first place. Fortunately this is easily fixed:
#define let auto
#define var auto
;-)

To cite your previous sentence: why don't you use your IDE? Or is this a magnetized-needle sort of situation?
JetBrains can annotate your source with what the actual type is.
And auto can help future maintainability if you need to change concrete types that have the same API surface.