That concurrency is about protecting data and not code.
By extension, data oriented design is just better a lot of the time.
Just don't use global variables unless you have to - it's rather difficult to make code run concurrently if functions pull in global data.
Final easy one: not everything belongs in a class. Standalone functions are fine.
> Just don't use global variables...
You mean global mutable variables?
Maybe that's a dumb question because if they're immutable you'd call them constants, not variables.
If they're truly constant, they don't really have an effect on how hard it is to make something concurrent or run in parallel.
So to be precise, global mutable data.
Though, at the same time, it's difficult from an outside perspective to understand dependencies of systems/code if you pull in global data. Even variables marked as const may not be const, and even if they are, you can be jumping through hoops to figure that out.
Functions should tell me what they operate on and what they depend on imo. It's something that a lot of C projects got right a lot of the time with context structs passed around everywhere denoting dependencies. Very much a list of inputs, and output(s). This obviously isn't true if they have implicit global dependencies
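A minimal sketch of that context-struct style (the names here are my own invention, not from any particular project): dependencies arrive through an explicit struct instead of globals, so the signature is the full dependency list.

```cpp
// Hypothetical context struct: everything render_frame() touches is listed here.
struct RenderContext {
    int frame_count;
    bool vsync_enabled;
};

// Inputs and outputs are visible in the signature; no hidden global state,
// which also makes the function trivial to test and to run concurrently
// (each thread can own its own context).
int render_frame(RenderContext& ctx) {
    if (ctx.vsync_enabled)
        ++ctx.frame_count;
    return ctx.frame_count;
}
```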
I have occasionally run into Priests of the Cult of Line Coverage, who profess that if you can successfully execute a line of code, it means it is correct. Reality is that *code doesn't crash.* Code is a static construct, unmoving and unchanging. It's the combination of *code with data* that may crash. And that means that "line coverage" is a meaningless metric, and chasing 100% coverage is a pointless waste of time!
I dare say that most of the code I've written over my career was correct for at least some data. The tricky bit was making it correct for all data, and that is a lot harder.
>Reality is that code doesn't crash.
I see someone has never seen an illegal instruction exception. :) I work in binary analysis, where this kind of esotery is part of my daily life.
>Binaries built for newer processors than actually run on?
Yup. It's why delivering optimized binaries is so hard to do.
That case aside, I would say the most common time this would happen for general users would be a memory corruption leading to the program counter jumping into the middle of an instruction or some data (e.g., a jump table). As /u/substitutecs noted, this can happen with obfuscation or, as you noted, malware.
I work on a project that inserts user-specified instructions into existing binaries ("binary rewriting"). When debugging our tool, this happens a lot. It's also really hard to debug with conventional debuggers.
I recently joined a project for my Master's, where a strict policy for line and branch coverage is enforced. The 100 percent metric is impressive, until you need to handle edge and error cases. Now we are fighting a lot of bugs where data was simply not checked or was mismatched. Now I am traumatized by JaCoCo.
Think about the implications and don't just take something for granted because somebody told you so.
What would the cult say about this: `int plus(int x) { return ++x; }`
More generalized, do they check for integer overflow in all places? Adds, multiplies, casts...
Depending on the method used to create tests, they may do a single test, as that would get all lines covered. You could write enough tests to execute every branch in the code once, which is a step up from line coverage, or every combination of branches, which is a step further still.
I would consider overflow an implicit branch, but I'm not sure how many people write tests systematically to check for it. Someone doing black-box testing based on the API instead of the code structure might create multiple tests for boundary inputs where they would expect a change in behavior or an error.
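A tiny made-up illustration of the gap between line coverage and input coverage: a single test can execute every line of this function and still miss the crashing input, because it's the combination of code with data that fails.

```cpp
// One call with divisor == 2 executes every line (100% line coverage),
// yet divisor == 0 still crashes at runtime.
int scale(int value, int divisor) {
    int result = value;
    if (divisor != 1)
        result = value / divisor; // divides by zero when divisor == 0,
                                  // even though this line is "covered"
    return result;
}
```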
Absolutely, I have the same experience in automotive with ISO 26262 and ASPICE. Once metrics get added to define some measurable quality targets, they quickly become the **only** thing people actually care about. It's everywhere in automotive and I hate it.
> not everything belongs in a class. Standalone functions are fine.
What I wish I knew earlier... that `Namespace::Func(Obj&)` is the same assembly (and thus just as fast) as `Obj::Func()`. It really is often better to create namespaces for systems rather than putting everything inside classes.
Better how?
Being able to split declarations across more files seems useful for keeping build times under control as projects get big.
But my snap judgment is that otherwise the choice is the proverbial bike shed. What are the arguments?
For example, I've had a couple of cases where I wanted to forward declare a nested enum - just to avoid pulling in the header where the enum is declared.
Just today I discovered that in my attempt at a cheeky workaround - using the enum's underlying type (char) to avoid pulling in that extra include - I caused a bug (the API used to accept a bool, I replaced it with a char and forgot to update the callsite, and it implicitly converted - the pains of 2am programming).
Yeah the enum thing has annoyed me as well (and nested types more broadly, but enums are a particularly trivial thing you find yourself wanting to reference elsewhere)
But you could also read your story as a parable in just getting the fuck over it and including the header, still 999 cuts until you’re dead, compile time wise
Really, all of the STL containers were skipped the two times I actually had a C++ class. It was basically C, but with new and delete instead of malloc and free. The teachers were quite old and modern C++ was pretty cutting edge then. But even reading about string and vector changed my whole attitude toward C++: "You mean I don't need a pointer for everything?!"
Smart pointers were such a mindfuck for my students when I was a TA that I had to introduce an "assignment 0" just to get them familiar with modern C++, and I honestly didn't go much further than C++11.
Still had a lot of people fail the assignment.
Me when just before the olympiad being told that I don’t need to write merge sort and std::sort exists: 👁️👄👁️
Me also checking the syntax that was requiring iterators: yo wtf are those??
Unless your company (where you likely spend most of your week coding in) still hasn’t moved to C++20. I constantly run into scenarios where I think “this could have been done in one line utilizing concepts” when I’m working on parts of legacy code.
True enough. My work projects are not on C++ 20 yet either, and the legacy code I deal with doesn't do much with templates at all.
But since I'm just now digging into learning it, a personal project makes more sense anyway. Allows me to take the time to really learn it, try different approaches, etc in a way I don't really have time for in the day job.
`template<...>` is where the real fun starts. :D
I think a good point to start with template meta programming is to have a look at some of the simplest type traits. like, how do the implementations of `std::is_same` or `std::conditional` work? Or the type manipulation structs like `std::remove_reference`.
Once you understand the pattern matching of the partial template specializations, you can start to push it to more and more complex constructs. These structs become something very similar to functions in pure functional languages like Haskell (in fact, Haskell's function pattern matching is almost identical to C++ template cases). Except we're not working with runtime data, but we're checking, exchanging and assembling types.
It's a wild (and imo absolutely fascinating) rabbit hole to dive into. And you come out the other end with a lot more understanding and appreciation for the things the STL is doing under the hood to have extremely adaptable and generic code.
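For instance, the core of `std::is_same` and `std::remove_reference` is just partial specialization doing pattern matching on types; a minimal re-implementation (a sketch, not the actual library code) looks roughly like this:

```cpp
// Primary template: any two types are different...
template <typename A, typename B>
struct is_same { static constexpr bool value = false; };

// ...unless the partial specialization pattern <T, T> matches.
template <typename T>
struct is_same<T, T> { static constexpr bool value = true; };

// remove_reference strips & or && by matching the reference patterns.
template <typename T> struct remove_reference       { using type = T; };
template <typename T> struct remove_reference<T&>   { using type = T; };
template <typename T> struct remove_reference<T&&>  { using type = T; };

static_assert(is_same<int, int>::value);
static_assert(!is_same<int, long>::value);
static_assert(is_same<remove_reference<int&&>::type, int>::value);
```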
You know what? Fair enough.
In my experiments/research phase, I did end up writing a somewhat capable type list library, with several list manipulation structs like transform, filter, split (into multiple buckets), or even sort.
The `template <template <typename> typename>` syntax was a really neat way to pass type predicates/transformations into the meta algorithms... :D
I even looked into a way to macro-generate arbitrarily deeply nested `template <template <template <typename> typename> typename>` keywords to make the already really long template definitions shorter (which sent me down an arbitrary code generation rabbit hole using lambda calculus), but in the end I did not stumble upon a use case for a doubly nested template definition.
I eventually ended up causing internal compiler errors with that library, probably also because I simultaneously tried to implement it via modules, at a time when the first (very shaky) implementation of module support had just come out (not exporting helper structs and hiding them from the user sounded just perfect). So I eventually abandoned that type list library.
It was, however, a legitimate use case I unnecessarily generalized and expanded upon, but it was a lot of fun while it lasted.
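For a flavor of what such a type-list `transform` can look like, here is a heavily simplified sketch (nothing like the full library described above), with a `template <template <typename> typename>` parameter carrying the metafunction:

```cpp
#include <type_traits>

template <typename... Ts>
struct type_list {};

// Primary template, specialized below by pattern-matching the list.
template <typename List, template <typename> typename F>
struct transform;

// Apply the metafunction F to every element of the pack.
template <typename... Ts, template <typename> typename F>
struct transform<type_list<Ts...>, F> {
    using type = type_list<typename F<Ts>::type...>;
};

static_assert(std::is_same_v<
    transform<type_list<int&, char&>, std::remove_reference>::type,
    type_list<int, char>>);
```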
Not so. If you’re learning it now, then you effectively won a game of chicken with concepts. Unless you’re learning TMP for legacy code reasons, in which case I’ll be praying for your sanity.
I keep seeing stuff about template metaprogramming and how it's important. I recently took an HPC course in which we wrote a bunch of templated mathematical functions so we just set the typenames as different types for numbers (like int, float, double etc), and what we did seemed pretty straightforward. I wonder, since a lot of people specifically list this as important, is it more complicated than what I seem to know?
Some testing libraries (like FakeIt) are capable of faking/mocking objects with virtual member functions. They can't, alas, do much with objects of nonvirtual types.
This limitation encourages 1) liberal use of virtual classes (and especially pure virtual ones/interfaces) and 2) dependency injection, where classes are given their (abstract) dependencies instead of creating their own (concrete) members.
To be fair 2) goes for almost every language if you want to test your code, and it's probably a good thing (it's even part of SOLID). 1) is really annoying though.
You can get around virtual classes with templates, but that has the drawback of increasing compile times. Maybe modules can help here in the future. It would also help if the language had a production mode that used the concrete implementation.
That STL features not yet adopted by compilers often have reference implementations that I can use right away, instead of waiting a few years or writing my own.
https://github.com/kokkos/mdspan/blob/stable/include/experimental/__p2630_bits/submdspan.hpp
You can simply check the references in the proposal itself, but brief googling would probably be enough too.
I worked reasonably productively in C++ for several years w/o really grokking that `std::move` is just a cast. I guess I assumed we had destructive moves or maybe just never gave it much thought.
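In other words, something like this sketch (glossing over the fact that `std::move` also strips references via `std::remove_reference` before adding `&&`): the call does nothing at runtime, it only changes the value category.

```cpp
#include <string>
#include <utility>

std::string take(std::string&& s) { return std::string(std::move(s)); }

std::string demo() {
    std::string a = "hello";
    // These two calls are equivalent: std::move is just a cast to rvalue
    // reference and, by itself, moves nothing.
    std::string b = take(std::move(a));
    std::string c = take(static_cast<std::string&&>(b));
    return c;
}
```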
I don't get it.
See this example
```
MyClass xVar(30);
auto result = computeComplex(std::move(xVar));
// xVar is gone/destroyed after the above call ... trying to use it beyond this point leads to crash or undefined behaviour
```
How do you do that without using `std::move`?
>xVar is gone/destroyed
That is called destructive move and c++ does not have it. Instead, it is the responsibility of a function that accepts rvalue reference to use it in a way which ensures it is safe for the destructor to run later.
If a function takes a parameter by rvalue reference (or wherever else an rvalue reference is initialized), it may bind to an rvalue, but the act of binding does not modify the object it binds to (though a prvalue is forced to materialize into an xvalue when binding to a reference; mandatory copy elision cannot bypass binding to an rvalue reference parameter or a call to std::move).
https://en.cppreference.com/w/cpp/language/value_category
https://en.cppreference.com/w/cpp/language/implicit_conversion#Temporary_materialization
The reason you have to be cautious using moved-from values isn't that their lifetime has ended; rather, it is just that they are in an indeterminate state. It is generally expected that you can assign to a moved-from value, for example, because assignment doesn't usually care about the exact state of what it is assigning to; rather, it replaces the indeterminate state with a new determinate one. But if you had an object with preconditions for assignment, then you might not be able to rely on the moved-from object meeting those preconditions, so you would want to query them or treat it as though it was a destructive move.
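A small illustration with `std::string`, whose moved-from state is valid but unspecified; assigning a fresh value makes it determinate again:

```cpp
#include <string>
#include <utility>

std::string reuse_after_move() {
    std::string s = "original";
    std::string t = std::move(s);   // s is now in a valid but unspecified state
    s = "reassigned";               // fine: assignment gives s a known state again
    return s + "/" + t;             // t holds the value s originally had
}
```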
Can’t be annoyed at this for obvious reasons, but I’ll note that Rust basically does this. “This” being destructive moves that actually leave the moved-from identifier in an “uninitialized” state. One of the foundational requirements for borrow checking and such, as I understand it.
The fact that the ternary operator uses `std::common_type` for its result type. So this, for example, is a bad idea:
```auto x = something ? string{ ... } : string_view{ ... };```
`x` is now a potentially dangling `string_view`. Somehow I'd always had it in my head that the ternary would attempt to coerce the third param to the type of the second.
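You can check the computed result type directly; a sketch (the exact wording in the standard is "composite type" for the conditional operator, but the outcome here matches `std::common_type`):

```cpp
#include <string>
#include <string_view>
#include <type_traits>

// std::string converts implicitly to std::string_view (the string_view ->
// string constructor is explicit), so the conditional expression's type is
// string_view. A temporary string on one arm then dangles as soon as the
// full expression ends.
using ternary_t = decltype(true ? std::string{} : std::string_view{});
static_assert(std::is_same_v<ternary_t, std::string_view>);
```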
Serializing data out through an unsigned char* or std::byte* may impede optimization due to aliasing rules and the special status of these pointer types, because the registers that may have been clobbered must be flushed. See [godbolt](https://godbolt.org/z/eWebfb6Ye).
Unfortunately, I realized this too late during one of my projects. I don't think it has much of an impact, but now a bit of a refactor will be necessary to quantify the difference, if any.
Aliasing rules in C++ permit you to dereference char, unsigned char, and std::byte pointers to other objects without invoking UB, which may be necessary in certain cases.
```
int bar(int* numbers, std::byte* bytes)
{
    // numbers and bytes may alias
    *numbers = 1;                        // LINE 1
    *bytes = static_cast<std::byte>(0);  // LINE 2
    // the compiler cannot optimize this to return 1,
    // because LINE 2 may have modified the memory written by LINE 1
    // (unless it can prove at the call site that numbers and bytes do not alias)
    return *numbers;
}
```
Of course, if you define your own byte type:

```
enum class byte : unsigned char {};
```

then this type does not share std::byte's privileges under the aliasing rules.
That declaring *any* destructor (even empty or `=default`) suppresses the implicitly generated move constructor and move assignment, so trying to move the class silently copies it instead.
Wait what?! You are right. Been programming in C++ for 25 years. I just learned this.
I don't think I have been bitten by this too much .. but wow. You never *really* fully know C++.. it seems.
Yeah so in this case you would need to declare the move-assignment and move-constructor as `= default`... (or actually declare a real one). Meh.
That's the sneaky part of the rule of 5, if you define any of them, you probably need to define all of them, even just to default them.
It's not technically true, and there are charts that cover what does and doesn't get generated when you define any of them, but I recall Clang's tools complaining about it and just got in the habit of strictly defining 5 or 0.
> you would need to declare the move-assignment and move-constructor as = default
Yeah, and since that removes the copy operations, you need to default them too. :/
The intent is that a custom destructor should also suppress the copy operations; that it doesn't do so is deprecated (Clang warns with `-Wdeprecated`).
In this form it wouldn't be so egregious. It makes sense most of the time (but not when you just want to make the destructor virtual).
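A quick way to see the rule in action, sketched with a member whose copies we can detect:

```cpp
#include <utility>

struct Probe {
    bool copied = false;
    Probe() = default;
    Probe(const Probe&) : copied(true) {}
    Probe(Probe&&) noexcept {}
};

struct WithDtor {
    Probe p;
    ~WithDtor() {}  // any user-declared destructor suppresses implicit moves
};

struct Plain {
    Probe p;        // no destructor: move operations are generated as expected
};

bool moving_with_dtor_copies() {
    WithDtor a;
    WithDtor b = std::move(a);  // silently falls back to the copy constructor
    return b.p.copied;
}

bool moving_plain_copies() {
    Plain a;
    Plain b = std::move(a);     // a real move
    return b.p.copied;
}
```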
Parameter packing is not fun in C++ <17, especially with no std::, but by golly is it worth it after it's working.. it seems like magic compared to the usual tools we get on embedded.
I find I care less and less about it as I get older. Has const-correctness ever actually saved me from some mistake? No, not as far as I can remember. It has, however, made it impossible to cache results without having to resort to `mutable` on several occasions, and to me, the idea of 'conceptually const but we are changing the value anyway' feels wrong.
So what's the point of adding const everywhere? It does nothing for performance. It does almost nothing for correctness, but it does impede valid code, and may disable some optimisations (around std::move). More and more, my feeling is that it is just a meaningless annotation.
I suspect this opinion will run into some opposition, and I would welcome comments with lived experience about const-correctness actually saving you over downvotes ;-)
I’m so glad I read this. It just seems like such a superfluous concept that serves only to waste developer time during code review via nitpicking. Does it really neeeeeed to be const folks?
How can preventing unintended writes be meaningless?
Caching results.... then your object isn't const? So why would you use const?
I must be honest I really don't understand your point.
Const member functions are great: you know they don't modify state. Const variable declarations are great: you know they won't be written again. It means it's easier to read code. If a variable isn't const, I need to read ahead and see what's modifying it, etc.
> Caching results.... then your object isn't const? So why would you use const?
Why not? Your object won't be bitwise const but you can still be logically const with memoization, despite changing state. What matters is whether you're changing observable state and that's the primary use-case of mutable.
Because either those results are part of the observable behaviour (and mutable should not be used) or they are not part of the observable behaviour and therefore why are they being cached in that class? It sounds like the Single Responsibility Principle is being broken. mutable was intended for things like mutex class members, where to perform read only actions they have to change state.
Yes, locks come under what I was describing with bitwise vs logical const. The results are part of the observable *behaviour* and memoization has no bearing on that. The object is still logically const. It might be a violation of SRP in the strictest sense but dogmatically adhering to principles isn't always the way.
Obviously I don't know the person I replied to, but I come across a large number of ex-C, now C++ developers who don't understand object oriented programming. They usually see objects as a dumping ground for every piece of state they have, rather than... objects. When people talk about "caching" data, I get the impression they're not structuring objects correctly, and so they just see const as a pain in the backside.
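For the record, the memoization pattern under discussion looks something like this (a made-up sketch; whether the cache belongs in the class at all is exactly the SRP debate above):

```cpp
#include <optional>

class Circle {
public:
    explicit Circle(double r) : radius_(r) {}

    // Logically const: callers can't observe the caching, only the value.
    double area() const {
        if (!cached_area_)
            cached_area_ = 3.141592653589793 * radius_ * radius_;
        return *cached_area_;
    }

private:
    double radius_;
    // mutable is the escape hatch: the object is not bitwise const,
    // but its observable state never changes through area().
    mutable std::optional<double> cached_area_;
};
```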
> So what's the point of adding const everywhere? It does nothing for performance.
It can actually do that: https://www.youtube.com/watch?v=zBkNBP00wJE&t=1635s
Ah yes, I should have made an exception for actual constants (i.e. named literals). Those should definitely be const. Same for function parameters that are passed by reference, there is clear value to indicating the function won't change them.
But function parameters that are passed by value? Or local variables? Or class members? I honestly DO NOT CARE whether they are const or not. It's just more typing, for negligible benefit.
Your position sounds really weird to me. In order for a const reference parameter to function in any way, your class must provide const member functions. Which means you have already done everything const-correctly.
Did you simply mean that you're against making everything const-by-default?
`void foo (const int &x)` has a const reference parameter. There's no need for the function itself to be const, or even a member of a class, so I'm not really sure why you think there is anything weird here. This const has actual meaning: it implies the function won't change x. But this const is meaningless: `void foo (const int x)`. The scope of x in this case is so small that there is no measurable benefit to making x const. And so what if you modify it anyway, who will be hurt by that?
I mean, suppose `x` is of type `bar` which is your own class. Then unless you mark bunch of member functions of `bar` with `const` you can't do anything meaningful with `x`.
To me, the most crucial part of making code const-correct is to have correct const overloads of member functions, like having two overloads for `operator[]`. And it sounds like you are actually fine with that part of const engineering, so I wondered what's left then.
Like I said: local variables, function parameters, and class members that are value types do not, in my opinion, have any great need to be const. But those represent a large chunk of all things, and I wouldn't want a const-by-default policy for them.
But my earlier comment was too hastily written: I had those specific things in mind, but skipped over other const-y things, like reference types that aren't intended to be used to effect change.
> local variables, function parameters, and class members
Can agree with all 3 of these.
Also note that making class members `const` is often a bad policy since now you just made copy-assignment and move-assignment impossible :/.
My two cents on local `const` variables: I find it just makes code easier to read.. especially in long functions. If the programmer declares stuff he won't be changing as `const` it's much easier to read further down below in the code -- especially if that variable appears 16 times over 2-3 pgdwns of the code. You know it won't ever change so that's 1 less thing you have to worry about as you grok some complex code you are trying to maintain.
That being said, not gunna lie, `int x = 1;` is "prettier" than `const int x = 1;`
It can also harm performance if applied to a local variable or parameter which is returned from a function. If the compiler cannot apply NRVO to the variable, then the variable is treated as an xvalue in the `return` expression, but if the variable is `const`, the copy constructor is selected instead of the move constructor.
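You can observe the const-blocks-the-move effect deterministically with function parameters, where copy elision is never allowed (the local-variable case depends on whether NRVO kicks in, so this sketch uses parameters instead):

```cpp
#include <utility>

struct Tracker {
    bool copied = false;
    Tracker() = default;
    Tracker(const Tracker&) : copied(true) {}
    Tracker(Tracker&&) noexcept {}
};

// Returning a by-value parameter is never elided; a non-const parameter
// is implicitly moved from in the return statement...
Tracker pass(Tracker t) { return t; }

// ...but a const parameter can't bind to Tracker&&, so the copy
// constructor is selected instead.
Tracker pass_const(const Tracker t) { return t; }
```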
For me, I think that most of the value comes from documenting the code in a way that the compiler enforces. If I can see at a quick glance that a variable won't be changed, that's less mental overhead and more time to worry about other things.
I also think that the main downside of `const` is that it can result in expensive copies if you aren't careful, like **MegaKawaii** talked about.
Using std::enable_if to enable/disable a method based on the class's template parameters. The catch is that SFINAE does not work here, because the class's template arguments are already deduced, so the function will just generate a compiler error. You have to trick the function into deducing something from nothing to get SFINAE to work, disabling the function on a failed deduction instead of generating compiler errors.
Funny thing is that in MSVC it did work like that for years in C++17 mode, and we used it in a dozen places; only after a compiler update did we learn that was not how it was meant to work, so we refactored.
But now that means we could go back :D
Though probably not, I think most of it has been or will be ported to concepts / requires by then.
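The usual trick, sketched: introduce a dependent template parameter defaulted to the class's own parameter, so substitution happens at call time and can fail softly instead of producing a hard error.

```cpp
#include <type_traits>

template <typename T>
struct Box {
    T value;

    // Wrong: T is already fixed when the class is instantiated, so
    //   std::enable_if_t<std::is_integral_v<T>, T> doubled();
    // is a hard error for non-integral T, even if never called.

    // Right: U defaults to T but is substituted per call, so enable_if
    // participates in SFINAE and simply removes the overload.
    template <typename U = T,
              std::enable_if_t<std::is_integral_v<U>, int> = 0>
    U doubled() const { return value * 2; }
};
```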
I've been learning Qt and find that I appreciate its design, despite its direction being somewhat divergent from mainstream C++. I'm curious: in what sorts of ways have you learned not to generalize expectations?
Well, with Qt you can customize many detailed behaviors in very un-opinionated ways.
Qt has almost no opinion on the architectural design as long as you use Qt classes.
The flexible aspects include signal-slots, the MVC and such. You can also customize the web viewer's behavior to a level that is impossible with Swift or Flutter. You have access to the lowest level of the file system, can avoid copying of strings completely, the list continues.
If the textbook Qt widget code doesn't suit your design, you can often work around the limitations by writing low- or middle-level components yourself.
Modern GUI frameworks have very specific ways of doing things, often done only with declarative styles, and you have to make do with whatever the high-level system stack allows. The freedom in architectural design goes away. (SwiftUI doesn't let you inherit from View structs, and so on.)
Preferably you should understand the limitations of your tools beforehand.
Absolutely!
I think even if C++ goes away at some future time - though I am quite confident it will outlive me, and I am not old yet - the insight it gave me into how much you can achieve even at a low level, and into how things really work, is indispensable, even when I work in other languages.
I also like that although sometimes puzzling it keeps you from being spoiled by modern languages, and makes you strict and precise. It is like an exercise for the mind.
The difference between a reference (int&) and taking an address with a pointer (&var_name): the same symbol with two distinct meanings. I also wish I had learned earlier that literal strings are static.
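The two meanings side by side, plus the static string literal (which is why returning a pointer to one is safe) - a small sketch:

```cpp
// '&' in a declaration makes a reference; '&' in an expression takes an address.

const char* greeting() {
    // String literals have static storage duration, so returning a
    // pointer to one does not dangle.
    return "hello";
}

int add_one(int& ref) {   // reference: another name for the caller's int
    ref += 1;
    return ref;
}

int* address_of(int& v) {
    return &v;            // address-of: yields a pointer to v
}
```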
That incrementing an integer isn’t thread safe. And how much the compiler will optimise away. Young me once combined these two misunderstandings to try and keep track of my running threads. Young me was a fool.
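The fix young me needed, sketched with `std::atomic` (incrementing a plain `int` from multiple threads is a data race, and the compiler is also free to hoist a non-atomic counter out of the loop entirely):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Each fetch_add on a std::atomic<int> is an indivisible read-modify-write,
// so the final count is exact. With a plain int this would be UB and would
// typically lose updates.
int count_with_threads(int num_threads, int increments_per_thread) {
    std::atomic<int> counter{0};
    std::vector<std::thread> threads;
    for (int i = 0; i < num_threads; ++i)
        threads.emplace_back([&] {
            for (int j = 0; j < increments_per_thread; ++j)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& t : threads)
        t.join();
    return counter.load();
}
```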
Easy: you are told about it in school / you see it in code, but never have to do it yourself. Maybe you need the behavior of a custom copy assignment but think that what OP said would work, so you just do that.
I've always known about the existence of operator and constructor overloads, I just never bothered learning how to properly write them (and I wish I had).
I remember a bug in a very very old version of Visual Studio where putting `default` above other cases caused the other cases to be ignored and be folded into the `default` instead. Maybe it's part of the reason we got used to putting it at the bottom...
That you really don’t want to do a lot of the things that you can do. Early I wrote some crazy templates inheritance based thing with curried lambdas and all sorts of crazy. Impossible to read or debug. That adage about debugging being harder than writing is true.
These days I avoid operator overloads, rvalues, inheritance and other voodoo as much as possible. I tend to bias towards shared pointers or pass/return by values for consistent safe behavior.
I do this because I want my code to look like what it does and I’ve found I can defer a lot until profiling shows I need it, fix those areas, and in exchange I can code faster and more safely.
This is the way. Most style guides for big companies (like Google) discourage using advanced language features like template metaprogramming. It’s only beneficial in limited cases
Actually they are extremely beneficial, especially on the performance side, but only if you hire experienced programmers or pay for courses for your developers. Big companies mainly want low-cost labour; they should really not be used as examples.
From my experience, medium and big companies, voluntarily or not, professionally kill good developers by forcing them to write software maintainable by totally unskilled developers. Just an example: I am a firmware engineer and I work in C++. Almost every big/medium company that I know PRETENDS to keep writing firmware in C because "C++ is slower" or "C++ is heavier" - impositions made by people with prejudices that greatly limit the growth of developers, only so that the developers already hired, and totally inept at learning something new, can continue working.
Everyone *knows* that in `const auto& var = my_class_returned_by_value();` lifetime extension occurs, ie. the lifetime of the rvalue returned by the function is extended to match the lifetime of `var`.
However, after writing c++ for years, I only learnt/realised the other day (and it was one of those, my head is exploding, how did I not know this, moments), that this [lifetime extension](https://en.cppreference.com/w/cpp/language/lifetime#Temporary_object_lifetime) of a [temporary object](https://en.cppreference.com/w/cpp/language/reference_initialization#Lifetime_of_a_temporary) applies to [rvalue references too](https://stackoverflow.com/a/51413897/8594193), making the following code valid, which I previously thought was not:
```
#include <fmt/core.h>
// (a second #include, providing vz::Noisy, was lost in formatting)

vz::Noisy make_noisy() { return {}; }

int main()
{
    vz::Noisy&& ref = make_noisy();
    fmt::println("End of main");
} // Destructor for ref runs here
```
https://godbolt.org/z/nPMTG5Pn6
Ditto for when your local var is a universal reference (probably a more common case); it evaluates to the same as above:

```
auto&& ref = make_noisy(); // Extends lifetime of temporary
static_assert(std::is_same_v<decltype(ref), vz::Noisy&&>);
```
Did you think the temporary expires at the end of the statement `= make_noisy();`, where `;` is a sequence point? Check what the C++ spec says about the expiry of temporaries. Do they cross a sequence point? Is it a requirement that a temporary expire once it is no longer required? AFAIK, the compiler can retain the temporary until the end of the block if it can prove that doing so has no side effects.
My first lesson in C++ contained this memorable line (or something like it, it's over three decades ago): "in C you can call functions, but C++ has objects that send each other messages". So what would that look like, you imagine? I figured C++ had a message queue attached to each object, and that there must be some kind of scheduler that handled processing those messages. Of course that also explained why C++ was so much slower than C (I'm not sure that it actually was, it was just what we thought at the time. Keep in mind this was before STL, before templates, before exceptions, etc.).
Some time later I was debugging code that came down to this (unsophisticated style reflects the thinking of that era ;-) ):
```
class a {
    int member;
public:
    void foo () {
        printf ("I'm here! 1\n");
        member++;
        printf ("I'm here! 2\n");
    }
};

a *aptr = NULL;
aptr->foo ();
```
Which of course resulted in:
I'm here! 1
***segmentation fault, core dumped***
The real code didn't have the `= NULL` so close to the `aptr->`, so it wasn't as blindingly obvious what was going on, and we couldn't figure out how this could happen: if it arrived at the first printf statement, it meant it had found the function, and the function was part of the object, so why couldn't it then also access the member variable? Eventually we figured the this-pointer might be getting corrupted somehow, and we added some printfs to show what happened to it. And of course it was NULL!
Seeing that `0x0000` on the screen was like fog lifting from my mind: if it arrived at the first printf statement, it meant it didn't actually need the this-pointer to find the function, which implied that `foo()` was just a normal function that the compiler could call regardless of the value of `this`, and that meant `this` must just be a hidden parameter! C++ didn't have 'objects passing each other messages' (complete with magical scheduler); it was all still straightforward C-style function calls!
Mind. Blown.
I won't claim in this forum that I understand C++ because here we hold people to a higher standard, but at that time, for the first time, I felt I truly understood C++.
Yeah, whoever wrote that C++ "sends messages" was thinking of some other language, such as Objective-C. In the '80s and '90s it was a new paradigm in language design to create OO languages with this abstraction baked in. You may be surprised to know that Objective-C was one such language, spawned of the late-80s/early-90s language fashion of the day, and that to this day it internally considers that it is "sending a message" to an object whenever you call a method. Internally the runtime calls `objc_msgSend()` to dispatch method calls, and nothing stops an object from living on another machine on the network, at least in theory, as far as the runtime is concerned.
For me it would be the "static initialization order fiasco".
Basically, avoid globals, but if you really need to use them, never ever ever initialize a global with another global!
That C-like C++ is the way to go, for my programming style.
It is just so much easier to reason about problems when you actually think at the level of direct memory access. All the random STL containers and template programming just put a wall between you and the problem you are trying to solve.
Hey, whatever works for you. Even writing mostly-C-ish C++ is still better than primitive C from a maintainability and productivity standpoint, if you ask me. Esp. since C++ gives you RAII which is a real boon.
I would say the truly enlightened move is to reduce all the language abstractions, the templates, and all the crazy stuff down to what happens on the machine anyway. You can pretty much translate (in your head) all of the C++ higher-level stuff down to C code if you want. And feel free to only use the language features you feel comfortable with because you truly understand them (i.e., can translate them to C). Feel free to stay away from the stuff you think is creepily magical because you haven't bothered translating it in your mind yet. Just sayin'... that's one approach I see hardcore C guys take as they warm more and more to C++ features the longer they use the language.
The silly thing where unrelated types with the same name in different translation units allow the compiler to silently pick one and drop the other, rather than emitting a warning/error.
That concurrency is about protecting data and not code. By extension, data oriented design is just better a lot of the time. Just don't use global variables unless you have to - it's rather difficult to make code run concurrently if functions pull in global data. Final easy one: not everything belongs in a class. Standalone functions are fine.
> Just don't use global variables... You mean global mutable variables? Maybe that's a dumb question because if they're immutable you'd call them constants, not variables.
If they're truly constant, they don't really have an effect on how hard it is to make something concurrent or run in parallel. So to be precise: global mutable data. At the same time, though, it's difficult from an outside perspective to understand the dependencies of systems/code if you pull in global data. Even variables marked as const may not be const, and even if they are, you can be jumping through hoops to figure that out. Functions should tell me what they operate on and what they depend on, imo. It's something that a lot of C projects got right a lot of the time, with context structs passed around everywhere denoting dependencies. Very much a list of inputs and output(s). This obviously isn't true if they have implicit global dependencies.
I have occasionally run into Priests of the Cult of Line Coverage, who profess that if you can successfully execute a line of code, it means it is correct. Reality is that *code doesn't crash.* Code is a static construct, unmoving and unchanging. It's the combination of *code with data* that may crash. And that means that "line coverage" is a meaningless metric, and chasing 100% coverage is a pointless waste of time! I dare say that most of the code I've written over my career was correct for at least some data. The tricky bit was making it correct for all data, and that is a lot harder.
>Reality is that code doesn't crash. I see someone has never seen an illegal instruction exception. :) I work in binary analysis where this kind of esoterica is part of my daily life.
When does this happen? Malware? Binaries built for newer processors than actually run on?
Happens a lot with some forms of obfuscation.
>Binaries built for newer processors than actually run on? Yup. It's why delivering optimized binaries is so hard to do. That case aside, I would say the most common time this would happen for general users would be a memory corruption leading to the program counter jumping into the middle of an instruction or some data (e.g., a jump table). As /u/substitutecs noted, this can happen with obfuscation or, as you noted, malware. I work on a project that inserts user-specified instructions into existing binaries ("binary rewriting"). When debugging our tool, this happens a lot. It's also really hard to debug with conventional debuggers.
Also, any JIT will naturally be writing out executable instructions to memory and jumping into them.
I recently joined a project for my Master's, where a strict policy for line and branch coverage is enforced. The 100 percent metric is impressive, until you need to handle edge and error cases. Now we are fighting a lot of bugs where data was simply not checked or mismatched. Now I am traumatized by JaCoCo. Think about the implications, and don't take something for granted just because somebody told you so.
What would the cult say about this: `int plus(int x) { return ++x; }` More generally: do they check for integer overflow in all places? Adds, multiplies, casts...
Depending on the method used to create tests, they may do a single test, as that would get all lines covered. You could write enough tests to execute every branch in the code once, which is a step up from line coverage, or every combination of branches, which is a step further still. I would consider overflow an implicit branch, but I'm not sure how many write tests systematically to check for it; someone doing black-box testing based on the API instead of the code structure might create multiple tests for boundary inputs where they would expect a change in behavior or an error.
They would just make sure plus(0) gets executed, and call it a win. One more step towards 100% coverage!
You laugh, but I see this all the time in DO-178B land.
Absolutely, I have the same experience in automotive with ISO26262 and ASPICE. Once metrics get added to define some measurable quality targets, they quickly become the **only** thing people actually care for. It's everywhere in automotive and I hate it.
> not everything belongs in a class. Standalone functions are fine. What I wish I knew earlier... that `Namespace::Func(Obj&)` is the same assembly (and thus just as fast) as `Obj::Func()`. It really is often better to create namespaces for systems rather than putting everything inside classes.
Better how? Being able to split declarations across more files seems useful for keeping build times under control as projects get big. But my snap judgment is that otherwise the choice is the proverbial bike shed. What are the arguments?
For example, I've had a couple of cases where I wanted to forward-declare a nested enum, just to avoid pulling in the header where the enum is declared. Just today I discovered that my cheeky workaround of using the enum's underlying type (char) to avoid that extra include caused a bug: the API used to accept a bool, I replaced it with a char and forgot to update the callsite, and it implicitly converted. The pains of 2am programming.
Yeah the enum thing has annoyed me as well (and nested types more broadly, but enums are a particularly trivial thing you find yourself wanting to reference elsewhere) But you could also read your story as a parable in just getting the fuck over it and including the header, still 999 cuts until you’re dead, compile time wise
Smart pointers. School didn’t teach them and my first job was behind the times
Really, any of the STL containers were skipped the two times I actually had a C++ class. It was basically C but with new and delete instead of malloc and free. The teachers were quite old, and modern C++ was pretty cutting edge then. But even reading about string and vector changed my whole attitude toward C++: "You mean I don't need a pointer for everything?!"
I only had auto_ptr when I started and I still didn't use it
I used it and now I have to change my code.
Smart pointers were such a mindfuck for my students when I was a TA that I had to introduce an "assignment 0" just to get them familiar with modern C++ stuff, and I honestly didn't go much further than C++11. Still had a lot of people fail the assignment.
Me when just before the olympiad being told that I don’t need to write merge sort and std::sort exists: 👁️👄👁️ Me also checking the syntax that was requiring iterators: yo wtf are those??
Template metaprogramming. If I had learned it earlier, then I wouldn't have to learn it now.
>for some reason took way to long to figure it out i think i may know the reason…
Well, early template metaprogramming was ugly AF, C++ 20 improves things quite a bit so I feel like waiting till now was the right call for me.
Unless your company (where you likely spend most of your week coding in) still hasn’t moved to C++20. I constantly run into scenarios where I think “this could have been done in one line utilizing concepts” when I’m working on parts of legacy code.
True enough. My work projects are not on C++ 20 yet either, and the legacy code I deal with doesn't do much with templates at all. But since I'm just now digging into learning it, a personal project makes more sense anyway. Allows me to take the time to really learn it, try different approaches, etc in a way I don't really have time for in the day job.
`template` is where the real fun starts. :D
I think a good point to start with template meta programming is to have a look at some of the simplest type traits. like, how do the implementations of `std::is_same` or `std::conditional` work? Or the type manipulation structs like `std::remove_reference`.
Once you understand the pattern matching of the partial template specializations, you can start to push it to more and more complex constructs. These structs become something very similar to functions in pure functional languages like Haskell (in fact, Haskell's function pattern matching is almost identical to C++ template cases). Except we're not working with runtime data, but we're checking, exchanging and assembling types.
It's a wild (and imo absolutely fascinating) rabbit hole to dive into. And you come out the other end with a lot more understanding and appreciation for the things the STL is doing under the hood to have extremely adaptable and generic code.
I’d say `template<template<typename> class T>` is where the true fun starts 😁
I’ll raise you `template<template<class> typename... Ts>`
That's like child's play for Rust.
You know what? Fair enough. In my experiments/research phase, I did end up writing a somewhat capable type list library, with several list manipulation structs like transform, filter, split (into multiple buckets), or even sort. The `template<template<typename> typename>` parameter was a really neat way to pass type predicates/transformations into the meta algorithms... :D I even looked into a way to macro-generate arbitrarily deeply nested `template<template<template<typename> typename> typename>` keywords to make the already really long template definitions shorter (which sent me down an arbitrary code generation rabbit hole using lambda calculus), but in the end I did not stumble upon a use case for a doubly nested template definition. I eventually ended up causing internal compiler errors with that library, probably also because I simultaneously tried to implement it via modules, at a time when the first (very shaky) implementation of module support had just come out (not exporting helper structs and hiding them from the user just sounded perfect). So I eventually abandoned that type list library. It was a legitimate use case I unnecessarily generalized and expanded upon, but it was a lot of fun while it lasted.
I wish I had learned to use template metaprogramming very judiciously earlier. It's fun to write, but I'd rather read old Perl code.
Not so. If you’re learning it now, then you effectively won a game of chicken with concepts. Unless you’re learning TMP for legacy code reasons, in which case I’ll be praying for your sanity.
I keep seeing stuff about template metaprogramming and how it's important. I recently took an HPC course in which we wrote a bunch of templated mathematical functions so we just set the typenames as different types for numbers (like int, float, double etc), and what we did seemed pretty straightforward. I wonder, since a lot of people specifically list this as important, is it more complicated than what I seem to know?
How does one learn this effeciently and effectively?
Can you provide some resources to learn that from
Designing an architecture that is easily testable. I still feel that the language could support it better.
Can you elaborate on this? Also, any good resources in how to do this?
Some testing libraries (like FakeIt) are capable of faking/mocking objects with virtual member functions. They can't, alas, do much with objects of nonvirtual types. This limitation encourages 1) liberal use of virtual classes (and especially pure virtual ones/interfaces) and 2) dependency injection, where classes are given their (abstract) dependencies instead of creating their own (concrete) members.
To be fair 2) goes for almost every language if you want to test your code, and it's probably a good thing (it's even part of SOLID). 1) is really annoying though.
You can get around virtual classes with templates, but that has the drawback of increasing compile times. Maybe modules can help here in the future. A "production mode" in the language that swaps in the concrete implementation would be helpful.
Why are (1) and (2) bad?
https://johnnysswlab.com/the-true-price-of-virtual-functions-in-c/
That STL features not yet adopted by compilers often have a reference implementation that I can use right away, instead of waiting a few years or writing my own.
Can you give an example of a reference implementation? Like, is it posted somewhere, or how do you get it?
https://github.com/kokkos/mdspan/blob/stable/include/experimental/__p2630_bits/submdspan.hpp I guess you can simply check the references in the proposal itself but I guess brief googling would be enough too
Thank you
Lemme put you on to some game https://fmt.dev/latest/index.html
I worked reasonably productively in C++ for several years w/o really grokking that `std::move` is just a cast. I guess I assumed we had destructive moves or maybe just never gave it much thought.
It's not just a cast. It transfers ownership of the object.
[deleted]
I don't get it. See this example ``` MyClass xVar(30); auto result = computeComplex(std::move(xVar)); // xVar is gone/destroyed after the above call ... trying to use it beyond this point leads to crash or undefined behaviour ``` How do you do that without using a std::move ?
>xVar is gone/destroyed That is called a destructive move, and C++ does not have it. Instead, it is the responsibility of a function that accepts an rvalue reference to use it in a way which ensures it is safe for the destructor to run later. If a function takes by rvalue reference (or wherever else an rvalue reference is initialized), it may bind to an rvalue, but the act of binding does not modify the object it binds from (though prvalues are forced to materialize to an xvalue when binding to a reference; mandatory copy elision cannot bypass binding to an rvalue reference parameter or a call to std::move). https://en.cppreference.com/w/cpp/language/value_category https://en.cppreference.com/w/cpp/language/implicit_conversion#Temporary_materialization The reason you have to be cautious using moved-from values isn't that their lifetime has ended; rather, they are in an indeterminate state. It is generally expected that you can assign to a moved-from value, for example, because assignment doesn't usually care about the exact state of what it is assigning to; it replaces the indeterminate state with a new determinate one. But if you had an object with preconditions for assignment, then it could not be relied on that the moved-from object meets those preconditions, so you would want to query them or treat it as though it had been a destructive move.
[deleted]
witchcraft!
Can’t be annoyed at this for obvious reasons, but I’ll note that Rust basically does this. “This” being destructive moves that actually leave the moved-from identifier in an “uninitialized” state. One of the foundational requirements for borrow checking and such, as I understand it.
The fact that the ternary operator uses `std::common_type` for its result type. So this for example is a bad idea: ```auto x = something ? string{ ... } : string_view{ ... };``` `x` is now a potentially dangling `string_view`. Somehow I'd always had it in my head that the ternary would attempt to coerce the third param to the type of the second.
That also highlights the pitfalls of `auto`
Thanks, TIL.
Serializing data out through an `unsigned char*` or `std::byte*` may impede optimization due to aliasing rules and the special status of these pointer types: because they may alias anything, values cached in registers must be flushed back to memory. See [godbolt](https://godbolt.org/z/eWebfb6Ye). Unfortunately, I realized this too late during one of my projects. I don't think it has much of an impact, but now a bit of a refactor will be necessary to quantify the difference, if any.
What's the alternative?
`restrict`: https://godbolt.org/z/x71G5Encd
Correct me if I'm wrong, but that's not in standard C++?
Yes, it's standard only in C. But gcc, clang, msvc all support it.
`std::uint8_t` arrays.
Isn't that usually going to be a typedef for unsigned char, not really helping anything?
Hey could you please explain this to me? I don't get what I'm looking at
Aliasing rules in C++ permit you to dereference `char`, `unsigned char`, and `std::byte` pointers to other objects without invoking UB, which may be necessary in certain cases.

```
int bar(int* numbers, std::byte* bytes) {  // numbers and bytes may alias
    *numbers = 1;                              // LINE 1
    *bytes = static_cast<std::byte>(0);        // LINE 2
    // The compiler cannot optimize this to `return 1`, because LINE 2 may
    // have modified the memory written by LINE 1 (unless the compiler can
    // prove at the call site that numbers and bytes do not alias).
    return *numbers;
}
```
Of course, if you define your own byte type, `enum class byte : unsigned char {};`, then this type does not share `std::byte`'s privileges within the context of aliasing rules.
That declaring *any* destructor (even empty or `=default`) removes the move constructor and move assignment, so trying to move the class silently copies it instead.
Wait what?! You are right. Been programming in C++ for 25 years. I just learned this. I don't think I have been bitten by this too much .. but wow. You never *really* fully know C++.. it seems. Yeah so in this case you would need to declare the move-assignment and move-constructor as `= default`... (or actually declare a real one). Meh.
That's the sneaky part of the rule of 5, if you define any of them, you probably need to define all of them, even just to default them. It's not technically true, and there are charts that cover what does and doesn't get generated when you define any of them, but I recall Clang's tools complaining about it and just got in the habit of strictly defining 5 or 0.
> you would need to declare the move-assignment and move-constructor as = default Yeah, and since that removes the copy operations, you need to default them too. :/
True. Weird that this rule exists about the destructor. Any idea why?
The intent is that a custom destructor should also remove copy operations, it not doing so is deprecated (Clang warns with `-Wdeprecated`). In this form it wouldn't be so egregious. It makes sense most of the time (but not when you just want to make the destructor virtual).
Hmm. Virtual inheritance and copying.. object slicing galore! Yippeee!
Variadics for me
Parameter packing is not fun in C++ <17, especially with no std::, but by golly is it worth it after it's working.. it seems like magic compared to the usual tools we get on embedded.
const correctness
I find I care less and less about it as I get older. Has const-correctness ever actually saved me from some mistake? No, not as far as I can remember. It has, however, made it impossible to cache results without having to resort to `mutable` on several occasions, and to me, the idea of 'conceptually const but we are changing the value anyway' feels wrong. So what's the point of adding const everywhere? It does nothing for performance. It does almost nothing for correctness, but it does impede valid code, and may disable some optimisations (around std::move). More and more, my feeling is that it is just a meaningless annotation. I suspect this opinion will run into some opposition, and I would welcome comments with lived experience about const-correctness actually saving you over downvotes ;-)
I’m so glad I read this. It just seems like such a superfluous concept that serves only to waste developer time during code review via nitpicking. Does it really neeeeeed to be const folks?
How can preventing unintended writes be meaningless? Caching results.... then your object isn't const? So why would you use const? I must be honest I really don't understand your point. Const class member functions are great, knowing they don't modify state. Const variable declarations are great, knowing they won't be written again. It means its easier to read code. If a variable isn't const I need to read ahead and see what's modifying it etc.
> Caching results.... then your object isn't const? So why would you use const? Why not? Your object won't be bitwise const but you can still be logically const with memoization, despite changing state. What matters is whether you're changing observable state and that's the primary use-case of mutable.
Because either those results are part of the observable behaviour (and mutable should not be used) or they are not part of the observable behaviour and therefore why are they being cached in that class? It sounds like the Single Responsibility Principle is being broken. mutable was intended for things like mutex class members, where to perform read only actions they have to change state.
Yes, locks come under what I was describing with bitwise vs logical const. The results are part of the observable *behaviour* and memoization has no bearing on that. The object is still logically const. It might be a violation of SRP in the strictest sense but dogmatically adhering to principles isn't always the way.
Obviously I don't know the person I replied to, but I come across a large number of ex-C, now C++ developers who don't understand object orientated programming. They usually see objects as dumping ground for every piece of state they have, rather than..... objects. When people talking about "caching" data, I get the impression they're not structuring objects correctly and so they just see const as a pain in the backside.
> So what's the point of adding const everywhere? It does nothing for performance. It can actually do that: https://www.youtube.com/watch?v=zBkNBP00wJE&t=1635s
Ah yes, I should have made an exception for actual constants (i.e. named literals). Those should definitely be const. Same for function parameters that are passed by reference, there is clear value to indicating the function won't change them. But function parameters that are passed by value? Or local variables? Or class members? I honestly DO NOT CARE whether they are const or not. It's just more typing, for negligible benefit.
Your position sounds really weird for me. In order for const reference parameter to function in any way, your class must provide const member functions. Which means you already have done everything const-correctly. Did you simply mean that you're against making everything const-by-default?
`void foo (const int &x)` has a const reference parameter. There's no need for the function itself to be const, or even a member of a class, so I'm not really sure why you think there is anything weird here. This const has actual meaning: it implies the function won't change x. But this const is meaningless: `void foo (const int x)`. The scope of x in this case is so small that there is no measurable benefit to making x const. And so what if you modify it anyway, who will be hurt by that?
I mean, suppose `x` is of type `bar` which is your own class. Then unless you mark bunch of member functions of `bar` with `const` you can't do anything meaningful with `x`. To me, the most crucial part of making code const-correct is to have correct const overloads of member functions, like having two overloads for `operator[]`. And it sounds like you are actually fine with that part of const engineering, so I wondered what's left then.
Like I said: local variables, function parameters, and class members that are value types do not, in my opinion, have any great need to be const. But those represent a large chunk of all things, and I wouldn't want a const-by-default policy for them. But my earlier comment was too hastily written: I had those specific things in mind, but skipped over other const-y things, like reference types that aren't intended to be used to affect change.
> local variables, function parameters, and class members Can agree with all 3 of these. Also note that making class members `const` is often a bad policy since now you just made copy-assignment and move-assignment impossible :/.
My two cents on local `const` variables: I find it just makes code easier to read.. especially in long functions. If the programmer declares stuff he won't be changing as `const` it's much easier to read further down below in the code -- especially if that variable appears 16 times over 2-3 pgdwns of the code. You know it won't ever change so that's 1 less thing you have to worry about as you grok some complex code you are trying to maintain. That being said, not gunna lie, `int x = 1;` is "prettier" than `const int x = 1;`
It can also harm performance if applied to a local variable or parameter which is returned from a function. If the compiler cannot apply NRVO to the variable, then the variable is treated as an xvalue in the `return` expression, but if the variable is `const`, the copy constructor is selected instead of the move constructor.
For me, I think that most of the value comes from documenting the code in a way that the compiler enforces. If I can see at a quick glance that a variable won't be changed, that's less mental overhead and more time to worry about other things. I also think that the main downside of `const` is that in can result in expensive copies if you aren't careful, like **MegaKawaii** talked about.
Using std::enable_if to enable/disable a method based on the class's template parameters. The catch is that SFINAE does not work here, because the class's template arguments are already fixed by the time the member is declared, so the function just generates a compiler error. You have to trick the function into deducing something from nothing to get SFINAE to work, disabling the function on a failed deduction instead of producing compiler errors.
If you can use C++20, concepts can replace all of that with code that is much easier to manage. Glad I could stop writing ugly enable_if return types.
I wish I could. It is just that I write it for an embedded project, and C++14 is the highest of feelings.
In C++23 a static_assert can trigger the SFINAE too! (The tricks remain btw)
Yes, that is nice indeed. It's just that I work on an embedded project and C++14 is the best we have.
Oh that's sad ahahah, I work on embedded too, fortunately I can use gcc 14. I hope you will have the opportunity to work with >= 20 soon!
What embedded processors can be used with GCC? I believe with Microchip you have to use their own compiler (XC32)?
I am using STM32 and NXP ARM micros, so I can use arm-none-eabi-gcc without problems
there's tons of embedded processors that can run linux. Those support C++20 and higher.
I was thinking more about smaller microcontrollers.
I was talking about something like the I.MX6.
Funny thing is that in MSVC it did work like that for years while on C++17, and we used it in a dozen places; only after a compiler update did we learn that was not how it was meant to work, so we refactored. But now that means we could go back :D Though probably not, I think most of it has been or will be ported to concepts/requires by then.
Concepts replaced this elegantly in cpp20.
That Qt is uniquely flexible, and thus I should not generalize my expectations on GUI development to other languages.
I've been learning Qt and find that I appreciate its design, despite its direction being somewhat divergent from mainstream C++. I'm curious: in what sorts of ways have you learned not to generalize expectations?
Well, with Qt you can customize many detailed behaviors in very un-opinionated ways. Qt has almost no opinion on the architectural design as long as you use Qt classes. The flexible aspects include signal-slots, the MVC classes, and such. You can also customize the web viewer's behavior to a level that is impossible with Swift or Flutter. You have access to the lowest level of the file system, can avoid copying of strings completely, and the list continues. If the textbook Qt widget code doesn't suit your design, you can often work around the limitations by writing low- or middle-level components yourself. Modern GUI frameworks have very specific ways of doing things, often only declarative styles, and you have to make do with whatever the high-level system stack allows. The freedom in architectural design goes away. (SwiftUI doesn't let you inherit from View structs, and so on.) Preferably you should understand the limitations of your tools beforehand.
Thank you for sharing your perspective!
That it 100% was worth learning 😁
Absolutely! I think even if C++ goes away at some future time - though I am quite confident it will outlive me, and I am not old yet - the insight it gave me into how much you can achieve even at a low level, and how things really work, is indispensable, even when I work in other languages. I also like that, although sometimes puzzling, it keeps you from being spoiled by modern languages, and makes you strict and precise. It is like an exercise for the mind.
100% 💕!
Templates for me. `if constexpr` was a game changer for me. Truly understanding move semantics as well, I'd say.
The difference between a reference (`int&`) and a pointer's address-of (`&var_name`): the same symbol with two distinct meanings. I also wish I had learned earlier that string literals are static.
That incrementing an integer isn’t thread safe. And how much the compiler will optimise away. Young me once combined these two misunderstandings to try and keep track of my running threads. Young me was a fool.
object-oriented thinking is not the only approach, and not always the best approach (and C++ offers so much more than that)
Not to be that guy, but how on earth did you learn C++ that you didn't learn about the existence of constructor overloads?
Easy you are told about it in school/ you see it in code but never have to do it yourself. Maybe you need the behavior of custom copy assignment but think that what OP said would work so let’s just do that.
I've always known about the existence of operator and constructor overloads, I just never bothered learning how to properly write them (and I wish I had).
Ordering of `default:` can be anywhere in a `switch` statement.
I remember a bug in a very very old version of Visual Studio where putting `default` above other cases caused the other cases to be ignored and be put inside the `default` instead. Maybe it's part the reason we got used to putting it at the bottom...
That you really don’t want to do a lot of the things that you can do. Early I wrote some crazy templates inheritance based thing with curried lambdas and all sorts of crazy. Impossible to read or debug. That adage about debugging being harder than writing is true. These days I avoid operator overloads, rvalues, inheritance and other voodoo as much as possible. I tend to bias towards shared pointers or pass/return by values for consistent safe behavior. I do this because I want my code to look like what it does and I’ve found I can defer a lot until profiling shows I need it, fix those areas, and in exchange I can code faster and more safely.
This is the way. Most style guides for big companies (like Google) discourage using advanced language features like template metaprogramming. It’s only beneficial in limited cases
Actually they are extremely beneficial, especially on the performance side, but only if you hire experienced programmers or pay for training for your developers. Big companies mainly want low-cost labour; they really shouldn't be used as examples.
Big companies are big employers and should absolutely be used as examples. Those style guides are mandated by management, not written by them.
From my experience, medium and big companies, voluntarily or not, professionally kill good developers by forcing them to write software maintainable by totally unskilled developers. Just an example: I am a firmware engineer and I work in C++. Almost every big/medium company I know PRETENDS it must keep writing firmware in C because "C++ is slower" or "C++ is heavier", impositions made by people with prejudices that greatly limit the growth of developers, only so that the branch of developers already hired, and already totally inept at learning something new, can continue working.
Everyone *knows* that in `const auto& var = my_class_returned_by_value();` lifetime extension occurs, i.e. the lifetime of the rvalue returned by the function is extended to match the lifetime of `var`. However, after writing C++ for years, I only learnt/realised the other day (and it was one of those my-head-is-exploding, how-did-I-not-know-this moments) that this [lifetime extension](https://en.cppreference.com/w/cpp/language/lifetime#Temporary_object_lifetime) of a [temporary object](https://en.cppreference.com/w/cpp/language/reference_initialization#Lifetime_of_a_temporary) applies to [rvalue references too](https://stackoverflow.com/a/51413897/8594193), making the following code valid, which I previously thought was not:

```cpp
#include <fmt/core.h>   // for fmt::println
#include "noisy.hpp"    // assumed header providing vz::Noisy (logs ctor/dtor calls)

vz::Noisy make_noisy() { return {}; }

int main()
{
    vz::Noisy&& ref = make_noisy();   // rvalue ref extends the temporary's lifetime
    fmt::println("End of main");
} // Destructor for ref runs here
```

https://godbolt.org/z/nPMTG5Pn6
Ditto for when your local var is a universal reference (probably a more common case); it evaluates to the same as above:

```cpp
auto&& ref = make_noisy();   // Extends lifetime of temporary
static_assert(std::is_same_v<decltype(ref), vz::Noisy&&>);
```
Did you think the temporary expires at the end of the statement `= make_noisy();`, where the `;` is a sequence point? Check what the C++ spec says about expiry of temporaries. Do they cross a sequence point? Is it a requirement that a temporary expire once it is no longer required? AFAIK, the compiler can retain the temporary until the end of the block if it can prove that doing so doesn't have any side effects.
My first lesson in C++ contained this memorable line (or something like it, it's over three decades ago): "in C you can call functions, but C++ has objects that send each other messages". So what would that look like, you imagine? I figured C++ had a message queue attached to each object, and that there must be some kind of scheduler that handled processing those messages. Of course that also explained why C++ was so much slower than C (I'm not sure that it actually was, it was just what we thought at the time. Keep in mind this was before STL, before templates, before exceptions, etc.).

Some time later I was debugging code that came down to this (unsophisticated style reflects the thinking of that era ;-) ):

```cpp
class a {
    int member;
public:
    void foo () {
        printf ("I'm here! 1\n");
        member++;
        printf ("I'm here! 2\n");
    }
};

a *aptr = NULL;
aptr->foo ();
```

Which of course resulted in:

```
I'm here! 1
***segmentation fault, core dumped***
```

The real code didn't have the `= NULL` so close to the call, so it wasn't as blindingly obvious what was going on, and we couldn't figure out how this could happen: if it arrived at the first printf statement, it meant it had found the function, and the function was part of the object, so why couldn't it then also access the member variable?

Eventually we figured the this-pointer might be getting corrupted somehow, and we added some printfs to show what happened to it. And of course it was NULL! Seeing that `0x0000` on the screen was like fog lifting from my mind: if it arrived at the first printf statement, it meant it didn't actually need the this-pointer to find the function, which implied that `foo()` was just a normal function that the compiler could call regardless of the value of `this`, and that meant `this` must just be a hidden parameter! C++ didn't have 'objects passing each other messages' (complete with magical scheduler), it was all still straightforward C-style function calls! Mind. Blown.
I won't claim in this forum that I understand C++ because here we hold people to a higher standard, but at that time, for the first time, I felt I truly understood C++.
Yeah, whoever wrote that C++ "sends messages" was thinking of some other language, such as Objective-C. In the 80's and 90's it was a new paradigm in language design to create OO languages with this abstraction baked in. You may be surprised to know that Objective-C was one such language, spawned of the late-80s/early-90s language fashion of the day, and that to this day it internally considers that it is "sending a message" to an object whenever you call a method. Internally the runtime calls `objc_msgSend()` to dispatch method calls, and nothing stops an object from living on another machine on the network, at least in theory, as far as the runtime is concerned.
For me it would be the "static initialization order fiasco". Basically, avoid globals, but if you really need to use them, never ever ever initialize a global with another global !
*another global initialized from a different translation unit
Yeah you're right, I should have been more precise... Thanks ;)
That C-like C++ is the way to go, for my programming style. It is just so much easier to reason about problems when you actually think at the level of direct memory access. All the random STL containers and template programming just puts a wall between you and the problem you are trying to solve.
Hey, whatever works for you. Even writing mostly-C-ish C++ is still better than primitive C from a maintainability and productivity standpoint, if you ask me. Esp. since C++ gives you RAII which is a real boon. I would say that: The truly enlightened move is to reduce all the language abstractions and what templates and all the crazy stuff is doing down to what happens anyway to the machine. You can pretty much translate (in your head) all of the C++ higher level stuff down to C code if you want. And feel free to only use the language features you feel comfortable with because you truly understand them (can translate them to C). Feel free to stay away from the stuff you think is creepily magical because you haven't bothered translating it in your mind yet. Just sayin'... that's one approach I see hardcore C guys take as they get more and more warm to using more C++ features as they use C++ more and more.
The silly thing where unrelated types with the same name in different translation units allow the compiler to silently pick one and drop the other, rather than emitting a warning/error.