This one was a bit of a game changer for me. I had to listen to it three times to solidify things. I just wish Eric had kept the figure showing the promise and future structure on more of the slides, as I'm not an expert on that specification. Eventually, I drew the figure myself and referred to it as Eric worked through examples. I suspect he's probably talking to a room full of experts who might not need the visual aid. Regardless, this was a great talk. Much appreciated.
@SamMason0
4 years ago
In case anybody else is interested in the talk that uses coroutines to hide memory latency, it's: Gor Nishanov, "Nano-coroutines to the Rescue! (Using Coroutines TS, of Course)", CppCon 2018. The whole talk is good, but this is where the background stops and he starts showing how coroutines can help: kzitem.info/news/bejne/y2-q0n53qn-Haoo, and the performance change is shown here: kzitem.info/news/bejne/y2-q0n53qn-Haoo
@yonathanashebir6324
A month ago
These people GOT ME!
@alexmopleen7944
4 years ago
Great talk, really enjoyed it! Thank you, David and Eric! And it's just as interesting seeing Vinnie Falco in the comments finding it complex. Nothing but respect for him too, don't get me wrong. Can anyone help me figure out whether this can be done in C++11? I'm completely failing to come up with a way to fake returning a generic lambda from a function.
@EricNiebler
4 years ago
Thanks! Generic lambdas are nothing but class types with templated function call operators. You can go that route pre-C++14. The only drag is having to define the class at namespace scope.
@mamahuhu_one
5 months ago
@16:19 was a good joke: "oops, got ahead (a head) of myself"
@xinpingzhang4506
4 years ago
10:15 If you take the same processes A and B and make them concurrent, you are still not guaranteed to get a 2. I don't get the presenter's point.
@think2086
4 years ago
Can someone comment on the use of std::forward and the rvalue casts used throughout Eric's presentation, before things like the call operator, etc.? I'm having trouble wrapping my brain around what they actually achieve/avoid. For example, @36:45: forward(f)((R&&) r); as opposed to simply f(move(r)). So here he's recasting f as an rvalue if it was bound to the incoming argument as an rvalue. But then he's just calling operator() on it anyway, so why was this necessary? operator() isn't doing anything with f itself anyway, right (which presumably is passed in implicitly by the C++ compiler as a pointer in the first parameter, i.e. "this")? Thanks!
@BowBeforeTheAlgorithm
4 years ago
The short version is that passing the rvalue reference (R&&) lets him avoid the cost of copying until he is ready. In your example, move(r) becomes a temporary object and f(temporary_object) then has to construct from that temporary. With his version he can pass the rvalue reference R&& up several layers and then do just one move operation at the final destination, with no copying between functions. Hope that helps.
@rinket7779
8 months ago
He said it's just due to slideware; in practice he'd use std::forward or std::move.
@TerminalJack505
4 years ago
So, at 30:00, you don't actually execute the task until you are ready to wait for it to complete. I don't see how this is any different from simply running the task on the same thread.
@EricNiebler
4 years ago
That's true. Now imagine algorithms like when_all or when_any, which encapsulate different fork/join strategies, programmed to the same abstraction, letting you build a whole task graph. Further imagine a spawn algorithm that launches the task graph and returns a future to it. The possibilities are endless.
@YourCRTube
4 years ago
Wonder how much all this will cost in compile times and code size, TANSTAAFL and all.
@manuelfehlhammer6424
2 years ago
Compile times/code size because of generated template classes? Really? Even the old, much-criticized std::future/promise are templates. But, if you follow the presentation, they are far inferior in runtime performance to the approach Eric shows here. And if you are in the area of async/multithreaded programming, runtime performance is the central thing you are doing all this for; nobody cares about a slightly bigger build time!
@thevinn
4 years ago
Does anyone else think this is over-engineered and overly complex?
@steamyprogramming666
4 years ago
Yeah, honestly I was under the impression that the standards committee was working toward automatic parallelism where applicable. But a lot of this talk covers futures and promises, which is all old hat. Senders and receivers are just an abstraction over futures and promises to eliminate the potential errors you could make when using them directly.
@iddn
4 years ago
No more so than C++'s current async offerings. They're totally right about std::future being crap, though.
@Omnifarious0
4 years ago
Not me. I think it's just poorly explained. I made something a lot like this as an attempt to have the compiler automatically handle the inversion-of-control problem you get with event-driven systems. I prefer Mercurial and need to find new Mercurial hosting, but for now it can be found on GitHub. It's called Sparkles: github.com/Omnifarious/Sparkles
@jamesofnoaffiliation
4 years ago
Vinnie Falco in the comments section saving me from using this video to procrastinate on learning Asio/Beast. Thanks to you good sir.
@tiagocardoso4702
4 years ago
Dunno... I'm using a lot of laziness and composition to improve my C++ code's parallelism these days, mainly by using a pool of threads stacked on an io_context's run() and using bind to post() to the io_context or a strand. But post() completely lacks a return (error, value, or cancel) model. I don't like general function callbacks, so I'm stuck passing a pointer to the caller when posting, which requires writing code that is tightly coupled to the caller (though it improves code readability, navigability in IDEs, and ease of understanding). IMHO
Comments: 32