Finally! It was worth the wait. At last we have a video explaining Jenkins-Traub in the best way possible! And this is the first time I've actually seen the fractal from the method... ever!
@OscarVeliz
2 months ago
I really wanted to get this out much sooner. Thanks for sticking with the channel.
@alexandrevachon541
2 months ago
@@OscarVeliz I never gave up on you. You made your best video in my opinion. And probably the most important one in the channel's history.
@krumpy8259
1 month ago
Gem of a channel, thank you❤
@johnphilmore7269
1 month ago
Hey Oscar, it has been a while! I'd never even heard of Jenkins-Traub… honestly, at first I thought it was WAY too convoluted for a polynomial, but the fact that it gets so many things right is… almost unique in the field. And it's globally convergent?! That's insane. I guess the work was worth it. Can we use a multidimensional version of this as well? Is that even possible? Anyway, I wanted to see if you knew anything about numerical methods for calculating limits. I know we can use interpolating polynomials to get a REALLY accurate example, but I was hoping for more on limits as x goes to infinity. Something robust and generally applicable. I feel there are so few examples of such methods. Do you know of any? Again… what a video. Learned something new today.
@OscarVeliz
28 days ago
I don't believe there would be a multidimensional version. Ford's generalization wasn't about multiple variable polynomials. Numerical limits are a topic in the queue since I have some familiarity but I plan on doing a lot more digging. Glad you liked the video 😁
@nukeeverything1802
2 months ago
This is such a wonderful video and I love your presentation! I especially love the fractal part. I think my only criticism is that you sometimes mumble words too quickly (e.g. you occasionally say "pomial" when you say "polynomial") and I had to rewind a few times to catch what you were saying. But ignoring that, your video is really good and I could follow along without getting lost.
@alexandrevachon541
21 days ago
A helpful tip is to normalize the H polynomials as we progress in the iterative process, giving H̄ polynomials. Since the leading coefficient of H_{λ + 1}(x, s_λ) is -H_{λ}(s_λ)/p(s_λ), we get: H̄_{λ + 1}(x, s_λ) = 1/(x - s_λ) (p(x) - p(s_λ)/H_{λ}(s_λ) H_{λ}(x)) = 1/(x - s_λ) (p(x) - p(s_λ)/H̄_{λ}(s_λ) H̄_{λ}(x)). This avoids the need to normalize said polynomials later on, and helps keep the coefficients from overflowing or underflowing.
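The recurrence above can be sketched in a few lines of Python. This is a minimal illustration of one normalized update step, not the full Jenkins-Traub algorithm: it assumes NumPy's highest-degree-first coefficient convention, and the function name `normalized_h_step` is my own. Note that if p is monic, the numerator (and hence H̄_{λ+1}) comes out monic automatically.

```python
import numpy as np

def normalized_h_step(p, h_bar, s):
    """One normalized H-polynomial update:
    H_bar_{lam+1}(x) = (p(x) - p(s)/H_bar_lam(s) * H_bar_lam(x)) / (x - s).
    Coefficient arrays are highest-degree first (NumPy convention)."""
    factor = np.polyval(p, s) / np.polyval(h_bar, s)
    # Numerator vanishes at x = s by construction, so the division is exact.
    numerator = np.polysub(p, factor * np.asarray(h_bar))
    quotient, _remainder = np.polydiv(numerator, np.array([1.0, -s]))
    return quotient

# Toy example: p(x) = x^2 - 1, starting H_bar(x) = x (monic), shift s = 0.5.
p = np.array([1.0, 0.0, -1.0])
h = np.array([1.0, 0.0])
h_next = normalized_h_step(p, h, 0.5)
print(h_next)  # x + 2, i.e. [1. 2.] -- already monic, no rescaling needed
```

With p monic, each H̄ stays monic, so repeated iteration never needs a separate rescaling pass.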
@AllemandInstable
2 months ago
Thank you for this nice video. It's nice to see these and try implementing them afterwards. Actually, I think it is one of the best ways to learn a new language, making something useful in it. I will try to implement it in Mojo.
@Alceste_
1 month ago
But what's Mojo and why are you learning it?
@ashie.official
2 months ago
wow!!! great video, thanks :)
@harold2718
2 months ago
You could show some root-finding methods for finite fields as well
Comments: 13