Oscar, I wish your videos were more popular. We give millions of views to other YouTubers whose content is as captivating as it is useless. You, on the other hand, deal with actual problems that can be encountered in school or at work. You provide simple and effective explanations, resources, and code. This is a very pragmatic and scientist-oriented approach. No fricking fireworks and smoke, and this penalizes you. I hope that real life rewards you as you deserve. Keep up the good work!!
@OscarVeliz
7 months ago
Thank you for this comment. It means a lot.
@alexandrevachon541
2 years ago
Impressive work right there. Aside from the risk of dividing by zero, I like how it is guaranteed to converge, which is amazing. This modification of Newton's method is, in my eyes, a multi-purpose algorithm, since it might find an optimum or a root, so I like that.
@kitzelnsiebert
2 years ago
I like the concept of damping to create global convergence. Thanks for sharing your knowledge.
@hosz5499
A year ago
Thanks for a nice, short intro to this partial-step Newton method. It suggests a good philosophy for life and economics too: go straight by intuition or profit, but verify and adapt.
@erikhaag4250
A year ago
To minimize/maximize a function f(x), I like to use Newton's method on f' (i.e., find where the derivative is zero), then plug those points into f and pick the x with the smallest/largest output.
@AJ-et3vf
2 years ago
Wow! So awesome, man! I immediately clicked upon seeing your video. This is very nice and interesting. I tried the Newton-bisection hybrid before, but my implementation didn't really speed up much over pure bisection. I might try this one sometime. And nice of you to mention the Julia programming language :) I dabbled with it months ago. Very interesting for numerical computation, with a very promising future in scientific computing. Since it is still new, packages that aren't as popular as the big ones like DifferentialEquations.jl won't have as much good documentation yet, but that will definitely improve. And very nice, beautiful fractals at the end, btw. Very cool lava color for the fractals!
@ominollo
A year ago
OMG this channel is great 👍 Thanks for these very interesting and useful videos!
@olbluelips
8 months ago
Wonderful! I needed a way to approximate implicit relations robustly! I'll need to change it so that instead of finding a minimum/maximum it finds the complex solution, but this is really useful. Thanks for the upload!
@OscarVeliz
8 months ago
Instead of dividing by 2, try dividing by 2i
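A rough Python sketch of one literal reading of that hint (my own naming and tolerances, not code from the video): keep the same accept-the-step-only-if-|f|-shrinks loop, but divide the damping factor by 2i instead of 2, so the iterates can leave the real axis and reach complex roots.

def complex_damped_newton(f, df, x0, tol=1e-12, max_iter=100):
    # Damped Newton where the backtracking factor is divided by 2i instead of 2,
    # letting a real starting point wander toward a complex root.
    x = complex(x0)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / df(x)                # full Newton step
        a = 1 + 0j
        while abs(f(x - a * step)) >= abs(fx) and abs(a) > 1e-16:
            a /= 2j                      # the "divide by 2i" tweak
        x -= a * step
    return x

# x^2 + 1 has no real roots; starting from 2.0 this heads toward -1j
print(complex_damped_newton(lambda x: x * x + 1, lambda x: 2 * x, 2.0))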
@olbluelips
7 months ago
@eliz Finally got it implemented! It was harder than I expected, because I didn't understand that even when globally convergent, Newton's method is really sensitive to the initial guess. I had it converging on erroneous solutions for a bit because I didn't know how to pick a good initial guess.
@johnphilmore7269
2 years ago
Heck yes, another numerical analysis vid. I'm excited for this one. As a question, would it also work to use a_(i+1) = 0.9*a_i, for instance?
@OscarVeliz
2 years ago
Certainly, there could also be a more heuristic approach to identify a good shrinking rate. A more efficient approach might also recognize when heading toward a minimum and start with a smaller value for a in order to avoid unnecessary function calls.
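For reference, here is a minimal Python sketch of the damped Newton loop under discussion, with the shrink rate exposed as a parameter so 0.5, 0.75, or 0.9 can be compared (the function name, tolerances, and iteration cap are my own choices, not the video's code):

import math

def damped_newton(f, df, x0, shrink=0.5, tol=1e-12, max_iter=100):
    # Global/damped Newton sketch: accept the full Newton step only if it
    # reduces |f|; otherwise keep multiplying the step length a by shrink.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / df(x)            # full Newton step (can still divide by ~0)
        a = 1.0
        while abs(f(x - a * step)) >= abs(fx) and a > 1e-16:
            a *= shrink              # 0.5 halves the step as in the video; try 0.75 or 0.9
        x -= a * step
    return x

# tanh(x) diverges under plain Newton from x0 = 2 but converges with damping
print(damped_newton(math.tanh, lambda x: 1 / math.cosh(x) ** 2, 2.0, shrink=0.75))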
@johnphilmore7269
2 years ago
@OscarVeliz Interesting… I'll have to have a look at that then.
@johnphilmore7269
2 years ago
I had a look at this, specifically adding Wegstein's method to it to induce and accelerate convergence. Unfortunately, that takes a lot more computational power than I would like, so I looked around at a few other heuristics. The simplest is to use a higher value for the shrink factor, something like 0.75, because a_(i+1) = 0.5*a_i favors small shrinking parameters and slows the algorithm down a little.

It also turns out we can approximate an optimal shrinking parameter a bit more numerically (not my idea). In essence, we want the shrinking parameter that minimizes ||f(x)|| on the straight line between x_n and a normal Newton-Raphson iteration x_(n+1). If we approximate ||f(x)|| along that line with a quadratic, we can estimate the best step and go from there.
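One possible Python sketch of that quadratic-fit idea (the sample points 0, 1/2, 1 and the clamping bounds are my own choices, not from a specific paper): fit a parabola to g(a) = ||f(x_n + a*delta_x)|| at three values of a and use the parabola's minimizer as the step fraction.

def quadratic_shrink(g):
    # g(a) should return ||f(x_n + a*delta_x)|| along the Newton direction.
    # Fit p(a) = c2*a^2 + c1*a + c0 through g(0), g(0.5), g(1); c0 = g(0) is
    # not needed to locate the vertex.
    g0, gm, g1 = g(0.0), g(0.5), g(1.0)
    c1 = -3 * g0 + 4 * gm - g1
    c2 = 2 * g0 - 4 * gm + 2 * g1
    if c2 <= 0:                        # parabola opens downward: no interior minimum
        return 1.0 if g1 < g0 else 0.5
    a = -c1 / (2 * c2)                 # vertex of the fitted parabola
    return min(max(a, 0.05), 1.0)      # keep the step fraction positive and at most 1

The returned fraction can then replace the fixed halving inside the backtracking loop.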
@theevilcottonball
2 months ago
Pretty cool, never knew it existed...
@Strutter1998
2 years ago
If you could improve the sound, that would be great! Sometimes it can be hard to hear what you say. This channel has no real competition, so I don't know why it has so few subscribers. Your videos have helped me a lot when choosing between algorithms. Always looking forward to the next video.
@Strutter1998
2 years ago
In this video, the sound was actually OK. I must have thought about another video. :)
@elmer6123
A year ago
Your algorithm is exactly what I was looking for. I have a C2+ continuous 3D convex surface s(p)=0 where p=(x,y,z) represents any point on the surface. Given any constant unit vector n, I want to find the point p where n is normal to the surface and pointing outward. If s_p is the partial derivative of s with respect to p, it follows that |s_p|-s_p*n>=0, and |s_p|-s_p*n=0 at the desired solution, where |s_p| is the norm of s_p. This problem has five equations in three unknowns, where one equation is the above inequality. The equations can be written for iterative solution as J*dp=c(p) and p=p-dp where J is a 5 by 3 Jacobian matrix with full column rank. Here c(p) is a 5 by 1 matrix of constraint equations where c(p)=0 at the desired solution. This is a least squares problem, easily solvable by Gaussian Elimination with Complete Pivoting or by applying Cholesky factorization to solve J^T*J*dp=J^T*c(p), where J^T*J is a 3 by 3 positive definite matrix and J^T means matrix transpose.
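A small numpy sketch of the Cholesky route described here, assuming J is the 5-by-3 Jacobian with full column rank and c is the constraint vector c(p); the names and the p = p - dp update convention follow the comment rather than any published code:

import numpy as np

def least_squares_step(J, c):
    # Solve the overdetermined system J @ dp ≈ c through the normal equations
    # J^T J dp = J^T c; J^T J is 3x3 symmetric positive definite when J has
    # full column rank, so a Cholesky factorization applies.
    JtJ = J.T @ J
    L = np.linalg.cholesky(JtJ)         # JtJ = L @ L.T
    y = np.linalg.solve(L, J.T @ c)     # forward solve: L y = J^T c
    dp = np.linalg.solve(L.T, y)        # back solve:    L^T dp = y
    return dp                           # caller then updates p = p - dp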
@VaradMahashabde
A year ago
Ensuring that the modulus decreases is a nice choice. Btw, you always mention that the backtracking steps are hidden. What if we don't hide them?
@OscarVeliz
A year ago
When converging on a root (where original Newton would diverge), it usually takes a few global steps before it starts acting like normal Newton. If heading toward a minimum, then each step potentially takes longer and longer, trying smaller and smaller values of a. There's essentially an internal while loop which might never run, run a few times and then stop, or always run.
@emjizone
A year ago
Gradient descent is like skiing.
@johnphilmore7269
2 years ago
Hey, I've got a quick question; perhaps you can help me with it. I had a look at your minimization algorithm series, specifically the ones focusing on efficiently finding the minimum of a function in 1-D, like Brent. I love Brent's method, and I thought it would be perfect to use in a global Newton method. I looked at some other papers, and they said to apply a minimization technique over the line from x_n to the next Newton-Raphson iteration x_(n+1). Now, the problem comes up in certain instances, like with f(x)=tanh(x) (as a 1-D example) and starting point x_0=2. Normal Newton-Raphson diverges, so Brent should find the argument of the minimum norm and go from there. The problem is that the minimum is SO close to zero at some point, and it's in such a small interval, that Brent doesn't notice it and finds the wrong minimum, causing the algorithm to diverge again. So, what do you suggest to fix this? Backtracking works just fine; I just prefer the faster Brent.
@OscarVeliz
2 years ago
Brent's minimization method starts with the assumption that the function is unimodal on the interval. In the case of tanh(x), there isn't an interval that would ever be unimodal, so Brent's minimization method can't be used. Brent's root-finding method, though, would work amazingly well with an interval to the left and right of zero. If instead your function is f(x)=|tanh(x)|, that is a different story, since it is unimodal. I tried this very quickly in Octave using fminbnd (also known as Brent's minimization method) over the interval [-2,2] with the following command:
fminbnd(@(x) abs(tanh(x)), -2, 2)
resulting in
ans = 2.2204e-16
which is fairly close to zero. If you're starting with the global Newton method and using two points to specify a line, does that line also guarantee that the function is unimodal between those points? I would argue that you would need a third point, with the middle point smaller than the other two, to establish that there is at least one minimum. Brent's minimization method could still work if there were more than one minimum, but it would only find one of them.
@johnphilmore7269
2 years ago
Well… ok, let's see if I can structure my thoughts constructively. I've done some more research and I have some new thoughts.

Take the Newton iteration x_(n+1) = x_n + delta_x with delta_x = -J_f(x_n)^-1 * f(x_n), which is a direction of decreasing function norm. It's not exactly the same as gradient descent, but it should still guarantee that there exists a minimum in norm between x_n and x_n + delta_x. This is applicable for a function f:R^n -> R^n, but let's stick to a one-dimensional example for ease. Like you said in your video, we want to find a between 0 and 1 so that |f(x_n + a*delta_x)| is minimized. So with f(x)=tanh(x), |tanh(x_n + a*delta_x)| is unimodal over any interval with 0 in its interior. But here's the problem. Say x_0=2. Then x_1=-11.645 is a basic Newton iteration. The norm of the function is indeed unimodal, but the minimum is only noticeable on about 1/4 of the space between 2 and -11.645, so a needs to be chosen very accurately. Unfortunately, this phenomenon explodes the more Newton-Raphson diverges: a needs to be determined with exponentially more accuracy the more divergent the next Newton-Raphson iteration is. Eventually Brent minimization doesn't see the function as unimodal at all, but merely as a flat line: it doesn't know any better.

Now, this whole scenario can admittedly be avoided, because only one iteration needs to fall within a root's basin of attraction for the Newton-Raphson algorithm to converge, but it's still a nuisance, especially if the initial guess is very bad. Not to mention that sometimes Brent selects a minimum too close to x_n to be practical, or too close to x_(n+1) to be convergent. There is a reason exact minimization isn't used in practice. Practitioners now use the Wolfe conditions to choose an approximate "minimum" that causes convergence and behaves well. I like it better than backtracking line search for a few reasons. The first is that I found an algorithm that uses a variant of the bisection method to find an appropriate minimum, a robust approach. The second is that it doesn't require repeated division by 2, sometimes to an absurd degree. Computers can handle it, but… idk, I just don't like the approach in terms of computational tolerance and round-off errors (I have a GitHub implementation if you would like; I just can't post links here).

Not that I'm an expert in the field. Let me be clear, I love your content and your approach is standard, simple, and robust. That's why I enjoy the conversation: being able to talk as one algorithmist to another and compare ideas.
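For anyone reading along, the (strong) Wolfe conditions mentioned above amount to a small check; c1 = 1e-4 and c2 = 0.9 are common textbook defaults for Newton-type directions, and phi / dphi are assumed to be the norm along the step and its derivative, supplied by the caller (an illustration, not the commenter's GitHub code):

def satisfies_strong_wolfe(phi, dphi, a, c1=1e-4, c2=0.9):
    # phi(a)  = ||f(x_n + a*delta_x)|| along the Newton direction
    # dphi(a) = derivative of phi with respect to a (e.g. by finite differences)
    sufficient_decrease = phi(a) <= phi(0.0) + c1 * a * dphi(0.0)
    curvature = abs(dphi(a)) <= c2 * abs(dphi(0.0))
    return sufficient_decrease and curvature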
@johnphilmore7269
2 years ago
Also, Merry Christmas!!!!
@OscarVeliz
2 years ago
It is good to look at the 1-D base case, but you'll also need to consider what a 2-D or higher Brent's method would look like. The square-peg approach would be to fix all but one parameter and apply the minimizer, then fix that parameter, unfix a different one, and repeat. There are more sophisticated approaches though, and a video on multidimensional bisection is coming soon.
@johnphilmore7269
2 years ago
I don't think a multidimensional Brent would ever be needed, though. In every iteration of Newton's method you find an n-dimensional vector, so finding the minimum of the norm of the function between one vector and another in n-dimensional space is equivalent to a corresponding 1-dimensional problem via the transformation g:[0,1] -> R where g(t) = ||f(x_n - t*delta_x)||_2. That being said, multidimensional Brent in and of itself would be super interesting… Oh, seriously? I can't wait then, looking forward to the new vid.
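That reduction is tiny in code; a numpy/scipy sketch with illustrative names, where the bounded scalar minimizer stands in for whichever 1-D method one prefers:

import numpy as np
from scipy.optimize import minimize_scalar

def best_t(f, x_n, delta_x):
    # g(t) = ||f(x_n - t*delta_x)||_2 turns the n-D line search into a 1-D one;
    # any scalar minimizer can then pick t in [0, 1]
    g = lambda t: np.linalg.norm(f(x_n - t * delta_x))
    return minimize_scalar(g, bounds=(0.0, 1.0), method='bounded').x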
@miro.s
2 years ago
It's difficult to follow when you present partial results and don't give people space to think about the solution steps.
Comments: 30