Finding the minimum distance from a point to a curve

Question:

I need to find the minimum distance from a point (X, Y) to a curve defined by four coefficients C0, C1, C2, C3, i.e. y = C0 + C1*X + C2*X^2 + C3*X^3

I have used a numerical approach: np.linspace and np.polyval to generate discrete (X, Y) points on the curve, then shapely's Point, MultiPoint and nearest_points to find the nearest point, and finally np.linalg.norm to compute the distance.

This is a numerical approach by discretizing the curve.

My question is how can I find the distance by analytical methods and code it?

Asked By: KansaiRobot


Answers:

Problem definition

For the sake of simplicity, let's use P for the point and Px and Py for its coordinates, and let's call the curve's function f(x).

Another way to look at your problem: you're trying to find an x that minimizes the distance between P and the point (x, f(x)).

The problem can then be formulated as a minimization problem.

Find x that minimizes (x-Px)² + (f(x)-Py)²

(Note that we can drop the square root that would normally be there: the square root is a monotonic function, so it doesn't change where the optimum is.)
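The monotonicity claim is easy to check numerically. A minimal sketch with hypothetical values (the curve and point below are illustrative, not from the question):

```python
import numpy as np

# Hypothetical curve and point, just to illustrate the argument
f = lambda x: x**2
p_x, p_y = 3.0, 1.0

xs = np.linspace(-5, 5, 100_001)
sq = (xs - p_x)**2 + (f(xs) - p_y)**2   # squared distance
dist = np.sqrt(sq)                      # actual distance

# sqrt is monotonic, so both functions attain their minimum at the same x
print(xs[np.argmin(sq)] == xs[np.argmin(dist)])  # True
```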

Analytical solution

The fully analytical way to solve this would be a pen-and-paper approach: expand the expression, compute its derivative, and find where the derivative vanishes to locate the extremums. (Doing this analytically is a lengthy process; @Yves Daoust addresses it in his answer. Either do that, or use a numerical solver for this step, e.g. a variant of Newton's method.) Then check whether each extremum is a maximum or a minimum, for example by evaluating the function at a few nearby points and seeing how it evolves. From this you can find the global minimum, which gives you the x you're looking for. But developing all of this is content probably better suited for Mathematics.
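The derivative-root step can also be done half-numerically: build the squared-distance polynomial, differentiate it, and hand the derivative to a polynomial root-finder. A sketch using numpy.polynomial (the cubic coefficients and point are dummy values, not from the question):

```python
import numpy as np

# Dummy cubic f(x) = C0 + C1*x + C2*x^2 + C3*x^3 and point P
C0, C1, C2, C3 = -1, 5, -5, 6
p_x, p_y = 12, -7

f = np.polynomial.Polynomial([C0, C1, C2, C3])

# Squared distance d(x) = (x - Px)^2 + (f(x) - Py)^2, a degree-6 polynomial
d = np.polynomial.Polynomial([-p_x, 1])**2 + (f - p_y)**2
d_prime = d.deriv()  # degree 5: its real roots are the candidate extremums

roots = d_prime.roots()
real_roots = roots[np.isreal(roots)].real  # keep only the real candidates

# The global minimum is the candidate with the smallest squared distance
best_x = min(real_roots, key=d)
print("Closest point is x=", best_x, " y=", f(best_x))
```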

Optimization approach

So instead I’m gonna suggest a solution that uses numerical minimization that doesn’t use a sampling approach. You can use the minimize function from scipy to solve the minimization problem.

from math import pow
from scipy.optimize import minimize

# Define function
C0 = -1
C1 = 5
C2 = -5
C3 = 6
f = lambda x: C0 + C1 * x + C2 * pow(x, 2) + C3 * pow(x, 3)

# Define function to minimize
p_x = 12
p_y = -7
min_f = lambda x: pow(x-p_x, 2) + pow(f(x) - p_y, 2)

# Minimize
min_res = minimize(min_f, 0)  # Starting point; for curves with several local minima it can affect which one is found

# Show result
print("Closest point is x=", min_res.x[0], " y=", f(min_res.x[0]))

Here I used your function with dummy values but you could use any function you want with this approach.

Answered By: LNiederha

You need to differentiate (x - X)² + (C0 + C1 x + C2 x² + C3 x³ - Y)² and find the roots of the derivative. But this derivative is a quintic polynomial (fifth degree) with general coefficients, so the Abel-Ruffini theorem fully applies, meaning that there is no solution in radicals.

There is a known solution anyway, by reducing the equation (via a lengthy substitution process) to the form x^5 - x + t = 0, known as the Bring-Jerrard normal form, and getting the solutions (called ultraradicals) by means of Hermite's elliptic functions or by evaluating the roots via Taylor series.


Personal note:

This approach is virtually foolish, as there exist ready-made numerical polynomial root-finders, and the ultraradical function is not easy to evaluate.

Anyway, looking at the plot of x^5 - x, one can see that it is intersected once or three times by any horizontal line, and finding an interval with a change of sign is easy. With that, you can obtain an accurate root by dichotomy (bisection); far from the extrema, Newton's method will also converge easily.

After having found this root, you can deflate the polynomial to a quartic, for which explicit formulas by radicals are known.
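This bisection-then-deflation route can be sketched with numpy.polynomial; the cubic and point below are dummy values (assumptions, not from the answer), and the deflation uses polynomial divmod:

```python
import numpy as np

# Dummy cubic and point, same shape as in the question
C0, C1, C2, C3 = -1, 5, -5, 6
p_x, p_y = 12, -7

f = np.polynomial.Polynomial([C0, C1, C2, C3])
# Derivative of the squared distance: the quintic whose roots we want
quintic = (np.polynomial.Polynomial([-p_x, 1])**2 + (f - p_y)**2).deriv()

# 1. Bracket a sign change; an odd-degree polynomial guarantees a real root
a, b = -100.0, 100.0
for _ in range(200):  # dichotomy: halve the bracket until it collapses
    m = (a + b) / 2
    if quintic(a) * quintic(m) <= 0:
        b = m
    else:
        a = m
root = (a + b) / 2

# 2. Deflate the quintic by (x - root), leaving a quartic
quartic, remainder = divmod(quintic, np.polynomial.Polynomial([-root, 1]))

# 3. The quartic could now be solved by radicals; here numerically for brevity
print("Real root found by bisection:", root)
print("Remaining quartic roots:", quartic.roots())
```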

Answered By: Yves Daoust