Fractal dimension with the Mass-radius method

Question:

I have some images for which I want to calculate the mass-radius dimension to determine the fractal characteristics of the image. Here is one of them:

[Image: Ottawa.png]

The mass dimension defines the relationship between the area located within a certain radius and
the size of this radius (or box). This is performed for various radii, beginning at the center of mass. The mass dimension can be estimated from the log-log plot of the area (the number of non-black pixels) as a function of the radius (in pixels).
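As a sanity check for this procedure (a synthetic example of my own, not from the question): a filled disk is a two-dimensional object, so the log-log regression described above should produce a slope close to 2.

```python
import numpy as np
from scipy.stats import linregress

# Build a filled disk on a grid and measure its mass dimension.
size = 201
cy = cx = size // 2
yy, xx = np.mgrid[:size, :size]
dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
disk = dist <= 90                  # the "non-black" pixels

# Count pixels within each radius and regress log(count) on log(radius).
radii = np.arange(5, 90)
counts = np.array([(disk & (dist < r)).sum() for r in radii])
slope, intercept, r_val, p_val, stderr = linregress(np.log(radii),
                                                    np.log(counts))
# slope estimates the mass dimension; for a filled disk it is close to 2
```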

My Step-by-Step:

  • Find the center of mass
  • Create a circle of radius r and count the number of non-black pixels inside it
  • Append the logs of those values to two lists (R, N(R))
  • Iterate until the whole image has been analyzed
  • Do a linear regression between R and N(R)
  • The coefficient a from ax+b is the dimension

I’m using the following code to calculate the fractal dimension:

from scipy.stats import linregress
from skimage import io
import matplotlib.pyplot as plt
import numpy as np

# Verify if a pixel is black
def test_pixel(l):
    return (l == np.array([0, 0, 0])).all()

# Find the center of the image
def center_of_mass(img):
    i, j, f = img.shape
    x,y,p = 0,0,0
    for k1 in range(1, i - 1):
        for k2 in range(1, j - 1):
            if not test_pixel(img[k1][k2]):
                p += 1
                x += k2
                y += k1
    return int(x / p), int(y / p)

# Return two lists of logs: the radii and the number of non-black pixels within each radius
def D_R(img):
    (x, y) = center_of_mass(img)
    i, j, f = img.shape
    k = int(max(j - x, i - y, x, y)) # Stop when the image is analyzed
    X, Y, A, p = [], [], 0, 0
    for t in range(1, k):
        for k1 in range(-t, t+1):
            for k2 in range(-t, t+1):
                # Verify if (x+k2, y+k1) is in the image
                if -1 < (x + k2) < j and -1 < (y + k1) < i:
                    # Test if (x+k2, y+k1) is not black and is in the circle of radius t
                    if A == 0 and k1 ** 2 + k2 ** 2 < t ** 2 and not test_pixel(img[k1+y][k2+x]):
                        p += 1
                    elif A ** 2 <= k1 ** 2 + k2 ** 2 < t ** 2 and not test_pixel(img[k1+y][k2+x]):
                        p += 1
        A = t
        if p != 0:
            Y.append(np.log(p))
            X.append(np.log(t))
    return X, Y

I = io.imread('Ottawa.png')
X, Y = D_R(I)
a, b, r, p_value, std_err = linregress(X, Y)
print(a) # Corresponds to the Dimension
print(b)
print(r**2)

However, this code is really slow.

How can I make this code more efficient?

Asked By: Jotadiolyne Dicci


Answers:

The reason your code is so slow is because of your three nested loops. I imagine you’ve worked this out yourself, but loops (and especially nested loops) are a bad idea in Python. Instead, you should try to use Numpy broadcasting.

Just one detail to watch out for: because you’re working with a PNG, you may end up with four channels (one of them being the alpha channel), which may not be what you intended.

If you have a loop whose output only depends on the iteration variable, and that variable is a simple range, it is frequently possible to replace this with np.arange. If you have nested loops, you can broadcast np.aranges together to produce 2D (or nD) arrays which replicate your for loops. In this case, it’s fairly easy to do this with k1 and k2, eliminating the two inner loops:

k1 = np.arange(-t, t+1)[:, np.newaxis]
k2 = np.arange(-t, t+1)

You can then convert most of your if/elif section to boolean maths:

r = k1**2 + k2**2
x_k2 = x + k2
y_k1 = y + k1
p_mat = (-1 < x_k2) & (x_k2 < j) & (-1 < y_k1) & (y_k1 < i) & ((A == 0) | (A**2 <= r)) & (r < t**2)

Note that here r is a 2D array, which causes all the other operations/arrays to broadcast to the same shape.
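A tiny standalone shape check (toy values of my own) shows how this broadcast works:

```python
import numpy as np

t = 2
k1 = np.arange(-t, t + 1)[:, np.newaxis]  # column vector, shape (5, 1)
k2 = np.arange(-t, t + 1)                 # row vector, shape (5,)
r = k1**2 + k2**2                         # broadcasts to a (5, 5) grid
# r[2, 2] is the centre (squared distance 0);
# r[0, 0] is a corner (4 + 4 = 8)
```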

The only real complication of this approach is dealing with the image itself, since indexing the image near its edges doesn’t always produce a consistently sized output. It’s a bit complicated because of the edge cases, but this is how it turns out:

cropped_img = img[max(y-t, 0):y+t+1,max(x-t, 0):x+t+1]
offset_y = max(-(y-t), 0)
offset_x = max(-(x-t), 0)
p_mat = p_mat[offset_y:offset_y+cropped_img.shape[0], offset_x:offset_x+cropped_img.shape[1]] & (np.any(cropped_img != 0, axis=-1))
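To see what the offsets do, here is a standalone toy case (my own numbers) where the window of radius t sticks out past the top-left corner, so the crop comes back smaller than the full (2t+1, 2t+1) window:

```python
import numpy as np

img = np.arange(5 * 6).reshape(5, 6)   # toy 5x6 single-channel "image"
y, x, t = 1, 1, 3                      # window centre near the top-left corner

# Clamp the slice to the image bounds, as in the answer's code.
cropped = img[max(y - t, 0):y + t + 1, max(x - t, 0):x + t + 1]
offset_y = max(-(y - t), 0)            # rows of the window that fell outside
offset_x = max(-(x - t), 0)            # columns that fell outside
# The full window would be 7x7; the crop covers rows 0..4 and cols 0..4,
# so the mask must be sliced starting at (offset_y, offset_x) = (2, 2).
```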

If you keep only the t loop, this already improves performance significantly:

def D_R(img):
    (x, y) = center_of_mass(img)
    i, j, f = img.shape
    k = int(max(j - x, i - y, x, y)) # Stop when the image is analyzed
    X, Y, A, p = [], [], 0, 0
    for t in range(1, k):
        k1 = np.arange(-t, t+1)[:, np.newaxis]
        k2 = np.arange(-t, t+1)
        r = k1**2 + k2**2
        x_k2 = x + k2
        y_k1 = y + k1
        p_mat = (-1 < x_k2) & (x_k2 < j) & (-1 < y_k1) & (y_k1 < i) & ((A == 0) | (A**2 <= r)) & (r < t**2)
        cropped_img = img[max(y-t, 0):y+t+1,max(x-t, 0):x+t+1]
        offset_y = max(-(y-t), 0)
        offset_x = max(-(x-t), 0)
        p_mat = p_mat[offset_y:offset_y+cropped_img.shape[0], offset_x:offset_x+cropped_img.shape[1]] & (np.any(cropped_img != 0, axis=-1))
        p += p_mat.sum()
        A = t
        if p != 0:
            Y.append(np.log(p))
            X.append(np.log(t))
    return X, Y

Note that your center_of_mass function is still really slow. You could try to apply what I said earlier about replacing loops with np.arange to improve it.
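For instance, here is a sketch of a vectorised centre of mass (the function name is mine). Note one small difference from the original: the original loops run over range(1, i-1) and range(1, j-1), so they skip a one-pixel border, while this version considers every pixel.

```python
import numpy as np

def center_of_mass_fast(img):
    # Boolean mask of non-black pixels (any channel nonzero).
    mask = np.any(img != 0, axis=-1)
    # Coordinates of all non-black pixels, found without Python loops.
    ys, xs = np.nonzero(mask)
    # Same (x, y) ordering and int truncation as the original function.
    return int(xs.mean()), int(ys.mean())

# Toy usage: a 10x10 black image with a small white patch.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[3:5, 6:8] = 255
print(center_of_mass_fast(img))
```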

There is a further improvement you could work on: completely removing the t loop. If you think about it, your t loop is basically defining an ever-growing circle centred on the centre of mass and performing certain computations on its edge. If you think of t as a third dimension, this produces the shell of a cone. Each value in the cone is combined with the value in the underlying image with an &, and the result is cumulatively summed. Something like this:

def D_R(img):
    (x, y) = center_of_mass(img)
    i, j, f = img.shape
    k = int(max(j - x, i - y, x, y)) # Stop when the image is analyzed
    X, Y, A, p = [], [], 0, 0
    cone = k - 1 - np.sqrt((np.arange(i)[:, np.newaxis] - y) ** 2 + (np.arange(j) - x) ** 2)
    cone = np.maximum(cone.astype(np.int64), 0)
    cone = cone >= (k - 1 - np.arange(k).reshape((-1, 1, 1)))
    cone = cone[1:]
    cone[1:] &= ~cone[:-1]
    p_vec = np.any(img != 0, axis=-1) & cone
    p_vec = np.cumsum(np.sum(p_vec, (1, 2)))
    return np.log(np.arange(1, k)[p_vec != 0]).tolist(), np.log(p_vec[p_vec != 0]).tolist()

However, while this gets similar results to the original, it isn’t absolutely accurate, so I assume I’ve missed some detail. It’s close though, so you could work on it.

Building the function in this way has great potential for applying other interesting things like np.einsum, since an & operation is equivalent to a boolean multiplication and an np.any operation is equivalent to a sum along that axis, the two operations that np.einsum is built for.
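A small standalone illustration of that equivalence (my own toy arrays; the booleans are cast to integers so einsum sums counts rather than boolean values):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 2, (4, 5)).astype(bool)
b = rng.integers(0, 2, (4, 5)).astype(bool)

# Counting with & followed by a sum...
loop_count = (a & b).sum()
# ...is the same as an elementwise product summed over both axes,
# which is exactly what this einsum expression computes.
einsum_count = np.einsum('ij,ij->', a.astype(np.int64), b.astype(np.int64))
```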

Obviously in all these cases there is a tradeoff between compute time and memory. The original implementation leans heavily on the former, while my last version requires a lot of the latter. Though if you rewrite it using np.einsum you could get fairly good performance with low memory usage.

PS: In this line:

k = int(max(j - x, i - y, x, y)) # Stop when the image is analyzed

Are you sure this is correct? It seems to me you intended to cover the entire image, however because you’re working with circles, this misses all the corners of the image. Maybe this is what you intended, maybe not. If not, I suggest you mean:

k = int((max(j-x, x)**2 + max(i-y, y)**2)**0.5)

But I’m only guessing.
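To make the difference concrete (toy numbers of my own): for a 100×80 image with the centre of mass at (x, y) = (30, 20), the original formula stops at the farthest edge, while the corrected one reaches the farthest corner.

```python
j, i, x, y = 100, 80, 30, 20

# Original: distance to the farthest image edge.
k_old = int(max(j - x, i - y, x, y))
# Suggested fix: Euclidean distance to the farthest corner,
# so the growing circles cover the whole image.
k_new = int((max(j - x, x) ** 2 + max(i - y, y) ** 2) ** 0.5)
```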

Edit: Regarding my suggestion of using einsum, turns out this doesn’t really work: the problem is that the array that uses most memory is the cone, which can’t be converted to einsum (at least, I can’t see any way).

However, there are other methods to reduce memory and time: instead of converting the first step of the cone to a boolean array, just use the values directly. This is a slight compromise in terms of accuracy (compared to your original implementation) but a massive leap in performance and memory usage (I can’t even see the memory usage, and it takes a quarter of the time of the previous best). It uses a function called np.bincount:

def D_R(img):
    (x, y) = center_of_mass(img)
    i, j, f = img.shape
    k = int(max(j - x, i - y, x, y))  # Stop when the image is analyzed
    cone = k - 1 - np.sqrt((np.arange(i)[:, np.newaxis] - y) ** 2
                           + (np.arange(j) - x) ** 2)
    cone = np.maximum(cone.round().astype(np.int64), 0)
    p_vec = np.any(img != 0, axis=-1) * cone
    p_vec = np.bincount(p_vec.ravel(), minlength=k)[1:][::-1]
    p_vec = np.cumsum(p_vec)
    return np.log(np.arange(1, k)[p_vec != 0]).tolist(), np.log(p_vec[p_vec != 0]).tolist()

The reason this is different is again probably due to the exact thickness of the "circles" you’re testing. In this approach, the circles are very thin, since they can never overlap.
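For readers unfamiliar with it, np.bincount tallies how many times each non-negative integer occurs in an array, which is what replaces the per-radius counting loop above (standalone toy example):

```python
import numpy as np

# Each entry of counts[v] is how many times the value v appears in labels.
labels = np.array([0, 2, 1, 2, 2, 0, 1])
counts = np.bincount(labels, minlength=5)
# counts -> [2, 2, 3, 0, 0]: two 0s, two 1s, three 2s, and
# minlength pads the result with zeros up to length 5.
```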

Answered By: David