# Numpy – Determine nearly equivalent floats before finding max value and index in 1D array

## Question:

I have been stuck on this problem for hours.

Let’s say we have a 1D array of floats: `[0.12345678, 0.23456788, 0.23456789]`

I just need to find the max value of the array and its index. I know I can do that with `np.max` and `np.argmax`:

```
res = np.max(array)
index = np.argmax(array)
```

In this case, the answer would be `[0.23456789, 2]`.

But what happens when I need to check for near equivalence first? Basically, `0.23456788` and `0.23456789` are nearly equivalent, so only the smallest instance of that value is used. Therefore, the answer would be `[0.23456788, 1]`.

How exactly do I do this without using for or while loops? I figured I need to use `np.isclose()`, but where do I use it? I'm kind of new to numpy, so any help is appreciated.

Example 1:

```
Array: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.9, 0.9, 0.9]
Expected Answer: [0.9, 7]
Explanation: 0.9 is the max value, and it first appears in index 7
```

Example 2:

```
Array: [0.11111111, 0.22222222, 0.33333333, 0.44444444, 0.44444445, 0.12345678]
Expected Answer: [0.44444444, 3]
Explanation: 0.44444445 is the max, but because 0.44444444 is nearly equivalent and we want the lowest value of that instance, the answer becomes [0.44444444, 3]
```

## Answers:

Just round the values to the number of decimal places you care about. That number is application dependent and only you can determine what will work for "close enough".

```
>>> import numpy as np
>>> data = np.array([0.12345678, 0.23456788, 0.23456789])
>>> decimals = 3
>>> data.round(decimals)
array([0.123, 0.235, 0.235])
>>> np.max(data.round(decimals)), np.argmax(data.round(decimals))
(0.235, 1)
```
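If you'd rather use `np.isclose()` as the question suggests, one loop-free sketch is to mask out every value that is *not* close to the maximum, then take the `argmin` of what remains. The tolerances below are numpy's defaults (`rtol=1e-5`, `atol=1e-8`); you'd tune them to your own definition of "close enough":

```python
import numpy as np

data = np.array([0.11111111, 0.22222222, 0.33333333,
                 0.44444444, 0.44444445, 0.12345678])

# Boolean mask: which entries are nearly equal to the max?
close_to_max = np.isclose(data, data.max())

# Replace everything else with +inf, then argmin picks the
# smallest of the near-max values (first occurrence on ties).
candidates = np.where(close_to_max, data, np.inf)
index = int(np.argmin(candidates))
value = data[index]
print(value, index)  # 0.44444444 3
```

On Example 1 this also gives `[0.9, 7]`, since the three tied `0.9` entries are all close to the max and `argmin` returns the first of them.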