Filter data frame based on absolute difference
Question:
I have the following data frame:
import pandas as pd
d1 = {'id': ["car", "car", "car", "plane", "plane", "car"], 'value': [1, 1.2, 5, 6, 1.3, 0.8]}
df1 = pd.DataFrame(data=d1)
df1
id value
0 car 1.0
1 car 1.2
2 car 5.0
3 plane 6.0
4 plane 1.3
5 car 0.8
I want to filter rows out if all differences for a value are smaller than 1, so I get the following data frames:
d2 = {'id': ["car", "car", "car"], 'value': [1, 1.2, 0.8]}
df2 = pd.DataFrame(data=d2)
df2
id value
0 car 1.0
1 car 1.2
5 car 0.8
and
d3 = {'id': ["car", "plane", "plane"], 'value': [5, 6, 1.3]}
df3 = pd.DataFrame(data=d3)
df3
id value
2 car 5.0
3 plane 6.0
4 plane 1.3
I tried the following function to save all values in a temporary list, but it did not work properly:
unique_list = []
def unique_2(df):
    for id_1, value_1 in zip(df["id"], df["value"]):
        for id_2, value_2 in zip(df["id"], df["value"]):
            if id_1 == id_2:
                if abs(value_1 - value_2) > 0.01:
                    x = True
                    unique_list.append(x)
                else:
                    x = False
                    unique_list.append(x)
            else:
                pass
Answers:
Get the differences of column value with numpy broadcasting, take absolute values, compare as less than 1, and set the diagonal to False:
import numpy as np

a = df1['value'].to_numpy()
m = np.abs(a - a[:, None]) < 1
np.fill_diagonal(m, False)
print (m)
[[False True False False False True]
[ True False False False False True]
[False False False False False False]
[False False False False False False]
[False False False False False False]
[ True True False False False False]]
Last, filter rows with at least one True per row:
mask = np.any(m, axis=1)
df11, df22 = df1[mask], df1[~mask]
print (df11)
id value
0 car 1.0
1 car 1.2
5 car 0.8
print (df22)
id value
2 car 5.0
3 plane 6.0
4 plane 1.3
You can use a custom groupby
to split the data:
grp = df1['value'].sort_values().diff().gt(1).cumsum()
out = [g for _, g in df1.groupby(grp)]
Note that it wasn’t clear whether you want to use < 1 or ≤ 1 as the threshold. If you want < 1, replace gt(1) with ge(1).
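The effect of the two thresholds can be seen on a small toy series (a sketch; the values here are made up for illustration and are not from the question):

```python
import pandas as pd

# Toy series with a gap of exactly 1 between 1.0 and 2.0.
s = pd.Series([1.0, 2.0, 4.0])

# gt(1): a new group starts only when the sorted gap is strictly > 1,
# so rows exactly 1 apart stay together (within-group gaps <= 1).
grp_le = s.sort_values().diff().gt(1).cumsum()

# ge(1): a gap of exactly 1 already splits, so within-group gaps are < 1.
grp_lt = s.sort_values().diff().ge(1).cumsum()

print(grp_le.tolist())  # [0, 0, 1]
print(grp_lt.tolist())  # [0, 1, 2]
```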
Output:
[ id value
0 car 1.0
1 car 1.2
5 car 0.8,
id value
2 car 5.0
3 plane 6.0,
id value
4 plane 1.3]
Intermediate grp:
5 0
0 0
1 0
2 1
3 1
4 2
Name: value, dtype: int64
grouping loners together
Assuming a different interpretation: if you want to group loners (rows that have no other row within 1 of them) together, use:
grp = df1['value'].sort_values().diff().ge(1).cumsum()
grp = grp.mask(df1.groupby(grp)['value'].transform('size').eq(1), 'alone')
out = [g for _, g in df1.groupby(grp)]
Note that we’re only grouping rows that are less than 1 apart.
Output:
[ id value
0 car 1.0
1 car 1.2
5 car 0.8,
id value
2 car 5.0
3 plane 6.0
4 plane 1.3]
Intermediate grp:
5 0
0 0
1 0
2 alone
3 alone
4 alone
Name: value, dtype: object
by ID:
grp = df1.sort_values(by='value').groupby('id', group_keys=False)['value'].apply(lambda g: g.diff().gt(1).cumsum())
grp = grp.mask(df1.groupby(['id', grp])['value'].transform('size').eq(1), 'alone')
out = [g for _, g in df1.groupby(grp)]
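The intermediate grp for this by-id variant is not shown above; it can be reproduced with the following sketch, assuming the same df1 (note the explicit ['value'] selection before transform, so that mask receives a boolean Series):

```python
import pandas as pd

d1 = {'id': ["car", "car", "car", "plane", "plane", "car"],
      'value': [1, 1.2, 5, 6, 1.3, 0.8]}
df1 = pd.DataFrame(data=d1)

# Within each id, start a new group whenever the sorted gap exceeds 1.
grp = (df1.sort_values(by='value')
          .groupby('id', group_keys=False)['value']
          .apply(lambda g: g.diff().gt(1).cumsum()))

# Relabel singleton (id, grp) combinations as 'alone'.
sizes = df1.groupby(['id', grp])['value'].transform('size')
grp = grp.mask(sizes.eq(1), 'alone')

print(grp.sort_index())  # rows 0, 1, 5 -> 0; rows 2, 3, 4 -> 'alone'
```

Rows 0, 1 and 5 form the tight car cluster; every other row is a singleton within its id and gets relabelled 'alone', which is why those three rows are emitted as one combined group.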
Output:
[ id value
0 car 1.0
1 car 1.2
5 car 0.8,
id value
2 car 5.0
3 plane 6.0
4 plane 1.3]
I have no solution because the logic is unclear:
I want to filter rows out, if all differences for a value are smaller than 1
def debug(sr):
    print(f'[{sr.name}]')
    val = np.abs(sr.values - sr.values[:, None])
    print(pd.DataFrame(val, sr.tolist(), sr.tolist()))
    print()
    return np.max(val)

df1.groupby('id')['value'].transform(debug)
Output:
[car]
1.0 1.2 5.0 0.8
1.0 0.0 0.2 4.0 0.2 # one difference > 1
1.2 0.2 0.0 3.8 0.4 # one difference > 1
5.0 4.0 3.8 0.0 4.2 # 3 differences > 1
0.8 0.2 0.4 4.2 0.0 # 1 difference > 1
[plane]
6.0 1.3
6.0 0.0 4.7
1.3 4.7 0.0
0 4.2 # car group difference > 1
1 4.2
2 4.2
3 4.7 # plane group difference > 1
4 4.7
5 4.2
Name: value, dtype: float64
As you can see, for each id group there is at least one pair of values whose difference is greater than 1, so you can’t split a group into two parts. You can only assign each whole group to one of the two lists:
- group1: all absolute differences are lower than or equal to 1
- group2: at least one difference is greater than 1
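Under that reading, the whole-group routing could be sketched as follows (the helper name max_pairwise_diff is my own, not from the answers above):

```python
import pandas as pd
import numpy as np

d1 = {'id': ["car", "car", "car", "plane", "plane", "car"],
      'value': [1, 1.2, 5, 6, 1.3, 0.8]}
df1 = pd.DataFrame(data=d1)

def max_pairwise_diff(sr):
    # Largest absolute difference between any two values in the group.
    a = sr.to_numpy()
    return np.abs(a - a[:, None]).max()

# True where the row's whole id group has all pairwise differences <= 1.
tight = df1.groupby('id')['value'].transform(max_pairwise_diff).le(1)
group1, group2 = df1[tight], df1[~tight]
print(len(group1), len(group2))  # with this data both ids exceed 1, so 0 6
```

With the question's data, both the car group (max difference 4.2) and the plane group (max difference 4.7) exceed 1, so every row lands in group2.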