Merge pandas DataFrame on column of float values

Question:

I have two data frames that I am trying to merge.

Dataframe A:

    col1    col2    sub    grade
0   1       34.32   x       a 
1   1       34.32   x       b
2   1       34.33   y       c
3   2       10.14   z       b
4   3       33.01   z       a

Dataframe B:

    col1    col2    group   ID
0   1       34.32   t       z 
1   1       54.32   s       w
2   1       34.33   r       z
3   2       10.14   q       z
4   3       33.01   q       e

I want to merge on col1 and col2. I’ve been using pd.merge with the following syntax:

pd.merge(A, B, how = 'outer', on = ['col1', 'col2'])

However, I think I am running into issues joining on the float values of col2 since many rows are being dropped. Is there any way to use np.isclose to match the values of col2? When I reference the index of a particular value of col2 in either dataframe, the value has many more decimal places than what is displayed in the dataframe.
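For illustration (the 1e-13 offset below is hypothetical), this is the kind of invisible difference that makes an exact merge key fail even though the dataframe displays both values as 34.32:

import numpy as np

a = 34.32          # what the dataframe displays
b = 34.32 + 1e-13  # stored value carrying a tiny bit of float noise
print(a == b)            # False -> an exact-equality join silently drops the row
print(np.isclose(a, b))  # True  -> a tolerance-based comparison still matches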

I would like the result to be:

    col1   col2   sub   grade   group   ID
0   1      34.32  x     a       t       z
1   1      34.32  x     b       t       z
2   1      54.32  NaN   NaN     s       w
3   1      34.33  y     c       r       z
4   2      10.14  z     b       q       z
5   3      33.01  z     a       q       e
Asked By: Megan


Answers:

You can use a little hack: multiply the float column by some constant like 100, 1000, ..., convert it to int, merge, and finally divide by the same constant:

import numpy as np
import pandas as pd

N = 100
# thank you koalo for the comment
A.col2 = np.round(A.col2 * N).astype(int)
B.col2 = np.round(B.col2 * N).astype(int)
df = pd.merge(A, B, how='outer', on=['col1', 'col2'])
df.col2 = df.col2 / N
print(df)
   col1   col2  sub grade group ID
0     1  34.32    x     a     t  z
1     1  34.32    x     b     t  z
2     1  34.33    y     c     r  z
3     2  10.14    z     b     q  z
4     3  33.01    z     a     q  e
5     1  54.32  NaN   NaN     s  w
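A variant of the same trick (a sketch; the temporary _key column name is arbitrary) merges on a throwaway integer key so the original float columns in A and B are left untouched:

import numpy as np
import pandas as pd

N = 100  # enough when col2 carries at most two decimal places

A_key = A.drop(columns='col2').assign(_key=np.round(A['col2'] * N).astype(int))
B_key = B.drop(columns='col2').assign(_key=np.round(B['col2'] * N).astype(int))

df = pd.merge(A_key, B_key, how='outer', on=['col1', '_key'])
df['col2'] = df['_key'] / N   # recover the (rounded) float value for display
df = df.drop(columns='_key')
print(df)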
Answered By: jezrael

I had a similar problem where I needed to identify matching rows with thousands of float columns and no identifier. This case is difficult because values can vary slightly due to rounding.

In this case, I used scipy.spatial.distance.cosine to get the cosine similarity between rows.

from scipy import spatial

threshold = 0.99999
# row1 and row2 are 1-D arrays of float values, one from each frame
similarity = 1 - spatial.distance.cosine(row1, row2)

if similarity >= threshold:
    pass  # it's a match
else:
    pass  # move on and check another row pair

This won’t work if you have duplicate or very similar rows, but with a large number of float columns and not too many rows it works well.
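As a slightly fuller sketch of that approach (the helper name match_rows and the brute-force double loop are illustrative, and both frames are assumed to contain only numeric columns), the row pairing could look like this:

import pandas as pd
from scipy import spatial

def match_rows(A: pd.DataFrame, B: pd.DataFrame, threshold: float = 0.99999):
    """Return (index_in_A, index_in_B) pairs whose rows are nearly identical."""
    matches = []
    for i, row_a in A.iterrows():
        for j, row_b in B.iterrows():
            similarity = 1 - spatial.distance.cosine(row_a.values, row_b.values)
            if similarity >= threshold:
                matches.append((i, j))
                break  # stop at the first sufficiently similar row in B
    return matches

The double loop is quadratic in the number of rows, which is why this only suits frames with relatively few rows.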

Answered By: Sesquipedalism

Assuming that the column col2 has n decimal places:

import numpy as np

n = 2  # e.g. two decimal places, as in the sample data
A.col2 = np.round(A.col2, decimals=n)
B.col2 = np.round(B.col2, decimals=n)
df = A.merge(B, left_on=['col1', 'col2'], right_on=['col1', 'col2'])
Answered By: Stavros Tranidis