Optimizing an Excel to Pandas import and transformation from wide to long data

Question:

I need to import and transform xlsx files. They are written in a wide format, and for each row I need to repeat the common cell values and pair them with the values of each activity group in that same row:

[Edit: changed format to represent the more complex requirements]

Source format

ID  Property  Activity1name  Activity1timestamp  Activity2name  Activity2timestamp
1   A         a              1.1.22 00:00        b              2.1.22 10:05
2   B         a              1.1.22 03:00        b              5.1.22 20:16

Target format

ID  Property  Activity  Timestamp
1   A         a         1.1.22 00:00
1   A         b         2.1.22 10:05
2   B         a         1.1.22 03:00
2   B         b         5.1.22 20:16

The following code works fine to transform the data, but the process is really, really slow:

def transform(data_in):
    data = pd.DataFrame(columns=columns)
    # Determine number of processes entered in a single row of the original file
    steps_per_row = int((data_in.shape[1] - (len(columns) - 2)) / len(process_matching) + 1)
    data_in = data_in.to_dict("records") # Convert to dict for speed optimization
    for row_dict in tqdm(data_in): # Iterate over each row of the original file
        new_row = {}
        # Set common columns for each process step
        for column in column_matching:
            new_row[column] = row_dict[column_matching[column]]
        for step in range(0, steps_per_row):
            rep = str(step+1) if step > 0 else ""
            # Iterate for as many times as there are process steps in one row of the original file and
            # set specific columns for each process step, keeping common column values identical for current row
            for column in process_matching:
                new_row[column] = row_dict[process_matching[column]+rep]
            data = data.append(new_row, ignore_index=True) # append dict of new_row to existing data
    data.index.name = "SortKey"
    data[timestamp].replace(r'\.000', '', regex=True, inplace=True) # Remove trailing '.000' (milliseconds) from timestamp # TODO check if works as intended
    data.replace(r'^\s*$', float('NaN'), regex=True, inplace=True) # Replace cells containing only whitespace with NaN
    data.dropna(axis=0, how="all", inplace=True) # Remove empty rows
    data.dropna(axis=1, how="all", inplace=True) # Remove empty columns
    data.dropna(axis=0, subset=[timestamp], inplace=True) # Drop rows with empty Timestamp
    data.fillna('', inplace=True) # Replace NaN values with empty cells
    return data

Obviously, iterating over each row and then over each column is not the right way to use pandas, but I don’t see how this kind of transformation can be vectorized.

I have tried parallelization (modin) and experimented with converting to dicts first, but neither helped. The rest of the script literally just opens and saves the files, so the bottleneck is here.

I would be very grateful for any ideas on how to improve the speed!

Asked By: Johannes Schöck


Answers:

The df.melt function should be able to do this type of operation much faster.

import pandas as pd

df = pd.DataFrame({'ID' : [1, 2],
                   'Property' : ['A', 'B'],
                   'Info1' : ['x', 'a'],
                   'Info2' : ['y', 'b'],
                   'Info3' : ['z', 'c'],
                   })

data=df.melt(id_vars=['ID','Property'], value_vars=['Info1', 'Info2', 'Info3'])
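
For the sample frame above, this should give a long table using pandas’ default variable/value column names, roughly:

print(data)
#    ID Property variable value
# 0   1        A    Info1     x
# 1   2        B    Info1     a
# 2   1        A    Info2     y
# 3   2        B    Info2     b
# 4   1        A    Info3     z
# 5   2        B    Info3     c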

** Edit to address the modified question **
Combine df.melt with a df.pivot_table operation.

import numpy as np
import pandas as pd

# create data
df = pd.DataFrame({'ID' : [1, 2, 3],
                   'Property' : ['A', 'B', 'C'],
                   'Activity1name' : ['a', 'a', 'a'],
                   'Activity1timestamp' : ['1_1_22', '1_1_23', '1_1_24'],
                   'Activity2name' : ['b', 'b', 'b'],
                   'Activity2timestamp' : ['2_1_22', '2_1_23', '2_1_24'],
                   })

# melt dataframe
df_melted = df.melt(id_vars=['ID','Property'], 
             value_vars=['Activity1name', 'Activity1timestamp',
                         'Activity2name', 'Activity2timestamp',],
             )

# merge categories, i.e. Activity1name Activity2name become Activity
df_melted.loc[df_melted['variable'].str.contains('name'), 'variable'] = 'Activity'
df_melted.loc[df_melted['variable'].str.contains('timestamp'),'variable'] = 'Timestamp'

# add category ids (dataframe may need to be sorted before this operation)
u_category_ids = np.arange(1,len(df_melted.variable.unique())+1)
category_ids = np.repeat(u_category_ids,len(df)*2).astype(str)
df_melted.insert(0, 'unique_id', df_melted['ID'].astype(str) +'_'+ category_ids)

# pivot table 
table = df_melted.pivot_table(index=['unique_id', 'ID', 'Property'],
                              columns='variable', values='value',
                              aggfunc=lambda x: ' '.join(x))
table = table.reset_index().drop(['unique_id'], axis=1)
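
With the example data above, table should then come out close to the target format (a rough sketch of the output; the columns axis keeps the name "variable"):

print(table)
# variable  ID Property Activity Timestamp
# 0          1        A        a    1_1_22
# 1          1        A        b    2_1_22
# 2          2        B        a    1_1_23
# 3          2        B        b    2_1_23
# 4          3        C        a    1_1_24
# 5          3        C        b    2_1_24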
Answered By: Pantelis

Using pd.melt, as suggested by @Pantelis, I was able to speed up this transformation enormously. Before, a file with ~13k rows took 4-5 hours on a brand-new ThinkPad X1 – now it takes less than 2 minutes! That’s a speed-up by a factor of roughly 150, just wow. 🙂

Here’s my new code, for inspiration / reference if anyone has a similar data structure:

def transform(data_in):
    # Determine number of processes entered in a single row of the original file
    steps_per_row = int((data_in.shape[1] - len(column_matching)) / len(process_matching) )
    # Specify columns for pd.melt, transforming wide data format to long format
    id_columns = column_matching.values()
    var_names = {"Erledigungstermin Auftragsschrittbeschreibung":data_in["Auftragsschrittbeschreibung"].replace(" ", np.nan).dropna().values[0]}
    var_columns = ["Erledigungstermin Auftragsschrittbeschreibung"]
    for step in range(2, steps_per_row + 1):
        try:
            var_names["Erledigungstermin Auftragsschrittbeschreibung" + str(step)] = data_in["Auftragsschrittbeschreibung" + str(step)].replace(" ", np.nan).dropna().values[0]
        except IndexError:
            var_names["Erledigungstermin Auftragsschrittbeschreibung" + str(step)] = data_in.loc[0, "Auftragsschrittbeschreibung" + str(step)]
        var_columns.append("Erledigungstermin Auftragsschrittbeschreibung" + str(step))
    data = pd.melt(data_in, id_vars=id_columns, value_vars=var_columns, var_name="ActivityName", value_name=timestamp)
    data.replace(var_names, inplace=True) # Replace "Erledigungstermin Auftragsschrittbeschreibung" with ActivityName
    data.sort_values(["Auftrags-\npositionsnummer", timestamp], ascending=True, inplace=True)
    # Improve column names
    data.index.name = "SortKey"
    column_names = {v: k for k, v in column_matching.items()}
    data.rename(mapper=column_names, axis="columns", inplace=True)
    data[timestamp].replace(r'\.000', '', regex=True, inplace=True) # Remove trailing '.000' (milliseconds) from timestamp
    data.replace(r'^\s*$', float('NaN'), regex=True, inplace=True) # Replace cells containing only whitespace with NaN
    data.dropna(axis=0, how="all", inplace=True) # Remove empty rows
    data.dropna(axis=1, how="all", inplace=True) # Remove empty columns
    data.dropna(axis=0, subset=[timestamp], inplace=True) # Drop rows with empty Timestamp
    data.fillna('', inplace=True) # Replace NaN values with empty cells
    return data
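
Note: column_matching, process_matching and timestamp are defined elsewhere in the script. column_matching and process_matching map the desired output column names to the source column headers (process_matching to the base names that repeat once per activity), and timestamp is the name of the output timestamp column. Purely as a hypothetical illustration of their shape (the real mappings depend on the Excel headers), such a configuration could look like:

# Hypothetical configuration for illustration only - the real mappings depend on the source file's headers.
timestamp = "Timestamp"  # name of the timestamp column in the transformed output

column_matching = {  # output column -> source column that is common to the whole row
    "ID": "Auftrags-\npositionsnummer",
    "Property": "Eigenschaft",  # made-up example header
}

process_matching = {  # output column -> base name of the source columns repeated per activity
    "ActivityName": "Auftragsschrittbeschreibung",
    timestamp: "Erledigungstermin Auftragsschrittbeschreibung",
}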
Answered By: Johannes Schöck