How to write pandas' merge_asof equivalent in PySpark

Question: I am trying to reproduce pandas' merge_asof in Spark. Here is a sample example:

```python
from datetime import datetime

df1 = spark.createDataFrame(
    [
        (datetime(2019, 2, 3, 13, 30, 0, 23), "GOOG", 720.5, 720.93),
        (datetime(2019, 2, 3, 13, 30, 0, 23), "MSFT", 51.95, 51.96),
        (datetime(2019, 2, 3, 13, 30, 0, 20), "MSFT", 51.97, 51.98),
        (datetime(2019, 2, 3, 13, 30, 0, 41), "MSFT", 51.99, 52.0),
        (datetime(2019, 2, 3, 13, 30, 0, 48), "GOOG", 720.5, 720.93),
        (datetime(2019, 2, 3, 13, 30, 0, 49), "AAPL", 97.99, 98.01),
        (datetime(2019, 2, 3, 13, 30, 0, 72), "GOOG", 720.5, 720.88),
        (datetime(2019, 2, 3, 13, 30, 0, 75), "MSFT", 52.1, 52.03),
    ],
    ("time", "ticker", "bid", "ask"),
)

df2 = spark.createDataFrame(
    [
        (datetime(2019, 2, 3, 13, 30, 0, 23), "MSFT", 51.95, 75),
        (datetime(2019, 2, 3, 13, 30, 0, 38), "MSFT", 51.95, 155),
        (datetime(2019, 2, 3, 13, 30, 0, 48), "GOOG", 720.77, 100),
        (datetime(2019, 2, 3, 13, 30, 0, 48), "GOOG", 720.92, 100),
        …
```