How can I extract all the instances of a regular expression pattern in PySpark dataframe?
Question:
I have a StringType() column in a PySpark dataframe. I want to extract all the instances of a regexp pattern from that string and put them into a new column of ArrayType(StringType()).
Suppose the regexp pattern is [a-z]*([0-9]*).
Input df:
+-----------+
|stringValue|
+-----------+
|a1234bc123 |
|av1tb12h18 |
|abcd |
+-----------+
Output df:
+-----------+-------------------+
|stringValue|output |
+-----------+-------------------+
|a1234bc123 |['1234', '123'] |
|av1tb12h18 |['1', '12', '18'] |
|abcd |[] |
+-----------+-------------------+
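For reference, the desired per-row result is what Python's re.findall would give with a digits-only pattern (a plain-Python sketch of the target behavior, not Spark code):
import re
re.findall(r'[0-9]+', 'a1234bc123')  # ['1234', '123']
re.findall(r'[0-9]+', 'abcd')        # []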
Answers:
Try using split and array_remove from pyspark.sql.functions:
- Create the test DataFrame:
from pyspark.sql import functions as F
df = spark.createDataFrame([("a1234bc123",), ("av1tb12h18",), ("abcd",)],["stringValue"])
df.show()
The original DataFrame:
+-----------+
|stringValue|
+-----------+
| a1234bc123|
| av1tb12h18|
| abcd|
+-----------+
- Use split to break each string on letters so that only the numbers remain:
df = df.withColumn("mid", F.split('stringValue', r'[a-zA-Z]'))
df.show()
The output:
+-----------+-----------------+
|stringValue| mid|
+-----------+-----------------+
| a1234bc123| [, 1234, , 123]|
| av1tb12h18|[, , 1, , 12, 18]|
| abcd| [, , , , ]|
+-----------+-----------------+
- Finally, use array_remove to drop the empty-string elements:
df = df.withColumn("output", F.array_remove('mid', ''))
df.show()
The final output:
+-----------+-----------------+-----------+
|stringValue| mid| output|
+-----------+-----------------+-----------+
| a1234bc123| [, 1234, , 123]|[1234, 123]|
| av1tb12h18|[, , 1, , 12, 18]|[1, 12, 18]|
| abcd| [, , , , ]| []|
+-----------+-----------------+-----------+
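The two steps can also be collapsed into one expression. Splitting on one or more letters ([a-zA-Z]+) is a minimal sketch of the same approach; array_remove still drops the leading empty string:
df = df.withColumn("output", F.array_remove(F.split('stringValue', r'[a-zA-Z]+'), ''))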
You can use a combination of the regexp_replace and split APIs of the functions module:
import pyspark.sql.types as t
import pyspark.sql.functions as f
l1 = [('anystring',),('a1234bc123',),('av1tb12h18',)]
df = spark.createDataFrame(l1).toDF('col')
df.show()
+----------+
| col|
+----------+
| anystring|
|a1234bc123|
|av1tb12h18|
+----------+
Now replace each match of the regex with its first capture group followed by "," and then split on ",". Here $1 refers to the value captured by group 1, so it is blank when the match contains only letters.
e.g. for the match 'anystring':
$0 = 'anystring'
$1 = ''
dfl1 = df.withColumn('temp', f.split(f.regexp_replace("col", "[a-z]*([0-9]*)", "$1,"), ","))
dfl1.show()
+----------+---------------+
| col| temp|
+----------+---------------+
| anystring| [, , ]|
|a1234bc123|[1234, 123, , ]|
|av1tb12h18|[1, 12, 18, , ]|
+----------+---------------+
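The trailing empty elements appear because the pattern also matches the empty string, so each zero-width match still contributes a ",". A plain-Python sketch of the same substitution (Python 3.7+ zero-width-match semantics, which agree with Java's here):
import re
re.sub(r'[a-z]*([0-9]*)', r'\1,', 'a1234bc123')  # '1234,123,,'
'1234,123,,'.split(',')  # ['1234', '123', '', '']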
Spark <2.4
Use a UDF to drop the empty strings from the array:
def func_drop_from_array(arr):
    return [x for x in arr if x != '']
drop_from_array = f.udf(func_drop_from_array, t.ArrayType(t.StringType()))
dfl1.withColumn('final', drop_from_array('temp')).show()
+----------+---------------+-----------+
| col| temp| final|
+----------+---------------+-----------+
| anystring| [, , ]| []|
|a1234bc123|[1234, 123, , ]|[1234, 123]|
|av1tb12h18|[1, 12, 18, , ]|[1, 12, 18]|
+----------+---------------+-----------+
Spark >=2.4
Use array_remove
dfl1.withColumn('final', f.array_remove('temp','')).show()
+----------+---------------+-----------+
| col| temp| final|
+----------+---------------+-----------+
| anystring| [, , ]| []|
|a1234bc123|[1234, 123, , ]|[1234, 123]|
|av1tb12h18|[1, 12, 18, , ]|[1, 12, 18]|
+----------+---------------+-----------+
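On Spark 2.4+ you can also skip the UDF by filtering the array with the filter higher-order function through expr (a sketch equivalent to the array_remove call above):
dfl1.withColumn('final', f.expr("filter(temp, x -> x != '')")).show()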
In Spark 3.1+, regexp_extract_all is available:
regexp_extract_all(str, regexp[, idx]) – Extracts all strings in str that match the regexp expression and correspond to the regex group index.
df = df.withColumn('output', F.expr(r"regexp_extract_all(stringValue, '[a-z]*(\d+)', 1)"))
df.show()
#+-----------+-----------+
#|stringValue| output|
#+-----------+-----------+
#| a1234bc123|[1234, 123]|
#| av1tb12h18|[1, 12, 18]|
#| abcd| []|
#+-----------+-----------+
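For completeness: as far as I know, PySpark 3.5+ also exposes regexp_extract_all as a native Python function, so the same call can be written without expr (note that the pattern is passed as a Column there):
df = df.withColumn('output', F.regexp_extract_all('stringValue', F.lit(r'[a-z]*(\d+)'), 1))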