Removing Duplicate Domain URLs From the Text File Using Bash
Question: I have a text file containing these URLs:

```
https://www.google.com/1/
https://www.google.com/2/
https://www.google.com
https://www.bing.com
https://www.bing.com/2/
https://www.bing.com/3/
```

Expected Output:

```
https://www.google.com/1/
https://www.bing.com
```

What I Tried:

```
awk -F'/' '!a[$3]++' "$file"
```

Output:

```
https://www.google.com/1/
https://www.google.com
https://www.bing.com
https://www.bing.com/2/
```

I have already tried various approaches and none of them work as expected. I just want to pick only one unique …
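The `!a[$3]++` idiom itself is sound: with `-F'/'`, field 3 of an `https://host/...` URL is the host, so printing only the first line per `$3` should yield exactly the expected output. The shown output, where `www.google.com` appears twice, is what you would see if the lines carried trailing carriage returns (a CRLF file), making `www.google.com\r` and `www.google.com` distinct keys; the curly quotes in the posted command would also break it if typed literally. A sketch of a fix, assuming a hypothetical input file `urls.txt`, that strips any trailing `\r` before deduplicating:

```shell
# Recreate the sample input from the question (hypothetical file name).
cat > urls.txt <<'EOF'
https://www.google.com/1/
https://www.google.com/2/
https://www.google.com
https://www.bing.com
https://www.bing.com/2/
https://www.bing.com/3/
EOF

# Split on "/" so $3 is the host; strip a trailing carriage return first
# so CRLF and LF lines hash to the same key, then keep only the first
# line seen for each host.
awk -F'/' '{ sub(/\r$/, "") } !a[$3]++' urls.txt
# → https://www.google.com/1/
# → https://www.bing.com
```

Note the straight ASCII quotes around both the field separator and the awk program; smart quotes pasted from a web page are passed to awk as literal characters and change the program's meaning.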