Python: Scrape href from td – can't get it to work correctly
Question:
I’m very new to python and have gone through previous questions on SO but could not solve it. Here is my code:
import requests
import pandas as pd
from bs4 import BeautifulSoup
from urllib.parse import urlparse
url = "https://en.wikipedia.org/wiki/List_of_curling_clubs_in_the_United_States"
data = requests.get(url).text
soup = BeautifulSoup(data, 'lxml')
table = soup.find('table', class_='wikitable sortable')
df = pd.DataFrame(columns=['Club Name', 'City/Town', 'State', 'Type', 'Sheets', 'Memberships', 'Year Founded', 'Notes', 'URL'])
for row in table.tbody.find_all('tr'):
    # Find all data for each column
    columns = row.find_all('td')
    if columns != []:
        club_name = columns[0].text.strip()
        city = columns[1].text.strip()
        state = columns[2].text.strip()
        type_arena = columns[3].text.strip()
        sheets = columns[4].text.strip()
        memberships = columns[5].text.strip()
        year_founded = columns[6].text.strip()
        notes = columns[7].text.strip()
        club_url = columns[0].find('a').get('href')
        df = df.append({'Club Name': club_name, 'City/Town': city, 'State': state, 'Type': type_arena, 'Sheets': sheets, 'Memberships': memberships, 'Year Founded': year_founded, 'Notes': notes, 'URL': club_url}, ignore_index=True)
My DataFrame works except for the final column: it returns None even though the first column clearly contains a link. How do I resolve this?
I’ve successfully scraped hrefs from websites without tables, but I’m struggling to find a solution inside a table. Thanks in advance!
Answers:
There is a typo in your script:
club_url = cols[0].find('a').get('href')
cols
should be columns
and you should check that the element exists before applying a method to it:
club_url = columns[0].find('a').get('href') if columns[0].find('a') else None
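Here is a minimal, self-contained sketch of that guard in action. It uses a tiny inline table standing in for the Wikipedia page (the club names and cities are made up for illustration): the first row's first cell contains a link, the second row's does not, so calling .get('href') unconditionally would raise an AttributeError on the second row.

```python
from bs4 import BeautifulSoup

# Tiny inline table: first row has a link in its first cell, second row does not.
html = """
<table class="wikitable sortable">
  <tr><td><a href="/wiki/Example_Club">Example Club</a></td><td>Duluth</td></tr>
  <tr><td>Linkless Club</td><td>Fargo</td></tr>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')
rows = []
for row in soup.find('table').find_all('tr'):
    columns = row.find_all('td')
    if columns:
        link = columns[0].find('a')
        # Guard against cells with no <a> tag before calling .get()
        club_url = link.get('href') if link else None
        rows.append((columns[0].text.strip(), club_url))

print(rows)
# [('Example Club', '/wiki/Example_Club'), ('Linkless Club', None)]
```

As an aside, DataFrame.append as used in the question was deprecated in pandas 1.4 and removed in pandas 2.0; the usual replacement is to collect row dicts in a list and build the frame once at the end with pd.DataFrame(rows).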