Link scraping errors

Question:

url = "https://www.cnn.com/"

response = requests.get(url)

soup = BeautifulSoup(response.text, "html.parser")

links = []

for link in soup(response).find_all("a", href=True):
    links.append(link["href"])

for link in links:
    print(links)

AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?

I'm not sure why I'm getting this error; I'm trying to scrape all of the href links from this website.

Asked By: Jack9992


Answers:

You don't need to call soup(response); just call find_all directly on soup. The soup object was already built from response.text (soup = BeautifulSoup(response.text, "html.parser")), so passing response in again is redundant.
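As for the error message itself: calling a BeautifulSoup object is shorthand for find_all(), so soup(response) returns a ResultSet, and a ResultSet (essentially a list of tags) has no find_all() method of its own. A tiny illustration (the HTML snippet below is made up just for the example):

from bs4 import BeautifulSoup

# Minimal made-up document, only to show the types involved.
demo = BeautifulSoup('<a href="/x">x</a>', "html.parser")

print(type(demo("a")))           # <class 'bs4.element.ResultSet'> -- calling the soup is an alias for find_all()
print(type(demo.find_all("a")))  # the same ResultSet type
# So soup(response) also produces a ResultSet, and chaining .find_all() onto it
# raises the AttributeError quoted in the question.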

# Replace this:
for link in soup(response).find_all("a", href=True):

# With this:
for link in soup.find_all("a", href=True):
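Putting it together, here is a minimal sketch of the whole script with that one change applied (it assumes requests and beautifulsoup4 are installed; note also that the final loop should print link rather than links, or you will print the entire list on every iteration):

import requests
from bs4 import BeautifulSoup

url = "https://www.cnn.com/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

links = []
for link in soup.find_all("a", href=True):  # call find_all on soup, not on soup(response)
    links.append(link["href"])

for link in links:
    print(link)  # print each href once, rather than the whole list each time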
Answered By: Ethansocal