How to use scrapy for links after "Next" Button?


I am relatively new to Python and Scrapy. I’m trying to scrape the links in “Customers who bought this item also bought”.
For example: there are 17 pages of “Customers who bought this item also bought” suggestions. If I ask Scrapy to scrape that URL, it only scrapes the first page (6 items). How do I ask Scrapy to press the “Next” button to scrape all the items across the 17 pages? A sample code snippet (just the part that matters) would be greatly appreciated. Thank you for your time!

Ok. Here is my code. As I said, I am new to Python, so the code might look quite clumsy, but it works to scrape the first page (6 items). I work mostly with Fortran or Matlab. I would love to learn Python systematically if I have time, though.

# Code of my spider

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from beta.items import BetaItem

class AlphaSpider(CrawlSpider):

    name = 'alpha'
    allowed_domains = ['']
    start_urls = ['']
    rules = (Rule(SgmlLinkExtractor(restrict_xpaths=('//h3/a',)), callback='parse_item'), )

    def parse_item(self, response):
        sel = Selector(response)

        stuff = BetaItem()

        isbn10R = sel.xpath('//li[b[contains(text(),"ISBN-10:")]]/text()').extract()
        isbn10 = []
        if len(isbn10R) > 0:
            isbn10 = [(isbn10R[0].split(' '))[1]]
        stuff['isbn10'] = isbn10

        starsR = sel.xpath('//div[contains(@id,"averageCustomerReviews")]/span/@title').extract()
        stars = []
        if len(starsR) > 0:
            stars = [(starsR[0].split(' '))[0]]
        stuff['stars'] = stars

        reviewsR = sel.xpath('//div[contains(@id,"averageCustomerReviews")]/a[contains(@href,"showViewpoints=1")]/text()').extract()
        reviews = []
        if len(reviewsR) > 0:
            reviews = [(reviewsR[0].split(' '))[0]]
        stuff['reviews'] = reviews

        copsR = sel.xpath('//a[@class="sim-img-title"]/@href').extract()
        cops = [None] * len(copsR)
        for idx, cop in enumerate(copsR):
            cops[idx] = cop  # store each "also bought" link
        stuff['cops'] = cops

        return stuff
Asked By: maxwell



I would recommend you avoid Scrapy, especially since you’re a beginner.
Use the excellent Requests module for downloading pages and BeautifulSoup for parsing them.
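A minimal sketch of that approach, assuming the suggested-product links carry the `sim-img-title` class from the question’s XPath (here a literal HTML fragment stands in for the downloaded page; in practice you would fetch it first with `requests.get(product_url).text`):

```python
from bs4 import BeautifulSoup

# Stand-in for the downloaded product page; in practice:
#   import requests
#   html = requests.get(product_url).text
html = '''
<div class="shoveler-content"><ul>
  <li><a class="sim-img-title" href="/dp/B00261OOWQ">Home Game</a></li>
  <li><a class="sim-img-title" href="/dp/B003LSTK8G">Another Title</a></li>
</ul></div>
'''

soup = BeautifulSoup(html, "html.parser")
# Collect the "also bought" links, matching the a.sim-img-title
# anchors the question's XPath already targets.
links = [a["href"] for a in soup.find_all("a", class_="sim-img-title")]
```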

Answered By: Goranek

So I understand you were able to scrape those “Customers Who Bought This Item Also Bought” product details. As you probably saw, they sit inside a ul within a div with class “shoveler-content”:

<div id="purchaseButtonWrapper" class="shoveler-button-wrapper">
    <a class="back-button" onclick="return false;" style="" href="#Back">...</a>
    <div class="shoveler-content">
        <ul tabindex="-1">
            <li class="shoveler-cell" style="margin-left: 16px; margin-right: 16px;">
                <div id="purchase_B003LSTK8G" class="new-faceout p13nimp" data-ref="pd_sim_kstore_1" data-asin="B003LSTK8G">...</div>
            </li>
            <li class="shoveler-cell" style="margin-left: 16px; margin-right: 16px;">...</li>
            <li class="shoveler-cell" style="margin-left: 16px; margin-right: 16px;">...</li>
            <li class="shoveler-cell" style="margin-left: 16px; margin-right: 16px;">...</li>
            <li class="shoveler-cell" style="margin-left: 16px; margin-right: 16px;">...</li>
            <li class="shoveler-cell" style="margin-left: 16px; margin-right: 16px;">...</li>
        </ul>
    </div>
    <a class="next-button" onclick="return false;" style="" href="#Next">
        <span class="auiTestSprite s_shvlNext">...</span>
    </a>
</div>

When you inspect the network activity in your browser of choice (via Firebug or the Chrome inspector) and click the “next” button for more suggested products, you’ll see an AJAX query to this sort of URL:

(I’m using this product page:

What’s in the id query argument is a list of ASINs: the next suggested products. 12 ASINs for 6 displayed? Probably some in-page caching for the next “next” click a user is likely to make.
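For illustration, batching ASINs into that id parameter might look like the sketch below. The endpoint path is a placeholder — take the real one from your browser’s network inspector:

```python
from urllib.parse import urlencode

def build_suggestions_urls(asins, batch_size=12):
    """Yield one AJAX URL per batch of ASINs (12 per request, as observed)."""
    # Placeholder endpoint -- copy the real one from the network inspector.
    base = "http://www.amazon.com/gp/product/features/similarities/shoveler/cell-render.html"
    for i in range(0, len(asins), batch_size):
        batch = asins[i:i + batch_size]
        # ASINs are comma-joined; urlencode escapes the commas as %2C.
        yield base + "?" + urlencode({"id": ",".join(batch)})

urls = list(build_suggestions_urls(["B00261OOWQ", "B003LSTK8G"]))
```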

What do you get back from this AJAX query? Still within your browser’s inspect tool, you’ll see the response is of type application/json, and the response data is a JSON array of 12 elements, each element being an HTML snippet similar to:

<div class="new-faceout p13nimp" id="purchase_B00261OOWQ" data-asin="B00261OOWQ" data-ref="pd_sim_kstore_7">
    <a href="/Home-Game-Accidental-Guide-Fatherhood-ebook/dp/B00261OOWQ/ref=pd_sim_kstore_7" class="sim-img-title">
        <div class="product-image">
            <img src=",TopRight,35,-73_OU01_SS100_.jpg" width="100" alt="" height="100" border="0" />
        </div> Home Game: An Accidental Guide to Fatherhood
    </a>
    <div class="byline">
        <span class="carat">&#8250;</span>
        <a href="">Michael Lewis</a>
    </div>
    <div class="rating-price">
        <span class="rating-stars">
            <span class="crAvgStars" style="white-space:no-wrap;">
                <span class="asinReviewsSummary" name="B00261OOWQ">
                    <a href="">
                        <span class="auiTestSprite s_star_4_0" title="4.1 out of 5 stars">
                            <span>4.1 out of 5 stars</span>
                        </span>
                    </a>
                </span>
                (<a href="">99</a>)
            </span>
        </span>
    </div>
    <div class="binding-platform"> Kindle Edition </div>
    <div class="pricetext"><span class="price" style="margin-right:5px">$11.36</span></div>
</div>

So you basically get what was in the original page’s suggested-products section, i.e. what sits in each <li> under <div class="shoveler-content"><ul>.

But how do you get those ASIN codes to append to the AJAX query’s id parameter?

Well, in the product page, you’ll notice this section:

<div id="purchaseSimsData" 
    class="sims-data" style="display:none" 
    data-baseAsin="B005CRQ2OE" data-featureId="pd_sim" 
    data-pageId="B005CRQ2OEr_sim_2" data-reftag="pd_sim_kstore"
    data-wdg="ebooks_display_on_website" data-widgetName="purchase">

which looks like it carries all the suggested products’ ASINs.

Therefore, I suggest you emulate successive AJAX queries to fetch the suggested products, 12 ASINs at a time, decode each response with the json package, and then parse each HTML snippet to extract the product info you want.
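Inside a Scrapy callback, the decode-and-parse step might look like this stdlib-only sketch. The response body below is a stand-in for what the AJAX endpoint returns, and the regex is just for illustration — in a real spider you would feed each snippet to a Selector and reuse your existing XPaths:

```python
import json
import re

# Stand-in for response.body from the AJAX request: a JSON array of
# HTML snippets, one per suggested product (shape as seen in the inspector).
body = json.dumps([
    '<div class="new-faceout p13nimp" data-asin="B00261OOWQ">'
    '<a class="sim-img-title" href="/dp/B00261OOWQ">Home Game</a></div>',
    '<div class="new-faceout p13nimp" data-asin="B003LSTK8G">'
    '<a class="sim-img-title" href="/dp/B003LSTK8G">Another Title</a></div>',
])

snippets = json.loads(body)
# Pull the ASIN out of each snippet; with Scrapy you would instead build
# a Selector per snippet and apply the XPaths from the question.
asins = [re.search(r'data-asin="([^"]+)"', s).group(1) for s in snippets]
```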

Answered By: paul trmbrth