I am using Scrapy to crawl multiple pages on a website. The variable start_urls is used to define the pages to be crawled. I initially start from the first page, so I define start_urls = [1st page] in the file example_spider.py.
After obtaining more information from the first page, I determine the next pages to crawl and assign start_urls accordingly. Hence, I have to overwrite example_spider.py with the change start_urls = [1st page, 2nd page, ..., Kth page] and then run scrapy crawl again.
Is this the best approach, or is there a better way to dynamically assign start_urls using the Scrapy API, without having to overwrite example_spider.py? Thanks.
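For reference, a minimal sketch of the manual setup described above might look like the following; the spider class, name, and URLs are placeholders assumed for illustration, not taken from a real project:

# example_spider.py - sketch of the manual workflow described in the question
from scrapy.spider import BaseSpider

class ExampleSpider(BaseSpider):
    name = 'example'                               # assumed spider name
    # edited by hand before each re-run, e.g. [1st page, 2nd page, ..., Kth page]
    start_urls = ['http://www.example.com/page1']  # placeholder URL

    def parse(self, response):
        # extract data here, then manually decide which pages to add next
        pass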
The start_urls class attribute contains the start URLs - nothing more. If you extract URLs of other pages you want to scrape, yield the corresponding Requests from the parse callback with [another] callback:
import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider

class Spider(BaseSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse main page and extract categories links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            # Schedule the category page for crawling with its own callback.
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse category page and extract links of the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        '''Parse the item page itself.'''
        pass  # extract item fields here
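With this pattern, a single run of scrapy crawl my_spider discovers the category and item pages on its own, so start_urls never has to change and there is no need to rewrite example_spider.py between runs.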