I know there are several related threads out there, and they have helped me a lot, but I still can't get all the way there. I have reached the point where running the code doesn't cause any errors, but I get nothing in my csv file. I have the following Scrapy spider, which starts on one webpage, then follows a hyperlink and scrapes the linked page:
from scrapy.http import Request
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item, Field

class bbrItem(Item):
    Year = Field()
    AppraisalDate = Field()
    PropertyValue = Field()
    LandValue = Field()
    Usage = Field()
    LandSize = Field()
    Address = Field()

class spiderBBRTest(BaseSpider):
    name = 'spiderBBRTest'
    allowed_domains = ["http://boliga.dk"]
    start_urls = ['http://www.boliga.dk/bbr/resultater?sort=hus_nr_sort-a,etage-a,side-a&gade=Septembervej&hus_nr=29&ipostnr=2730']

    def parse2(self, response):
        hxs = HtmlXPathSelector(response)
        bbrs2 = hxs.select("id('evaluationControl')/div[2]/div")
        bbrs = iter(bbrs2)
        next(bbrs)
        for bbr in bbrs:
            item = bbrItem()
            item['Year'] = bbr.select("table/tbody/tr[1]/td[2]/text()").extract()
            item['AppraisalDate'] = bbr.select("table/tbody/tr[2]/td[2]/text()").extract()
            item['PropertyValue'] = bbr.select("table/tbody/tr[3]/td[2]/text()").extract()
            item['LandValue'] = bbr.select("table/tbody/tr[4]/td[2]/text()").extract()
            item['Usage'] = bbr.select("table/tbody/tr[5]/td[2]/text()").extract()
            item['LandSize'] = bbr.select("table/tbody/tr[6]/td[2]/text()").extract()
            item['Address'] = response.meta['address']
            yield item

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        PartUrl = ''.join(hxs.select("id('searchresult')/tr/td[1]/a/@href").extract())
        url2 = ''.join(["http://www.boliga.dk", PartUrl])
        yield Request(url=url2, meta={'address': hxs.select("id('searchresult')/tr/td[1]/a[@href]/text()").extract()}, callback=self.parse2)
I'm trying to export the results to a csv file, but nothing ends up in the file, even though running the code doesn't cause any errors. I know this is a simplified example with only one URL, but it illustrates my problem.

I think my problem may be that I'm not telling Scrapy that I want to save the data in the parse2 method.
By the way, I run the spider with:
scrapy crawl spiderBBR -o scraped_data.csv -t csv
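(As an aside, and only if you are on a reasonably recent Scrapy release: the export format is inferred from the output file's extension, so the -t csv flag should be redundant and the shorter form below ought to behave the same. This is an assumption about your Scrapy version.)

scrapy crawl spiderBBR -o scraped_data.csv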
You need to modify the Request you yield in parse so that it uses parse2 as its callback.
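In code, that means the Request yielded from parse must name parse2 explicitly, along the lines of the sketch below (reusing the XPath expressions from the question as-is):

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    PartUrl = ''.join(hxs.select("id('searchresult')/tr/td[1]/a/@href").extract())
    url2 = ''.join(["http://www.boliga.dk", PartUrl])
    # Without callback=self.parse2, Scrapy routes the response to the
    # default parse() method and parse2 never runs, so no items are
    # yielded and the csv feed stays empty.
    yield Request(url=url2,
                  meta={'address': hxs.select("id('searchresult')/tr/td[1]/a[@href]/text()").extract()},
                  callback=self.parse2)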
Edit: allowed_domains shouldn't contain the http prefix, e.g.:
allowed_domains = ["boliga.dk"]
Try that and see whether your spider still runs correctly, rather than leaving allowed_domains blank.
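Putting both fixes together, the top of the spider would look something like this sketch (the rest of the spider stays as in the question):

class spiderBBRTest(BaseSpider):
    name = 'spiderBBRTest'
    # Domain only, no scheme: with "http://boliga.dk" in this list, the
    # offsite middleware can end up filtering every request as off-site,
    # so parse2 is never reached.
    allowed_domains = ["boliga.dk"]
    start_urls = ['http://www.boliga.dk/bbr/resultater?sort=hus_nr_sort-a,etage-a,side-a&gade=Septembervej&hus_nr=29&ipostnr=2730']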