Python BeautifulSoup Selenium scraper

Posted: 2019-08-31 16:55:58

Question:

I am using the following Python script to scrape information from Amazon pages.

At some point it stopped returning page results. The script still launches and cycles through the keywords/pages, but the only output I get is the header row:

Keyword Rank Title ASIN Score Reviews Prime Date

I suspect the problem is in the line below, because this tag no longer exists on the page and the results variable never gets a value:

results = soup.findAll('div', attrs={'class': 's-item-container'})
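
A quick way to confirm that suspicion is to count the matches before parsing any further. A minimal sketch, assuming html already holds the page source returned by Selenium:

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
results = soup.findAll('div', attrs={'class': 's-item-container'})
print(len(results))                # 0 means no s-item-container divs were matched
print('s-item-container' in html)  # False means the class is gone from the markup entirely

If the class name no longer appears anywhere in the raw HTML, the selector needs updating rather than the request logic.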

Here is the full code:

from bs4 import BeautifulSoup
import time
from selenium import webdriver
import re
import datetime
from collections import deque
import logging
import csv


class AmazonScaper(object):

    def __init__(self,keywords, output_file='example.csv',sleep=2):

        self.browser = webdriver.Chrome(executable_path='/Users/willcecil/Dropbox/Python/chromedriver')  #Add path to your Chromedriver
        self.keyword_queue = deque(keywords)  # Queue of keywords to crawl
        self.output_file = output_file
        self.sleep = sleep
        self.results = []


    def get_page(self, keyword):
        try:
            self.browser.get('https://www.amazon.co.uk/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords={a}'.format(a=keyword))
            return self.browser.page_source
        except Exception as e:
            logging.exception(e)
            return

    def get_soup(self, html):
        if html is not None:
            soup = BeautifulSoup(html, 'lxml')
            return soup
        else:
            return

    def get_data(self,soup,keyword):

        try:
            results = soup.findAll('div', attrs={'class': 's-item-container'})
            for a, b in enumerate(results):
                soup = b
                header = soup.find('h2')
                result = a + 1
                title = header.text
                try:
                    link = soup.find('a', attrs={'class': 'a-link-normal a-text-normal'})
                    url = link['href']
                    url = re.sub(r'/ref=.*', '', str(url))
                except:
                    url = "None"

                # Extract the ASIN from the URL - the ASIN is what lets us tell organic results from sponsored ones

                ASIN = re.sub(r'.*amazon.co.uk.*/dp/', '', str(url))

                # Extract Score Data using ASIN number to find the span class

                score = soup.find('span', attrs={'name': ASIN})
                try:
                    score = score.text
                    score = score.strip('\n')
                    score = re.sub(r' .*', '', str(score))
                except:
                    score = "None"

                # Extract Number of Reviews in the same way
                reviews = soup.find('a', href=re.compile(r'.*#customerReviews'))
                try:
                    reviews = reviews.text
                except:
                    reviews = "None"

                # And again for Prime

                PRIME = soup.find('i', attrs={'aria-label': 'Prime'})
                try:
                    PRIME = PRIME.text
                except:
                    PRIME = "None"

                data = {keyword: [keyword, str(result), title, ASIN, score, reviews, PRIME, datetime.datetime.today().strftime("%B %d, %Y")]}
                self.results.append(data)

        except Exception as e:
            print(e)

        return 1

    def csv_output(self):
        keys = ['Keyword','Rank','Title','ASIN','Score','Reviews','Prime','Date']
        print(self.results)
        with open(self.output_file, 'a', encoding='utf-8') as outputfile:
            dict_writer = csv.DictWriter(outputfile, keys)
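            # Note: DictWriter is only used to emit the header row; the data rows are written manually below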
            dict_writer.writeheader()
            for item in self.results:
                for key,value in item.items():
                    print(".".join(value))
                    outputfile.write(",".join('"' + item + '"' for item in value)+"\n") # Add "" quote character so the CSV accepts commas

    def run_crawler(self):
        while len(self.keyword_queue): #If we have keywords to check
            keyword = self.keyword_queue.popleft() #We grab a keyword from the left of the list
            html = self.get_page(keyword)
            soup = self.get_soup(html)
            time.sleep(self.sleep) # Wait for the specified time
            if soup is not None:  #If we have soup - parse and save data
                self.get_data(soup,keyword)
        self.browser.quit()
        self.csv_output() # Save the object data to csv


if __name__ == "__main__":
    keywords = [str.replace(line.rstrip('\n'), ' ', '+') for line in
                open('keywords.txt')]  # Use our file of keywords & replace spaces with +
    ranker = AmazonScaper(keywords)  # Create the object
    ranker.run_crawler()  # Run the rank checker

The output should look like this (I have trimmed the titles for clarity).

Keyword Rank Title ASIN Score Reviews Prime Date

Blue+Skateboard 3 Osprey Complete Begin B00IL1JMF4 3.7 40 Prime February 21, 2019
Blue+Skateboard 4 ENKEEO Complete Mini C B078J9Y1DG 4.5 42 Prime February 21, 2019
Blue+Skateboard 5 skatro - Mini Cruiser B00K93PIXM 4.8 223 Prime February 21, 2019
Blue+Skateboard 7 Vinsani Retro Cruiser B00CSV72AK 4.4 8 Prime February 21, 2019
Blue+Skateboard 8 Ridge Retro Cruiser Bo B00CA33ISQ 4.1 207 Prime February 21, 2019
Blue+Skateboard 9 Xootz Kids Complete Be B01B2YNSJM 3.6 32 Prime February 21, 2019
Blue+Skateboard 10 Enuff Pyro II Skateboa B00MGRGX2Y 4.3 68 Prime February 21, 2019

Comments:

The first thing to check is the raw page being returned. Try inserting import pdb; pdb.set_trace() before soup = BeautifulSoup(html, 'lxml') and inspect html manually to see whether the data is there. It is important to run this check on the same machine you are scraping from.
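
For example, the suggested breakpoint could be dropped into the scraper's get_soup method like this (a sketch of the diagnostic step only, not a fix):

    def get_soup(self, html):
        if html is not None:
            import pdb; pdb.set_trace()  # pause here and inspect html by hand
            soup = BeautifulSoup(html, 'lxml')
            return soup
        else:
            return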

Answer 1:

The following shows some changes you could make. I switched to css selectors at some points.

The main result set to loop over is retrieved with soup.select('.s-result-list [data-asin]'). This selects elements that have a data-asin attribute and are descendants of an element whose class is s-result-list. It matches the (currently) 60 items on the page.
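
A minimal illustration of how that descendant selector behaves, using made-up HTML rather than Amazon's real markup:

from bs4 import BeautifulSoup

html = '<ul class="s-result-list"><li data-asin="B000000001">product</li><li>ad slot without data-asin</li></ul>'
soup = BeautifulSoup(html, 'lxml')
print([m['data-asin'] for m in soup.select('.s-result-list [data-asin]')])
# ['B000000001'] - only the item carrying a data-asin attribute matches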

I replaced the PRIME selection with an attribute = value selector:

soup.select_one('[aria-label="Amazon Prime"]')

The title is now inside an h5 element, i.e. header = soup.select_one('h5').

Sample code:

import datetime
from bs4 import BeautifulSoup
import time
from selenium import webdriver
import re

keyword = 'blue+skateboard'
driver = webdriver.Chrome()

url = 'https://www.amazon.co.uk/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords='

driver.get(url.format(keyword))
soup = BeautifulSoup(driver.page_source, 'lxml')
results = soup.select('.s-result-list [data-asin]')

for a, b in enumerate(results):
    soup = b
    header = soup.select_one('h5')
    result = a + 1
    title = header.text.strip()

    try:
        link = soup.select_one('h5 > a')
        url = link['href']
        url = re.sub(r'/ref=.*', '', str(url))
    except:
        url = "None"

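    # Skip sponsored results, which go through Amazon's picassoRedirect rather than a direct /dp/ URL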
    if url !='/gp/slredirect/picassoRedirect.html':
        ASIN = re.sub(r'.*/dp/', '', str(url))
        #print(ASIN)

        try:
            score = soup.select_one('.a-icon-alt')
            score = score.text
            score = score.strip('\n')
            score = re.sub(r' .*', '', str(score))
        except:
            score = "None"

        try:
            reviews = soup.select_one("[href*='#customerReviews']")
            reviews = reviews.text.strip()
        except:
            reviews = "None"

        try:
            PRIME = soup.select_one('[aria-label="Amazon Prime"]')
            PRIME = PRIME['aria-label']
        except:
            PRIME = "None"
        data = {keyword: [keyword, str(result), title, ASIN, score, reviews, PRIME, datetime.datetime.today().strftime("%B %d, %Y")]}
        print(data)

Sample output: (screenshot of the printed data dicts, one per product)

Comments:

QHarr this is a nice approach. I will definitely use this.
