Scraped website data is not being written to a CSV


【Posted】2021-03-26 23:24:04

【Question】:

I am trying to scrape a website for information and output it to a CSV file. The data I am trying to extract does show up in the terminal, but I need it saved in a CSV file.

I have tried a few different approaches but can't find a solution. The CSV file gets created, but it is just empty. It is probably something really simple.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import csv
import time
from bs4 import BeautifulSoup

DRIVER_PATH = '/Users/jasonbeedle/Desktop/snaviescraper/chromedriver'

options = Options()
options.page_load_strategy = 'normal'

# Navigate to url
driver = webdriver.Chrome(options=options, executable_path=DRIVER_PATH)
driver.get("http://best4sport.tv/2hd/2020-12-10/")
options.add_argument("--window-size=1920x1080")
results = driver.find_element_by_class_name('program1_content_container')
soup = BeautifulSoup(results.text, 'html.parser')

# results = driver.find_element_by_class_name('program1_content_container')
p_data1 = soup.find_all("div", {"class_name": "program1_content_container"})
p_data2 = soup.find_all("div", {"class_name": "program_time"})
p_data3 = soup.find_all("div", {"class_name": "sport"})
p_data4 = soup.find_all("div", {"class": "program_text"})

print("Here is your data, I am off ot sleep now see ya ")
print(results.text)
# Create csv
programme_list = []
# Programme List
for item in p_data1:
    try:
        name = item.contents[1].find_all(
            "div", {"class": "program1_content_container"})[0].text
    except:
        name = ''

    p_data1 = [time]
    programme_list.append(p_data1)

# Programme Time
for item in p_data2:
    try:
        time = item.contents[1].find_all(
            "div", {"class": "program_time"})[0].text
    except:
        time = ''

    p_data2 = [time]
    programme_list.append(p_data2)

# Which sport
for item in p_data3:
    try:
        time = item.contents[1].find_all(
            "div", {"class": "sport"})[0].text
    except:
        time = ''

    p_data3 = [time]
    programme_list.append(p_data3)

with open('sport.csv', 'w') as file:
    writer = csv.writer(file)
    for row in programme_list:
        writer.writerow(row)

I just tried adding an object called data_output, and then I tried to print data_output:

data_output = [p_data1, p_data2, p_data3, p_data4]
...
print(data_output)

The output in the terminal is

【Comments】:

What does the programme_list variable look like after it is filled with values? 19:55 MOTORU SPORTS Motoru sporta "5 minūte" Iknedēļas Alda Putniņa veidots apskats par motoru sportu 20:00 BASKETBOLS CSKA pret Zielona Gora VTB Vienotās līgas 2020./2021. gada regulārās sezonas spēle (08.12.2020.) 22:00 BASKETBOLS

Could you share your URL, if it is public? You could load the data into a pandas dataframe and then export it to a csv file.

best4sport.tv/2hd/2020-12-10. I haven't used pandas, I'll look into it.

【Solution 1】:

Could you try changing wb to w, so that you are not writing a binary file? Change

with open('sport.csv', 'wb') as file:

to

with open('sport.csv', 'w') as file:
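To illustrate why the mode matters: in Python 3, csv.writer produces str, so a binary-mode file object rejects its writes with a TypeError. A minimal sketch using in-memory buffers as stand-ins for the two open modes:

```python
import csv
import io

# io.BytesIO stands in for open('sport.csv', 'wb'): csv.writer's str
# output cannot be written to a binary stream, so a TypeError is raised.
try:
    csv.writer(io.BytesIO()).writerow(["19:55", "MOTORU SPORTS"])
    result = "no error"
except TypeError:
    result = "TypeError"
print(result)  # → TypeError

# io.StringIO stands in for open('sport.csv', 'w'): text mode works.
text_file = io.StringIO()
csv.writer(text_file).writerow(["19:55", "MOTORU SPORTS"])
print(text_file.getvalue().strip())  # → 19:55,MOTORU SPORTS
```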

Edited:

Sorry for the late reply. Here is a modified version based on your original code, for your reference.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import csv
import time
from bs4 import BeautifulSoup

DRIVER_PATH = '/Users/jasonbeedle/Desktop/snaviescraper/chromedriver'

options = Options()
options.page_load_strategy = 'normal'

# Navigate to url
driver = webdriver.Chrome(options=options, executable_path=DRIVER_PATH)
driver.get("http://best4sport.tv/2hd/2020-12-10/")
options.add_argument("--window-size=1920x1080")
results = driver.find_element_by_class_name('program1_content_container')
page = driver.page_source
soup = BeautifulSoup(page, 'html.parser')

# results = driver.find_element_by_class_name('program1_content_container')
p_data1 = soup.find_all("p", {"class": "program_info"})
p_data2 = soup.find_all("p", {"class": "program_time"})
p_data3 = soup.find_all("p", {"class": "sport"})
p_data4 = soup.find_all("p", {"class": "program_text"})

# Create csv
programme_list = []
# Programme List
for i in range(len(p_data1)):
    programme_list.append([p_data1[i].text.strip(), p_data2[i].text.strip(), p_data3[i].text.strip(), p_data4[i].text.strip()])
    
with open('sport.csv', 'w', encoding='utf-8', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(["program_info", "program_time", "sport", "program_text"])
    for row in programme_list:
        writer.writerow(row)
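As a side note, the parallel-index loop above can also be written with zip, which pairs the four result lists positionally and stops at the shortest one instead of raising an IndexError when they differ in length. A minimal sketch with hypothetical stand-in lists (in the real script these would be the p_data1..p_data4 tag texts):

```python
import csv

# Hypothetical stand-ins for the stripped .text of the four tag lists.
infos = ["Info A", "Info B"]
times = ["19:55", "20:00"]
sports = ["MOTORU SPORTS", "BASKETBOLS"]
texts = ["Text A", "Text B"]

# zip() groups the i-th element of each list into one row.
programme_list = [list(row) for row in zip(infos, times, sports, texts)]

with open('sport.csv', 'w', encoding='utf-8', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(["program_info", "program_time", "sport", "program_text"])
    writer.writerows(programme_list)
```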

(Excel screenshot was attached here.)

【Discussion】:

Could you share the program with us so that we can help you identify the error?

Added the code to a code block inside the question.

【Solution 2】:

Load the data into a pandas dataframe and export it to csv.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
from bs4 import BeautifulSoup

DRIVER_PATH = '/Users/jasonbeedle/Desktop/snaviescraper/chromedriver'
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
driver.get("http://best4sport.tv/2hd/2020-12-10/")
results = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".program1_content_container")))
soup = BeautifulSoup(results.get_attribute("outerHTML"), 'html.parser')
program_time=[]
sport=[]
program_text=[]
program_info=[]
for item in soup.select(".program_details"):
    if item.find_next(class_='program_time'):
        program_time.append(item.find_next(class_='program_time').text.strip())
    else:
        program_time.append("Nan")
    if item.find_next(class_='sport'):
        sport.append(item.find_next(class_='sport').text.strip())
    else:
        sport.append("Nan")
    if item.find_next(class_='program_text'):
        program_text.append(item.find_next(class_='program_text').text.strip())
    else:
        program_text.append("Nan")
    if item.find_next(class_='program_info'):
        program_info.append(item.find_next(class_='program_info').text.strip())
    else:
        program_info.append("Nan")

df = pd.DataFrame({"program_time": program_time, "sport": sport, "program_text": program_text, "program_info": program_info})
print(df)
df.to_csv("sport.csv")

(Snapshot of the created csv was attached here.)

If you don't have pandas, you need to install it.

pip install pandas
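One detail worth knowing about df.to_csv as used above: by default it also writes the integer row index as an extra first column. Passing index=False drops it, and the "utf-8-sig" encoding adds a BOM so that Excel detects the encoding and displays the Latvian characters correctly. A minimal sketch with sample rows:

```python
import pandas as pd

# Two sample rows standing in for the scraped schedule data.
df = pd.DataFrame({
    "program_time": ["19:55", "20:00"],
    "sport": ["MOTORU SPORTS", "BASKETBOLS"],
})

# index=False omits the 0,1,... index column; utf-8-sig helps Excel.
df.to_csv("sport.csv", index=False, encoding="utf-8-sig")
```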

【Discussion】:

I think I love you. Thank you so much!!! SO should have awards like Reddit.

@JasonBeedle: Glad I was able to help.

【Solution 3】:

As Blue Fishy said, you can try just changing to w mode, but you may run into an encoding error.

A solution that works for your data:

import csv 

programme_list = ['19:55','MOTORU SPORTS','Motoru sporta "5 minūte"','Iknedēļas Alda Putniņa veidots apskats par motoru sportu','20:00','BASKETBOLS','...']

with open('sport.csv', 'w', encoding='utf-8') as file:
    writer = csv.writer(file, delimiter=',', lineterminator='\n')
    for row in programme_list:
        print(row)
        writer.writerow([row])

Output

19:55
MOTORU SPORTS
"Motoru sporta ""5 minūte"""
Iknedēļas Alda Putniņa veidots apskats par motoru sportu
20:00
BASKETBOLS
...
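The doubled quotes in the third output row are not a bug: csv.writer wraps any field that contains the delimiter or a quote character in quotes and doubles the embedded quotes, so the value round-trips intact when read back. A small check:

```python
import csv
import io

# Writing a field that itself contains double quotes.
buf = io.StringIO()
csv.writer(buf, lineterminator='\n').writerow(['Motoru sporta "5 minūte"'])
print(buf.getvalue())  # → "Motoru sporta ""5 minūte"""

# Reading it back restores the original string, quotes included.
restored = next(csv.reader(io.StringIO(buf.getvalue())))
print(restored[0])  # → Motoru sporta "5 minūte"
```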

【Discussion】:

Yes, that is exactly what I wanted. Although I need to pull in the extracted data to populate program_list = [ ].
