Download multiple files from array, and place into desired directory using Python3

Posted: 2021-12-16 16:08:27

【Question】:

# Import desired libraries -- make HTTP requests / query DOM elements
import requests
from bs4 import BeautifulSoup as bs
import zipfile
# Make a request to the NGA site; the response is stored in r (DOM)
r = requests.get('https://earth-info.nga.mil/index.php?dir=coordsys&action=gars-20x20-dloads')
# Parse data using the BeautifulSoup library and the default html parser
soup = bs(r.content, 'html.parser')
# Output is pure RAW HTML DOM
# print(soup)
# Scan the DOM tree and place the desired href zip files into an array for later downloading -- files array
files = ['https://earth-info.nga.mil/' + i['href'] for i in soup.select('area')]
# print(files)
# Download a single file from the array
# firstUrl = files[0]
# Download multiple files from the array
for file in files:
    r = requests.get(file, stream=True)
    save_path = '/Users/iga0779/Downloads/%s.zip' % r
    filex = open(save_path, 'wb')
    filex.write(downloadedfile.content)
    filex.close()
I'm currently a little unsure about the next steps. I picked the Downloads directory as the place I want the files to go, but I'm fairly new to this and don't know how to write them into that directory correctly.
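For context, here is a minimal sketch of that missing step, assuming the files list built above and treating the last URL segment as the archive name (an assumption about the URL shape):

import os
import requests

download_dir = '/Users/iga0779/Downloads'  # the target directory from the question

for file in files:
    r = requests.get(file)
    r.raise_for_status()
    # assumption: the last path segment of the URL names the archive
    save_path = os.path.join(download_dir, file.split('/')[-1] + '.zip')
    with open(save_path, 'wb') as filex:
        filex.write(r.content)  # write the response body to disk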
【Comments】:
【Answer 1】: You can use with open() for this.
You can also download your files in chunks:
for file in files:
    with requests.get(file, stream=True) as r:
        r.raise_for_status()
        with open(f'tmpZip/{file.split("/")[-1]}.zip', 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
Example
import requests
from bs4 import BeautifulSoup as bs
import zipfile
# Make a request to the NGA site; the response is stored in r (DOM)
r = requests.get('https://earth-info.nga.mil/index.php?dir=coordsys&action=gars-20x20-dloads')
# Parse data using the BeautifulSoup library and the default HTML parser
soup = bs(r.content, 'html.parser')
# Output is pure RAW HTML DOM
# print(soup)
# Scan the DOM tree and place the desired href zip files into an array for later downloading -- files array
files = ['https://earth-info.nga.mil/' + i['href'] for i in soup.select('area')]
# print(files)
import os

# make sure the target folder exists before downloading (the original assumes tmpZip/ is already there)
os.makedirs('tmpZip', exist_ok=True)

def download_file(file):
    with requests.get(file, stream=True) as r:
        r.raise_for_status()
        with open(f'tmpZip/{file.split("/")[-1]}.zip', 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return f'File: {file.split("/")[-1]}.zip -> downloaded'

# files sliced to the first three urls from the result; delete [:3] to get all
for file in files[:3]:
    print(download_file(file))
Output
File: 180W60N.zip -> downloaded
File: 180W40N.zip -> downloaded
File: 180W20N.zip -> downloaded
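The snippet imports zipfile but never uses it; if the goal is to unpack each archive after downloading, a possible follow-up (an assumption, not part of the original answer) would be:

import os
import zipfile

# assumption: the loop above saved the archives into tmpZip/
for name in os.listdir('tmpZip'):
    if name.endswith('.zip'):
        with zipfile.ZipFile(os.path.join('tmpZip', name)) as zf:
            # unpack each archive into its own folder named after the file
            zf.extractall(os.path.join('tmpZip', name[:-4]))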
【Comments】:
【Answer 2】: You can also try this:
# Import desired libraries -- make HTTP requests / query DOM elements
import requests
from bs4 import BeautifulSoup as bs
import zipfile
import os
from io import BytesIO

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0",
    "Accept-Encoding": "*",
    "Connection": "keep-alive"
}
# Make a request to the NGA site; the response is stored in r (DOM)
r = requests.get('https://earth-info.nga.mil/index.php?dir=coordsys&action=gars-20x20-dloads')
# Parse data using the BeautifulSoup library and the default HTML parser
soup = bs(r.content, 'html.parser')
# Output is pure RAW HTML DOM
# print(soup)
# Scan the DOM tree and place the desired href zip files into an array for later downloading -- files array
files = ['https://earth-info.nga.mil/' + i['href'] for i in soup.select('area')]
# print(files)
mydirname = r'C:\Users\User\Documents\Downloads'

for url in files:
    r = requests.get(url, headers=headers, stream=True)
    if r.status_code == 200:
        newfoldername = r.url.split('/')[-1]
        # create the per-archive folder inside the target directory, not the current working directory
        path_ = os.path.join(mydirname, newfoldername)
        if not os.path.exists(path_):
            os.mkdir(path_)
        zipfile.ZipFile(BytesIO(r.content)).extractall(path_)

print('Finished...')
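A note on the design choice here: BytesIO(r.content) pulls the whole archive into memory before extracting, so stream=True brings no benefit in this variant; for very large zips, the chunked save-to-disk approach from Answer 1 is the safer pattern.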
【Comments】: