Can't scrape more than 12 posts on a public Instagram account


Posted: 2020-01-02 11:50:47

Question:

I want to scrape all posts from a public Instagram account using Python, for a study I am doing at university. However, I am getting frustrated because I cannot extract more than 12 posts from Instagram.

Selenium does the job of scrolling the page, and I got BeautifulSoup to parse the data I want in the right way, though only for the first 12 posts. I have tried several different approaches so far but I am starting to feel stuck. I have looked at several tutorials and threads here, such as:

How do I scrape a full instagram page in python?

Web Scraping with Selenium Python [Twitter + Instagram]

https://michaeljsanders.com/2017/05/12/scrapin-and-scrollin.html

https://edmundmartin.com/scraping-instagram-with-python/

Thanks for all replies!

Best regards, Calle.

The code I have tried. Example 1:

from bs4 import BeautifulSoup
import ssl
import json
import time

from selenium import webdriver
from datetime import datetime


class Insta_Image_Links_Scraper:

    def getlinks(self, user, url):
        print('[+] Downloading:\n')
        c = webdriver.Chrome()
        c.get("https://www.instagram.com/frank_the_carden/")
        # scroll until the page height stops growing
        lenOfPage = c.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
        match = False
        while not match:
            lastCount = lenOfPage
            time.sleep(2)
            lenOfPage = c.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
            if lastCount == lenOfPage:
                match = True

        soup = BeautifulSoup(c.page_source, 'lxml')
        body = soup.find('body')
        script = body.find('script')
        page_json = script.text.strip().replace('window._sharedData =', '').replace(';', '')

        data = json.loads(page_json)
        print('Scraping posts for user ' + user + '...........')
        for post in data['entry_data']['ProfilePage'][0]['graphql']['user']['edge_owner_to_timeline_media']['edges']:
            timestamp = post['node']['taken_at_timestamp']
            likedby = post['node']['edge_liked_by']['count']
            comments = post['node']['edge_media_to_comment']['count']
            isVideo = post['node']['is_video']
            caption = post['node']['edge_media_to_caption']

            print('Post on :', datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%d %H:%M:%S'))
            print('Liked by :', likedby)
            print('comments :', comments)
            print('caption :', caption)

    def main(self):
        self.ctx = ssl.create_default_context()
        self.ctx.check_hostname = False
        self.ctx.verify_mode = ssl.CERT_NONE

        with open("accounts.txt") as f:
            self.content = f.readlines()
        self.content = [x.strip() for x in self.content]
        for user in self.content:
            self.getlinks(user,
                          'https://www.instagram.com/'
                          + user + '/')


if __name__ == '__main__':
    obj = Insta_Image_Links_Scraper()
    obj.main()
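One thing worth checking across all of the attempts below (a sketch of the likely cause, not a confirmed fix): the `window._sharedData` blob embedded in the profile HTML appears to contain only the first 12 posts no matter how far Selenium scrolls, because later posts arrive via XHR requests and are never written back into that script tag. Separately, `body.find('script')` returns the first script tag on the page, which is not guaranteed to be the shared-data one. A more defensive extraction might look like this:

```python
import json
import re


def extract_shared_data(page_source):
    """Find the script tag that actually contains window._sharedData.

    Returns the parsed JSON dict, or None if the blob is not present.
    Note: in the 2020-era page layout this blob only ever embeds the
    first 12 posts; the rest are fetched by the page via XHR and never
    appear here, which is why scrolling does not help this approach.
    """
    match = re.search(r'window\._sharedData\s*=\s*(\{.*?\});</script>',
                      page_source, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))
```

This only makes the parsing robust; getting past post 12 still requires the GraphQL pagination endpoint discussed in the answers.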

Example 2:

import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import json
from datetime import datetime

c = webdriver.Chrome()

c.get("https://www.instagram.com/frank_the_carden/")
time.sleep(1)

elem = c.find_element_by_tag_name("body")

no_of_pagedowns = 20

while no_of_pagedowns:
    elem.send_keys(Keys.PAGE_DOWN)
    time.sleep(0.2)
    no_of_pagedowns-=1

soup = BeautifulSoup(c.page_source, 'html.parser')
body = soup.find('body')
script = body.find('script')
page_json = script.text.strip().replace('window._sharedData =', '').replace(';', '')

data = json.loads(page_json)
for post in data['entry_data']['ProfilePage'][0]['graphql']['user']['edge_owner_to_timeline_media']['edges']:
    timestamp = post['node']['taken_at_timestamp']
    likedby = post['node']['edge_liked_by']['count']
    comments = post['node']['edge_media_to_comment']['count']
    isVideo = post['node']['is_video']
    caption = post['node']['edge_media_to_caption']

    print('Post on :', datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%d %H:%M:%S'))
    print('Liked by :', likedby)
    print('comments :', comments)
    print('caption :', caption)

Example 3:

import time
import json
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
from datetime import datetime
import requests
import urllib3


browser = webdriver.Chrome()

media_url = 'https://www.instagram.com/graphql/query/?query_hash=42323d64886122307be10013ad2dcc44&variables={"id":"%s","first":50,"after":"%s"}'

# first get https://instagram.com to obtain cookies
browser.get('https://www.instagram.com/frank_the_carden/')
browser_cookies = browser.get_cookies()

# set a session with the browser's cookies
session = requests.Session()
for cookie in browser_cookies:
    c = {cookie['name']: cookie['value']}
    session.cookies.update(c)

# get response as JSON
response = session.get(media_url % ('5719699176', ''), verify=False).json()
time.sleep(1)

elem = browser.find_element_by_tag_name("body")

no_of_pagedowns = 20

while no_of_pagedowns:
    elem.send_keys(Keys.PAGE_DOWN)
    time.sleep(0.2)
    no_of_pagedowns-=1

soup = BeautifulSoup(browser.page_source, 'html.parser')
body = soup.find('body')
script = body.find('script')
page_json = script.text.strip().replace('window._sharedData =', '').replace(';', '')
data = json.loads(page_json)
for post in data['entry_data']['ProfilePage'][0]['graphql']['user']['edge_owner_to_timeline_media']['edges']:
    timestamp = post['node']['taken_at_timestamp']
    likedby = post['node']['edge_liked_by']['count']
    comments = post['node']['edge_media_to_comment']['count']
    isVideo = post['node']['is_video']
    caption = post['node']['edge_media_to_caption']

    print('Post on :', datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%d %H:%M:%S'))
    print('Liked by :', likedby)
    print('comments :', comments)
    print('caption :', caption)
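A side note on the %-formatting of media_url in this example: if the variables template keeps its URL-encoded form (%22, %7B, and so on), the % operator tries to interpret those escapes as format specifiers and raises "unsupported format character" / "not all arguments converted" errors. One way around that (a sketch, reusing the query_hash from this example, which may no longer be accepted by Instagram) is to serialize and encode the variables payload explicitly:

```python
import json
from urllib.parse import quote


def build_media_url(user_id, first=50, after=''):
    """Build the graphql/query URL without %-formatting pitfalls.

    The variables payload is serialized with json.dumps and then
    URL-encoded with quote(), so no literal %22/%7B escapes ever meet
    the % operator. The query_hash below is the one from Example 3
    and may no longer be valid.
    """
    variables = json.dumps(
        {"id": user_id, "first": first, "after": after},
        separators=(',', ':'))
    return ('https://www.instagram.com/graphql/query/'
            '?query_hash=42323d64886122307be10013ad2dcc44'
            '&variables=' + quote(variables))
```

The resulting URL can be passed to the cookie-carrying `session.get(...)` exactly as above.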

Example 4:

from random import choice
import json
import time
import requests
from bs4 import BeautifulSoup
from selenium import webdriver

browser = webdriver.Chrome()

browser.get("https://www.instagram.com/frank_the_carden/")

# Selenium script to scroll to the bottom
lenOfPage = browser.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
match = False
while not match:
    lastCount = lenOfPage
    time.sleep(1)
    lenOfPage = browser.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
    if lastCount == lenOfPage:
        match = True

_user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
]

class InstagramScraper:

    def __init__(self, user_agents=None, proxy=None):
        self.user_agents = user_agents
        self.proxy = proxy

    def __random_agent(self):
        if self.user_agents and isinstance(self.user_agents, list):
            return choice(self.user_agents)
        return choice(_user_agents)

    def __request_url(self, url):
        try:
            response = requests.get(url,
                                    headers={'User-Agent': self.__random_agent()},
                                    proxies={'http': self.proxy,
                                             'https': self.proxy})
            response.raise_for_status()
        except requests.HTTPError:
            raise requests.HTTPError('Received non 200 status code from Instagram')
        except requests.RequestException:
            raise requests.RequestException
        else:
            return response.text

    @staticmethod
    def extract_json_data(html):
        soup = BeautifulSoup(html, 'html.parser')
        body = soup.find('body')
        script_tag = body.find('script')
        raw_string = script_tag.text.strip().replace('window._sharedData =', '').replace(';', '')
        return json.loads(raw_string)

    def profile_page_metrics(self, profile_url):
        results = {}
        try:
            response = self.__request_url(profile_url)
            json_data = self.extract_json_data(response)
            metrics = json_data['entry_data']['ProfilePage'][0]['graphql']['user']
        except Exception as e:
            raise e
        else:
            for key, value in metrics.items():
                if key != 'edge_owner_to_timeline_media':
                    if value and isinstance(value, dict):
                        value = value['count']
                        results[key] = value
                    elif value:
                        results[key] = value
        return results

    def profile_page_recent_posts(self, profile_url):
        results = []
        try:
            response = self.__request_url(profile_url)
            json_data = self.extract_json_data(response)
            metrics = json_data['entry_data']['ProfilePage'][0]['graphql']['user']['edge_owner_to_timeline_media']["edges"]
        except Exception as e:
            raise e
        else:
            for node in metrics:
                node = node.get('node')
                if node and isinstance(node, dict):
                    results.append(node)
        return results

from pprint import pprint

k = InstagramScraper()
results = k.profile_page_recent_posts('https://www.instagram.com/frank_the_carden/')
pprint(results)

Question comments:

Not exactly a solution, but it may shed some light; check out this blog post I wrote on scraping 500 posts from Joe Biden's Instagram, as well as the associated code; admittedly it's a bit hacky, but I essentially used Selenium to scroll the page, and on each scroll I collected the entire HTML; at the end I compared all of the HTML and parsed out each post's shortcode from the URLs

Answer 1:

I would call the Instagram GraphQL API directly, as you did in "Example 3". I had working code, but they changed how the query_hash is generated and I could not get it working again; you are probably facing the same problem.

Apart from that, I am currently scraping Instagram data with this python client. You need to supply Instagram credentials for it to work, though.

Comments:

Thanks for your reply! Yes, that was actually my latest attempt, and I think it makes sense. I couldn't get the query_hash to work either. I tried grabbing the values from the Network > XHR > query string parameters and entering them into the code in different ways, but I get these errors: "my directory", line 27, in response = session.get(media_url % ('5719699176', ''), verify=False).json() TypeError: not all arguments converted during string formatting ValueError: unsupported format character 'B' (0x42) at index 97

According to the GraphQL docs, the query_hash should be sha256 applied to the query as a string. But I may be missing something; I never managed to produce the same hash as Instagram, so I keep getting "invalid query hash"

Answer 2:

You can use this query template to get a JSON containing the user's posts: www.instagram.com/graphql/query/?query_id=17888483320059182&variables=%7B%22id%22%3A%22%22%2C%22first%22%3A%7D

See here for more information; I think it might help: https://github.com/MohanSha/InstagramResearch
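The %7B/%22/%7D escapes in that template are just the URL-encoded form of a JSON object like {"id":"...","first":...}. Rather than editing the escapes by hand, the query string can be built as a params dict (a sketch; the query_id is the one quoted above, and whether Instagram still accepts it is a separate question):

```python
import json


def build_query(user_id, first=12):
    """Build the query-string params for the graphql endpoint above.

    Passing this dict to requests as
    requests.get('https://www.instagram.com/graphql/query/', params=build_query(...))
    lets requests produce the %7B/%22 escapes automatically.
    """
    return {
        'query_id': '17888483320059182',  # value quoted in the answer above
        'variables': json.dumps({"id": user_id, "first": first},
                                separators=(',', ':')),
    }
```

In practice the request also needs a valid session cookie and User-Agent header, as in Examples 3 and 4.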

Comments:

Answer 3:

I have been searching for an answer just like you, and I found the best way is to use the following steps:

First, use the requests library with the Instagram query URL pasted below:

https://www.instagram.com/graphql/query/?query_hash=42323d64886122307be10013ad2dcc44&variables=%22id%22:%22<profile_id>%22,%22first%22:<num_ofpost>,%22after%22:%22<end_cursor>%22

<profile_id>: your Instagram profile ID. You can grab it by adding /?__a=1 to the end of the profile link, then looking for this data path:

['data']['user']['edge_owner_to_timeline_media']['edges'][0]['node']['owner']['id']

<num_ofpost>: how many posts you want each JSON query to return. 50 at most. If you want more, use the second step.

<end_cursor>: this hash indicates whether the posts have a next page. The path is:

['data']['user']['edge_owner_to_timeline_media']['page_info']['end_cursor']

Then, once you have successfully fetched all the data you need, you can keep it in JSON format with this code:

import json
import requests

profilq = requests.get('https://www.instagram.com/graphql/query/?query_hash=42323d64886122307be10013ad2dcc44&variables=%22id%22:%22<profile_id>%22,%22first%22:<num_ofpost>,%22after%22:%22<end_cursor>%22')
data = profilq.json()

Second, use recursion to help you fetch the posts. Since a single query can only carry 50 posts, you need to create some kind of recursive function that re-requests the JSON and appends it to the appropriate table.
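The recursion step above can also be sketched as a loop; fetch_page below is a hypothetical stand-in for whatever performs one JSON request (for example the requests.get call shown earlier), and only the has_next_page / end_cursor bookkeeping is the point:

```python
def collect_all_posts(fetch_page):
    """Page through edge_owner_to_timeline_media until has_next_page is False.

    fetch_page(after) must return the parsed JSON for one query, where
    `after` is the end_cursor from the previous page ('' for the first).
    Returns a flat list of the 'node' dicts from every page.
    """
    posts, after = [], ''
    while True:
        media = fetch_page(after)['data']['user']['edge_owner_to_timeline_media']
        posts.extend(edge['node'] for edge in media['edges'])
        page_info = media['page_info']
        if not page_info['has_next_page']:
            return posts
        after = page_info['end_cursor']
```

The data paths ('data' -> 'user' -> ...) are the ones given in this answer; a real run would pass a fetch_page that performs the authenticated HTTP request.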

A caveat: sometimes a blank caption causes an index error. You can use try and except to get rid of this. I like to catch IndexError and replace the caption with a string:

try:
    ...  # your caption-parsing code here
except IndexError:
    caption = '*NO CAPTION PROVIDED*'

I tested the query link as of 2020-12-07. If you want to copy my approach, you can check my GitHub link here.

Comments:
