Limited number of scraped data?

Submitted by 时光总嘲笑我的痴心妄想 on 2021-01-07 02:51:06

Question


I am scraping a website and everything seems to work fine from today's news back to articles published in 2015/2016; anything older than that I am not able to scrape. Could you please tell me whether something has changed? I should get 672 pages of titles and snippets from this page:

https://catania.liveuniversity.it/attualita/

but I only get approximately 158.

The code that I am using is:

import bs4, requests
import pandas as pd

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}

page_num = 1
website = "https://catania.liveuniversity.it/attualita/"

titles, dates = [], []  # accumulate results across pages

while True:
    r = requests.get(website, headers=headers)
    soup = bs4.BeautifulSoup(r.text, 'html.parser')

    # collect the titles and dates found on the current page
    titles.extend(t.get_text(strip=True) for t in soup.find_all('h2'))
    dates.extend(d.get_text(strip=True) for d in soup.find_all('span', attrs={'class': 'updated'}))

    # keep paging as long as numbered pagination links are present
    # (note: page_num starts at 1, so the first numbered request re-fetches page 1)
    if soup.find_all('a', attrs={'class': 'page-numbers'}):
        website = f"https://catania.liveuniversity.it/attualita/page/{page_num}"
        page_num += 1
        print(page_num)
    else:
        break

df = pd.DataFrame(list(zip(dates, titles)),
                  columns=['Date', 'Titles'])

I think there have been some changes in the tags (for example in the next-page button, or in the date/title tags).
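One quick way to test that hypothesis is to read the pagination block itself and see how many pages the site currently advertises. The sketch below assumes the pagination links still carry the `page-numbers` class used in the code above; if the markup has changed, the selector will need to be adjusted.

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get("https://catania.liveuniversity.it/attualita/", headers=headers)
soup = BeautifulSoup(r.text, 'html.parser')

# Numeric pagination labels normally include the last page, so their maximum
# is the real page count (the 'page-numbers' class is taken from the code above).
numbers = [int(a.text) for a in soup.select('a.page-numbers')
           if a.text.strip().isdigit()]
print("last page:", max(numbers) if numbers else 1)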


Answer 1:


import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
import pandas as pd


def main(req, num):
    # fetch one numbered listing page of the archive
    r = req.get(
        "https://catania.liveuniversity.it/attualita/page/{}/".format(num))
    soup = BeautifulSoup(r.content, 'html.parser')
    try:
        # each article card sits in a div.col-lg-8.col-md-8.col-sm-8 block:
        # the date is in span.updated, the title is the second <a>, and the
        # snippet is the entry-content text
        data = [(x.select_one("span.updated").text,
                 x.findAll("a")[1].text,
                 x.select_one("div.entry-content").get_text(strip=True))
                for x in soup.select("div.col-lg-8.col-md-8.col-sm-8")]
        return data
    except AttributeError:
        # the expected markup was missing (e.g., a page past the end);
        # log the URL and let the caller skip this page
        print(r.url)
        return False


# crawl all 672 pages concurrently, reusing one Session for connection pooling
with ThreadPoolExecutor(max_workers=30) as executor:
    with requests.Session() as req:
        fs = [executor.submit(main, req, num) for num in range(1, 673)]
        allin = []
        for f in fs:
            f = f.result()
            if f:
                allin.extend(f)
        df = pd.DataFrame.from_records(
            allin, columns=["Date", "Title", "Content"])
        print(df)
        df.to_csv("result.csv", index=False)
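Unlike the loop in the question, this approach does not depend on a next-page button at all: it builds each numbered page URL directly (pages 1 to 672), fetches them concurrently through a thread pool, and reuses a single `requests.Session` so connections are pooled. Pages whose markup does not match (the `AttributeError` branch) are logged and skipped, so a layout change on some pages no longer stops the whole crawl.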


Source: https://stackoverflow.com/questions/64601712/limited-number-of-scraped-data
