Python web scraping: downloading novels from Qidian (起点)

Published 2020-09-09 23:41



Let's build another scraper with Python, this time targeting fantasy (玄幻) novels.
Target site: Qidian (起点)
Modules used: requests, bs4 (BeautifulSoup), and os
Basic approach:
Fetch the HTML of the target page, load it into a BeautifulSoup object, then extract what we need from it.

First, find the endpoint: https://www.qidian.com/xuanhuan. As the browser's network panel shows, the request method is GET.
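A quick way to confirm this before writing any parsing code (a minimal sketch, my own addition, not from the original post) is to issue the GET and inspect the response:

import requests

url = "https://www.qidian.com/xuanhuan"
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
res = requests.get(url, headers=headers)
# 200 means the GET succeeded; the Content-Type tells us we got HTML back
print(res.status_code, res.headers.get("Content-Type"))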
Now fetch the HTML of the fantasy-novel page and load it into a BeautifulSoup object to work with:

import requests
from bs4 import BeautifulSoup

url = "https://www.qidian.com/xuanhuan"
method = 'get'
# A browser-like User-Agent plus a Referer gets past simple anti-bot checks
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "Referer": "https://www.qidian.com"}
res = requests.get(url, headers=headers)
res.encoding = 'utf-8'
# print(res.text)
soup = BeautifulSoup(res.text, 'html.parser')
# .book-list matches the ranking lists on the fantasy page
xuanhuan = soup.select('.book-list')
print('book-list:', xuanhuan)
number = 0

The headers are a simple workaround for basic anti-scraping checks.
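If you end up making many requests, a requests.Session is a natural extension here (a sketch of my own, not from the original post): it stores the headers once and reuses the underlying connection across calls:

import requests

session = requests.Session()
# Headers set on the session are sent with every subsequent request
session.headers.update({
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "https://www.qidian.com",
})
res = session.get("https://www.qidian.com/xuanhuan")
print(res.status_code)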
Since there are many pages and links to fetch, it is convenient to wrap the BeautifulSoup setup directly in a class:

from bs4 import BeautifulSoup
import requests

class soupx:
    """Fetch a URL and return it as a parsed BeautifulSoup object."""
    def soup(self, method, url):
        headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "Referer": "https://www.qidian.com"}
        res = requests.request(method, url, headers=headers)
        res.encoding = 'utf-8'
        soup = BeautifulSoup(res.text, 'html.parser')
        return soup
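With the class in place, fetching and parsing any page is a single call. For example (a small usage sketch of my own, assuming the class above is on the import path):

# Grab the fantasy page through the helper and peek at its <title>
soup = soupx().soup('get', 'https://www.qidian.com/xuanhuan')
print(soup.title)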

The full script:

import os
from reptile.soup4 import soupx   # the BeautifulSoup wrapper class defined above
import time

path = 'D:/xiaoshuo/'
# The directory may already exist on Windows, so check before creating it
if os.path.exists(path):
    print('directory already exists')
    flag = 1
else:
    os.makedirs(path)
    flag = 0


url = "https://www.qidian.com/xuanhuan"
method = 'get'
# soupx wraps the requests/BeautifulSoup boilerplate from the first snippet,
# so one call fetches and parses the page
soup = soupx().soup(method=method, url=url)
xuanhuan = soup.select('.book-list')
print('book-list:', xuanhuan)
number = 0
for book in xuanhuan:
    # Each .book-list block holds a daily/weekly/monthly top-ten ranking
    print('book:', book)
    soup1 = book.select('a')
    # Drop three extra <a> tags so that only book-title links remain
    soup1.pop(1)
    soup1.pop(1)
    soup1.pop(1)
    number += 1
    print('soup1:', soup1)
    time.sleep(0.5)

    for article in soup1:
        # Extract the book title and its link
        print('article:', article)
        name = article.text
        href = article['href']
        href_article = 'https:' + href  # the link is protocol-relative, so prepend https
        print(name, ":", href_article)
        file = os.path.join(path, name)
        print(file)

        # Fetch the book page and collect its chapter list
        article_soup = soupx().soup(method=method, url=href_article)
        chapter = article_soup.select('.volume')
        print('chapter:', chapter)
        data_list = chapter[0]('li')
        # print(data_list)

        # Open (or create) one text file per book
        file_name = open(file + '.txt', 'w+', encoding='utf-8')
        for data in data_list:
            # Fetch each chapter and pull out its title and body text
            chapter_href = "https:" + data.select('a')[0]['href']
            # print(chapter_href)
            soup = soupx().soup(method='get', url=chapter_href)
            # print(soup)
            chapter_name = soup.select('.content-wrap')[0].text
            # print(chapter_name)
            chapter_text = soup.select('.read-content')[0].text
            # print(chapter_text)
            file_name.writelines(chapter_name)
            print(chapter_name)
            file_name.writelines(chapter_text + '\n')
            time.sleep(0.5)
        file_name.close()
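One caveat worth adding (my own note, not from the original post): name comes straight from the page text, and Windows rejects file names containing characters such as \ / : * ? " < > |. A minimal sanitizing helper could be applied to name before building the path:

import re

def safe_filename(name):
    # Replace characters that Windows forbids in file names with underscores
    return re.sub(r'[\\/:*?"<>|]', '_', name).strip()

# usage: file = os.path.join(path, safe_filename(name))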

Partial results (the original post included a screenshot here).



Author: python是我的菜

Link: https://www.pythonheidong.com/blog/article/514917/66d84a0ea23a19d105c3/

Source: python黑洞网
