Posted on 2021-02-28 16:50
When using Scrapy, I like to experiment with the scrapy response object inside a Jupyter notebook like this:
import requests
from scrapy.http import TextResponse
res = requests.get(my_url)
response = TextResponse(res.url, body=res.text, encoding='utf-8')
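With the response built this way, the usual Scrapy selectors can be tried out interactively in the notebook, for example (selectors shown are just illustrative):

response.css('title::text').get()
response.xpath('//a/@href').getall()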
For my current project I must use the scrapy-selenium middleware, which allows you to replace the built-in Scrapy request like this:
from scrapy_selenium import SeleniumRequest
yield SeleniumRequest(url=url, callback=self.parse_result)
I am wondering if there is any way to interact with the scrapy-selenium response object in a Jupyter notebook, as I have done with regular Scrapy above.
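One possible approach (a minimal sketch, not the scrapy-selenium middleware itself) is to drive Selenium directly in the notebook and wrap the rendered page in a Scrapy HtmlResponse, which approximates the kind of response the middleware hands to the callback. Here `my_url` is the same placeholder as above, and a locally installed Chrome driver is assumed:

from selenium import webdriver
from scrapy.http import HtmlResponse

driver = webdriver.Chrome()   # assumes a local Chrome/chromedriver setup; Firefox() works too
driver.get(my_url)            # my_url is a placeholder, as above

# Wrap the rendered page source so Scrapy selectors can be used on it
response = HtmlResponse(
    driver.current_url,
    body=driver.page_source,
    encoding='utf-8',
)

# response.css(...) / response.xpath(...) now work as in a spider,
# and the driver object remains available for clicks, scrolling, etc.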
Author: 黑洞官方问答小能手
Link: https://www.pythonheidong.com/blog/article/863992/a82fc5d26f7a4cc5e6e7/
Source: python黑洞网