Scrapy's item pipeline is a very important component: its main job is to take the items returned by a spider and persist them to a database, a file, or some other storage. Below is a quick look at how pipelines are used.
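Before the MongoDB example, here is a minimal sketch of a pipeline that appends each item to a JSON Lines file, just to show the basic contract every pipeline follows (the class and file names here are illustrative, not part of the project below):

import json

class JsonLinesWriterPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('items.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # called once for every item the spider yields
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()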

Example 1:

  

Item definitions (items.py)

import scrapy


class ZhihuuserItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    id = scrapy.Field()
    name = scrapy.Field()
    avatar_url = scrapy.Field()
    headline = scrapy.Field()
    description = scrapy.Field()
    url = scrapy.Field()
    url_token = scrapy.Field()
    gender = scrapy.Field()
    cover_url = scrapy.Field()
    type = scrapy.Field()
    badge = scrapy.Field()
    answer_count = scrapy.Field()
    articles_count = scrapy.Field()
    commercial_question = scrapy.Field()
    favorite_count = scrapy.Field()
    favorited_count = scrapy.Field()
    follower_count = scrapy.Field()
    following_columns_count = scrapy.Field()
    following_count = scrapy.Field()
    pins_count = scrapy.Field()
    question_count = scrapy.Field()
    thank_from_count = scrapy.Field()
    thank_to_count = scrapy.Field()
    thanked_count = scrapy.Field()
    vote_from_count = scrapy.Field()
    vote_to_count = scrapy.Field()
    voteup_count = scrapy.Field()
    following_favlists_count = scrapy.Field()
    following_question_count = scrapy.Field()
    following_topic_count = scrapy.Field()
    marked_answers_count = scrapy.Field()
    mutual_followees_count = scrapy.Field()
    participated_live_count = scrapy.Field()
    locations = scrapy.Field()
    educations = scrapy.Field()
    employments = scrapy.Field()

items
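For context, a rough sketch of how a spider might fill these fields. It assumes the Zhihu member API returns a JSON object whose keys match the field names declared above; the spider name and callback are illustrative only:

import json
import scrapy
from zhihuuser.items import ZhihuuserItem


class UserSpider(scrapy.Spider):
    name = 'zhihu_user_demo'

    def parse_user(self, response):
        result = json.loads(response.text)
        item = ZhihuuserItem()
        # copy every key in the API response that matches a declared field
        for field in item.fields:
            if field in result:
                item[field] = result.get(field)
        yield item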

Basic settings for writing to MongoDB

# MongoDB connection settings
MONGO_URL = '172.16.5.239'
MONGO_PORT = 27017
MONGO_DB = 'zhihuuser'

# With ROBOTSTXT_OBEY = False, Scrapy ignores each site's robots.txt
# (e.g. www.baidu.com/robots.txt) and fetches whatever you request.
ROBOTSTXT_OBEY = False

# Enable the pipeline so its write operations are executed
ITEM_PIPELINES = {
    'zhihuuser.pipelines.MongoDBPipeline': 300,
}

settings.py
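The number 300 is the pipeline's order: when several pipelines are enabled, Scrapy runs them from the lowest value to the highest (values are conventionally in the 0-1000 range). A quick sketch with a second, purely hypothetical pipeline:

ITEM_PIPELINES = {
    'zhihuuser.pipelines.ValidationPipeline': 100,  # hypothetical: would run first
    'zhihuuser.pipelines.MongoDBPipeline': 300,     # runs after it
}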

pipelines.py:
  1. First, read the database address, port and database name from the settings file (the database is created automatically if it does not exist).
  2. Connect to the database with that information.
  3. Write the data into the database.
  4. Close the database connection.
  Note: opening and closing the connection each happen only once, while the write operation runs once per item.
import pymongo


class MongoDBPipeline(object):
    """Write items into MongoDB."""

    def __init__(self, mongourl, mongoport, mongodb):
        '''
        Store the MongoDB url, port and database name.
        :param mongourl:
        :param mongoport:
        :param mongodb:
        '''
        self.mongourl = mongourl
        self.mongoport = mongoport
        self.mongodb = mongodb

    @classmethod
    def from_crawler(cls, crawler):
        '''
        1. Read MONGO_URL, MONGO_PORT and MONGO_DB from settings.py.
        :param crawler:
        :return:
        '''
        return cls(
            mongourl=crawler.settings.get("MONGO_URL"),
            mongoport=crawler.settings.get("MONGO_PORT"),
            mongodb=crawler.settings.get("MONGO_DB")
        )

    def open_spider(self, spider):
        '''
        1. Connect to MongoDB (called once when the spider opens).
        :param spider:
        :return:
        '''
        self.client = pymongo.MongoClient(self.mongourl, self.mongoport)
        self.db = self.client[self.mongodb]

    def process_item(self, item, spider):
        '''
        1. Write the item into the database (called once per item).
        :param item:
        :param spider:
        :return:
        '''
        name = item.__class__.__name__
        # self.db[name].insert_one(dict(item))
        # upsert on url_token so a re-crawled user is updated instead of duplicated
        self.db['user'].update_one(
            {'url_token': item['url_token']},
            {'$set': dict(item)},
            upsert=True
        )
        return item

    def close_spider(self, spider):
        '''
        1. Close the database connection (called once when the spider closes).
        :param spider:
        :return:
        '''
        self.client.close()
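After a crawl you can check the result directly with pymongo, assuming the same host, port and database as in settings.py (the url_token value below is just a placeholder):

import pymongo

client = pymongo.MongoClient('172.16.5.239', 27017)
db = client['zhihuuser']

# how many users have been stored so far
print(db['user'].count_documents({}))

# fetch a single user by url_token
print(db['user'].find_one({'url_token': 'some-url-token'}))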

  
