• Disable any triggers on the table. (A sketch covering this and the other schema-level steps follows this list.)
  • Drop indexes before starting the import and re-create them afterwards. (It takes much less time to build an index in one pass than it does to add the same data to it progressively, and the resulting index is much more compact.)
  • Change the table to UNLOGGED, without indexes, then change it back to logged and add the indexes. Unfortunately, PostgreSQL 9.4 has no support for changing tables from UNLOGGED to logged; 9.5 adds ALTER TABLE ... SET LOGGED to permit you to do this.
  • Remove foreign key constraints. If doing the import within a single transaction, it's safe to drop the constraints, do the import, and re-create the constraints before committing. Do not do this if the import is split across multiple transactions, as you might introduce invalid data.
  • If possible, use COPY instead of INSERTs. (A COPY and batching example follows this list.)
  • If you can't use COPY, consider multi-valued INSERTs where practical. Don't list too many rows in a single VALUES clause, though; those values have to fit in memory a couple of times over, so keep it to a few hundred per statement.
  • Batch your inserts into explicit transactions, doing hundreds of thousands or millions of inserts per transaction. There's no practical limit AFAIK, but batching will let you recover from an error by marking the start of each batch in your input data.
  • Increase maintenance_work_mem: this will help to speed up CREATE INDEX commands and ALTER TABLE ... ADD FOREIGN KEY commands. It won't do much for COPY itself, so this advice is only useful when you are using one or both of the above techniques. (Example session settings follow this list.)
  • Use synchronous_commit=off and a huge commit_delay to reduce fsync() costs. This won't help much if you've batched your work into big transactions, though.
  • INSERT or COPY in parallel from several connections. How many depends on your hardware's disk subsystem; as a rule of thumb, you want one connection per physical hard drive if using direct attached storage.
  • Set a high checkpoint_segments value and enable log_checkpoints (in 9.5 and later, checkpoint_segments is replaced by max_wal_size). Look at the PostgreSQL logs and make sure they aren't complaining about checkpoints occurring too frequently.
  • If and only if you don't mind losing your entire PostgreSQL cluster (your database and any others on the same cluster) to catastrophic corruption if the system crashes during the import, you can stop Pg, set fsync=off, start Pg, do your import, then (vitally) stop Pg and set fsync=on again. See WAL configuration. Do not do this if there is already any data you care about in any database on your PostgreSQL install. If you set fsync=off you can also set full_page_writes=off; again, just remember to turn it back on after your import to prevent database corruption and data loss. See non-durable settings in the Pg manual. (A configuration sketch follows this list.)
  • Run ANALYZE afterwards, so the query planner has up-to-date statistics for the newly loaded data.
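
To make the schema-level tips concrete, here is a minimal sketch of the prepare/load/restore cycle. The table, index, and constraint names (measurement, measurement_ts_idx, measurement_sensor_fkey) are hypothetical placeholders, not anything from the original post:

    -- Before the load: strip everything that turns each row into extra work.
    ALTER TABLE measurement DISABLE TRIGGER ALL;                     -- triggers off
    DROP INDEX IF EXISTS measurement_ts_idx;                         -- indexes off
    ALTER TABLE measurement DROP CONSTRAINT measurement_sensor_fkey; -- FK off
    ALTER TABLE measurement SET UNLOGGED;                            -- 9.5+ only

    -- ... bulk load here (COPY / batched INSERTs, see the next example) ...

    -- After the load: restore durability and integrity, each in one pass.
    ALTER TABLE measurement SET LOGGED;                              -- 9.5+ only
    CREATE INDEX measurement_ts_idx ON measurement (ts);
    ALTER TABLE measurement
        ADD CONSTRAINT measurement_sensor_fkey
        FOREIGN KEY (sensor_id) REFERENCES sensor (id);
    ALTER TABLE measurement ENABLE TRIGGER ALL;
    ANALYZE measurement;                                             -- fresh planner stats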
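
For the load itself, a sketch of the three data-path tips (COPY, multi-valued INSERTs, explicit transaction batches); the file path, columns, and values are made up for illustration:

    -- Fastest: COPY from a file (server-side path shown; psql's \copy
    -- streams the same data from the client machine instead).
    COPY measurement (sensor_id, ts, value)
        FROM '/tmp/measurements.csv' WITH (FORMAT csv, HEADER true);

    -- If COPY isn't an option: multi-valued INSERTs, a few hundred rows
    -- per statement, many statements per explicit transaction.
    BEGIN;
    INSERT INTO measurement (sensor_id, ts, value) VALUES
        (1, '2016-01-01 00:00:00', 20.1),
        (1, '2016-01-01 00:01:00', 20.3),
        (2, '2016-01-01 00:00:00', 19.8);
    -- ... more INSERT statements in this batch ...
    COMMIT;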
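
The session-level settings affect only the connection doing the load; the values here are illustrative starting points, not tuned recommendations:

    SET maintenance_work_mem = '1GB';  -- speeds up CREATE INDEX / ADD FOREIGN KEY
    SET synchronous_commit = off;      -- commits no longer wait for WAL flush
    SET commit_delay = 100000;         -- microseconds (the maximum); superuser only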
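
The server-level knobs might look like this in postgresql.conf; the non-durable section is only for a throwaway cluster, exactly as the warning above says, and must be reverted after the import:

    # Checkpoint tuning (checkpoint_segments is for 9.4 and earlier;
    # 9.5+ uses max_wal_size instead). 64 is just an illustrative value.
    checkpoint_segments = 64
    log_checkpoints = on

    # DANGEROUS non-durable settings: only if the entire cluster is disposable.
    # Stop Pg, set these, start Pg, import, then stop Pg and set them back to on.
    fsync = off
    full_page_writes = off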

References:

http://stackoverflow.com/questions/12206600/how-to-speed-up-insertion-performance-in-postgresql

https://www.postgresql.org/docs/9.4/static/populate.html
