Sometimes I have to re-import data for a project, which means reading about 3.6 million rows into a MySQL table (currently InnoDB, but I am not really tied to this engine). "LOAD DATA INFILE ..." has proved to be the fastest solution, but it comes with a trade-off:
- when importing without keys, the import itself takes about 45 seconds, but the key creation afterwards takes ages (it has already been running for 20 minutes...)
- importing with keys on the table makes the import itself much slower
There are keys over 3 fields of the table, referencing numeric fields. Is there any way to accelerate this?
Another issue: when I terminate the process that started a slow query, the query keeps running on the database. Is there any way to terminate the query without restarting mysqld?
Thanks a lot!
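On the side question: disconnecting the client does not abort a statement that is already executing on the server. From a second connection you can kill just the statement; a minimal sketch, assuming the runaway thread id turns out to be 1234 (a placeholder):

-- find the id of the runaway statement in the Id column
show processlist;
-- abort only the statement, keeping the connection alive
kill query 1234;
-- or drop the whole connection
kill connection 1234;

Note that killing a large InnoDB statement triggers a rollback, which can itself take a while.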
If you're using InnoDB and bulk loading, here are a few tips:
Sort your CSV file into the primary-key order of the target table: remember InnoDB uses a clustered primary key, so it will load faster if the data is sorted!
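If the rows you are importing already live in a database, one way to produce a pre-sorted file is to dump them ordered by the primary key. A minimal sketch, assuming placeholder names source_table, id, col_a, col_b, and a path that the FILE privilege and secure_file_priv allow:

-- dump rows in primary-key order so the subsequent LOAD DATA
-- appends to the clustered index sequentially
select id, col_a, col_b
  into outfile '/tmp/sorted.csv'
  fields terminated by ',' lines terminated by '\n'
from source_table
order by id;

For a file that comes from outside MySQL, a plain text sort on the key column achieves the same effect.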
A typical LOAD DATA INFILE sequence I use:
truncate <table>;
set autocommit = 0;
load data infile <path> into table <table>...
commit;
Other optimisations you can use to boost load times (a combined session sketch follows this list):
set unique_checks = 0;
set foreign_key_checks = 0;
set sql_log_bin = 0;
Split the CSV file into smaller chunks.
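Putting the pieces together, a full load session might look like the sketch below. The path, table name, and CSV format options are placeholders, and setting sql_log_bin requires sufficient privileges:

-- disable per-row checks and binary logging for the duration of the load
set unique_checks = 0;
set foreign_key_checks = 0;
set sql_log_bin = 0;
set autocommit = 0;
load data infile '/tmp/sorted.csv'
  into table target_table
  fields terminated by ',' lines terminated by '\n';
commit;
-- restore the defaults afterwards
set unique_checks = 1;
set foreign_key_checks = 1;
set sql_log_bin = 1;
set autocommit = 1;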
Typical import stats I have observed during bulk loads:
3.5 - 6.5 million rows imported per min
210 - 400 million rows per hour