I have a fairly large csv file (at least for the web) that I don't have control of. It has about 100k rows in it, and will only grow larger.
I'm using the Drupal Feeds module to create nodes based on this data, and their parser batches the parsing in groups of 50 lines. However, their parser doesn't handle quotation marks properly, and fails to parse about 60% of the csv file. fgetcsv works but doesn't batch things as far as I can tell.
While trying to read the entire file with fgetcsv, PHP eventually runs out of memory. Therefore I would like to be able to break things up into smaller chunks. Is this possible?
fgetcsv() works by reading one line at a time from a given file pointer. If PHP is running out of memory, perhaps you are trying to parse the whole file at once, putting it all into a giant array. The solution would be to process it line by line without storing it in a big array.
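A minimal sketch of that line-by-line approach (the function name and the row handling are illustrative, not part of any particular API): memory use stays flat because only one row is held at a time.

```php
<?php
// Count (or otherwise process) CSV rows one at a time.
// Each call to fgetcsv() reads a single line, so memory use
// does not grow with the size of the file.
function countCsvRows(string $path): int
{
    $handle = fopen($path, 'r');
    if ($handle === false) {
        throw new RuntimeException("Cannot open $path");
    }
    $count = 0;
    while (($row = fgetcsv($handle)) !== false) {
        // $row is a numeric array of fields for one line;
        // process it here, then let it be discarded before the next read.
        $count++;
    }
    fclose($handle);
    return $count;
}
```

Note that fgetcsv() handles quoted fields (including embedded commas) correctly, which is what the Feeds parser was getting wrong.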
To answer the batching question more directly: read n lines from the file, then use ftell() to find the location in the file where you ended. Make a note of this point, and then you can return to it at some point in the future by calling fseek() before fgetcsv().
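The ftell()/fseek() idea above can be sketched like this (a hypothetical helper, not Feeds' own API): each call reads up to a batch of rows starting at a saved byte offset, and returns the rows plus the new offset to persist between requests.

```php
<?php
// Read up to $batchSize CSV rows starting at byte $offset.
// Returns [rows, newOffset]; store newOffset (e.g. in batch state)
// and pass it back in on the next call to resume where you left off.
function readCsvBatch(string $path, int $offset, int $batchSize): array
{
    $handle = fopen($path, 'r');
    if ($handle === false) {
        throw new RuntimeException("Cannot open $path");
    }
    fseek($handle, $offset);       // jump to where the last batch ended
    $rows = [];
    while (count($rows) < $batchSize && ($row = fgetcsv($handle)) !== false) {
        $rows[] = $row;
    }
    $newOffset = ftell($handle);   // byte position to save for the next batch
    fclose($handle);
    return [$rows, $newOffset];
}
```

Because the offset is a byte position rather than a line number, resuming never requires re-reading the earlier part of the file.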