I've seen many questions about how to efficiently use PHP to download files rather than allowing direct HTTP requests (to keep files secure, to track downloads, etc.).
The answer is almost always: PHP readfile().
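For reference, the pattern those answers usually recommend looks something like this (a minimal sketch; the file path and MIME type are placeholders):

<?php
// Minimal sketch of the usual readfile() download pattern.
// $file is a hypothetical path; adjust the headers for your content.
$file = '/path/to/protected/big-file.zip';

if (!is_file($file)) {
    http_response_code(404);
    exit;
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));

readfile($file); // reads the file and writes it to the output buffer
exit;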
BUT, although it works great during testing with huge files, when it's on a live site with hundreds of users, downloads start to hang and PHP memory limits are exhausted.
So what is it about how readfile() works that causes memory to blow up so badly when traffic is high? I thought it was supposed to bypass heavy use of PHP memory by writing directly to the output buffer?
(To clarify, I'm looking for a "why", not a "what can I do". I think Apache's mod_xsendfile is the best way to circumvent the problem.)
Description
int readfile ( string $filename [, bool $use_include_path = false [, resource $context ]] )
Reads a file and writes it to the output buffer.
PHP has to read the file and write it to the output buffer. So, for a 300 MB file, no matter what implementation you write (many small segments or one big chunk), PHP eventually has to read through all 300 MB of the file.
If multiple users have to download files at the same time, there will be a problem. (On a shared server, hosting providers limit the memory allotted to each hosting account. With such limited memory, buffering whole files is not a good idea.)
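As an aside, the usual way to keep memory bounded when something like mod_xsendfile is not available is to read and flush the file in small chunks instead of letting it accumulate in PHP's output buffer. A rough sketch (the path and chunk size are illustrative):

<?php
// Sketch: stream a file in small chunks and flush each one, so PHP's
// memory use stays around the chunk size rather than the file size.
$file = '/path/to/protected/big-file.zip'; // illustrative path

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($file));

// Turn off PHP's userland output buffering so chunks are not accumulated.
while (ob_get_level() > 0) {
    ob_end_flush();
}

$fp = fopen($file, 'rb');
while (!feof($fp)) {
    echo fread($fp, 8192); // 8 KB at a time (illustrative chunk size)
    flush();               // push the chunk out to the web server / client
}
fclose($fp);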
I think using a direct link to download the file is a much better approach for big files.
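For completeness, the mod_xsendfile approach mentioned in the question amounts to the same idea: PHP only emits a header and exits, and Apache streams the file itself, so PHP memory is never involved. A sketch, assuming mod_xsendfile is installed and XSendFilePath permits the directory:

<?php
// Sketch: hand the actual file transfer off to Apache via mod_xsendfile.
$file = '/path/to/protected/big-file.zip'; // must be allowed by XSendFilePath

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('X-Sendfile: ' . $file); // Apache intercepts this and serves the file
exit;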