I'm trying to track the memory usage of a script that processes URLs. The basic idea is to check that there's a reasonable buffer before adding another URL to a cURL multi handle. I'm using a 'rolling cURL' approach that processes each URL's data while the multi handle is running. This means I can keep N connections active by adding a new URL from a pool each time an existing URL finishes processing and is removed.
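For context, here is a minimal sketch of what I mean by 'rolling cURL', with the memory check folded in. The names `$urlPool`, `$maxActive`, and `MEMORY_BUFFER` are hypothetical, and the ceiling value is just my self-imposed 25M:

```php
<?php
// Sketch of a 'rolling cURL' loop: keep up to $maxActive transfers running,
// topping the multi handle up from the pool as transfers finish.
const MEMORY_BUFFER = 25 * 1024 * 1024; // self-imposed 25M ceiling

$urlPool   = ['http://example.com/a', 'http://example.com/b'];
$maxActive = 10;
$mh        = curl_multi_init();
$active    = 0;

$addUrl = function () use (&$urlPool, $mh, &$active) {
    // Skip adding when the pool is empty or headroom is gone; note a skipped
    // URL is simply dropped in this sketch rather than retried later.
    if (!$urlPool || memory_get_usage(true) > MEMORY_BUFFER) {
        return;
    }
    $ch = curl_init(array_shift($urlPool));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $active++;
};

// Prime the multi handle with the first batch.
for ($i = 0; $i < $maxActive; $i++) {
    $addUrl();
}

do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
    // Each completed transfer frees a slot, so pull in the next URL.
    while ($info = curl_multi_info_read($mh)) {
        $ch   = $info['handle'];
        $data = curl_multi_getcontent($ch); // process the returned body here
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
        $active--;
        $addUrl();
    }
} while ($running > 0);

curl_multi_close($mh);
```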
I've used memory_get_usage() with some positive results. Adding the real_usage flag helped (I'm not really clear on the difference between 'system' memory and 'emalloc' memory, but system shows larger numbers). memory_get_usage() does ramp up as URLs are added, then drops back down as the URL set is depleted. However, I just exceeded the 32M limit even though my last memory check was ~18M.
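To make the two readings concrete, this is the kind of probe I've been using; the exact byte figures will of course vary per run:

```php
<?php
// 'internal' (emalloc) vs 'real'/'system' usage: the real figure counts whole
// segments Zend has reserved from the OS, so it is always >= the emalloc figure.
$internal = memory_get_usage(false); // bytes actually allocated by the script
$real     = memory_get_usage(true);  // bytes reserved from the system
printf("internal: %d bytes, real: %d bytes\n", $internal, $real);

// Allocating a large string raises both figures...
$buf = str_repeat('x', 4 * 1024 * 1024);
printf("after 4M alloc - internal: %d, real: %d\n",
       memory_get_usage(false), memory_get_usage(true));

// ...but freeing it only guarantees the internal figure drops; the manager
// may keep the reserved segments around for reuse.
unset($buf);
printf("after unset - internal: %d\n", memory_get_usage(false));
```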
I poll the memory usage each time cURL multi signals that a request has returned. Since multiple requests may return at the same time, there's a chance a bunch of URLs returned data simultaneously and jumped the memory usage by that ~14M. However, if memory_get_usage() is accurate, I guess that's what's happening.
[Update: I should have run more tests before asking, I guess. I increased PHP's memory limit (but left the 'safe' amount the same in the script), and the reported memory usage did jump from below my self-imposed limit of 25M to over 32M. Then, as expected, it slowly ramped back down as URLs were no longer added. But I'll leave the question up: is this the right way to do this?]
Can I trust memory_get_usage() in this way? Are there better alternatives for getting memory usage (I've seen some scripts parse the output of shell commands)?
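One variant I've considered, rather than hard-coding a number, is deriving the buffer from PHP's own memory_limit. This is just a sketch; `memoryLimitBytes` and `memoryHeadroom` are hypothetical helper names, and the 8M margin is arbitrary:

```php
<?php
// Convert the memory_limit ini value (e.g. '128M', '1G', '-1') to bytes.
function memoryLimitBytes(): int
{
    $limit = ini_get('memory_limit');
    if ($limit === '-1') {
        return PHP_INT_MAX; // unlimited
    }
    $unit  = strtoupper(substr($limit, -1));
    $value = (int)$limit;
    switch ($unit) {
        case 'G': return $value * 1024 ** 3;
        case 'M': return $value * 1024 ** 2;
        case 'K': return $value * 1024;
        default:  return $value; // plain byte count
    }
}

// Headroom left before the limit, measured against 'real' usage, since the
// limit is enforced on memory taken from the system, not emalloc'd bytes.
function memoryHeadroom(): int
{
    return memoryLimitBytes() - memory_get_usage(true);
}

// e.g. only add another URL while at least 8M of headroom remains
$safe = memoryHeadroom() > 8 * 1024 * 1024;
var_dump($safe);
```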
Here's how real_usage works:
Zend's memory manager does not use system malloc for every block it needs. Instead, it allocates a big block of system memory (in increments of 256K; this can be changed by setting the environment variable ZEND_MM_SEG_SIZE) and manages it internally. So, there are two kinds of memory usage: how much memory the engine has taken from the system ("real" usage), and how much of that memory was actually used by the application ("internal" usage).
Either one of these can be returned by memory_get_usage(). Which one is more useful to you depends on what you are examining. If you're optimizing your code in specific parts, "internal" might be more useful; if you're tracking memory usage globally, "real" is of more use. memory_limit limits the "real" number, so as soon as all blocks permitted by the limit have been taken from the system and the memory manager cannot allocate a requested block, the allocation fails. Note that "internal" usage in this case might be less than the limit, but the allocation could still fail because of fragmentation.
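The segment granularity is easy to observe: the "real" figure moves in coarse steps as whole segments are reserved, while the "internal" figure tracks each allocation. A small probe, assuming nothing about the exact segment size on your build:

```php
<?php
// Allocate many small strings and compare how the two figures grow.
$realBefore     = memory_get_usage(true);
$internalBefore = memory_get_usage(false);

$chunks = [];
for ($i = 0; $i < 100; $i++) {
    $chunks[] = str_repeat('x', 64 * 1024); // 64K each, ~6.4M total
}

$realGrowth     = memory_get_usage(true) - $realBefore;
$internalGrowth = memory_get_usage(false) - $internalBefore;

// Internal growth is roughly the bytes requested (plus per-string overhead);
// real growth is rounded up to whole segments the manager reserved.
printf("internal grew %d bytes, real grew %d bytes\n",
       $internalGrowth, $realGrowth);
```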
Also, if you are using an external memory-tracking tool, you can set the environment variable USE_ZEND_ALLOC=0, which disables the above mechanism and makes the engine always use malloc(). This has much worse performance but lets you use malloc-tracking tools.
See also an article about this memory manager, it has some code examples too.