I'm looking for documentation that describes in detail how Python garbage collection works.
I'm interested in what is done in each step. What objects are in these 3 generations? What kinds of objects are deleted in each step? What algorithm is used for finding reference cycles?
Background: I'm implementing some searches that have to finish in a small amount of time. When the garbage collector starts collecting the oldest generation, it is much slower than usual, and the pause takes more time than a search is supposed to. I'm looking for a way to predict when it will collect the oldest generation, and how long that will take.
It is easy to predict when it will collect the oldest generation with get_count() and get_threshold(), and that can also be manipulated with set_threshold(). But I don't see an easy way to decide whether it is better to force a collect() or to wait for the scheduled collection.
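The idea in the question can be sketched roughly like this (a sketch only, assuming the classic pre-3.13 CPython collector, where counts[2] is the number of generation-1 collections since the last generation-2 collection; `gen2_collection_imminent` and its `margin` parameter are hypothetical names for illustration):

```python
import gc

def gen2_collection_imminent(margin=2):
    """Heuristic: is a collection of the oldest generation close?

    In classic CPython, generation 2 is collected roughly when the
    number of gen-1 collections since the last gen-2 collection
    exceeds the third threshold, so compare counts[2] against it.
    """
    counts = gc.get_count()          # current counters, one per generation
    thresholds = gc.get_threshold()  # defaults to (700, 10, 10)
    return counts[2] >= thresholds[2] - margin

# If a full collection looks imminent, force it now, at a convenient
# moment, rather than letting it interrupt a time-critical search.
if gen2_collection_imminent():
    gc.collect()  # full collection of all generations
```

Whether forcing the collection early actually wins depends on how much garbage has accumulated; the pause is moved, not removed.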
There's no definitive resource on how Python does its garbage collection (other than the source code itself), but the gc module documentation and those 3 links should give you a pretty good idea.
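One lightweight way to watch the collector in action (assuming CPython's gc module) is its debug flag that prints a line of statistics to stderr for each collection:

```python
import gc

# Report each collection to stderr: which generation ran, how many
# objects were examined, and how many were collected.
gc.set_debug(gc.DEBUG_STATS)

# Trigger a full collection so at least one stats line appears.
gc.collect()

gc.set_debug(0)  # turn the reporting back off
```

Leaving this on while the searches run would show how often each generation is collected and roughly how long the passes take.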
The source is actually pretty helpful. How much you get out of it depends on how well you read C, but the comments are very helpful. Skip down to the collect() function; the comments explain the process well (albeit in very technical terms).
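As a small illustration of what that cycle-finding pass deals with: an object that refers to itself can never be reclaimed by reference counting alone, because its count stays above zero; only the cyclic collector in collect() frees it. A minimal demo (the `Node` class is just an illustrative container):

```python
import gc

class Node:
    """A trivial container that can participate in a cycle."""
    pass

gc.disable()  # stop automatic collection so the demo is deterministic
gc.collect()  # clear any pre-existing garbage first

# Build a self-referential cycle, then drop the only external name.
n = Node()
n.ref = n
del n

# Reference counting never reclaims this cycle.  collect() detects it
# and returns the number of unreachable objects it found.
found = gc.collect()
gc.enable()
print(found)  # at least 1 (the Node instance, plus its __dict__)
```

The algorithm behind this, as described in the comments around collect(), works by traversing tracked container objects and subtracting internal references to find objects unreachable from outside.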