I've been trying to apply an algorithm to reduce a Python list into a smaller one based on certain criteria. Due to the large volume of the original list, on the order of 100k elements, I tried to use itertools to avoid multiple memory allocations, so I came up with this:
reducedVec = [ 'F' if sum( 1 for x in islice(vec, i, i+ratio) if x == 'F' )
                      > ratio / 3.0 else 'T'
               for i in xrange(0, len(vec), ratio) ]
Execution takes a worryingly long time, on the order of a few minutes, when vec has around 100k elements. When I tried instead:
reducedVec = [ 'F' if sum( 1 for x in vec[i:i+ratio] if x == 'F' )
                      > ratio / 3.0 else 'T'
               for i in xrange(0, len(vec), ratio) ]
in essence replacing islice with a plain slice, execution is instantaneous.
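The gap between the two versions is easy to reproduce. Here is a minimal, scaled-down timing sketch of the two list comprehensions above (written with Python 3 names, range and print, rather than the xrange used in the question; the list size is reduced so it runs quickly):

```python
from itertools import islice
from timeit import timeit

vec = ['T', 'F'] * 5000   # 10k elements, scaled down from the question's 100k
ratio = 100

def with_islice():
    # islice restarts iteration from the head of vec for every chunk
    return ['F' if sum(1 for x in islice(vec, i, i + ratio) if x == 'F')
            > ratio / 3.0 else 'T'
            for i in range(0, len(vec), ratio)]

def with_slice():
    # vec[i:i+ratio] jumps straight to index i, copying only ratio elements
    return ['F' if sum(1 for x in vec[i:i + ratio] if x == 'F')
            > ratio / 3.0 else 'T'
            for i in range(0, len(vec), ratio)]

# Both produce the same result; only the access pattern differs.
assert with_islice() == with_slice()
print('islice:', timeit(with_islice, number=1))  # grows ~quadratically with len(vec)
print('slice: ', timeit(with_slice, number=1))   # grows linearly with len(vec)
```

The islice version's cost grows roughly quadratically with the list length, which is why the difference only becomes dramatic around 100k elements.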
Can you think of a plausible explanation for this? I would have thought that avoiding repeatedly allocating a new list with a substantial number of elements would actually save me a few computational cycles, rather than crippling the whole execution.
Cheers, Themis
islice works with arbitrary iterables. To do this, rather than jumping straight to the nth element, it has to iterate over the first n-1, throwing them away, and then yield the ones you want.
Check out the pure-Python implementation from the itertools documentation:
import sys

def islice(iterable, *args):
    # islice('ABCDEFG', 2) --> A B
    # islice('ABCDEFG', 2, 4) --> C D
    # islice('ABCDEFG', 2, None) --> C D E F G
    # islice('ABCDEFG', 0, None, 2) --> A C E G
    s = slice(*args)
    it = iter(xrange(s.start or 0, s.stop or sys.maxint, s.step or 1))
    nexti = next(it)
    for i, element in enumerate(iterable):
        if i == nexti:
            yield element
            nexti = next(it)
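Given that implementation, every islice(vec, i, i+ratio) call in the question's comprehension re-walks vec from index 0, so the later chunks get progressively more expensive. A small sketch that counts how many elements are actually visited (the CountingList wrapper is illustrative, not from the original; written for Python 3):

```python
from itertools import islice

class CountingList:
    """Wrap a list and count how many elements iteration touches."""
    def __init__(self, data):
        self.data = data
        self.touched = 0
    def __iter__(self):
        for x in self.data:
            self.touched += 1
            yield x

vec = CountingList(['T'] * 10000)
ratio = 100

for i in range(0, 10000, ratio):
    for x in islice(vec, i, i + ratio):
        pass

# Each islice call skips the first i elements before yielding ratio of
# them, so the total visits come to roughly n**2 / (2 * ratio) instead
# of the n that a single linear pass would need.
print(vec.touched)  # far more than the 10000 elements in the list
```

For this 10,000-element list the calls consume 0+100, 100+100, ..., 9900+100 elements respectively, 505,000 visits in total, which is the quadratic blow-up the questioner observed.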
Speaking of the itertools documentation, if I were trying to do this operation, I'd probably use the grouper recipe. It won't actually save you any memory, but it could if you rewrote it to be lazier, which wouldn't be tough.
from __future__ import division
from itertools import izip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
reducedVec = []
for chunk in grouper(ratio, vec):
    if sum(1 for x in chunk if x == 'F') > ratio / 3:
        reducedVec.append('F')
    else:
        reducedVec.append('T')
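As for the "rewrote it to be lazier" remark: one way, sketched here rather than taken from the answer, is to make the reduction itself a generator, so no chunk ever needs to be held beyond the current one (this sketch uses Python 3's zip_longest in place of izip_longest, and reduce_lazy is a name I've made up):

```python
from itertools import zip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

def reduce_lazy(vec, ratio):
    """Yield one 'F'/'T' verdict per chunk without building reducedVec."""
    for chunk in grouper(ratio, vec):
        yield 'F' if sum(1 for x in chunk if x == 'F') > ratio / 3 else 'T'

# First chunk ('F','F','T') has 2 F's > 1.0, second ('F','T','T') has 1.
print(list(reduce_lazy(['F', 'F', 'T', 'F', 'T', 'T'], 3)))  # → ['F', 'T']
```

Because reduce_lazy is a generator, a consumer that only streams over the verdicts never materializes the output list at all; calling list() on it, as above, reproduces the answer's reducedVec.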
I like using grouper to abstract away the consecutive slices, and I find this code a lot easier to read than the original.