Python multiprocessing: restricting the number of cores used

Date: 2023-03-13

Problem description

I want to know how to distribute N independent tasks to exactly M processors on a machine that has L cores, where L>M. I don't want to use all the processors because I still want to have I/O available. The solutions I've tried seem to get distributed to all processors, bogging down the system.

I assume the multiprocessing module is the way to go.

I do numerical simulations. My background is in physics, not computer science, so unfortunately, I often don't fully understand discussions involving standard tasking models like server/client, producer/consumer, etc.

Here are some simplified models that I've tried:

Suppose I have a function run_sim(**kwargs) (see below) that runs a simulation, a long list of kwargs dictionaries for the simulations, and an 8-core machine.

from multiprocessing import Pool, Process

#using pool
p = Pool(4)
p.map(run_sim, kwargs)

# using Process objects directly
number_of_live_jobs = 0
all_jobs = []
sim_index = 0
while sim_index < len(kwargs):
   number_of_live_jobs = len([1 for job in all_jobs if job.is_alive()])
   if number_of_live_jobs <= 4:
      p = Process(target=run_sim, args=(kwargs[sim_index],))
      print("starting job", kwargs[sim_index]["data_file_name"])
      print("number of live jobs: ", number_of_live_jobs)
      p.start()
      p.join()  # blocks until this job finishes
      all_jobs.append(p)
      sim_index += 1

When I look at the processor usage with "top" (and then pressing "1"), all processors seem to get used anyway in either case. It is not out of the question that I am misinterpreting the output of "top", but if run_sim() is processor-intensive, the machine bogs down heavily.

Hypothetical simulation and data:

# simulation kwargs
numbers_of_steps = range(0,10000000, 1000000)
sigmas = [x for x in range(11)]
kwargs = []
for number_of_steps in numbers_of_steps:
   for sigma in sigmas:
      kwargs.append(
         dict(
            number_of_steps=number_of_steps,
            sigma=sigma,
            # why do I need to cast to int?
            data_file_name="walk_steps=%i_sigma=%i" % (number_of_steps, sigma),
            )
         )

import random, time
random.seed(time.time())

# simulation of random walk
def run_sim(kwargs):
   number_of_steps = kwargs["number_of_steps"]
   sigma = kwargs["sigma"]
   data_file_name = kwargs["data_file_name"]
   data_file = open(data_file_name+".dat", "w")
   current_position = 0
   print "running simulation", data_file_name
   for n in range(int(number_of_steps)+1):
      data_file.write("step number %i   position=%f
" % (n, current_position))
      random_step = random.gauss(0,sigma)
      current_position += random_step

   data_file.close()
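For comparison, here is a minimal sketch of what the Process-based loop above appears to be aiming for: start jobs until four are alive, wait otherwise, and only join everything at the end. It assumes the run_sim and kwargs defined above, and it still leaves core placement to the OS scheduler, which is what the answer below addresses.

import time
from multiprocessing import Process

# Assumption: run_sim and kwargs are defined as in the question above.
all_jobs = []
for kw in kwargs:
   # wait until fewer than 4 jobs are alive before starting the next one
   while len([job for job in all_jobs if job.is_alive()]) >= 4:
      time.sleep(0.1)
   p = Process(target=run_sim, args=(kw,))
   p.start()
   all_jobs.append(p)

# wait for all jobs to finish
for job in all_jobs:
   job.join()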

Recommended answer

If you are on Linux, use taskset when you launch the program.

A child created via fork(2) inherits its parent’s CPU affinity mask. The affinity mask is preserved across an execve(2).
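Since the affinity mask is inherited by forked children, one in-Python alternative to taskset (a minimal sketch, assuming Python 3 on Linux, where os.sched_setaffinity is available) is to pin the parent process to a subset of cores before creating the pool:

import os
from multiprocessing import Pool

# Assumption: run_sim and kwargs are defined as in the question above.
# Pin this (parent) process to cores 0-3; worker processes forked by the
# Pool inherit the affinity mask, so they are confined to those cores too.
os.sched_setaffinity(0, {0, 1, 2, 3})

p = Pool(4)
p.map(run_sim, kwargs)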

TASKSET(1)
Linux User’s Manual
TASKSET(1)

NAME taskset - retrieve or set a process’s CPU affinity

SYNOPSIS taskset [options] mask command [arg]... taskset [options] -p [mask] pid

DESCRIPTION taskset is used to set or retrieve the CPU affinity of a running process given its PID or to launch a new COMMAND with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. Note that the Linux scheduler also supports natural CPU affinity: the scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications.

The CPU affinity is represented as a bitmask, with the lowest order bit corresponding to the first logical CPU and the highest order bit corresponding to the last logical CPU. Not all CPUs may exist on a given system but a mask may specify more CPUs than are present. A retrieved mask will reflect only the bits that correspond to CPUs physically on the system. If an invalid mask is given (i.e., one that corresponds to no valid CPUs on the current system) an error is returned. The masks are typically given in hexadecimal.
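As a concrete illustration of the mask (a hedged sketch: run_sims.py is a hypothetical script name, and taskset's -c/--cpu-list form is used here as an equivalent to a raw hex mask):

import subprocess

# CPUs 0-3 correspond to the bitmask 0b1111 = 0xf, so "taskset 0xf COMMAND"
# and "taskset -c 0-3 COMMAND" both confine COMMAND to four cores.
cpu_set = {0, 1, 2, 3}
mask = sum(1 << cpu for cpu in cpu_set)
print("hex mask: %#x" % mask)   # prints: hex mask: 0xf

# "run_sims.py" is a hypothetical script containing the simulation code above.
subprocess.run(["taskset", "-c", "0-3", "python", "run_sims.py"])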
