Optimising concurrent ImageMagick requests using redis/php-resque

Date: 2023-05-06

Problem Description

I am working on a site that uses ImageMagick to generate images. The site will get hundreds of requests every minute, and using ImageMagick to do this causes the site to crash.

So we implemented Redis and php-resque to do the ImageMagick generation in the background on a separate server so that it doesn't crash our main one. The problem is that it's still taking a very long time to get images done. A user might expect to wait up to 2-3 minutes for an image request because the server is so busy processing these images.

I am not sure what information to give you, but I'm mostly looking for advice. I think if we can cut down the initial processing time for each ImageMagick request, then obviously this will help speed up the number of images we can process.

Below is a sample of the ImageMagick script that we use:

convert -size 600x400 xc:none ( ".$path."assets/images/bases/base_image_69509021433289153_8_0.png -fill rgb(255,15,127) -colorize 100% ) -composite ( ".$path."assets/images/bases/eye_image_60444011438514404_8_0.png -fill rgb(15,107,255) -colorize 100% ) -composite ( ".$path."assets/images/markings/marking_clan_8_marking_10_1433289499.png -fill rgb(255,79,79) -colorize 100% ) -composite ( ".$path."assets/images/bases/shading_image_893252771433289153_8_0.png -fill rgb(135,159,255) -colorize 100% ) -compose Multiply -composite ( ".$path."assets/images/highlight_image_629750231433289153_8_0.png -fill rgb(27,35,36) -colorize 100% ) -compose Overlay -composite ( ".$path."assets/images/lineart_image_433715161433289153_8_0.png -fill rgb(0,0,0) -colorize 100% ) -compose Over -composite ".$path."assets/generated/queue/tempt_preview_27992_userid_0_".$filename."_file.png

My theory is that the reason this takes quite a long time is due to the process of colouring the images. Is there a way to optimise this process at all?

I'd be very grateful to anyone who has experience handling heavy loads of ImageMagick processes, or who can see some glaringly easy ways to optimise our requests.

Thanks :)

Recommended Answer

Your command actually boils down to this:

convert -size 600x400 xc:none                                      \
    \( 1.png -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 2.png -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 3.png -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 4.png -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 5.png -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 6.png -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    result.png

My thoughts are as follows:

Point 1:

The first -composite onto a blank canvas seems pointless - presumably 1.png is a 600x400 PNG with transparency, so your first line can avoid the compositing operation and save 16% of the processing time by changing to:

convert -background none 1.png -fill ... -colorize 100% \
   \( 2.png ... \
   \( 3.png ...
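
For concreteness, here is a sketch of the full command with Point 1 applied, reusing the fill colours and blend modes from the original script; the numbered file names stand in for the real asset paths:

convert -background none 1.png -fill "rgb(255,15,127)" -colorize 100%                \
    \( 2.png -fill "rgb(15,107,255)"  -colorize 100% \) -composite                   \
    \( 3.png -fill "rgb(255,79,79)"   -colorize 100% \) -composite                   \
    \( 4.png -fill "rgb(135,159,255)" -colorize 100% \) -compose Multiply -composite \
    \( 5.png -fill "rgb(27,35,36)"    -colorize 100% \) -compose Overlay  -composite \
    \( 6.png -fill "rgb(0,0,0)"       -colorize 100% \) -compose Over     -composite \
    result.png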

Point 2

I put the equivalent of your command into a loop and ran 100 iterations, which took 15 seconds. I then changed all your reads of PNG files into reads of MPC files - or Magick Pixel Cache files. That reduced the processing time to just under 10 seconds, i.e. by 33%. A Magick Pixel Cache is just a pre-decompressed, pre-decoded file that can be read directly into memory without any CPU effort. You could pre-create them whenever your catalogue changes and store them alongside the PNG files. To make one you do:

convert image.png image.mpc

and you will get out image.mpc and image.cache. Then you would simply change your code to look like this:

convert -size 600x400 xc:none                                      \
    \( 1.mpc -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 2.mpc -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 3.mpc -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 4.mpc -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 5.mpc -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    \( 6.mpc -fill "rgb(x,y,z)" -colorize 100% \) -composite       \
    result.png
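
Pre-creating the MPC copies can be scripted. A minimal sketch, assuming the originals live under assets/images (the path and the newer-than check are assumptions, not part of the original answer):

#!/bin/bash
# Create or refresh an MPC copy alongside every PNG in the catalogue.
# assets/images is an assumed path - adjust to your own layout.
find assets/images -name '*.png' | while read -r png; do
    mpc="${png%.png}.mpc"
    # Rebuild only when the PNG is newer than its cached copy
    if [ ! -f "$mpc" ] || [ "$png" -nt "$mpc" ]; then
        convert "$png" "$mpc"
    fi
done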

Point 3

Unfortunately you haven't answered my questions yet, but if your assets catalogue is not too big, you could put that (or the MPC equivalents above) onto a RAM disk at system startup.
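
On Linux, for instance, a tmpfs mount serves as the RAM disk; a sketch, with the size and paths as assumptions:

# Mount a 512 MB RAM disk and copy the asset catalogue onto it
sudo mkdir -p /mnt/assets-ram
sudo mount -t tmpfs -o size=512m tmpfs /mnt/assets-ram
cp -r /var/www/site/assets/images /mnt/assets-ram/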

Point 4

You should definitely run in parallel - that will yield the biggest gains of all. It is very simple with GNU Parallel; a sketch follows below.
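
A minimal sketch of the GNU Parallel approach, assuming each queued request is written out as a job file and generate_image.sh is a hypothetical wrapper around the convert command:

# Run up to 8 convert jobs at once, one per queued job file
find assets/generated/queue -name '*.job' | parallel -j 8 ./generate_image.sh {}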

If you are using REDIS, it is actually easier than that. Just RPUSH your MIME-encoded images into a REDIS list like this:

#!/usr/bin/perl
################################################################################
# generator.pl <number of images> <image size in bytes>
# Mark Setchell
# Base64 encodes and sends "images" of specified size to REDIS
################################################################################
use strict;
use warnings FATAL => 'all';
use Redis;
use MIME::Base64;
use Time::HiRes qw(time);

my $Debug=0;    # set to 1 for debug messages

my $nargs = $#ARGV + 1;
if ($nargs != 2) {
    print "Usage: generator.pl <number of images> <image size in bytes>
";
    exit 1;
}

my $nimages=$ARGV[0];
my $imsize=$ARGV[1];

# Our "image"
my $image="x"x$imsize;

printf "DEBUG($$): images: $nimages, size: $imsize
" if $Debug;

# Connection to REDIS
my $redis = Redis->new;
my $start=time;

for(my $i=0;$i<$nimages;$i++){
   my $encoded=encode_base64($image,'');
   $redis->rpush('images'=>$encoded);
   print "DEBUG($$): Sending image $i
" if $Debug;
}
my $elapsed=time-$start;
printf "DEBUG($$): Sent $nimages images of $imsize bytes in %.3f seconds, %d images/s
",$elapsed,int($nimages/$elapsed);

and then run multiple workers that all sit there doing BLPOPs of jobs to do:

#!/usr/bin/perl
################################################################################
# worker.pl
# Mark Setchell
# Reads "images" from REDIS and uudecodes them as fast as possible
################################################################################
use strict;
use warnings FATAL => 'all';
use Redis;
use MIME::Base64;
use Time::HiRes qw(time);

my $Debug=0;    # set to 1 for debug messages
my $timeout=1;  # number of seconds to wait for an image
my $i=0;

# Connection to REDIS
my $redis = Redis->new;

my $start=time;

while(1){
   #my $encoded=encode_base64($image,'');
   my (undef,$encoded)=$redis->blpop('images',$timeout);
   last if !defined $encoded;
   my $image=decode_base64($encoded);
   my $l=length($image);
   $i++; 
   print "DEBUG($$): Received image:$i, $l bytes
" if $Debug;
}

my $elapsed=time-$start-$timeout; # since we waited that long for the last one
printf "DEBUG($$): Received $i images in %.3f seconds, %d images/s
",$elapsed,int($i/$elapsed);

If I run one generator process as above and have it generate 100,000 images of 200kB each, and read them out with 4 worker processes on my reasonable-spec iMac, it takes 59 seconds - around 1,700 images/s can pass through REDIS.
