We have a queue system for asynchronous work, built with Pyres on top of Redis. It works fine while your application is small: the pyres worker forks a new process for each job, the forked process runs the job and terminates, and the worker goes back to waiting for the next job.
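The fork-per-job model above can be sketched with the standard library alone. This is a simplified stand-in, not pyres's actual code: `handle` is a hypothetical job function, and the loop mimics how the worker forks a child per job and waits for it to terminate.

```python
import os

def handle(job):
    # Stand-in for real job code. In the fork-per-job model, any heavy
    # imports the job needs happen inside the child, on every job.
    return job * 2

def worker_loop(jobs):
    """Run each job in a freshly forked child, roughly like pyres_worker."""
    statuses = []
    for job in jobs:
        pid = os.fork()          # POSIX-only
        if pid == 0:             # child: run the job, then terminate
            handle(job)
            os._exit(0)
        _, st = os.waitpid(pid, 0)   # parent: wait, then take the next job
        statuses.append(os.WEXITSTATUS(st))
    return statuses

print(worker_loop([1, 2, 3]))  # → [0, 0, 0], one clean exit per job
```

The isolation is nice (a crashing job can't take down the worker), but it is exactly this child-per-job startup that makes loading cost hurt.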
As our application grows more complex, every new job process spends a lot of time on loading: our package must be imported, along with all of its dependencies. For example, a job process may spend only 1s executing the job code but need 10s just for loading.
Our first idea: if the worker process could load everything it needs up front, the new process wouldn't need to load anything, because it is forked from the worker and inherits its memory. Python tracks which modules are already loaded; importing an already-loaded module just returns immediately, which is very fast. Maybe the `Worker` class could be extended with a `before_fork` method that preloads the heavy modules, so the loaded modules are available to every forked child.
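A minimal demonstration of why this works, using only the standard library (`before_fork` itself is our proposed hook, not an existing pyres API, so it is not shown here). `json` stands in for a heavy application package; the point is that a forked child inherits the parent's `sys.modules`, so nothing is reloaded:

```python
import json  # parent pays the import cost once; stands in for a heavy app package
import os
import sys

pid = os.fork()  # POSIX-only; this is how pyres starts job processes
if pid == 0:
    # Child: json is already in sys.modules, inherited from the parent,
    # so `import json` here would return instantly without reloading.
    os._exit(0 if "json" in sys.modules else 1)
_, status = os.waitpid(pid, 0)
assert os.WEXITSTATUS(status) == 0  # child saw the preloaded module
```

So a `before_fork` hook would only need to import the application package; every subsequent fork gets the warm module cache for free.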
The other approach: if the job process could live for a long time, it wouldn't need to reload modules for every job. Pyres ships a `pyres_manager` for this: it creates a `manager`, the `manager` spawns some `minion` processes, and puts the `minion`s into a `pool`. But if you use `pyres_manager`, don't expect `resweb` to work properly: it can't display the workers, because it doesn't know about `minion`s.
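The manager/minion shape can be sketched with `multiprocessing`. This is not pyres's actual implementation, just an illustration of the long-lived-worker idea: each minion loads once, then handles many jobs, so the per-job import cost disappears.

```python
import multiprocessing as mp

def minion(jobs, results):
    # Long-lived worker: modules stay loaded across jobs, so loading
    # cost is paid once per minion, not once per job.
    for job in iter(jobs.get, None):  # None is the shutdown sentinel
        results.put(job * 2)          # stand-in for real job code

def manager(n_minions, payloads):
    """Spawn a pool of minions, feed them jobs, collect the results."""
    jobs, results = mp.Queue(), mp.Queue()
    pool = [mp.Process(target=minion, args=(jobs, results))
            for _ in range(n_minions)]
    for p in pool:
        p.start()
    for item in payloads:
        jobs.put(item)
    for _ in pool:
        jobs.put(None)                # one sentinel per minion
    out = [results.get() for _ in payloads]
    for p in pool:
        p.join()
    return out

if __name__ == "__main__":
    print(sorted(manager(2, [1, 2, 3])))  # → [2, 4, 6]
```

The trade-off against fork-per-job is the lost isolation: a minion that leaks memory or crashes affects all the jobs queued to it, which is presumably why pyres forks per job by default.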