
Parallel texture generation, simpler 'dispatcher', in-memory caching for tilesets #1115

Open
gregoire-astruc wants to merge 2 commits into master

Conversation

gregoire-astruc

Hello,

I enjoy the quality of the tiles produced, but I noticed something odd on my server: I have to nice the process heavily to keep the machine usable, and the process often appears to deadlock (I have to kill it and relaunch it by hand).

So, I browsed through the code and made a couple of adjustments:

  • I tweaked how textures are registered: block, billboard and sprite are now class-based decorators that inherit from the material decorator. I am working towards a system in which people can register blocks without modifying the project's sources. I saw the issues on that matter, but since many of us using mods are stuck with 1.6.x, it's worth my while :)
  • I disregarded the dispatcher entirely and used something a lot simpler based on synchronized queues (courtesy of Python's multiprocessing): since work items at level N have to be generated before the work items at level N-1 can be processed, the items are queued in that order, with a synchronization barrier before going up a level. It should also avoid the deadlocks I'm seeing :) (a minimal sketch of the idea follows this list)
  • I used your LRUCache for tile processing: generated images are added to the cache, which avoids hitting the filesystem during composite tile generation. I don't know if there's a big gain here, but it was bothering me.
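
To give an idea of what I mean by the queue-based approach, here is a minimal sketch; render_tile() and tiles_at_level() are placeholders, not the actual PR code:

```python
# Minimal sketch of the level-by-level queue idea; the two helpers below
# are stand-ins, not Overviewer code.
import multiprocessing

def render_tile(tile):
    pass            # stand-in for the real per-tile rendering work

def tiles_at_level(level):
    return []       # stand-in for enumerating the tiles at a zoom level

def worker(queue):
    while True:
        tile = queue.get()
        if tile is None:          # sentinel: this level is done
            queue.task_done()
            break
        render_tile(tile)
        queue.task_done()

def render_level_by_level(max_level, num_workers=4):
    for level in range(max_level, 0, -1):     # deepest zoom level first
        queue = multiprocessing.JoinableQueue()
        procs = [multiprocessing.Process(target=worker, args=(queue,))
                 for _ in range(num_workers)]
        for p in procs:
            p.start()
        for tile in tiles_at_level(level):
            queue.put(tile)
        for _ in procs:
            queue.put(None)
        queue.join()              # barrier: finish all of level N...
        for p in procs:
            p.join()              # ...before queuing level N-1
```

The queue.join() call is what provides the barrier between levels.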

Hopefully you'll find some of these modifications useful. Some of them were done rather quickly, so I'm open to discussion, refactoring, and creating a proper PR branch afterwards.

Thanks again for your product,

Greg

@agrif
Member

agrif commented Jun 26, 2014

...huh. Deadlocks are not something I've ever seen, so that's pretty weird. What OS and python versions are you using?

Also, phew! This is a lot of stuff. It'll take me some time to process, but let's start with...

Textures

Why did you choose to move to class-based decorators for the texture gen? In particular, the ones that ignore the passed function entirely. I tried to read the code to work this out but at the moment I'm tired, and I figure you'd know the reason better anyway.

There has already been some work in this direction. The oil branch has (among other things) a massively modified renderer, and hence a rewritten block definitions system that is vastly more extensible than the one in master. At the moment, it loads blocks from JSON definition files, which can also reference obj models. Since Mojang began making an official block model format, we've sort of been waiting for that to finalize before switching to it. As a result, any reorganization we do to textures.py now probably won't make it into the oil branch; however, if they're backwards-compatible I'd love to merge them.

Tileset Stuff

Our default render order is a sort of z-order curve, where higher-level tiles are rendered immediately after their lower-level components are available. This is nice for watching a map as it renders, since the base level takes about 3/4 of the total render time, and zooming in all the way just to see the new stuff is annoying.

We spent a fair amount of time deciding on this render order, so we probably won't merge anything that changes it.
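
Roughly speaking, it's a post-order walk of the tile quadtree, so a parent tile is composited as soon as its four children exist. A sketch, with placeholder names rather than the real tileset code:

```python
# Illustration of the render order only; all of these are placeholders.
def render_base_tile(node):
    pass    # leaf tile: rendered directly from world data

def composite_tile(node):
    pass    # parent tile: pasted together from its four children

def quadrant_children(node):
    return [(node, q) for q in range(4)]    # the four child quadrants

def render_quadtree(node, depth, max_depth):
    """Post-order walk: a parent tile is composited as soon as its
    four children are done, instead of waiting for the whole base level."""
    if depth == max_depth:
        render_base_tile(node)
        return
    for child in quadrant_children(node):
        render_quadtree(child, depth + 1, max_depth)
    composite_tile(node)
```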

(Not using the dispatcher is fine, though. It was designed around two goals that we never ended up using. Current plan for the oil branch is to use Python3 and concurrent.futures, possibly with a thin layer in between to allow for transparently using multiple machines.)
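
Very roughly, that plan looks something like the following; render_tile() is a placeholder and the thin multi-machine layer is omitted:

```python
# Rough Python 3 sketch of dispatching tile jobs with concurrent.futures.
from concurrent.futures import ProcessPoolExecutor, as_completed

def render_tile(tile):
    return tile    # stand-in for the real rendering work

def render_batch(tiles, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(render_tile, t) for t in tiles]
        for future in as_completed(futures):
            future.result()    # re-raises any exception from a worker
```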

Caching Tiles

PNG loading and saving is a pretty large chunk of total run time in both the master and oil branches, so it'd be neat to know if this helps. My guess would be no: since the caches are not shared between render processes, most of the time the best we can hope for is for a quarter of a composite tile's subtiles to already be cached.
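
For reference, the kind of per-process cache we're talking about is roughly this (sketched with OrderedDict rather than your actual LRUCache, whose API I haven't re-checked):

```python
# A per-process LRU image cache. Each render process keeps its own copy,
# so a hit only happens when the same process already loaded or produced
# that tile.
from collections import OrderedDict
from PIL import Image

class TileImageCache(object):
    def __init__(self, capacity=64):
        self.capacity = capacity
        self._cache = OrderedDict()

    def get(self, path):
        if path in self._cache:
            img = self._cache.pop(path)     # re-insert to mark most recent
            self._cache[path] = img
            return img
        img = Image.open(path)
        self._cache[path] = img
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False) # evict least recently used
        return img
```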

@gregoire-astruc
Author

Hello,

thanks for your (oh my god so dense) reply.

I'll try to be succinct :)

Textures

Switching from function-based decorators to class-based ones has several motivations. First, I believe it is easier for a newcomer to find their way around: you can more easily identify which parts are attributes and which are inherited methods. As I understood it, you have "is-a" relationships between materials, blocks, sprites and the rest (though I may have misread), so a class-based approach seemed more natural to me.

In the long run, it would probably let you get rid of the global variables, or at least turn them into class attributes.
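
For illustration, here is roughly the shape I have in mind; every name in this sketch (BLOCK_REGISTRY, material, block, full_cube) is made up and is not the actual PR code:

```python
# Purely illustrative sketch of the class-based decorator idea.
BLOCK_REGISTRY = {}

class material(object):
    """Base decorator: records shared metadata and registers the function."""
    def __init__(self, blockid, transparent=False):
        self.blockid = blockid
        self.transparent = transparent

    def __call__(self, func):
        BLOCK_REGISTRY[self.blockid] = (func, self)
        return func

class block(material):
    """Full-cube variant: inherits the registration, adds cube defaults."""
    def __init__(self, blockid, **kwargs):
        kwargs.setdefault('transparent', False)
        super(block, self).__init__(blockid, **kwargs)

@block(blockid=1)
def stone(builder):
    # 'builder' is a stand-in for whatever object assembles the texture
    return builder.full_cube("stone.png")
```

Since the subclasses only add attributes and defaults on top of the base registration, a third-party module could register new blocks just by importing the decorators, which is the "no source modification" goal from my first point.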

I didn't know the oil branch was still a work in progress... I thought it had been merged at some point, so sorry about that ;)

Tileset rendering

Oh! Gotcha now! I couldn't quite figure that out on my own; I tried to read the comments, but never seemed to find one that matched this simple description.

Anyway, I really believe the system should just send out the tiles that need to be rendered. That way, changing the computational backend would be much easier, since the backend wouldn't need any intelligence about render order (just my 2 cents).

Caching

Yeah, I'm no longer so sure about that either after I ran a profiler on the code.

Deadlocks

Well, actually... please don't merge. I ran your test suite against the code (it passes), but I seem to have even worse synchronization issues now. I may have rushed this PR a bit.
Please see #1116 regarding those.

My configuration is Debian 7 on an Intel Atom N2800 (two cores, four threads) with 2 GB of RAM, running Python 2.7.3. I usually render with 4 workers, one per logical core. Hope that helps.

@agrif
Member

agrif commented Jun 30, 2014

It sounds like multiprocessing itself might be having issues on whatever system you're running on, if even your new PR has issues. Maybe I'm being optimistic, but I think we'd have heard about it by now if there were deadlock issues in the current code, since it hasn't been changed for about two years now.

You'll be happy to know the oil approach to block definitions does away with globals entirely. Wooo! The oil design philosophy in general is for maximum composability, and globals don't help that. Now to just, uh... finish oil...

The tileset code is really nasty now, I know. Lots of accumulated cruft. There's a part in the design docs about it, but apparently we left out the specific render order.

I look forward to hearing what you discover about the deadlocks!

@Morketh

Morketh commented Jul 3, 2014

As a thought for the rest of the dev team on parallel processing: incorporating the Message Passing Interface (MPI) Python library could potentially give a more powerful and robust back end for distributing the render operation across several machines, splitting the load and resources across a cluster and gaining performance in the long run. I'm not sure how popular distributed rendering would be, but under the right circumstances you could theoretically cut render times by roughly the number of nodes in the cluster for any sized map; in practice, a noticeable improvement would only show up on huge maps with several nodes in a cluster. I've already started looking into the MPI library to put together a proof of concept and see how well it performs.
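
Here is roughly the shape the proof of concept would take with mpi4py; render_tile() and all_tiles() are just placeholders, not Overviewer code:

```python
# Run with e.g. `mpirun -n 4 python render_mpi.py`.
from mpi4py import MPI

def all_tiles():
    return list(range(1000))   # stand-in for the real list of tile jobs

def render_tile(tile):
    return tile                # stand-in for the real rendering work

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    tiles = all_tiles()
    chunks = [tiles[i::size] for i in range(size)]   # round-robin split
else:
    chunks = None

my_tiles = comm.scatter(chunks, root=0)    # each node gets a slice
done = [render_tile(t) for t in my_tiles]
results = comm.gather(done, root=0)        # collect back on the master

if rank == 0:
    total = sum(len(r) for r in results)
    print("rendered %d tiles across %d nodes" % (total, size))
```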

@gregoire-astruc
Author

At work we use Celery (celeryproject.org) for one of our applications. It's easy enough to set up and it handles task dependencies... You may want to look it up.
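
Roughly, a render task in Celery looks something like this (the broker/backend URLs, module name, and task body are placeholders, not something I've actually wired into Overviewer):

```python
from celery import Celery, group

app = Celery('overviewer_tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')

@app.task
def render_tile(tile_path):
    # the real rendering would happen here
    return tile_path

def render_all(tile_paths):
    # fan out one task per tile and block until the whole batch is done
    job = group(render_tile.s(p) for p in tile_paths)
    return job.apply_async().get()
```

A chord (a group with a callback) could then composite a parent tile once its four children finish, which is where the task-dependency handling comes in.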

@Morketh

Morketh commented Jul 3, 2014

Thank you for the additional information, gregoire-astruc. I will have to sit down and do a full comparison of the two to determine which would be more flexible for the project at hand. I only took a quick look at Celery so far and want to delve deeper into the project files for a closer look, but from what I saw initially it may be easier to implement than MPI. Here's a link to the Celery GitHub page if anyone wants a look: https://github.com/celery/celery
