Backburner takes forever to submit a rendering job

Juha_P
Contributor


I wonder what's happening here. Sending a rendering job to Backburner takes forever. I just terminated a task I started yesterday and tried to submit a new entry to Backburner. Rendering this scene locally works fine.

 

I have already created a fresh scene and merged the geometry into it in order to reset the project settings. The symptoms started after I ended up with a really huge tyFlow cache file. However, it's now impossible to send the job to the rendering queue even without tyFlow.

 

Edit: There is an entry in the Backburner queue, but it's empty. The size of the job is 0 bytes.

Replies (6)

Diffus3d
Advisor

It's hard to say, but if memory serves, Backburner is 32-bit. So if you're sending assets with the job, there may be a limit to how much it can process. The easy way around this is to make sure you're using network paths for all your asset links, including the tyFlow cache; then there's no need to send any assets to Backburner with the job. It could also be other things, like your network switch timing you out or hitting its maximum connections if the upload takes too long.
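One way to audit for this is to dump the scene's asset paths (e.g. from the Asset Tracker) and check which ones are local drive-letter paths rather than UNC network paths. A minimal sketch; the file names below are made up for illustration:

```python
# Flag asset paths that are local (drive-letter) rather than UNC network paths.
# Render nodes can only resolve assets on shared network paths such as
# \\server\share\..., so anything still pointing at C:\ is a problem.

def is_network_path(path: str) -> bool:
    """Return True if the path is a UNC network path (\\\\server\\share\\...)."""
    return path.startswith("\\\\")

def find_local_assets(paths):
    """Return the subset of asset paths that are NOT network paths."""
    return [p for p in paths if not is_network_path(p)]

# Hypothetical asset list for illustration:
assets = [
    r"\\renderserver\projects\scene\textures\wall.jpg",
    r"C:\Temp\tyFlow\frac_cache.tyc",  # local path -- render nodes can't see this
]
print(find_local_assets(assets))  # -> ['C:\\Temp\\tyFlow\\frac_cache.tyc']
```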

 

If it's not that, then it might be worth checking out Deadline. It's free and actively updated, unlike Backburner.

Alfred (AJ) DeFlaminis


MartinBeh
Collaborator

How heavy are your scene assets? IIRC, Backburner will try to copy them to the render nodes...?

Martin B

Diffus3d
Advisor

It depends on whether you have the checkbox for transferring assets enabled, IIRC. I believe it will also attempt to zip them, and since it uses a 32-bit zip implementation, a file bigger than 4 GB is likely corrupted. I can't say for sure that this is what is happening, but it used to happen on autosaves with "Compress on Save", causing problems 5-6 years ago. The clue in the OP about a file size of 0 makes me think it can't complete the zip operation or something. I can only guess as to why that might be.
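For context on why 4 GB is the breaking point: the classic ZIP format stores file sizes in unsigned 32-bit fields, so the largest representable size is 2**32 - 1 bytes (about 4 GiB); anything larger needs the ZIP64 extension. A quick sketch of the arithmetic (the 6 GiB cache size is a hypothetical example):

```python
# Classic (non-ZIP64) zip headers store sizes as unsigned 32-bit integers,
# so the largest file a 32-bit zip tool can represent is 2**32 - 1 bytes.

MAX_32BIT_ZIP_SIZE = 2**32 - 1  # 4,294,967,295 bytes (~4 GiB)

def fits_in_classic_zip(size_bytes: int) -> bool:
    """Return True if a file of this size fits in a 32-bit ZIP size field."""
    return size_bytes <= MAX_32BIT_ZIP_SIZE

# A hypothetical 6 GiB tyFlow cache would overflow the field:
cache_size = 6 * 1024**3
print(fits_in_classic_zip(cache_size))        # -> False
print(fits_in_classic_zip(100 * 1024**2))     # -> True (100 MiB is fine)
```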

 

That doesn't really explain why the job can't be sent even without tyFlow, though, unless the process used to zip it stalled out and won't allow an overwrite. Maybe try sending the job with a different name to see if that works.

 

Best Regards,

Alfred (AJ) DeFlaminis


Juha_P
Contributor

Thanks,

 

I think the issue is tyFlow-related. I deleted everything related to tyFlow in that scene and Backburner is working normally. Creating a new tyFlow cache didn't help, nor did deleting tyFlow itself and leaving only the cache object.


Diffus3d
Advisor

Was the asset linked in as a network location or locally?  (Just curious.)

Best Regards,

Alfred (AJ) DeFlaminis


Juha_P
Contributor

The assets were linked correctly. I think the cause was the tyFlow Fracture operator. Maybe the default values were too high, resulting in huge datasets.