Hello,
I am trying to implement a script that loops through each body in the root component, filtered by a particular naming convention on each folder/node/body, and renders each one to a separate file. My question is twofold:
1. I cannot seem to query which group a body happens to be in - it seems the root component flattens all the bodies underneath it and ignores the hierarchy defined in the browser. Is there a way to walk the browser tree and retrieve bodies that way as opposed to retrieving a flat list under the root component?
2. I found some posts from two years ago mentioning that rendering support via the API is limited. We have tens of thousands of combinations to render (each body will have multiple combinations of textures/colors, and each one must be rendered separately). The API clearly has plenty of support for editing the design, but is there support for rendering as we would like? (We would like to render each combination for about 120 seconds, save it to a file, then continue... this process is impossible to do by hand.)
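For the filtering in (1): the convention check itself is plain Python once body names are in hand (in Fusion they would come from `rootComponent.bRepBodies`, which the API exposes as a flat list). A minimal sketch, assuming a hypothetical `RENDER_<group>_<variant>` naming convention that is not from the original post:

```python
import re

# hypothetical naming convention (an assumption for illustration):
# bodies to render are named "RENDER_<group>_<variant>", e.g. "RENDER_lid_red"
PATTERN = re.compile(r'^RENDER_(?P<group>[^_]+)_(?P<variant>[^_]+)$')

def bodies_to_render(names):
    """Return (name, group, variant) for each name matching the convention."""
    result = []
    for name in names:
        m = PATTERN.match(name)
        if m:
            result.append((name, m.group('group'), m.group('variant')))
    return result

# inside Fusion, the names would come from something like:
#   [b.name for b in design.rootComponent.bRepBodies]
names = ['RENDER_lid_red', 'Body1', 'RENDER_base_blue']
print(bodies_to_render(names))
```

Bodies that don't follow the convention (like `Body1` above) are simply skipped, so the render loop only ever sees the intended subset.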
Unfortunately, it looks like a lot of what you want to do is not currently exposed through the API.
The API doesn't provide access to the groups within the Bodies folder. That functionality was added after we initially exposed bodies through the API and it hasn't been updated to include it yet. It is something we need to do.
Regarding rendering, which type of rendering do you want? Ray tracing in the Rendering workspace or just the standard rendering you see in the Model workspace? The functionality in the Rendering workspace is not currently exposed at all through the API.
Brian-
We are indeed trying to batch render using the Rendering workspace. It's unfortunate that it's not exposed at all... Are you positive there's no way to search the UI for the relevant commands and execute them (indirect API calls) rather than make direct API calls?
If it's not available, what is the alternative? Is there other software we can export to that would accomplish a similar effect, or do Autodesk's cloud offerings enable something like this? If it's not possible, then this sinks our entire project.
Unfortunately there isn't a way to do this by directly driving the commands. I've tried it, and there are still some things missing that don't allow you to work this way. For example, how do you know how long to wait for the rendering to finish, and then how do you save the finished rendering? Believe me, I wish it were there; I wanted it for something I was working on as well. Part of the reason it's not there is that there may be some changes to the Render workspace, and we don't want to release an API that would potentially be broken when/if those changes are made.
As far as other workarounds go, they would all be more work. One that seems feasible is to use the Forge Model Derivative API to extract an FBX file from your Fusion design and then use 3ds Max or Maya to render it. I don't have any personal experience doing this, so I can't say how easy it would be or if there are any issues to be aware of.
Hello Everyone,
Since this post is a little old, I was wondering whether the Rendering workspace had become more script-friendly as of mid-2023? I have a lot of nice script ideas for the Rendering workspace, but if it's still the way you describe it, I'd rather not even start.
Thank you,
Paul F
You're in luck. API functionality to automate renderings was added in the May 2023 release. I've played with it and used the API to create 400 images which I combined into the video below.
Amazing !
I'll start very soon then. I'll try not to use all my cloud credits on the first try of the script though 😂
The API only supports local rendering, so you won't use any cloud credits. With local rendering, it queues up the render jobs and processes them locally. The renderer is multi-threaded and will fully consume your machine. To speed up processing, take care not to create higher-resolution images than you need. As far as I'm aware, there aren't any plans to support cloud rendering with the API, because it gets messy with the cloud credit requirement.
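To make the resolution advice concrete: local render time scales roughly with pixel count, so halving each image dimension cuts the work to about a quarter. A back-of-the-envelope sketch with illustrative numbers only (the per-megapixel cost is a made-up assumption, not a measured Fusion figure):

```python
def estimated_batch_seconds(jobs, width, height, secs_per_megapixel=30.0):
    """Rough estimate: jobs * pixel count * an assumed per-megapixel cost."""
    megapixels = (width * height) / 1_000_000
    return jobs * megapixels * secs_per_megapixel

full = estimated_batch_seconds(400, 1920, 1080)  # ~2.07 MP per image
half = estimated_batch_seconds(400, 960, 540)    # a quarter of the pixels
print(round(full / half, 1))  # halving each dimension is roughly 4x faster
```

Whatever the real per-pixel cost on your machine, the 4x ratio between the two resolutions holds, which is why dropping to the smallest acceptable size pays off across thousands of queued jobs.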
OK, that makes sense. Video is not my goal, so I guess it'll be alright. I have 10 products, I want 3 or 4 camera angles for each, and then for every new texture we develop (5 to 50 new textures every month), I'd like to render those forty shots automatically from a folder containing a .jpeg of each new texture. And we don't need ultra-high resolution. So it will be a maximum of 2000 render jobs per month that can run overnight through the queue.
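A batch like that can be driven by a plain job list built with the standard library. A minimal sketch, with hypothetical folder and naming choices (not from the thread); in Fusion, each resulting job would then map to one `startLocalRender` call:

```python
from pathlib import Path

def build_render_jobs(texture_dir, products, angles_per_product, out_dir):
    """Pair every new texture .jpeg with each product/angle combination."""
    jobs = []
    for texture in sorted(Path(texture_dir).glob('*.jpeg')):
        for product in products:
            for angle in range(angles_per_product):
                # hypothetical output naming scheme, chosen for illustration
                out_name = f'{product}_{texture.stem}_cam{angle}.jpg'
                jobs.append((product, texture, Path(out_dir) / out_name))
    return jobs

# e.g. 50 new textures x 10 products x 4 angles = 2000 jobs for the month
```

Building the full list up front also makes it easy to skip outputs that already exist, so an interrupted overnight run can resume where it left off.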
It sounds doable to me, but it will take time and effort to make it smooth.
I'll post the results !
How did you go with the above task? I am looking to do the same thing and am unsure where to start.
Hi @Rick2TJYK -San.
I will share what I tested last year.
# Fusion360API Python script
import traceback
import math
import pathlib

import adsk
import adsk.core as core
import adsk.fusion as fusion

RENDER_QUALITY = 25  # 25-100
SPLITS_COUNT = 3
EXPORT_DIR = 'C:/temp'


def run(context):
    ui: core.UserInterface = None
    try:
        # get the application and UI first so error reporting works
        app: core.Application = core.Application.get()
        ui = app.userInterface

        # check export folder
        expDir: pathlib.Path = pathlib.Path(EXPORT_DIR)
        if not expDir.exists():
            ui.messageBox(f'Folder not found: {EXPORT_DIR}')
            return

        # get RenderManager
        des: fusion.Design = app.activeProduct
        renderMgr: fusion.RenderManager = des.renderManager
        renderMgr.activateRenderWorkspace()

        # get the current camera and build one rotation matrix
        # per viewpoint, rotating about the Z axis through the target
        vp: core.Viewport = app.activeViewport
        camera: core.Camera = vp.camera
        target: core.Point3D = camera.target

        unit = math.radians(360 / SPLITS_COUNT)
        matLst = []
        for idx in range(SPLITS_COUNT):
            mat: core.Matrix3D = core.Matrix3D.create()
            mat.setToRotation(
                unit * idx,
                core.Vector3D.create(0, 0, 1),
                target
            )
            matLst.append(mat)

        # build one camera per rotation
        cameraLst = []
        for mat in matLst:
            camera = vp.camera
            eye: core.Point3D = camera.eye.copy()
            eye.transformBy(mat)
            camera.eye = eye

            upVec: core.Vector3D = camera.upVector.copy()
            upVec.transformBy(mat)
            upVec.normalize()
            camera.upVector = upVec

            cameraLst.append(camera)

        # preview the camera motion in the viewport
        backup: core.Camera = vp.camera
        for camera in cameraLst:
            vp.camera = camera
            vp.refresh()
            adsk.doEvents()
        vp.camera = backup

        # confirm before queuing the render jobs
        res: core.DialogResults = ui.messageBox(
            'OK?',
            '',
            core.MessageBoxButtonTypes.OKCancelButtonType,
            core.MessageBoxIconTypes.QuestionIconType
        )
        if res != core.DialogResults.DialogOK:
            return

        # queue one local render job per camera
        rendering: fusion.Rendering = renderMgr.rendering
        rendering.renderQuality = RENDER_QUALITY

        name = app.activeDocument.name
        for idx, camera in enumerate(cameraLst):
            expPath = str(expDir / '{}{:03}.jpg'.format(name, idx))
            renderFuture: fusion.RenderFuture = rendering.startLocalRender(
                expPath,
                camera,
            )

        # note: startLocalRender only queues the jobs; the renders
        # continue in the background after this message appears
        ui.messageBox('Done')
    except:
        if ui:
            ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
The finished product is too large to attach, so please see the link.
@Rick2TJYK -San.
Since there is no button to execute scripts in the Rendering workspace, we created this add-in and published it:
https://github.com/kantoku-code/Fusion360_AddScriptsManagerCommand
If you just want to run the script, this is also convenient.