I copied and pasted the code from "Creating and rendering a scene" (https://docs.arnoldrenderer.com/display/A5ARP/Creating+and+Rendering+a+Scene) and converted it to Python, but the results I'm getting aren't anything like the example render on the web page (see the attached image for a comparison, plus the log). For the record, I'm not completely new to the API: I'm writing some Python tools on top of the Arnold API and comparing my results against the vanilla version, so I'm confused as to why this simple scene is totally breaking on me.
The polymesh plane is missing completely, and the image is way too dark. It seems like a color space transformation issue, but even if I render to EXR and change the color space in Nuke it still doesn't look right (e.g. the specular highlight on the red ball is way too small in my render vs the example).
I'm using the 7.1.2.0 SDK on Windows 10 and have no custom OCIO environment variable set.
Solved by n_rehberg.
That example would have been done with an older version of Arnold, so I wouldn't expect it to match exactly.
You could export an ass file and kick it with different versions.
This is what it will look like when we update the docs:
Thanks for the heads up @Stephen.Blair. I can work through the color management issues I think, but I'm more concerned about constructing the polymesh - there's nothing in the example code that looks particularly off to me, but do you know if there's something different that has to be done to properly add the polymesh data to get the result you shared?
Try with this code:
#include <ai.h>

int main()
{
    AiBegin();

    // create a new universe to create objects in
    AtUniverse *universe = AiUniverse();

    // start an Arnold session, log to both a file and the console
    AtRenderSession *session = AiRenderSession(universe, AI_SESSION_BATCH);
    AiMsgSetLogFileName("scene1.log");
    AiMsgSetConsoleFlags(universe, AI_LOG_ALL);
    AiMsgSetLogFileFlags(universe, AI_LOG_ALL);

    // create a sphere geometric primitive
    AtNode *sph = AiNode(universe, AtString("sphere"), AtString("mysphere"));
    AiNodeSetVec(sph, AtString("center"), 0.0f, 4.0f, 0.0f);
    AiNodeSetFlt(sph, AtString("radius"), 4.0f);

    // create a polymesh, with UV coordinates
    AtNode *mesh = AiNode(universe, AtString("polymesh"), AtString("mymesh"));
    AtArray *nsides_array = AiArray(1, 1, AI_TYPE_UINT, 4);
    AiNodeSetArray(mesh, AtString("nsides"), nsides_array);
    AtArray *vlist_array = AiArray(12, 1, AI_TYPE_FLOAT,
                                   -10.f, 0.f, 10.f, 10.f, 0.f, 10.f,
                                   -10.f, 0.f, -10.f, 10.f, 0.f, -10.f);
    AiNodeSetArray(mesh, AtString("vlist"), vlist_array);
    AtArray *vidxs_array = AiArray(4, 1, AI_TYPE_UINT, 0, 1, 3, 2);
    AiNodeSetArray(mesh, AtString("vidxs"), vidxs_array);
    AtArray *uvlist_array = AiArray(8, 1, AI_TYPE_FLOAT, 0.f, 0.f, 1.f, 0.f, 1.f, 1.f, 0.f, 1.f);
    AiNodeSetArray(mesh, AtString("uvlist"), uvlist_array);
    AtArray *uvidxs_array = AiArray(4, 1, AI_TYPE_UINT, 0, 1, 2, 3);
    AiNodeSetArray(mesh, AtString("uvidxs"), uvidxs_array);

    // create a red standard surface shader
    AtNode *shader1 = AiNode(universe, AtString("standard_surface"), AtString("myshader1"));
    AiNodeSetRGB(shader1, AtString("base_color"), 1.0f, 0.02f, 0.02f);
    AiNodeSetFlt(shader1, AtString("specular"), 0.05f);

    // create a textured standard surface shader
    AtNode *shader2 = AiNode(universe, AtString("standard_surface"), AtString("myshader2"));
    AiNodeSetRGB(shader2, AtString("base_color"), 1.0f, 0.0f, 0.0f);

    // create an image shader for texture mapping
    AtNode *image = AiNode(universe, AtString("image"), AtString("myimage"));
    AiNodeSetStr(image, AtString("filename"), AtString("arnold.png"));
    AiNodeSetFlt(image, AtString("sscale"), 4.f);
    AiNodeSetFlt(image, AtString("tscale"), 4.f);

    // link the output of the image shader to the color input of the surface shader
    AiNodeLink(image, AtString("base_color"), shader2);

    // assign the shaders to the geometric objects
    AiNodeSetPtr(sph, AtString("shader"), shader1);
    AiNodeSetPtr(mesh, AtString("shader"), shader2);

    // create a perspective camera
    AtNode *camera = AiNode(universe, AtString("persp_camera"), AtString("mycamera"));
    // position the camera (alternatively you can set 'matrix')
    AiNodeSetVec(camera, AtString("position"), 0.f, 10.f, 35.f);
    AiNodeSetVec(camera, AtString("look_at"), 0.f, 3.f, 0.f);
    AiNodeSetFlt(camera, AtString("fov"), 45.f);

    // create a point light source
    AtNode *light = AiNode(universe, AtString("point_light"), AtString("mylight"));
    // position the light (alternatively use 'matrix')
    AiNodeSetVec(light, AtString("position"), 15.f, 30.f, 15.f);
    AiNodeSetFlt(light, AtString("intensity"), 4500.f); // alternatively, use 'exposure'
    AiNodeSetFlt(light, AtString("radius"), 4.f);       // for soft shadows

    // get the global options node and set some options
    AtNode *options = AiUniverseGetOptions(universe);
    AiNodeSetInt(options, AtString("AA_samples"), 8);
    AiNodeSetInt(options, AtString("xres"), 480);
    AiNodeSetInt(options, AtString("yres"), 360);
    AiNodeSetInt(options, AtString("GI_diffuse_depth"), 4);
    // set the active camera (optional, since there is only one camera)
    AiNodeSetPtr(options, AtString("camera"), camera);

    // create an output driver node
    AtNode *driver = AiNode(universe, AtString("driver_jpeg"), AtString("mydriver"));
    AiNodeSetStr(driver, AtString("filename"), AtString("scene1.jpg"));

    // create a gaussian filter node
    AtNode *filter = AiNode(universe, AtString("gaussian_filter"), AtString("myfilter"));

    // assign the driver and filter to the main (beauty) AOV,
    // which is called "RGBA" and is of type RGBA
    AtArray *outputs_array = AiArrayAllocate(1, 1, AI_TYPE_STRING);
    AiArraySetStr(outputs_array, 0, AtString("RGBA RGBA myfilter mydriver"));
    AiNodeSetArray(options, AtString("outputs"), outputs_array);

    // finally, render the image!
    AiRender(session, AI_RENDER_MODE_CAMERA);

    // ... or you can write out an .ass file instead
    // AtParamValueMap* params = AiParamValueMap();
    // AiParamValueMapSetInt(params, AtString("mask"), AI_NODE_ALL);
    // AiSceneWrite(universe, "scene1.ass", params);
    // AiParamValueMapDestroy(params);

    // Arnold session shutdown
    AiRenderSessionDestroy(session);
    AiUniverseDestroy(universe);
    AiEnd();

    return 0;
}
Thanks for the code. I can try to compile it later tonight, but the polymesh code doesn't look fundamentally different from the code in the original example as far as I can tell, so I'm not sure why it's simply not rendering the polymesh plane.
This is the code I have, converted from C to Python.
from arnold import *
# start an Arnold session, log to both a file and the console
AiBegin()
AiMsgSetLogFileName("scene1.log")
AiMsgSetConsoleFlags(AI_LOG_ALL)
# create a sphere geometric primitive
sph = AiNode("sphere")
AiNodeSetStr(sph, "name", "mysphere")
AiNodeSetVec(sph, "center", 0.0, 4.0, 0.0)
AiNodeSetFlt(sph, "radius", 4.0)
# create a polymesh, with UV coordinates
mesh = AiNode("polymesh")
AiNodeSetStr(mesh, "name", "mymesh")
nsides_array = AiArray(1, 1, AI_TYPE_UINT, 4)
AiNodeSetArray(mesh, "nsides", nsides_array)
vlist_array = AiArray(12, 1, AI_TYPE_FLOAT, -10, 0, 10, 10, 0, 10, -10, 0, -10, 10, 0, -10)
AiNodeSetArray(mesh, "vlist", vlist_array)
vidxs_array = AiArray(4, 1, AI_TYPE_UINT, 0, 1, 3, 2)
AiNodeSetArray(mesh, "vidxs", vidxs_array)
uvlist_array = AiArray(8, 1, AI_TYPE_FLOAT, 0, 0, 1, 0, 1, 1, 0, 1)
AiNodeSetArray(mesh, "uvlist", uvlist_array)
uvidxs_array = AiArray(4, 1, AI_TYPE_UINT, 0, 1, 2, 3)
AiNodeSetArray(mesh, "uvidxs", uvidxs_array)
# create a red standard surface shader
shader1 = AiNode("standard_surface")
AiNodeSetStr(shader1, "name", "myshader1")
AiNodeSetRGB(shader1, "base_color", 1.0, 0.02, 0.02)
AiNodeSetFlt(shader1, "specular", 0.05)
# create a textured standard surface shader
shader2 = AiNode("standard_surface")
AiNodeSetStr(shader2, "name", "myshader2")
AiNodeSetRGB(shader2, "base_color", 1.0, 1.0, 1.0)
# create an image shader for texture mapping
image = AiNode("image")
AiNodeSetStr(image, "name", "myimage")
AiNodeSetStr(image, "filename", "arnold_icon.png")
AiNodeSetFlt(image, "sscale", 4)
AiNodeSetFlt(image, "tscale", 4)
# link the output of the image shader to the color input of the surface shader
#AiNodeLink(image, "base_color", shader2)
# assign the shaders to the geometric objects
AiNodeSetPtr(sph, "shader", shader1)
AiNodeSetPtr(mesh, "shader", shader2)
# create a perspective camera
camera = AiNode("persp_camera")
AiNodeSetStr(camera, "name", "mycamera")
# position the camera (alternatively you can set 'matrix')
AiNodeSetVec(camera, "position", 0, 10, 35)
AiNodeSetVec(camera, "look_at", 0, 3, 0)
AiNodeSetFlt(camera, "fov", 45)
# create a point light source
light = AiNode("point_light")
AiNodeSetStr(light, "name", "mylight")
# position the light (alternatively use 'matrix')
AiNodeSetVec(light, "position", 15, 30, 15)
AiNodeSetFlt(light, "intensity", 4500)
AiNodeSetFlt(light, "radius", 4)
# get the global options node and set some options
options = AiUniverseGetOptions()
AiNodeSetInt(options, "AA_samples", 8)
AiNodeSetInt(options, "xres", 480)
AiNodeSetInt(options, "yres", 360)
AiNodeSetInt(options, "GI_diffuse_depth", 4)
# set the active camera (optional, since there is only one camera)
AiNodeSetPtr(options, "camera", camera)
# create an output driver node
driver = AiNode("driver_jpeg")
AiNodeSetStr(driver, "name", "mydriver")
AiNodeSetStr(driver, "filename", "scene1.jpg")
# create a gaussian filter node
_filter = AiNode("gaussian_filter")
AiNodeSetStr(_filter, "name", "myfilter")
# assign the driver and filter to the main (beauty) AOV,
# which is called "RGBA" and is of type RGBA
outputs_array = AiArrayAllocate(1, 1, AI_TYPE_STRING)
AiArraySetStr(outputs_array, 0, "RGBA RGBA myfilter mydriver")
AiNodeSetArray(options, "outputs", outputs_array)
# finally, render the image!
AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()
I haven't tried your code, but from a quick glance it seems you are missing all the universe creation and handling. That is some newer Arnold stuff that also made me stumble with some simple scripts I had.
Look at Stephen's code to see how he creates an AiUniverse and passes that through his code.
Nico
If a universe or session isn't explicitly created, Arnold falls back to the default ones. That doesn't fully explain what I'm seeing anyway: the sphere and the light render fine, and the polymesh is the only thing that doesn't show up. Since the polymesh node is the only one not working as expected, I think I'm missing something specific to it.
Ok. Sorry about that. It was just a first guess. My second guess is that Python and AiArrays combined suck.
If you look at the log it actually complains:
00:00:56 1567MB WARNING | resetting parameter vlist on (found: nans or infs)
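That warning is consistent with the integer literals in the variadic AiArray(...) call being passed through with the wrong C type, so their raw bits get read back as 32-bit floats. Here is a ctypes-only sketch of that failure mode (an illustration of bit reinterpretation, not the exact code path inside the bindings):

```python
import ctypes

# reinterpret the raw bits of the 32-bit integer 10 as a 32-bit float
raw = ctypes.c_int(10)
as_float = ctypes.cast(ctypes.pointer(raw),
                       ctypes.POINTER(ctypes.c_float)).contents.value
print(as_float)  # a tiny denormal around 1.4e-44, nothing like 10.0
```

Depending on the integer values involved, the reinterpreted bits can decode to denormals, infs, or nans, which is exactly the kind of junk that trips Arnold's sanity check on vlist.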
If I create the mesh like this it renders for me:
# create a polymesh, with UV coordinates
from ctypes import c_float, c_uint  # make sure the ctypes types are in scope
mesh = AiNode("polymesh")
AiNodeSetStr(mesh, "name", "mymesh")
nsides = [4]
AiNodeSetArray(mesh, "nsides", AiArrayConvert(len(nsides), 1, AI_TYPE_UINT, (c_uint*len(nsides))(*nsides)))
vlist = [-10.0, 0.0, 10.0, 10.0, 0.0, 10.0, -10.0, 0.0, -10.0, 10.0, 0.0, -10.0]
AiNodeSetArray(mesh, "vlist", AiArrayConvert(len(vlist), 1, AI_TYPE_FLOAT, (c_float*len(vlist))(*vlist) ))
vidxs = [0, 1, 3, 2]
AiNodeSetArray(mesh, "vidxs", AiArrayConvert(len(vidxs), 1, AI_TYPE_UINT, (c_uint*len(vidxs))(*vidxs) ))
uvlist = [0, 0, 1, 0, 1, 1, 0, 1]
AiNodeSetArray(mesh, "uvlist", AiArrayConvert(len(uvlist), 1, AI_TYPE_FLOAT, (c_float*len(uvlist))(*uvlist) ))
uvidxs = [0, 1, 2, 3]
AiNodeSetArray(mesh, "uvidxs", AiArrayConvert(len(uvidxs), 1, AI_TYPE_UINT, (c_uint*len(uvidxs))(*uvidxs) ))
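For anyone who lands here later: the reason the AiArrayConvert version works is that the (c_type * n)(*values) constructor converts every element to the declared C type up front, instead of leaving the type inference to a variadic call. A standalone sketch of that conversion (no Arnold required):

```python
from ctypes import c_float

# (c_float * n)(*values) allocates a C array of n 32-bit floats and
# explicitly coerces each Python number (int or float) to c_float
vlist = [-10, 0, 10, 10, 0, 10]
buf = (c_float * len(vlist))(*vlist)
print(list(buf))  # [-10.0, 0.0, 10.0, 10.0, 0.0, 10.0]
```

Because the coercion is explicit, plain Python ints are safe here even for float parameters.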
OH! That might be what I was missing. Let me test later tonight and I'll let you know if that solves the issue. Good eye, @n_rehberg.