Is there a way to Render target geometry in Maya directly to a texture

Message 1 of 3

kingsleycluttencg
Community Visitor

I'm looking for a solution to output a collection of geometry to a texture in Maya. Ideally, this would be a way to get real-time flat facial features on a rig, similar to those in the Lego movie or the Playmobil films. There is a really good talk here (PlayMobil Face System) that details a method, but I just don't have enough experience with OpenMaya in Python to figure it out effectively.

 

So far I've been working with the OpenMaya API to get a solution. There have been a few promising snippets that write an image buffer to a file, but I can't seem to get them working as a node that updates a texture output.

 

# Import API modules
import maya.api.OpenMaya as api
import maya.api.OpenMayaUI as apiUI
import maya.api.OpenMayaRender as omr

# Grab the last active 3D viewport
view = apiUI.M3dView.active3dView()

# Read the color buffer from the view and save the MImage to disk
image = api.MImage()
# texture = omr.MTexture()  # MTexture isn't built directly; VP2 textures come from the MTextureManager
# view.readColorBuffer(image, True)  # Fails on its own because of Viewport 2.0
if view.getRendererName() == view.kViewport2Renderer:
    # Viewport 2.0 needs a float buffer allocated before the read
    image.create(view.portWidth(), view.portHeight(), 4, api.MImage.kFloat)
    view.readColorBuffer(image, True)
    # image.convertPixelFormat(api.MImage.kByte)
    print("viewPort2")
else:
    view.readColorBuffer(image)
    print("old viewport!")
image.writeToFile('F:/test.png', 'png')
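
The step I'm stuck on is getting that written image back into the scene as a live texture. A rough, untested sketch of what I mean (the node names and path are just placeholders) would be to wire a regular file texture into a material and keep re-pointing it at the capture so Maya reloads it from disk:

import maya.cmds as cmds

CAPTURE_PATH = 'F:/test.png'  # placeholder: the same file the capture above writes

# A plain file texture wired into a material (placeholder names)
file_node = cmds.shadingNode('file', asTexture=True, name='faceCapture_file')
shader = cmds.shadingNode('lambert', asShader=True, name='faceCapture_mat')
cmds.connectAttr(file_node + '.outColor', shader + '.color')

def refresh_capture_texture():
    # Re-setting the path makes the file node re-read the image from disk
    cmds.setAttr(file_node + '.fileTextureName', CAPTURE_PATH, type='string')

refresh_capture_texture()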

Replies (2)
Message 2 of 3

Kahylan
Advisor

Hi!

 

"real-time flat facial features on a rig similar to those in the Lego movie or the Playmobil films."

 

You are talking about two veeeeeeeery different facial setups in this sentence.

 

The facial rig in the Lego Movie was built from layered textures that could switch between different preset texture expressions for the eyes, mouth and eyebrows, and that were moved around on the model and scaled using the 2D placement nodes that tell a texture where it sits in UV space. This kind of setup is easily achieved with node networks, but it is limited to a kind of "stopmotion feel", because expressions can only change in quick snaps unless you draw tons of in-between textures that can be triggered to blend them.
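
Just to make that concrete, the bones of it are only a file node whose image path you swap between preset expressions, plus a place2dTexture to slide the decal around UV space. A minimal sketch, with made-up texture paths and node names:

import maya.cmds as cmds

# Placeholder preset textures, one image per expression
EXPRESSIONS = {
    'neutral': 'D:/faces/mouth_neutral.png',
    'smile': 'D:/faces/mouth_smile.png',
    'shock': 'D:/faces/mouth_shock.png',
}

mouth_file = cmds.shadingNode('file', asTexture=True, name='mouth_file')
mouth_place = cmds.shadingNode('place2dTexture', asUtility=True, name='mouth_place2d')
cmds.connectAttr(mouth_place + '.outUV', mouth_file + '.uvCoord')
cmds.connectAttr(mouth_place + '.outUvFilterSize', mouth_file + '.uvFilterSize')

def set_expression(name):
    # Snap to a preset expression by swapping the texture path
    cmds.setAttr(mouth_file + '.fileTextureName', EXPRESSIONS[name], type='string')

def place_mouth(u, v):
    # Slide the decal around UV space via the 2D placement node
    cmds.setAttr(mouth_place + '.translateFrameU', u)
    cmds.setAttr(mouth_place + '.translateFrameV', v)

set_expression('smile')
place_mouth(0.1, -0.05)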

 

The Playmobil facial rig uses ray tracing, which basically means rays are cast from the bezier curves towards the middle of the head; the points where those rays intersect the mesh (in quaternion space) are then used to check which pixel in UV space corresponds to each intersection point, building the texture pixel by pixel. At least, that is how I understood the talk. So the texture is not read out of the viewport the way your API script does it.

This system probably took multiple engineer-months to write, as it is mathematically very complex to figure out which pixels need to display what color, and it involves custom nodes that handle the display of the texture. Unless someone from Dreamworks is here in the forum and willing to share some secrets, I don't think you'll find much help here to achieve this.
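
Just to show what the smallest piece of that lookup would involve (a rough, untested sketch, nowhere near their system; the mesh name is a placeholder), the existing API does expose the ray-to-UV part:

import maya.api.OpenMaya as om

def ray_to_uv(mesh_name, origin, direction):
    # Cast one ray at the mesh and return the UV at the hit point (or None on a miss)
    sel = om.MSelectionList()
    sel.add(mesh_name)
    fn_mesh = om.MFnMesh(sel.getDagPath(0))

    hit = fn_mesh.closestIntersection(
        om.MFloatPoint(*origin), om.MFloatVector(*direction),
        om.MSpace.kWorld, 9999.0, False)
    if hit is None or hit[2] == -1:  # no intersection found
        return None
    p = hit[0]  # MFloatPoint where the ray hit the surface
    u, v, _face = fn_mesh.getUVAtPoint(om.MPoint(p.x, p.y, p.z), om.MSpace.kWorld)
    return u, v

# e.g. one ray fired from in front of the face towards the head centre
print(ray_to_uv('faceMeshShape', (0.0, 15.0, 10.0), (0.0, 0.0, -1.0)))

That's only a single ray though; doing it per pixel, per frame, fast enough for the viewport is the part that takes the real engineering.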

 

Honestly, I would probably go with the Lego movie method unless you have a big budget and some very skilled software engineers at your disposal. It's more achievable, and since you are already stylising your animation, having a bit of a stopmotion feel can be a good thing.

Message 3 of 3

kingsleycluttencg
Community Visitor

Thanks for the quick response and the insight 🙂

 

I don't believe it's that far out of our grasp. 

 

Overall, the code snippet I added was to show that grabbing an image from the color buffer is possible; the hoped-for next step would be to parse that info, via some UV context, into a texture or RGBA value that can be plugged into a material. Then ideally the function gets run every frame, or every time the geometry in the "2D" facial rig moves. It's frustrating because I can see the data I need is there, I'm just not sure how to get it to go where I want it. 😅
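
For the "run it every frame" part, the crude version I have in mind (an untested sketch; capture_and_write stands in for the snippet in my first post, and faceCapture_file for whatever file node it feeds) is just a scriptJob on the timeChanged event:

import maya.cmds as cmds

def capture_and_write():
    # Placeholder for the viewport-capture snippet in my first post (writes F:/test.png)
    pass

def refresh_face_texture():
    capture_and_write()
    # Re-set the path so the file node reloads the image from disk
    cmds.setAttr('faceCapture_file.fileTextureName', 'F:/test.png', type='string')

# Re-run the capture whenever the current frame changes
job_id = cmds.scriptJob(event=['timeChanged', refresh_face_texture])
# cmds.scriptJob(kill=job_id, force=True)  # to remove it later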

 

It's not a train smash for the project I'm working on, as I can just render the face geo out on an orthographic flat plane and apply it to the face from there. Currently I'm using a wrap deformer to stick the geo to the face during animation, but it leaves much to be desired. Ideally, this solution would bypass the need to render out the face to get real-time feedback on the model.

 

The Playmobil reference is a long shot to be sure; I don't have any engineers in my back pocket, sadly 😅. It does, however, give some insight into how it was done, essentially the ray-tracing approach they use, which sounds kind of like a rasterizer (good explainer for that here). So a solution could be to collect the geometry, set a grid resolution along an axis (based on the geo's bounding box?), and fire off a bunch of rays from the grid that collect color data from points on the geometry. Sounds logical enough, but my lack of experience in Python is killing me here.
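
Something along these lines is what I'm picturing (a completely untested sketch pieced together from the docs; faceGeoShape and sourceFile are placeholder names, and I'm writing a plain PPM so I don't have to fight MImage pixel access): build a pixel grid over the geo's bounding box, fire a ray through each cell, look up the hit UV, and sample the source texture's color there with colorAtPoint.

import maya.cmds as cmds
import maya.api.OpenMaya as om

def bake_face_texture(mesh_name='faceGeoShape', texture_node='sourceFile',
                      resolution=128, out_path='F:/faceBake.ppm'):
    # Fire a grid of rays down -Z through the geo's bounding box and bake the hit colors
    sel = om.MSelectionList()
    sel.add(mesh_name)
    fn_mesh = om.MFnMesh(sel.getDagPath(0))

    # World-space bounding box: [xmin, ymin, zmin, xmax, ymax, zmax]
    xmin, ymin, zmin, xmax, ymax, zmax = cmds.exactWorldBoundingBox(mesh_name)

    pixels = bytearray(resolution * resolution * 3)  # RGB buffer, black where rays miss
    for j in range(resolution):
        for i in range(resolution):
            x = xmin + (xmax - xmin) * (i + 0.5) / resolution
            y = ymin + (ymax - ymin) * (j + 0.5) / resolution
            hit = fn_mesh.closestIntersection(
                om.MFloatPoint(x, y, zmax + 1.0), om.MFloatVector(0.0, 0.0, -1.0),
                om.MSpace.kWorld, 9999.0, False)
            if hit is None or hit[2] == -1:  # ray missed the geo
                continue
            p = hit[0]
            u, v, _face = fn_mesh.getUVAtPoint(om.MPoint(p.x, p.y, p.z), om.MSpace.kWorld)
            r, g, b = cmds.colorAtPoint(texture_node, o='RGB', u=u, v=v)
            idx = ((resolution - 1 - j) * resolution + i) * 3  # flip so +Y is the top row
            pixels[idx:idx + 3] = bytearray([int(r * 255), int(g * 255), int(b * 255)])

    # Write a simple binary PPM (easy to view or convert elsewhere)
    with open(out_path, 'wb') as f:
        f.write(b'P6\n%d %d\n255\n' % (resolution, resolution))
        f.write(pixels)

At 128x128 that's already around 16k intersection tests per bake, so it would need to be fairly coarse (or moved into a compiled node) to get anywhere near real-time.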

 

I was hoping, as a hail mary, that someone here had the magic code I was looking for, but beyond that I think this could be a fun exercise to come back to after my current project wraps, and finally an excuse to learn a little Python.
