Well, I've just about read every thread on here that might help me use Mudbox in my workflow. I figured I would create a thread specifically for that purpose, as I don't think every Maya user should have to go through what I have; there should be a place that caters to a Maya --> Mudbox workflow.
With that said, it would be great if the moderators, or anyone who understands both Mudbox and Maya inside and out, could post in this section!
I currently run Maya 7.0 and am trying to use Mudbox 1.0.4 for my displacements. Most of us in the animation industry will need to know this basic workflow in great detail, because a solid high-poly model exported from Mudbox alone just won't do. I, and I'm sure many other Maya users, need a step-by-step example covering:

1. Creating a low-poly mesh in Maya, and UV mapping it a certain way (and why).
2. Exporting to .obj and importing into Mudbox, including whether the ratio of sizes between the two applications has an effect on the final displacement map.
3. A basic mesh change in Mudbox, e.g. a long horn sculpted on a sphere.
4. The steps to get the best displacement map out of Mudbox for Maya: which settings people use and why, through to the final creation of the map.
5. Applying that map to the same exported sphere in Maya, and setting it up in mental ray to get the best render.

I know getting the displacement to fit and render out right in Maya has been a real pain in the arse, so it would be great if someone could also go through their steps in Maya after they have a completed displacement from Mudbox, to get the most out of the displacement as far as matching your Mudbox creation on screen.
If all this could be summed up on this discussion board with pictures, like a detailed tutorial, that would be great. I know that once this Maya-to-Mudbox workflow is created and followed to a T, there will be many more amazing works coming from this great program. A great displacement is so key to a model going out to be rigged for animation. I would love to contribute where I can, and will once I have all the info I need and all my questions answered.
Thanks, and I hope we can get more Maya people involved in this process!
A simple sphere from Maya would be a good first tut on the process I described above.
Thank you for the important information, Dave.
...
Your mesh had the following issues, which caused your problem:
-Model scale: The model's total height was 4.8cm, the lip area was 0.2cm, and the bump was around 0.02cm, which may have led to raytrace errors. It's good to work to scale.
-UVs: The layout did not give the right amount of space to each face, most likely because the tools you used to lay out the UVs were fooled by unevenly spaced edges in the mesh.
-Edge flow: Uneven edge flow works against you when you subdivide: huge faces on the forehead, small ones on the nose. You also had star intersections in areas like the nostril corner, which can cause issues. The goal is to have all the faces the same size/proportion, within say 15%. (You also need a lot more topology around the ears if you want to extract good maps from there. Build the ear completely in the lowRes.)
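As a rough sanity check of that "all faces within ~15%" guideline outside of Maya, you can parse a triangulated .obj export and compare face areas. This is just a standalone illustration, not a Maya or Mudbox tool, and the tiny inline mesh below is made up for the example:

```python
# Check whether the triangle areas of a (triangulated) .obj mesh all
# stay within a tolerance of the mean area. Inline two-triangle quad
# mesh is a placeholder for a real export.
import math

OBJ_TEXT = """\
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
f 1 2 3
f 1 3 4
"""

def face_areas(obj_text):
    verts, areas = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # take the vertex index before any '/' (f v/vt/vn style)
            i, j, k = (int(p.split("/")[0]) - 1 for p in parts[1:4])
            a, b, c = verts[i], verts[j], verts[k]
            ab = [b[n] - a[n] for n in range(3)]
            ac = [c[n] - a[n] for n in range(3)]
            cr = (ab[1]*ac[2] - ab[2]*ac[1],
                  ab[2]*ac[0] - ab[0]*ac[2],
                  ab[0]*ac[1] - ab[1]*ac[0])
            areas.append(0.5 * math.sqrt(sum(x*x for x in cr)))
    return areas

def within_tolerance(areas, tol=0.15):
    mean = sum(areas) / len(areas)
    return all(abs(a - mean) / mean <= tol for a in areas)

print(within_tolerance(face_areas(OBJ_TEXT)))  # prints True
```

If this prints False for your mesh, the subdivision in Mudbox will concentrate detail unevenly, exactly the forehead-vs-nose problem described above.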
-Export: Export the lowRes after sculpting and use that mesh to render with the 16-bit displacement map I created.
Those are my suggestions. What I did was scale your model up by 10 and completely redo the UVs, as well as delete un-needed polys under the model, since they are never seen and take up UV space. I also added a few edge rings around the neck. Then I re-sculpted.
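The scale-up-by-10 step was done in Maya, but for what it's worth it can also be applied to the exported .obj directly, since .obj is plain text. A minimal sketch (filenames and factor are placeholders) that scales only the `v` vertex lines and leaves faces, UVs, and normals untouched:

```python
# Scale every "v x y z" vertex line of a .obj by a factor; all other
# lines (f, vt, vn, comments) pass through unchanged, since UVs and
# normals must not be scaled with the geometry.
def scale_obj_lines(lines, factor=10.0):
    out = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "v":
            xyz = [str(float(x) * factor) for x in parts[1:4]]
            out.append("v " + " ".join(xyz))
        else:
            out.append(line.rstrip("\n"))
    return out

print(scale_obj_lines(["v 1.5 2.0 -0.5", "f 1 2 3"]))
# prints ['v 15.0 20.0 -5.0', 'f 1 2 3']
```

Note that if you scale the .obj before sculpting, you need to render with a mesh at the same scale, or the displacement amounts will be off by the same factor.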
Results: No issues at all with the maps or render.
Thanks Dave, that could be the problem. But why do the ears also show artifacts if there are no open edges on them?
Hi Bateman,
This looks like a simple mismatch between the hiRes and the lowRes. Always be careful with open edges: either close out the back of the eye before extracting maps, or, if the mesh is hiRes enough, I sometimes delete the last row of edges on the lowRes just before extraction. So your issues are most likely caused by rays being misfired off the back edge of the eye.
We use a raytrace solution for extraction as it gives the best quality maps and allows the user to create creases and pinches in the mesh without problems; the resulting map will not render with that crease opened up. It also allows the user to extract between multiple objects and arbitrary objects, for example when you have made a UV change or changed the topology of the lowRes. The only drawback is that you need to be slightly careful with open edges and alignment between the lowRes and hiRes. But raytracing gives the best quality maps.
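The core of the raytrace extraction idea described above is: from each sample point on the lowRes surface, fire a ray along the surface normal and record the distance at which it hits the hiRes mesh; that distance becomes the displacement value. A minimal per-triangle sketch using the standard Moller-Trumbore intersection test (the geometry below is made up for illustration; Mudbox's real extractor is of course far more involved):

```python
# Moller-Trumbore ray/triangle test: returns the distance t along the
# ray at which it hits the triangle, or None on a miss. In extraction,
# t is the displacement for that sample.
def ray_triangle_t(orig, d, v0, v1, v2, eps=1e-9):
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:        # outside the triangle
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv
    if v < 0.0 or u + v > 1.0:    # outside the triangle
        return None
    return dot(e2, qvec) * inv    # the displacement distance

# lowRes sample at the origin, normal +Z; hiRes triangle 0.5 units away
t = ray_triangle_t([0.2, 0.2, 0.0], [0.0, 0.0, 1.0],
                   [0.0, 0.0, 0.5], [1.0, 0.0, 0.5], [0.0, 1.0, 0.5])
print(t)  # prints 0.5
```

This also shows why open edges bite: a ray fired from a lowRes point past an open border of the hiRes mesh finds no triangle at all (every test returns None), and you get exactly the kind of misfired-ray artifacts described for the back edge of the eye.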