I'm getting ZBrush .obj files, importing them, angle-collapsing them, and then I want to "project" the high-poly model onto the low-poly one and store the result as displacement offsets and object-space normals in the texture maps.
I know a few people here do this, and at the moment my system works, but there are tons of bugs in the output of the model: there are missed texels and overwriting, and it looks very messy.
If you look closely at this image you can see I'm making a lot of mess around the correct texels.
What is the correct term for what this is called, so I can look it up on the internet? Is there a pro PDF I could read?
Or has anyone here got any advice of their own for me? If you want complete details of how I go about it, I can give you that.
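To give a rough idea of what I mean by "project", the per-texel idea is something along these lines (a simplified sketch, not my actual code; the function names are made up, and the ray/triangle test is standard Moller-Trumbore):

```python
# Sketch: for each texel, take its position on the low-poly surface, cast a ray
# along the interpolated normal, and find the nearest hit on the high-poly
# mesh.  The hit distance becomes the displacement offset; the hit triangle's
# normal would become the object-space normal.

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore; returns hit distance t along the ray, or None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                     # ray parallel to triangle plane
    inv = 1.0 / det
    tv = sub(orig, v0)
    u = dot(tv, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(tv, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def project_texel(surface_point, normal, hipoly_tris):
    """Nearest hit along the normal = displacement offset for this texel."""
    hits = [t for tri in hipoly_tris
            for t in [ray_triangle(surface_point, normal, *tri)]
            if t is not None]
    return min(hits) if hits else None
```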
Yeah, this is a pretty common problem, and it's caused by texture seams without appropriate padding. Each edge in the mesh that is not continuous in texture space needs padding relative to the maximum mip-map level you expect to be visible. This is not an exact science at all, and the level of padding is usually a compromise between texture size, artist work (UV-mapping slavery), and visual artifacts. One common technique is to "hide" most texture seams in "less important" places; for a human model, this could be at the back of the head, under the arms, etc. In other words, make sure that the mesh does not contain texture seams in the middle of the face.
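As a back-of-envelope rule (my own numbers, not an exact figure): each mip level halves the resolution, so for a chart's border to still contain a valid texel at mip level L, you need on the order of 2^L texels of padding at the base level.

```python
# Rough rule of thumb: each mip level halves the resolution, so a seam needs
# roughly 2**L texels of base-level padding to survive down to mip level L.
def padding_texels(max_visible_mip):
    return 2 ** max_visible_mip

# e.g. if you expect mips down to level 3 (1/8 resolution) to be visible,
# that suggests 8 texels of padding at the base level.
```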
It's all auto-unwrapped at the moment, so the seams are terrible.
I've actually got a post-filter that runs after the rendering is done and spreads the normals over the seams of the triangles to fix that, but it's not working entirely.
But is there an educational site somewhere for this? When I look it up, I've used "normal map/displacement map baking" and "hi-poly compression", and all I got was how to do it in Blender or Maya, no do-it-yourself sites... do you know of anywhere I could go for that?
If I jump the texture size up to 4096x4096 I do get a big improvement in the looks, but at 1024x1024 (the picture is at that size) it's quite terrible. I should have posted a more close-up shot; it's pretty bad.
Try an erode filter that only writes to texels that don't yet have a valid normal, and renormalize at the end. I believe this is what 3D Studio MAX does.
That sounds good. It's just, what's an erode filter exactly?
> what's an erode filter exactly?
I haven't heard that term before, but I would guess you would simply repeatedly scan the texture for black pixels and give them the average color of their non-black neighbors.
Each pass would add a border of one pixel to the colored area. Repeat as long as there are black pixels neighboring colored ones. (You need to terminate if all pixels are black or all pixels are colored.)
Normalize all pixels when you are done, or you might end up with garbage anyway.
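In code, one pass of that could look something like this (a minimal sketch, assuming the map is stored as a 2D list of (x, y, z) tuples where (0, 0, 0) means "no valid normal written here yet"; a real implementation would work on your actual texture format):

```python
import math

EMPTY = (0.0, 0.0, 0.0)  # texel that was never written during baking

def dilate_pass(img):
    """One pass: fill each empty texel with the average of its valid 4-neighbors."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    changed = False
    for y in range(h):
        for x in range(w):
            if img[y][x] != EMPTY:
                continue
            neighbors = [img[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and img[ny][nx] != EMPTY]
            if neighbors:
                n = len(neighbors)
                out[y][x] = tuple(sum(c[i] for c in neighbors) / n for i in range(3))
                changed = True
    return out, changed

def fill_seams(img):
    changed = True
    while changed:          # terminates: each pass either fills texels or changes nothing
        img, changed = dilate_pass(img)
    # renormalize at the end, since averaged normals are no longer unit length
    return [[(v if v == EMPTY
              else tuple(c / math.sqrt(sum(c * c for c in v)) for c in v))
             for v in row] for row in img]
```

Each call to `dilate_pass` grows the valid region by a one-texel border, which is exactly the seam padding discussed above; running it to completion floods the whole chart exterior.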