Be aware that the video below contains footage that could spoil the game for anyone who hasn't played it yet. I apologize for that.
Long ago, in the before time, we didn't have a way to artistically control the footprints characters could leave on surfaces. The footprints in Uncharted 2 were smaller in scope, and artists were limited in what they could do with them. Since then we've gained considerable support from a system that spawns effects based on collision data from objects in the game. Using joints in the game, we cast rays that probe surfaces, then test the returned information against a set of conditions and spawn effects once those conditions are satisfied. We call this the Splasher System. Originally it was just for creating splash effects for characters in water, but it's a lot more intricate now. The splashers are something I'd like to go into more detail on at a later point if I'm allowed to. It's a great system, but on the PS3 it became quite expensive and we had to scale back how many characters and objects used it.
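To make the idea concrete, here's a rough Python sketch of how a splasher-style update might look: rays cast from character joints probe the surface below, and an effect spawns once the hit satisfies some conditions. Every name and condition here is illustrative on my part; the actual system is internal to the engine.

```python
from dataclasses import dataclass

@dataclass
class RayHit:
    point: tuple      # world-space hit position
    distance: float   # distance from the joint to the surface
    surface: str      # surface type tag, e.g. "water", "mud"

@dataclass
class Joint:
    name: str
    position: tuple
    speed: float         # how fast the joint is currently moving
    probe_length: float  # how far below the joint we probe

def update_splashers(joints, raycast, spawn_effect, min_speed=0.5):
    """Probe the surface under each joint; spawn a splash when the joint
    is close to a water surface and moving fast enough."""
    for joint in joints:
        hit = raycast(joint.position)
        if hit is None:
            continue
        # Example conditions: near the surface, moving, and it's water.
        if (hit.distance < joint.probe_length
                and joint.speed >= min_speed
                and hit.surface == "water"):
            spawn_effect("splash", hit.point, joint.speed)
```

The interesting part is that the conditions (distance, speed, surface type) are data an artist can tune per joint, which is what makes the system flexible enough to outgrow its water-splash origins.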
One of the greatest things we have is the ability to project materials onto objects, using our particle systems to generate the projections. This sounds like a decal system, and in its most basic description, it is. During Uncharted 3, our dedicated effects programmer, Marshall Robin, developed the ability to spawn projected particle cubes that use a number of parameters to project materials we author onto surfaces. For example, we can use this system to write directly to our destination normal buffer, which stores the normal value of every pixel in screen space; our projected particles directly perturb that buffer. Our lead effects artist, Keith Guerrette, used this technique when creating the sand footprints in Uncharted 3. They can animate and do all sorts of great stuff. We can also draw these projected particles to the same buffers we draw all of our other particles to, so we can project color-based materials onto surfaces, which is very much like the decals you're used to seeing. There is a lot of overdraw cost in these projected cubes, so we don't use them all over the place, just where they will have the greatest visual impact and where surfaces tend to have a lot of deformation. In the video I put together you'll see the outline of some cubes; those are the projected particles.
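The core math behind this kind of projected-decal cube is fairly standard, and a small sketch may help. Assuming the usual approach (reconstruct each pixel's world position, transform it into the cube's local space, reject pixels outside the box, and use two of the local axes as texture coordinates), it looks roughly like this; the function names and the [-0.5, 0.5] unit-cube convention are my assumptions, not details from our engine:

```python
def world_to_decal_uv(world_pos, cube_inv_transform):
    """Map a pixel's reconstructed world position into a projected cube's
    local space; return UVs for sampling the authored material, or None
    if the pixel falls outside the cube."""
    local = cube_inv_transform(world_pos)   # into unit-cube space
    if any(abs(c) > 0.5 for c in local):    # outside the box: no decal here
        return None
    # Remap the cube's xy face from [-0.5, 0.5] to [0, 1] texture coords.
    return (local[0] + 0.5, local[1] + 0.5)
```

The overdraw cost mentioned above comes from the fact that every screen pixel the cube covers has to run this test, whether or not the decal ends up visible there.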
We are lucky enough to have our own node-based shader authoring tool. If you've ever used UDK's material editor, it's a lot like that. Knowing we could only project a flat image onto the surface, I used a technique that in UDK is referred to as bump offsetting. The idea is that you use the camera direction to bias your original UV values, modulated by some constants, to create the illusion of parallax. The camera direction is a unit vector, so its components range from -1 to 1. This is ideal because UV coordinates are commonly in tangent space, which typically ranges from -1 to 1 as well. Of course you can push the values to whatever ranges you want, but you would soon find the problem with that. The camera direction is a vec3, and I really only need two of its components: x and y. So I separate all three and recombine x and y into a vector that is only x and y.
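Written out as code instead of nodes, the basic bump offset is just one multiply-add on the UVs. This is a generic sketch of the technique, not a dump of my actual node graph; the scale constant is the kind of value you'd tune by eye:

```python
def bump_offset_uv(uv, camera_dir_ts, scale):
    """Classic bump offset: bias the UVs along the tangent-space view
    direction, scaled by a constant, to fake parallax. camera_dir_ts is
    a unit vec3; only its x and y components are used (z points out of
    the surface)."""
    u, v = uv
    return (u + camera_dir_ts[0] * scale,
            v + camera_dir_ts[1] * scale)
```

Because the offset follows the view direction, the sampled texture appears to slide across the surface as the camera moves, which the eye reads as depth.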
Using this vector, I multiplied it by a texture of a deep, simple impression I made in Maya. The texture is the equivalent of a displacement map. By multiplying the camera direction components by the texture, I create a more complicated and interesting parallax shape per pixel. Now, what to use these crazy UVs on? I used the displacement map texture to create the illusion of shadowing inside the pit of the print.

One of the unique things about our shader authoring tool is that we can define how our alpha is applied. This is similar to Photoshop layer styles: things like multiplicative, additive, and so on. The difference is that our options don't come as presets, so we need to know how the alpha blending actually works. The simple version of what we have to work with is the source pixel (the pixel we are trying to draw) and the destination pixel (the pixel that is already there). We define some arithmetic on each of those blending terms, and by default the two results are added together. The majority of our materials multiply the source pixel by 1, so what we're drawing remains as is, and multiply the destination pixel by one minus the source alpha, i.e. the inverse of the alpha we're drawing. I didn't want to create any color for the footprints beyond what was already being drawn there, so the only color in my material is the shadow for the deeper parts of the print, to make it look darker. For the alpha blending, I set my source term to be multiplied by the destination pixel's color, so my contribution carries within it the color value of the pixel that's already there. Then my destination term is multiplied by one minus the source alpha. Since my source term already contains the destination pixel's color, I don't want twice that color value when the two are added together. That's a little complicated, I know, but totally worth it.
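The blend arithmetic above can be written out per color channel. This is my reading of the setup expressed in the standard framebuffer-blend form (result = source x source-factor + destination x destination-factor), so treat the second function as an interpretation rather than the exact engine configuration:

```python
def typical_blend(src, src_alpha, dst):
    """The common case: source * 1 plus destination * (1 - source alpha),
    per color channel."""
    return src * 1.0 + dst * (1.0 - src_alpha)

def footprint_blend(src, src_alpha, dst):
    """The footprint setup as I read it: the source term is multiplied by
    the destination color, so the shadow scales down what is already
    there instead of adding a flat grey on top. The destination term is
    still multiplied by (1 - source alpha)."""
    return src * dst + dst * (1.0 - src_alpha)
```

With the footprint blend, a fully transparent source pixel leaves the destination untouched, while a dark shadow value darkens the existing snow color rather than painting over it, which is exactly the "no new color" behavior described above.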
There are some other things I did to create the prints but the bulk is being done in that one material with the fake parallax. The video will quickly demonstrate how the material is being modulated by a float to control how deep the parallax goes. These prints were important because we couldn't create any dynamic displacement of the geo and the player needed to have something to clearly track the deer. I'm proud of how these prints turned out. We tried to create prints like this for Ellie and I spent a good deal of time on them but it was difficult to make it look like she was leaving long trails and tracks in the snow. So I opted for the prints that write to the destination normal buffer to make it look like long streaks in the snow. Anyway, enjoy the video.