How to add depth of field to your SL photos in post-production

Years after I originally discussed how to capture depth of field (DoF) in high-resolution pictures in Second Life, that is still one of the most popular texts on this blog. Yet there are other ways of applying the DoF effect to a photo taken in SL, and this post will focus on one of those methods. Let’s make it clear from the beginning: it requires software for post-processing the image. The idea came after Nalates Urriah prepared a tutorial on how to capture depth information in the Black Dragon viewer, in order to use it while editing SL photos in Photoshop. I sometimes adopt a similar procedure, with some variations from her approach, and that is what we will talk about here. I will not focus on Black Dragon, for most viewers have what’s required for the process (nonetheless, at the end, I will mention two features from Black Dragon that I consider particularly interesting for taking pictures in Second Life). Also, I will edit my image in Gimp, but I believe that the steps I show here can be easily adapted to Photoshop.

Taking the picture and building its depth map

The first step in the whole process doesn’t differ much from Nalates’s, except that she was using Black Dragon, which offers some additional features that other viewers lack. For this post, I used Catznip, the viewer I generally go with for my daily use of Second Life. In any case, what I describe here can be reproduced in most SL viewers.

Well, if you are going to work on a Second Life image, you’d better start by capturing an image in SL. In this process, we will work with a depth map, so you will actually need to take the same shot twice. Because of that, everything has to be really still. If avatars are involved, for instance, they should be frozen in a pose – not even their neck and head should move. It is also better if no moving objects are in the scene. In some cases, that is not a problem, especially if they are far from the virtual observer, such as birds flying high in a distant sky that will basically work as a background. Still, for those who are not familiar with this technique, I’d say it is better if nothing moves at all.

(There is the possibility of using the Freeze frame option on your Snapshot window. That option will work most of the time, and it’s a good alternative if there are moving objects in your scene – but it may be tricky sometimes.)

The ‘Freeze frame’ option

So, after choosing your angle and placing your camera, you can take the first, traditional shot – without using the DoF effect that you can find in your viewer’s preferences. For this post, I chose this shot, taken in Endless 58-58N, a beautiful region built by Sombre Nyx:


This is a high-resolution picture – originally 4204 x 2225 pixels, which I cropped to 2911 x 1785 pixels just because I think it helps us focus (no pun intended) on the effects of the technique. I activated shadows in Catznip, but not the DoF effect. In the viewer’s Snapshot window, I chose to capture colors.

‘Capture colors’

As I said, I had to take the shot twice. For the second shot, with everything, including my camera, in the same position as before, I chose to capture depth instead of colors in the Snapshot window.

‘Capture depth’

The result of the second shot was, then, this black and white image:


Notice that I used the same resolution as before and that, afterwards, I cropped the depth-based image in the very same way as the color-based one. Cropping in advance is not necessary, but if you do it, you have to reproduce the same crop in both images. With that, I was ready for the actual post-processing.

Preparing the image and understanding the depth map

After taking both shots, I opened them as layers of the same image in Gimp. One can do that in a variety of ways. This is not a tutorial on how to use Gimp, so I will assume that people know how to do it – in my case, I created a third image and opened both shots that I took in SL as layers in that new image. The color-based shot will be my primary layer, while the depth map (the second shot that I took) will mainly be the source for some masks that I will use. At this point, it is worth noticing that the depth map is different from what we might call the focus map.

Both the color shot and the depth map were added to the same image as layers

The depth map provided by my viewer is a greyscale image, with tones that vary from black to white according to the (virtual) distance from each object to the observer’s screen. Objects placed closer to your screen will be black, and the ones further away will be white; between them, you will see different tones of grey. Intuitively, one can conclude that, by duplicating the color-based layer, applying some blur to it and using the depth map as a mask on the blurred layer, one will obtain an image with a DoF effect. Nonetheless, there are three basic problems with that: first, the result will probably be somewhat subtle, and one may want a more pronounced effect; second, the object borders will look a bit awkward; third, focus will always fall on the objects closest to the screen (for they correspond to the black areas on the mask applied to the blurred layer), but one may want to focus on objects that are a bit more distant, somewhere between the foreground and the background. To avoid those problems, one can build their own focus map, which is different from the original depth map.
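To see why the naive approach is tempting, here is a minimal sketch in Python, with NumPy arrays standing in for Gimp’s greyscale layers (the pixel values are illustrative, not taken from the actual photo): the depth map itself is used as the alpha that blends the sharp shot with a blurred copy.

```python
import numpy as np

def naive_dof(color, blurred, depth):
    """Blend a sharp image with a blurred copy, weighting by the depth
    map: black (0, near) keeps the sharp pixel, white (255, far) takes
    the blurred one, and greys blend linearly in between."""
    alpha = depth.astype(float) / 255.0
    out = color.astype(float) * (1.0 - alpha) + blurred.astype(float) * alpha
    return out.round().astype(np.uint8)

# One row of three pixels: near (depth 0), middle (128), far (255).
color   = np.array([10, 10, 10], dtype=np.uint8)
blurred = np.array([200, 200, 200], dtype=np.uint8)
depth   = np.array([0, 128, 255], dtype=np.uint8)
print(naive_dof(color, blurred, depth))  # [ 10 105 200]
```

As described above, this always keeps the nearest objects sharp no matter what, which is exactly the limitation that building a custom focus map works around.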

Building a focus map

To build what I’m calling a focus map here, you have to decide what the focal point of your image should be. In the photo I chose to work on, my own avatar is in the foreground, in front of a rock, and my husband, Randy, is a bit more distant, in front of a second rock. I decided to focus on Randy. Normally, that would mean that not only what is behind Randy should stay out of focus, but also that anything closer to the observer than Randy should be blurred. Now, if we use the depth map as it is for a mask, considering that all the darker areas on the mask will be in focus, the result is that what’s closest to the screen will be in focus – and that’s not what I wanted.

To correct that, I duplicated my Depth map layer twice and hid the original one, just to keep it there in case I had to work on it again. On the layer called Depth map 2, I inverted the colors (on Gimp’s menu, I chose Colors and then Invert). With that, my foreground is now white and, so, it will get blurred. The problem is that the area where my focal point should stay (the place where Randy is standing) becomes lighter as well. Because of that, I had to adjust the color curves (on the menu, Colors and then Curves). I clicked on the area of the image where Randy is located and found out which line it corresponded to on the graph showing the curve I would adjust. I intended to make that area black, so I created a point on that line on the graph and pulled it all the way down. I still had to adjust the curve a bit, because the foreground had many grey tones and I wanted to reduce that. When I was happy with the result, I hit OK.

‘Depth map’ layer duplicated twice
I inverted colors on the ‘Depth map 2’ layer
I clicked on the area where Randy is, and a vertical line appeared on the ‘Color Curves’ graph
I created a point on the vertical line, on the graph, and pulled it all the way down
I worked on the curve a bit longer until I felt happy with the result
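The inversion and the curve step can be sketched in Python/NumPy as well. Keep in mind that Gimp’s Curves tool applies a smooth curve; the hard tone window used here, and the focal_tone and width values, are my own illustrative assumptions.

```python
import numpy as np

def invert(depth):
    # Colors > Invert: near areas become white, far areas become black.
    return (255 - depth.astype(int)).astype(np.uint8)

def pull_to_black(mask, focal_tone, width=30):
    """Crude stand-in for the Curves adjustment: every tone within
    `width` of the sampled focal tone is pushed down to black, so the
    focal plane reads as 'in focus' on the mask."""
    out = mask.copy()
    out[np.abs(mask.astype(int) - focal_tone) <= width] = 0
    return out

# Depth row: foreground (0), the focal subject (120), background (255).
depth = np.array([0, 120, 255], dtype=np.uint8)
inverted = invert(depth)              # [255 135   0]
focused = pull_to_black(inverted, 135)
print(focused)                        # [255   0   0]
```

Note that the far background also ends up dark in this inverted map; that is fine, because combining it with Depth map 1 in lighten only mode brings the background back to white.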

After that, I hid Depth map 2 and worked on Depth map 1. There, the area where Randy is standing wasn’t totally black yet, so again I adjusted the curves: I clicked on the area where Randy should be, found it on the graph, created a point there, pulled it down and clicked on OK.

On ‘Depth map 1’, I clicked on the area where Randy was, found it on the graph, created a point there and pulled it all the way down

Now I had to combine Depth map 1 and Depth map 2. So, I made the upper layer visible again and set its mode to lighten only. I could have merged Depth map 2 down, but for this post I created a new layer from visible, named it Depth map 3 and hid the other Depth map layers. Also, I knew from experience that, in the middle distances, the DoF effect would be subtle – it generally is. So, on Depth map 3, I adjusted the colors again, in order to increase contrast in the middle distances and bring the lighter areas closer to white. With that, Depth map 3 became my own custom focus map.

Changing the upper layer mode to ‘lighten only’
The ‘Depth map 3’ layer before adjusting colors
The focus map with the grey tones adjusted
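The combination step can be sketched the same way: lighten only simply keeps the lighter of the two pixels, and the contrast adjustment stretches the middle greys apart (the lo/hi thresholds below are illustrative assumptions, not values taken from Gimp).

```python
import numpy as np

def lighten_only(a, b):
    # Gimp's 'Lighten only' mode keeps the lighter pixel of the two layers.
    return np.maximum(a, b)

def boost_contrast(mask, lo=64, hi=192):
    """Linear stretch: tones at or below `lo` go to 0, at or above `hi`
    go to 255, making mid-distance blur less subtle."""
    m = (mask.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(m, 0, 255).round().astype(np.uint8)

# Depth map 1 (background blur, focal subject black) and the inverted
# Depth map 2 (foreground blur, focal subject black):
map1 = np.array([0, 0, 200], dtype=np.uint8)
map2 = np.array([255, 0, 0], dtype=np.uint8)
map3 = boost_contrast(lighten_only(map1, map2))
print(map3)  # foreground and background white, focal subject black: [255   0 255]
```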

Blurring the right areas

With all that adjusted, I made Depth map 3 invisible as well. Then, I duplicated the Color shot layer, made sure it had an alpha channel added (just because) and named it DoF layer.

Before Gimp 2.10.20, I would perform a series of steps in order to get different levels of blur for each area of the image. Now it has become easier: one just has to click on Filters, Blur, Lens blur. On the dialog box, I found a box named Aux Input and clicked on it. Then I selected the Depth map 3 layer (by double-clicking on it) and observed the preview. Randy was in focus, and both what was closer and what was further than him got blurred. I just had to adjust the amount of blur – actually, the blur radius, which I set to 8. Notice that there is something weird with the borders between different areas of the image – especially in the parts that should not be blurred at all. I would correct that in a minute but, before that, it is worth noting that the problem gets worse with high values for the blur radius, so it’s better to keep them relatively low.

Activating ‘lens blur’ for the ‘DoF layer’
Choosing the focus map for the blur effect
Randy is in focus, other areas are blurred, but borders don’t look good
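What Lens blur does with the Aux Input can be approximated in one dimension like this: the focus map drives a per-pixel blur radius, with black meaning sharp and white meaning the full radius. This is a sketch of the general idea, not Gimp’s exact algorithm; a box kernel stands in for the real lens kernel, and max_radius mirrors the blur radius of 8 chosen above.

```python
import numpy as np

def box_blur_1d(row, radius):
    """Simple 1-D box blur (a stand-in for Gimp's lens kernel)."""
    if radius == 0:
        return row.astype(float)
    padded = np.pad(row.astype(float), radius, mode='edge')
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(padded, kernel, mode='valid')

def graded_blur(row, focus_row, max_radius=8):
    """Per-pixel blur strength driven by the focus map: black = sharp,
    white = the full max_radius blur."""
    out = np.empty(len(row), dtype=float)
    for i, f in enumerate(focus_row):
        r = int(round(f / 255.0 * max_radius))
        out[i] = box_blur_1d(row, r)[i]
    return out.round().astype(np.uint8)

row   = np.array([0, 0, 255, 0, 0], dtype=np.uint8)
focus = np.array([0, 0, 255, 0, 0], dtype=np.uint8)
print(graded_blur(row, focus))  # only the 'far' pixel is smeared: [ 0  0 15  0  0]
```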

So, in order to correct the border problem, I added a mask to the DoF layer and used the Depth map 3 layer as the actual mask. It fixed the borders, but it also reduced the DoF effect too much in the mid-distance areas of the image. Again, I had to adjust the color curves, pulling the central part of the curve up (one can also experiment a bit there by sliding it more to the left or to the right). Notice that I adjusted the curves on the mask, not on the layer’s main image – I mean, I selected the mask before clicking on the Colors menu and on Curves.

Adding a mask to the ‘DoF layer’ to correct the border problem
Adjusting color curves on the mask for the ‘DoF layer’, so the blur effect is enhanced
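Pulling the central part of the curve up on the mask amounts to lifting its midtones, which a gamma-style mapping can sketch (the exponent here is an illustrative assumption, not a value read from Gimp; black and white stay fixed while mid greys brighten, restoring the mid-distance blur the mask had washed out).

```python
import numpy as np

def lift_midtones(mask, gamma=0.5):
    """Gamma-style midtone lift: 0 and 255 are unchanged, while
    intermediate greys are pushed up (gamma < 1 brightens)."""
    m = mask.astype(float) / 255.0
    return (np.power(m, gamma) * 255.0).round().astype(np.uint8)

mask = np.array([0, 128, 255], dtype=np.uint8)
print(lift_midtones(mask))  # endpoints fixed, mid grey brightened: [  0 181 255]
```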

With that done, there was still something to correct: a transition problem that one notices when looking closer at some areas of the image. It happens because the depth map generated in SL is sometimes not detailed enough to reflect, for instance, all the variations in objects like grass. That had to be fixed manually with the blur/sharpen tool (set to blur, of course), which is what I did after duplicating the Color shot layer, working directly on it.

Fixing the transition problem on the grass with the ‘blur’ tool

Reasons for using the post-processing method

One can choose to add the DoF effect only in post-production simply to get more control over the process. Keep in mind that, for high-res shots, when dealing with the DoF tools found directly in the viewers, we generally have to blur the image more than the intended result, and we don’t really know how much blur we got until we open the image after saving it to our HD. Yet, if it were just because of that, I would probably always choose the native SL DoF tools. After all, with experience, one learns to more or less predict how much blur has to be added to a scene in Second Life for a high-res shot to look good.

Another reason for using the post-processing DoF effect is that, when we activate DoF directly in SL, focus tends to be at the center of the image. That is because the object we click on to set the focus will automatically move to the central area of our screen. There are ways to circumvent that, but with the post-processing method, when adjusting your camera in SL, you don’t have to worry about clicking on the object you want to focus on. In my example here, the focus is in the central area of the image because I chose to set it on Randy, but I could easily have focused on my own avatar, which is much more to the left.

It is also interesting to notice that this method may make it simpler to correct the problem of applying DoF to transparencies, which happens when you use the viewer’s DoF settings. Still, if you are using Black Dragon, you can avoid that problem altogether (more on this topic further down in this text).

Why use the DoF tools found on SL viewers

The DoF tools found in SL viewers are actually an enhancement that is interesting to explore. If you don’t mean to use any post-processing beyond possibly resizing and cropping, the image is done as soon as you take it with the DoF tools that come with your viewer.

Even if you are into post-processing your images, you probably want to do more than just add some DoF effect to them. So, depending on what you will do later, you can save precious time and effort by skipping the post-processing DoF methods and using the SL viewers’ tools instead.

The final photo: when post-processing, one may want to do more than just add a depth-of-field effect to their image

Some considerations about Black Dragon

Nalates seems to have chosen Black Dragon because it has some interesting features that may help when adding the DoF effect in post-production. Nonetheless, it also enhances the viewer’s capacity to produce a good DoF effect by itself. Among its many controls, for instance, it allows us to include or exclude transparencies from its DoF effect. Generally, with other viewers, DoF gets messed up when there is a glass door in the scene, for instance, or even a bonfire that uses alpha in its textures. With Black Dragon, you can choose whether or not to include, in the blurring effect, the transparency present in the glass or in the fire texture. In my experience, sometimes it is worth including it, sometimes it is not – you have to try it in each case.

Furthermore, Black Dragon has an option that tries to correct the amount of blur in high-res shots. With it, the blur in the picture saved to your HD looks similar to what you see on your screen when you take the shot. It has always worked well for me. It is, then, another tool that can help if you would like to use the viewer’s DoF effect.

Other considerations

There are certainly different methods of applying the DoF effect to an image while post-processing it. The one I described in this post seems, in my opinion, interesting for understanding what is actually going on with one’s picture at each step. But maybe you will find an easier technique out there, or even create your own. Anyway, I hope my observations here will help.

6 thoughts on “How to add depth of field to your SL photos in post-production”

  1. First of all, thank you so much for the tutorial. Well-written tutorials for GIMP, especially w.r.t. an SL user’s use-case scenarios, are rather thin on the ground. Now, when you use the viewer’s DoF feature (which is a real GPU hog, so you should really turn it on only after you’ve framed your shot), the problem with alpha textures emerges with one of the two types of alpha textures: alpha-blended textures. This is because of the limitations of OpenGL. Personally, I almost always use the viewer’s DoF feature. Thankfully, my current machine can easily handle it.

    As for how I apply it, I generally prefer the selective focus that’s so often used in RL photography. In RL, we focus our lens to achieve perfect focus in the area that interests us (either manually or by placing our area of interest on an autofocus sensor) and then let the aperture setting take care of the rest. Before I continue, I have to say I use Firestorm with its Phototools:
    1. I set my focal length using Ctrl+8 (wider) or Ctrl+0 (more telephoto-like);
    2. I turn on “Flycam” with my 3DConnexion SpaceNav (which has now been superseded by newer products);
    3. I frame the shot;
    4. I take a bunch of snapshots that I don’t save, just to make sure my framing is as I want it to be;
    5. With Phototools, I adjust shadows and then turn on DoF;
    6. I choose my F-number value, and I prefer to use RL photography-derived values (1.4, 1.8, 2, 2.8, 3.3, 4, 4.5, 5.6, 6.3, 8, 9.5, 11, 16, 22 – although I rarely use the latter two), always remembering that the effect will be much more subtle in the saved photo;
    7. With the cursor, I point my mouse at the place I want to keep in perfect focus;
    8. I hit Ctrl+Shift+S to take the snapshot and, if I’m happy with it, I save it.

    1. Hi, Mona, sorry that I hadn’t answered you before: I’d like to thank you very much for sharing with us the way you prefer to work. It’s great that there are always different ways to use the photo tools and features in SL, and one can always learn different and sometimes better ways to reach their desired effects. Thank you!

  2. I thought I could use a program called Meshroom, which combines multiple photos to create a mesh model, using images I captured in SL. However, the images appear to be missing photo-related metadata like focal length. Would the approach you are suggesting help add the metadata needed for Meshroom?

    1. Hi, Andy, I’m sorry that I could not answer before. Actually, I have no experience with Meshroom, so I wouldn’t know if that would work. Maybe, if you tried it, you could share your experience with us.
