Shutterbugs' Corner

Starting Out
In macro photography, excellent results can be obtained on a shoestring. It is of course easy to spend the thousands of dollars that are burning holes in your pocket on equipment, but the bulk of that money will go towards ease of use, not image quality. I highly recommend exploring cheaper options first that will help you find out how committed you are, what you want to photograph and how, and how much pain you are willing to suffer in the quest for higher magnification. The pain isn’t just metaphorical: I sometimes wish I had a back specialist among my friends.
Start with a clip-on macro lens for your smartphone. If you have access to a nifty-fifty or a moderate wide-angle lens with a manually operable aperture or to an old enlarger lens, reverse-mount it on your camera with a cheap adapter ring. Play with a set of extension rings. Put a Raynox close-up lens on one of your existing lenses. Buy macro bellows second-hand (make sure they are light-tight) and attach whatever manual aperture lens you can find (scavenged scanner lenses are very interesting). Use flashlights to illuminate your subjects. Make light modifiers from fast food containers and aluminum foil. The internet, unlike the metaverse, is a great resource. It offers libraries’ worth of information and a community eager to help.
My macro setup at this point, which works for me but needn’t work for you, is about as simple, sleek and lightweight as it gets, and it was not even terribly expensive, all things considered; I'll do some show and tell below. It took over a year of tinkering to get to this point, and now I have a ragbag of old camera and enlarger lenses, microscope objectives, bellows, rails, lighting equipment, and a plethora of adapters from China, Russia, and Portugal languishing in the closet. Ah, hindsight!
The following remarks may make it appear as if photographing bugs is a very complicated technical undertaking. To be fair, there are more technicalities involved than in other areas of photography, but they can all be worked out during practice so that they fade into the background on game night. 

Depth of Field
The cardinal problem in macro photography is depth of field, the distance between the closest and the farthest acceptably sharp object in a photograph. The depth of field shrinks with increasing magnification and becomes razor thin in the macro world, too thin for even the most stalwart bokeh aficionado. This is true regardless of how the magnification is achieved, whether it is by using a longer lens (= zooming in) or by moving the camera closer to the subject. In the first case, the decrease in depth of field is proportional to the increase in magnification; in the second case, it is even stronger and affects objects in front of the focal plane more than objects behind it. By contrast, the depth of field increases with smaller aperture settings. So the question is: can a satisfactory depth of field be achieved in macro photography by stopping down the aperture?
Let me begin by dispelling a common misconception. It is widely believed that camera systems with smaller sensors inherently produce a greater depth of field and are thus better suited for macro work. Any old smartphone would seem to prove the point: its pictures have such an extended depth of field that manufacturers feel compelled to devise computational tricks to selectively soften them for a more “professional” look. Take my phone, for example. It has a lens with a fixed f/1.8 aperture and a “28 mm equivalent” focal length. I cannot find information on the actual focal length, but the sensor is listed with a diagonal of 7.06 mm. The “28 mm equivalent” refers to the full frame (FF) a.k.a. 35 mm format with a sensor diagonal of 43 mm, which is 6.1 times as large. The focal length of the phone's lens scales by the same factor and must be 28 mm ÷ 6.1, or 4.6 mm. Its aperture then works out to be 4.6 mm ÷ 1.8 (the f-number), or 2.6 mm. What if we stopped down a true 28 mm lens to a 2.6 mm aperture? That corresponds to f/11 (28 mm ÷ 2.6 mm ≈ 11). Photographs shot at f/11 with a 28 mm lens exhibit, it turns out, the same depth of field characteristics as my phone pictures. You can easily verify this with your phone and camera or by consulting depth-of-field formulae or calculators. Whatever advantages a system based on a smaller sensor may have, an inherently greater depth of field is not among them.
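If you want to run this arithmetic for your own phone, here is a minimal Python sketch of the calculation above; the 28 mm equivalent focal length, f/1.8 aperture, and 7.06 mm sensor diagonal are simply the values quoted for my phone, so substitute your own.

    # Translate a small-sensor lens into the full-frame f-number that gives
    # the same depth of field (same physical aperture, same vantage point).
    FF_DIAGONAL_MM = 43.0                            # full-frame sensor diagonal

    def equivalent_f_number(eq_focal_mm, f_number, sensor_diagonal_mm):
        crop = FF_DIAGONAL_MM / sensor_diagonal_mm   # 43 / 7.06 ≈ 6.1
        true_focal_mm = eq_focal_mm / crop           # ≈ 4.6 mm
        aperture_mm = true_focal_mm / f_number       # ≈ 2.6 mm physical aperture
        return eq_focal_mm / aperture_mm             # ≈ f/11 in full-frame terms

    print(round(equivalent_f_number(28, 1.8, 7.06), 1))   # ≈ 11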
The lesson from this little exercise generalizes: depth of field is entirely determined by aperture size and vantage point. Neither sensor size nor focal length of the lens has anything to do with it; they only affect scale and angle of view. A telephoto lens produces the same depth of field as a wide-angle lens; it just magnifies a portion of the wide-angle image, warts and all, without affecting the relative sharpness of the objects in that portion. By the same token, wide-angle and telephoto lenses show the same perspective; their different characteristics are a matter of angle of view and scale. This is readily confirmed by comparing photos taken with different lenses and sensors from the same vantage point at constant aperture size – not f-stop – and scaled so that their features line up.
Let us now look at a macro example. The following images show depth of field at 1:1 magnification and various aperture settings for a 20 MP Micro Four Thirds (MFT) sensor (Olympus RAW files with Lightroom development defaults). 1:1 magnification means that the subject in the plane of sharp focus appears at life size on the sensor. I focused on the hair at the base of the dead wasp’s nearer antenna because sharpness is easier to gauge there than in the compound eye. The images are uncropped and resized to 1800 × 2400 pixels; the insets are 200% crops. Click on the thumbnails to compare.
From f/2.8 to f/4, things progress nicely except for the moderate focus shift that many lenses exhibit. The depth of field increases and small detail is crisp. At f/5.6, micro detail begins to look a bit softer. At f/8, small detail is taking a hit; by f/11 it is quite blurred; f/16 looks bad; and f/22 is a complete mess. The deterioration at f/8 is visible in 18" × 24" prints and obvious in 30" × 40" prints. The villain here is diffraction.

Diffraction
Diffraction happens when light is squeezed through a hole. A round aperture turns what should be a point on the sensor into a fuzzy pattern of concentric rings, the so-called Airy disk, named after the astronomer who gave the first theoretical account of the phenomenon in 1835. The Airy disk's diameter is inversely proportional to the diameter of the aperture. It is calculated by multiplying the f-number by two constants: a scale factor and a compromise number for “the” wavelength of visible light. There is one important wrinkle here: the f-number in question is the effective f-number, which can differ from the nominal f-number that is printed on the aperture dial or displayed in many viewfinders. The two numbers agree for a lens focused at infinity. Focusing closer decreases the amount of light falling on the sensor. This is easy to see in the simplest case of a single-element lens without any additional mechanical aperture. The lens acts as the light source illuminating the sensor. Focusing closer moves it away from the sensor and thereby decreases the sensor’s share of the light it sheds. At 1:1 magnification, the lens is twice the distance from the sensor as at infinity focus, and the light intensity at the sensor has dropped to one quarter. The story is more complicated for complex, non-symmetrical lenses, but the upshot is the same: the light intensity at 1:1 magnification is two stops lower than at infinity focus. In exposure and diffraction calculations, which need to take this two-stop difference into account, the effective f-number is twice the nominal f-number. In general, the two numbers are related by the equality
          effective f-number = nominal f-number × (1 + magnification ratio).
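For those who like to see the numbers, here is a small Python sketch of the two quantities just discussed: the effective f-number and the resulting Airy disk diameter. The 2.44 scale factor and the 550 nm compromise wavelength are the usual textbook values, not figures taken from any particular manufacturer.

    # Effective f-number and Airy disk diameter (out to the first dark ring).
    WAVELENGTH_MM = 0.00055          # ~550 nm, a mid-green compromise

    def effective_f_number(nominal_f_number, magnification):
        return nominal_f_number * (1 + magnification)

    def airy_diameter_mm(eff_f_number):
        return 2.44 * WAVELENGTH_MM * eff_f_number

    n_eff = effective_f_number(5.6, 1.0)              # nominal f/5.6 at 1:1 -> 11.2
    print(round(airy_diameter_mm(n_eff) * 1000, 1))   # ≈ 15 microns, several MFT pixels wide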
Nikon cameras display effective f-numbers in the viewfinder and save us the calculation; other brands, like Olympus, do not. You can check your camera by observing what the f-number in the viewfinder does as you change focus. If you are a Nikon user, don’t complain when your brand-new macro lens at its closest focus setting shows only f/5.6 wide open even though you paid for an f/2.8 lens. Rejoice.
The rest of us can use the equation to work out what nominal f-stops to dial in at various magnifications in order to maximize depth of field. We observed that diffraction on a 20-megapixel MFT sensor at 1:1 magnification begins to damage micro detail by nominal f/5.6. If we want to preserve most of the resolving power of the system, we shouldn’t stop down beyond that. The corresponding effective aperture is f/11. The beauty of the effective number is that it holds at any magnification. At 1:2, for example, effective f/11 corresponds to nominal f/7.1 ≈ 11 ÷ (1 + 1/2). So that is as far as I would stop down.
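Rearranged for the field, the same equation gives the nominal stop to dial in for a chosen target. A short sketch, using my effective f/11 MFT cutoff:

    # Nominal f-stop that keeps the effective aperture at a chosen target.
    TARGET_EFFECTIVE_F = 11           # my MFT diffraction cutoff

    for magnification in (0.5, 1.0, 1.5, 2.0):
        nominal = TARGET_EFFECTIVE_F / (1 + magnification)
        print(f"{magnification}:1 -> dial in about f/{nominal:.1f}")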
Notice that this reasoning is based on sharpness judgments at the limit of what the system can resolve. There is an alternative approach where sharpness is judged in an image of some fixed size, and a diffraction limit is set accordingly. Whenever the final image is smaller than what would be required for showing the finest detail the system can resolve, one can adopt a smaller f-stop as the diffraction limit because the level of detail harmed by the increased blur would not show up in the image in the first place. On this approach, the acceptable diffraction limit can be extremely high if the final image is sufficiently small. This is another way of saying that the blurriest file looks great as long as it is displayed small enough. Witness Instagram. Since I don't want to be hamstrung at the time of exposure by a commitment to a certain maximum image size, I stick with the first approach, which operates the system near its resolution limit.
By pixel peeping at system resolution, we have made the diffraction limit a function of effective aperture and pixel size. The effective aperture determines the diameter of the Airy disk, and the pixel size determines how many pixels are affected by the disk. This makes it easy to work out the diffraction limits for sensors of different size and pixel count, as follows.
My smallest “good” effective MFT aperture is f/11. The MFT sensor is 17.3 mm wide and holds 5184 pixels in that dimension; so one pixel is 0.0033 mm wide. The ratio D of effective f-number and pixel width encodes my preference for the diffraction cutoff: D = 11 ÷ 0.0033 mm = 3333/mm. We can calculate the smallest “good” effective aperture for any other sensor by multiplying its pixel width by D:
          D × sensor width ÷ long edge pixel number = largest acceptable effective f-number
Here are my two Nikon cameras with 24 and 45 megapixel sensors:
          Z6: 3333/mm × 36 mm/6048 = 20
          Z7: 3333/mm × 36 mm/8256 = 14.5
For the Z7, the calculation matches my observations very nicely: inspecting files leads me indeed to choose 14 or 16 as the largest acceptable effective f-number. For the Z6, however, I find myself choosing 16 as well, not the calculated 20. Perhaps the discrepancy is due to the fact that the Z6, unlike the other two cameras, has an anti-aliasing filter which may exacerbate the effect of diffraction to the point where it becomes bothersome one stop earlier than expected. Whatever the cause, the moral is: trust your eyes more than you trust any calculation.
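Here is the same arithmetic as a few lines of Python, for anyone who wants to plug in a different camera; the sensor widths and pixel counts are the ones quoted above, and the cutoff D is of course only my preference.

    # Diffraction cutoff D = acceptable effective f-number per pixel width,
    # calibrated on my 20 MP MFT files and applied to other sensors.
    D_PER_MM = 11 / 0.0033            # ≈ 3333 per mm

    def max_effective_f(sensor_width_mm, pixels_long_edge):
        pixel_width_mm = sensor_width_mm / pixels_long_edge
        return D_PER_MM * pixel_width_mm

    print(round(max_effective_f(17.3, 5184), 1))   # MFT: ≈ 11 (by construction)
    print(round(max_effective_f(36.0, 6048), 1))   # Z6:  ≈ 20
    print(round(max_effective_f(36.0, 8256), 1))   # Z7:  ≈ 14.5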
There is another takeaway here: if we want to exploit a system's higher resolving power, we must content ourselves with a reduced depth of field. In order to achieve the same depth of field that a MFT camera produces at effective f/11, a FF camera like the Z7 has to be stopped down to effective f/22 because, as I pointed out earlier, it is the size of the aperture, not its relationship to the focal length of the lens or anything else, that accounts for the depth of field. We cannot, however, stop down to effective f/22 if we want to take full advantage of the higher resolution of the Z7; we can only stop down to f/14.5.
Moving up in sensor size from MFT to FF tilts the tradeoff between depth of field and diffraction in the direction of less depth of field, if at all. What if we went from MFT to a smaller sensor? Wouldn’t that tilt the tradeoff towards more depth of field? No, it wouldn’t, unless we were prepared to sacrifice system resolution. It would be like going from the diffraction-limited Z7 to diffraction-limited MFT – more depth of field but at a lower resolution – only more punishing because MFT resolution is already not stellar in the larger scheme of things. Having more megapixels on a smaller sensor wouldn’t help because it would only make the diffraction blur more visible.
Back now to the original question: does stopping down solve the depth of field problem? You be the judge. The dead wasp illustrates the situation with MFT at 1:1. At lower magnifications, the problem is less acute; at higher magnifications, it is more acute; and changing camera systems doesn’t make a difference. I, for one, am bothered by the fact that in none of the pictures is more than a part of the wasp’s eye sharp. In fact, I would like to be able to magnify the wasp a bit more while also rendering its entire eye sharp.

Focus Stacking
So far, we have considered in-camera magnification up to 1:1. That’s where the fun begins. I often work at magnifications around 2:1. At that level, our adopted MFT diffraction cutoff of effective f/11 is already reached by stopping down nominally to f/4. This leaves only the merest sliver of the wasp’s eye in focus. We have reached the limit of what is optically possible.
Enter computational photography. We can create a sequence of exposures where we shift the point of focus slightly from one exposure to the next. The sequence can then be digitally assembled into one overall sharp image. This focus stacking technique is what I used for the majority of photographs in Distant Relations. The number of exposures per stack depends on the choice of aperture, magnification, and desired look. My average seems to be around fifty, with a lot of variance. Under the right conditions, the process can work miracles, but be forewarned: the slightest movement during the exposure sequence can ruin the result.
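How many frames a stack needs can be roughly estimated from the standard close-up depth-of-field approximation (total depth of field ≈ 2 × circle of confusion × effective f-number ÷ magnification²). The sketch below is only a back-of-the-envelope illustration: the two-pixel circle of confusion, the 30% slice overlap, and the 5 mm subject depth are assumptions of mine, and in practice the count comes from the camera's focus-bracketing settings and some trial and error rather than from a formula.

    # Rough frame-count estimate for a focus stack.
    PIXEL_MM = 0.0033                 # MFT pixel width
    COC_MM = 2 * PIXEL_MM             # assumed circle of confusion: two pixels
    OVERLAP = 0.3                     # assumed overlap between adjacent slices

    def frames_needed(subject_depth_mm, effective_f, magnification):
        dof_per_frame = 2 * COC_MM * effective_f / magnification**2
        step = dof_per_frame * (1 - OVERLAP)
        return int(subject_depth_mm / step) + 1

    print(frames_needed(5, 11, 1.0))   # ≈ 50 frames at 1:1
    print(frames_needed(5, 11, 2.0))   # ≈ 200 for the same depth; in practice the needed depth shrinks too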
Small effective apertures at higher magnifications make light a precious commodity because we need fast shutter speeds to stop motion and reasonably low ISO settings to keep noise in check. Many macro photographers therefore resort to flash. Shooting focus stacks with flash can yield superb results by allowing very low ISO settings and freezing motion (within frames, not across frames). The flash recycling time can become an issue if it slows down the sequence to the point where too much patience is asked from the bugs and the wind. Another concern is that flash exposure (in 2021) works only with a mechanical, not an electronic shutter. This puts a lot of strain on the shutter mechanism because a single stack already tends to involve dozens of exposures, and one session results in many stacks, not least because it is a good idea to overshoot in order to minimize unpleasant surprises down the line. I wouldn’t want to have to replace the shutter after just a few dozen bug portraits. Flash also spooks some bugs. Continuous light sources such as LEDs permit faster sequential shooting and the use of the indestructible electronic shutter. But many bugs hate continuous bright light in their faces even more than flash. Furthermore, any diffused artificial light source is bulky and thus difficult to bring close to subjects that sit anywhere but on the most exposed leaves and branches, which is to say, most of my subjects. I often find that my lens setup, which is comparatively slender, is already too bulky to get close enough, and that is before attaching any lighting paraphernalia. So natural light it is for me on most occasions. Luckily, I find it more appealing anyway.

Camera
Natural light requires the use of wide apertures in order to avoid excessively high ISO settings or too slow shutter speeds. A wider aperture causes a shallower depth of field, which means more frames per focus stack, which in turn calls for a higher frame rate for a fighting chance at catching the bugs while they sit still.
Stacking can be accomplished by moving the camera or, more quickly and conveniently, by refocusing the lens. The latter is automated in some cameras, and among these the MFT models do it fastest by far. This speed advantage over larger formats outweighs, for me, all theoretical disadvantages MFT might have. I have consistently been able to get better results from the small discontinued Olympus E-M1 II that I bought for the purpose than from my three and five times more expensive and otherwise superior FF Nikons. The main reason is that I can shoot a stack with the Olympus in a fraction of the time it takes with the Nikons. Besides, Nikon’s stacking implementation must have been designed with the express goal of aggravating the user. The sequence is initiated by selecting an item from a sub-submenu rather than by pressing the shutter button; it takes forever to commence; the lens aperture opens and closes between exposures which wastes time and introduces gratuitous vibrations; and there is no video feed that would indicate whether or not the bugs hold still.
A nice fringe benefit of the Olympus system is that its autofocus hunts much less in the macro range than Nikon’s. Autofocus is generally discouraged for macro work, but with today’s focus-by-wire lenses I find it useful for quickly getting the focus into the ballpark. Just remember to decouple the AF-activation from the shutter button so that the camera doesn't refocus when you start the sequence. The sundry camera settings required for focus stacking can be conveniently saved and recalled as a single user preference. The Olympus camera has a fully articulated screen, which is a godsend for work in awkward positions where the viewfinder is useless and a tilt-only screen is made illegible by sky reflections.
Not only does MFT have the edge on speed; its inherent resolution and noise shortcomings also disappear under the real-world conditions in which I shoot. In principle, the smaller MFT sensor is at a disadvantage because its image requires twice as much enlargement as an FF image for a final image of the same size. The quality loss from this greater enlargement is mitigated to some extent by better optics and a higher pixel density, but when each system operates at its base ISO, FF yields a cleaner image. When settings are equivalent rather than ideal, however, as they tend to be out in the wild – same shutter speed to stop motion, equivalent apertures for the same number of exposures in a stack – the FF advantage vanishes. Equivalent settings mean a doubling of the effective f-number and a compensating 2-stop ISO increase for FF as compared to MFT. These settings make the results pretty much indistinguishable and sometimes even favor MFT because of its excellent noise management.
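To make the equivalence concrete, here is a trivial sketch of how the translation from MFT field settings to their FF counterparts works; the crop factor of 2 is the MFT-to-FF ratio, and the ISO value is just an example.

    # "Equivalent" settings: same shutter speed, same depth of field,
    # and hence the same number of frames per stack.
    CROP_FACTOR = 2.0                 # MFT to full frame

    def ff_equivalent(mft_f_number, mft_iso):
        return mft_f_number * CROP_FACTOR, mft_iso * CROP_FACTOR**2

    print(ff_equivalent(11, 800))     # effective f/11, ISO 800 on MFT -> f/22, ISO 3200 on FF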

Lens
Olympus 60mm f/2.8 Macro. It is very sharp (although I had to go through three decentered copies before I found a good one); its bokeh is pleasing, its longitudinal chromatic aberrations are acceptable, it is astonishingly small and lightweight (185 g, one quarter the bulk of a comparable FF lens), it has autofocus, as is required for in-camera focus stacking, and it is half the price of comparable FF lenses. There are many other wonderful macro lenses for MFT, but their lack of autofocus makes stacking in the field a bit of a headache. I haven’t had much success with the ingenious “turbostacking” methods people have devised for manual-focus lenses, from turning the focus ring in burst-mode shooting to souping up a focusing slider with an electric screwdriver.
Most so-called “macro” lenses achieve a maximum magnification of 1:1, irrespective of the size of the sensor for which they are designed. At their closest focusing distance, they all produce a life-size image of the subject in focus. This image is twice as large relative to the MFT sensor as it is relative to the FF sensor, a relationship that is sometimes misleadingly advertised as a 2:1 magnification advantage in favor of MFT. If the output of both sensors is enlarged to the same image size, then indeed the subject will be twice as large in the MFT image as in the FF image. But this is because the MFT image was enlarged electronically twice as much as the FF image, which is hardly cause for gloating. There is an advantage here for MFT users, but it has to do with equipment size, weight, and ease of use, not magnification: filling the MFT frame is easier than filling the FF frame. To match the native frame-filling capability of the Olympus lens, a comparable FF lens needs to be heavily tricked out with the add-ons described in the next section. One can of course crop the FF image for a match, but a 2x linear crop from even the best FF sensor is visibly inferior to an uncropped MFT image.

Boosting Magnification
At 1:1 magnification, MFT renders a 17.3 mm wide area in focus. Most bugs are smaller than that. If we want to portray a bug from the front and give it prominence in the frame, we may be looking at an area less than 1/2” (13 mm) wide. This area must be stretched across the MFT sensor, which involves a magnification ratio of at least 1.3:1 (2.6:1 on FF). How can this be accomplished?
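The arithmetic is simple enough to do in one's head, but here it is as a one-function sketch; the 13 mm subject width is the frontal bug portrait from the example above.

    # Magnification needed to stretch a subject of a given width across the frame.
    MFT_WIDTH_MM = 17.3

    def required_magnification(subject_width_mm, sensor_width_mm=MFT_WIDTH_MM):
        return sensor_width_mm / subject_width_mm

    print(round(required_magnification(13), 1))   # ≈ 1.3:1 on MFT; roughly double that on FF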
There are three ways to boost the maximum magnification of a lens while retaining its autofocus capability for in-camera stacking: by increasing its distance from the sensor, by outfitting it with the equivalent of reading glasses, and by enlarging its image before it reaches the sensor. The respective add-ons are: extension rings, close-up lenses, and teleconverters. They all degrade the quality of the image, but less so than cropping would. Close-up lenses vary dramatically in quality. The multi-element Raynox DCR-150 and DCR-250 are probably the best ones around and reasonably priced.
A standard set of MFT extension rings (16 + 10 mm) between the camera and the 60 mm macro lens reduces the width of the in-focus area from 17.3 to 10 mm, corresponding to a magnification ratio of 1.7:1. The popular DCR-250 lens yields a similar 1.6:1. A 1.4x teleconverter delivers, you guessed it, 1.4:1. Extension rings and teleconverters cut down on light in proportion to their magnification boost (one stop for the teleconverter, 1.5 stops for the extension rings); close-up lenses do not, which is a point in their favor. The various add-ons can be combined: I often use the Raynox lens in combination with one or both of the extension rings for up to 2.5:1 magnification, or an in-focus area 7 mm wide. Some people add a teleconverter to the mix. I haven’t tried this but I hear it gets a bit tricky. The converter isn’t meant to be used in conjunction with the macro lens but can be attached with the right brand of extension ring as a spacer.
I should note that the numbers here are measured and don't quite agree with those from various calculators on the web. The discrepancy comes, I suspect, from the fact that real lenses behave only approximately like the ideal lenses modeled by the calculators.
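For completeness, here is what the idealized thin-lens bookkeeping looks like for the extension rings and the teleconverter; as just noted, my measured values run higher than these textbook estimates, so treat the sketch as a rough guide only.

    # Thin-lens estimates for the magnification boosters (idealized).
    NATIVE_MAG = 1.0                  # the 60 mm macro at its closest focus
    FOCAL_MM = 60.0

    def with_extension(extension_mm):
        return NATIVE_MAG + extension_mm / FOCAL_MM

    def with_teleconverter(factor):
        return NATIVE_MAG * factor

    print(with_extension(16 + 10))    # ≈ 1.4:1 on paper; I measure about 1.7:1
    print(with_teleconverter(1.4))    # 1.4:1, as advertised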

Tripod
Some people report that they can shoot focus stacks handheld. My hands are steady enough for stacks of maybe half a dozen frames, on a good day. For real stacks, I need a tripod. The tripod must meet the usual requirements of sturdiness, a good stiffness-to-weight ratio, no play, and positive locking. In addition, it should facilitate work in awkward positions, with the camera at some distance from its center axis or close to the ground. There is an almost mythical Swedish wooden tripod that is supposed to tick off all the boxes. The next best thing I could find stateside is the Leofoto LS-284CVL carbon fiber model with a center column that can be turned sideways to become an outrigger. It is a featherweight compared to my old Gitzos, but it is strong enough for the lightweight Olympus rig. The outrigger is really useful for sticking the lens deep into the shrubbery where the bugs hang out. One has to be careful, though, because everything gets a bit unstable due to cantilever loads and a shifted center of gravity. But, crucially, vibrations die down quickly. You can see some of the possible configurations below; I can shoot even closer to the ground than with a dedicated ground pod. I would not entrust the outrigger with heavier equipment such as a FF rig. How the tripod will hold up over the years remains to be seen.

Tripod Head
My first choice was a geared Arca-Swiss Cube from my architectural work. It is self-locking and allows very precise and smooth independent adjustments around three axes. It is a joy to use, but it wants to sit on a sturdy tripod with a vertical center column. It is much too heavy and its adjustments are too restricted on the Leofoto outrigger. For that, a strong yet lightweight ballhead is required. After some trials with RRS and FLM heads, I settled on the Novoflex Magic Ball Mini. Its range of motion dwarfs that of a conventional ballhead, which really helps with camera positioning. It locks and unlocks with less than a quarter turn of its single large knob, which means that I don’t have to change my grip and that I get good tactile feedback about the locking status. Smooth movements make fine adjustments easier, though not as easy as they are with a geared head. I use the head without a quick-release clamp in order to save weight. The Magic Ball has pretty much replaced the Arca Cube in my macro work because I want my equipment ready to go and not waste time switching heads while the bugs get restive.

Focusing Slider
With the Olympus, focus stacking happens in camera. So what’s the point of using a focusing slider? Three points, actually. First, the slider allows me, with the tripod already in place, to slowly advance the camera towards the subject without scaring it away. Second, in combination with lens focusing, it lets me fine-tune the magnification. Third, it lets me adjust the stack’s starting point without messing up the framing. I use a Novoflex Castel Mini II whose precise rack-and-pinion drive works smoothly and quickly for both coarse and fine adjustments, better than the more common and less expensive screw-driven sliders. Before it, I used a simple nodal rail in an Arca-style clamp which does the job but is much harder to adjust. It sags a little with the clamp loosened so that frame and focus point shift when the clamp is tightened. There is no play in the Novoflex, so this problem doesn’t arise. The drive mechanism is stiff enough to hold the camera in place unless the rail is tilted a lot. For that situation, there is a locking knob. The rail is light enough for the outrigger, unlike the beautiful units from the 1960s made by Minolta and other companies. Lest I start sounding like a Novoflex fanboy, let me say that the knurling on Novoflex knobs is so sharp-edged that it can take the skin off your fingers.

Remote Release
I don’t use one because it is yet another thing to lose. Instead, I dial in 1-2 seconds of shutter delay to allow vibrations to die down.

Clamps
Occasionally I use a crocodile clamp on a light stand or a small tripod to hold an unruly twig or plant stem in place. Many bugs dislike this and immediately decamp. It didn’t work at all when I most needed it, with the minuscule gnat ogre robber fly perched atop slender iris blades swaying in the breeze.

Post Processing
Focus stacking generates vast quantities of files. FastRaw Viewer is an inexpensive app that makes organizing them and deciding which ones to process much more efficient than any other file management system I have tried.
A small number of images can sometimes be successfully stacked in Photoshop, but for deeper stacks I know only two serious contenders: ZereneStacker and HeliconFocus. Zerene used to be touted as the gold standard, but I ditched it because of its slow performance, inability to handle RAW files, and antediluvian interface that reflects a general lack of upkeep. Some people claim that Zerene gives them technically better results than Helicon, but since I haven’t seen it in my work I doubt the difference is dramatic, and anyway the other factors are decisive. Speed matters because often it isn’t clear which stack from among similar ones is the best or whether a stack with a little bit of movement is salvageable. So I have to process them all and compare. Having to wait for an hour or more for Zerene to complete a single stack is excruciating, especially in situations where I have a captive bug waiting to be released as soon as I have made sure that I have the photo I want. RAW processing matters because the stacking algorithms increase noise, but less so with RAW input files than with TIFFs or JPEGs. The DNG files that Helicon can output are more malleable in post processing than TIFFs or JPEGs. This makes a difference for those of us who don’t have enough storage space to keep the huge number of input files around for very long. When I need to go back to the drawing board because I made a mistake early in the editing chain, I prefer a DNG over a TIFF or JPEG that has white balance etc. baked in. I can see one point in favor of ZereneStacker, which is its batch functionality, especially in a studio setting where there are fewer stacks to deal with, image noise is well controlled, and motion problems have been eliminated (search for “slabbing” but be warned: I have found slabbing to cause more headaches than it cured).
Stacking results tend to require some retouching in the stacking software, most commonly because an opaque feature is rendered transparent. After that, they need attention in a standard image editor. They benefit from a good amount of deconvolution sharpening (the “Detail” slider in Lightroom). Often there are halos along well-defined edges that can be tedious to impossible to address in the stacking software. Some methods from portrait retouching (e.g. frequency separation) are helpful here to prevent a plasticky look. From there on it’s the same routine as with any other photograph, a balancing act in the many-dimensional parameter space of one’s editor of choice: a small magenta shift here, a little more highlight contrast there, a little less overall saturation, a little more midtone brightness, some unsharp masking...