Pulling Out All (f-)Stops

Michael Speaks, Dean of the School of Design at the University of Kentucky, invited me to photograph the Henderson plant with the goal of producing 8'x10' large prints for an exhibition in a cavernous art gallery. There would be half a December afternoon for orientation, and I would have the run of the plant (albeit under the watchful eyes of two chaperones) for one day in March from 8 a.m. to 4:30 p.m. Expectations of print quality, given the venue, were in inverse proportion to the budget.

I want to share my experiences trying to pull this off; it may help someone avoid my mistakes. I’ll actually say more about what worked than what didn’t.

dei_poster

Affordable mural-sized prints (in the neighborhood of $300 at the envisaged size) can only be made by shops specializing in advertisement printing, and generally these shops will need to print in two sections and sew the pieces together. For a substrate, the obvious choice is inexpensive vinyl material used for banners. Print samples I saw on more expensive substrates didn’t make the images look better, and the vinyl saved us the astronomical cost of mounting and framing more conventional prints: the banners can be outfitted with a pouch for a rod (e.g. aluminum) which can be suspended by thin steel wire. Another rod at the bottom, and the picture hangs reasonably flat. Grommets are another option. Note: vinyl isn’t vinyl, and banner printing can be very uneven. One shop produced good quality on pleasant enough material, another produced horribly blurry images on stinky sticky stuff.

For a decent appearance, the printer would need files with at least 150 dpi resolution, preferably a bit more. This translates into a file size of at least 260 megapixels.
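The arithmetic behind that figure, as a quick sanity check (my own back-of-the-envelope sketch, not anything supplied by the print shop):

```python
# An 8'x10' print at 150 dpi: how many pixels does the file need?
width_in = 10 * 12    # long edge, inches
height_in = 8 * 12    # short edge, inches
dpi = 150

pixels = (width_in * dpi) * (height_in * dpi)
print(f"{pixels / 1e6:.0f} megapixels")  # 259 at exactly 150 dpi; hence "at least 260" with a little headroom
```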

The December reconnaissance mission with my Nikon D3 taught me that digital capture had difficulty coping with some of the mixed lighting (see exhibit 1) and that anyway 12-megapixel D3 files upsized twentyfold looked silly (see exhibit 2). I also learned that the light level inside the plant would make working with a large format film camera extremely difficult. For a while, I considered stitching a dozen or so D3 frames together but in the end decided against it because I already needed to layer 5 exposures to capture the ambient brightness range, and I shuddered at the idea of having to combine HDR layering techniques with panoramic stitching after having played around for a while with various software options. Besides, for the stitching to work with an industrial jungle, the camera must be pivoted impossibly precisely about the entrance pupil of the lens. Large format film looked like the better bet.

Exhibit 1: Color rendering. D3 first, then Fuji negative film. The pale green turbine was illuminated by diffuse window light, the interior of the hall mostly by a mix of mercury vapor, halogen, tungsten, and fluorescent.

D3
Fujifilm

Exhibit 2: Detail, shown at roughly the size at which it appears in the final print (=50% of file resolution). 4"x5" negative on the left, D3 on the right.

Detail comparison

Ideally, I would have liked to use my old 8"x10" monorail, but it is a beast to haul around on location, and I don’t own any wide angle lenses for it, which were de rigueur for the job. I thus had to fall back on my 4"x5" Arca Swiss. As for film, I settled on Fujicolor Pro 160S. I needed the dynamic range of a negative film, and the Fuji, as you can see, does an admirable job with mixed light. The lenses I took were a Nikkor 300/9, an Apo-Symmar 150/5.6, a Nikkor 90/8, and a Super-Angulon XL 58/5.6. I needed each of them.

We all know that view camera standards, even on highly touted all-metal cameras with solid zero-detents, are never exactly parallel. The 4"x5" negatives were to be enlarged 24 times, which would make any misalignment obvious. I addressed this problem by using a home-made double-mirror device (Zig-align on the cheap) to align the standards for each shot, before setting any deliberate swings or tilts. (This is worth the effort. If you have trusted your camera without ever trying such a contraption, you are in for a surprise, especially when you work with wide angle lenses.) I did, however, trust the factory-set positioning of the ground glass, which turned out to be a mistake, although a minor one. The Arca has a Fresnel lens in front of the ground glass, which apparently has funny effects on focusing: sometimes the focus is right on, sometimes it is too close to the camera. I have yet to find a rhyme or reason here; I doubt that it has anything to do with my eyesight. My Canham DLC doesn’t cause me any grief with focusing, but I didn’t use it because I had to cram many exposures into a short span of time, and the Canham is much slower to operate than the Arca because it is much more fiddly.

Insufficient film flatness is the other enemy of sharpness. Sheet film always curls up slightly along the long edges of the holder because the channels holding it are considerably wider than the thickness of the emulsion. I am now regretting that I didn’t buy one of the breathtakingly expensive vacuum backs that Schneider made many years ago. Much worse than the situation along the long sides of the holder is the situation opposite the loading flap. The holding groove there is wedge-shaped, which gives the film plenty of wiggle room unless it is wedged into the groove as far as it will go. Shoving it in all the way during loading doesn’t ensure proper positioning because the film has plenty of opportunity to move around in the holders during transit, especially if the holders travel flap-down in a car—as they are likely to do. The solution is to give the holders a good whack on the end from which you pull the dark slide before inserting them into the camera. The film’s inertia suffices to seat it right. Sad to say, I produced a good number of left-blurry negatives before I had figured this out.

Framing and focusing were extremely difficult. The light level inside the plant was so low that I could discern only the barest outlines of things on the ground glass. I often needed to place a flashlight in the scene in order to eyeball the edges of the frame. Prints from the earlier shoot were very helpful here as a reference. Only the brightest highlights allowed confident focusing, but they usually weren’t where I wanted the focus to be. The flashlight had to help out again, now as a focusing target. Working out swings and tilts this way without an assistant gets old quickly, especially when there is only one flashlight at hand and the relevant focus points are separated by two flights of stairs. I wonder whether a reflex viewer would have made the task easier. I shot at optimum aperture (f/16 to f/22), emphasizing sharpness over depth of field.

I made two exposures of each scene, bracketed by 1.5 stops, because I wanted the choice later between two tradeoffs: deep shadow detail versus good separation in the mid-tones and highlights. The film had no trouble coping with the brightness range, but the negatives with the best shadow detail tended to be more difficult in the lighter values. Sometimes the denser negative made for a better print, sometimes the thinner one did. I measure exposure for negative film by dialing twice the nominal ISO into my ambient meter and taking a reading in a shadow area in which I want good definition. The highlights then fall where they will. (This is what the late Phil Davis’s Beyond the Zone System boils down to if one doesn’t tinker with development.) Reciprocity failure was not a problem in exposures of up to a minute or so. With longer exposures, I added time intuitively. (Someone should talk Howard Bond into collecting data as good as those he collected for black-and-white film.) Five-minute exposures didn’t show any color shifts that weren’t easily correctable.
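The 2×-ISO shadow-metering rule can be put into numbers. Below is my formalization of it, with made-up EV and aperture values for illustration: a reflected reading of a shadow area would render that shadow as middle gray, and dialing twice the nominal ISO into the meter cuts the recommended exposure by one stop, placing the shadow a stop below middle gray while still well above the toe of the negative.

```python
import math

def shutter_time(ev100, iso, aperture):
    """Seconds of exposure from a meter reading (EV referenced to ISO 100)."""
    ev = ev100 + math.log2(iso / 100)   # the EV the meter reports at the dialed-in ISO
    return aperture ** 2 / 2 ** ev

film_iso = 160                          # Fujicolor Pro 160S
shadow_ev = 5                           # hypothetical dim-interior shadow reading (ISO 100 EV)

t_box   = shutter_time(shadow_ev, film_iso, aperture=16)      # metered at box speed
t_trick = shutter_time(shadow_ev, 2 * film_iso, aperture=16)  # twice the nominal ISO

print(round(t_box, 2), round(t_trick, 2))  # 5.0 2.5 -- exactly one stop less exposure
```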

There were five weeks between the shoot and the delivery date of the final files to the print shop, giving the shop a window of ten days before the opening of the show. The development roundtrip to NYC took much of the first week because I didn’t want to risk shipping undeveloped film by air from a place that isn’t accustomed to sparing film the lethal x-ray dose for checked luggage. Low resolution scans for image selection took us into the second week. At this point, I would have liked to send out the selected negatives for drum scanning, but neither time nor money permitted that. I had to make do with my lowly Epson V750 desktop scanner.

A print resolution near 200 dpi in an 8'x10' print corresponds to around 4800 dots for each inch in the 4"x5" negative. Testing small image areas suggested that the best quality was to be had by scanning at 6400 dpi and down-sampling in Photoshop to 4800. For sufficient editing headroom, the scans needed to be 16-bit. Unfortunately, the scanning software (Silverfast Ai 6.6) balked at that request. It even refused to scan a whole frame at 4800 dpi and 16 bit. The most sensible option was to scan each negative in two sections at 4800 dpi and patch these together in Photoshop. I could have scanned in four sections at 6400 dpi, then down-sampled to 4800 dpi, and then patched. But this would have required so much more scanning and Photoshop time that I didn’t deem it worth the minuscule improvement in quality. I picked from among Silverfast’s canned film profiles the ones that yielded the most convincing initial rendering (different profiles for different images on the same emulsion—go figure), set the endpoints in the Expert panel so as to avoid clipping highlights or shadows, and adjusted the exposure slider. All other adjustments were done later because the SF interface and feedback are horrible beyond description. Looking at the negatives with a 100x microscope revealed that they contain a little more detail than the scanner managed to extract, but what do you expect?
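The resolution arithmetic above, spelled out (my sketch of the calculation, not the shop's):

```python
# From target print resolution to required scan resolution.
print_long_edge_in = 10 * 12            # 8'x10' print, long edge in inches
neg_long_edge_in = 5                    # 4"x5" negative, long edge
enlargement = print_long_edge_in / neg_long_edge_in   # 24x linear enlargement

print_dpi = 200
scan_dpi = print_dpi * enlargement      # dots the scanner must deliver per negative inch
print(enlargement, scan_dpi)  # 24.0 4800.0
```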

I first tried to let the Photomerge command in CS3 do the patching. After grinding away for about two hours on one image on my dual-processor Intel-based 4GB recent-vintage iMac, Photoshop came back with unacceptable misalignments, regardless of the algorithm used. Trying to do the patching by hand by dragging around layers revealed that the scans never matched exactly, despite identical scanner settings. They neither lined up right, nor were they tonally identical. Applying a correction curve and resizing one of the two pieces took care of this relatively painlessly. I could then line up the two pieces by hand and smooth out the transition by erasing the hard edge of the upper layer with a soft brush. Then it was time to flatten and save. This last process took about three-quarters of an hour for the resulting 2.4 GB 400 megapixel Tiff files, with Photoshop being allowed to grab as much RAM as it wanted and no other programs running. I suppose a separate hard disk for scratch would have helped, but mine is tied up for backup. Total time to get one image to this stage: 4–6 hours, much of it unattended.
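For what it's worth, the 2.4 GB figure checks out for an uncompressed 16-bit RGB Tiff (my arithmetic, ignoring Tiff header overhead):

```python
# File size of a flattened 400-megapixel, 16-bit RGB image.
megapixels = 400
channels = 3            # RGB
bytes_per_sample = 2    # 16 bits per channel

size_bytes = int(megapixels * 1e6) * channels * bytes_per_sample
print(size_bytes / 1e9)  # 2.4 (GB, before any Tiff overhead)
```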

The subsequent editing wasn’t all that unusual, except for the fact that the files needed to be flattened before saving. Layers can be used and are fairly responsive, even when they contain lots of data, such as luminosity masks or image copies, but they bloat the files beyond manageable bounds when it comes to saving. Every editing move had to be considered very carefully because it might tie up the computer for hours. (Never unleash, for example, a sharpening filter on a whole file before having tuned and set its parameters on a small test file.) I boiled down my workflow to the following sequence of steps.

  1. Lens correction as needed (mainly for fine-tuning vertical convergence and leveling horizons where camera positioning hadn’t been precise enough—this did happen because, as noted, I couldn’t see a thing), then crop and save.
  2. Gentle use of the Shadow-Highlight tool in order to give extreme darks and lights a better chance at surviving subsequent contrast increases.
  3. Removal of orange-green color fringes along edges of sharp contrast generated by the scanner (it’s the scanner, not the lens, as one can tell from the look of dust and scratches). I desaturate the fringe colors on a hue-saturation layer with an inverted, contrast adjusted, 2–3 pixel Gaussian-blurred Find Edges mask (see the following illustration at 100%; it also shows the effect of subsequent processing):

    color fringes
  4. Global white-balance and color correction with curves layers.
  5. Global contrast correction with a curves layer in luminosity mode.
  6. Flattening and saving under a new name. New file names for every major save turned out to be a good idea. On two occasions, sharpening wreaked havoc in places that I discovered only much later. I could use the earlier files to repair the latest versions without having to go through all the intermediate steps again.
  7. Quit Photoshop and import image into Lightroom 2. Open in the development module and leave the computer alone for about an hour until Lightroom has come back to life. From then on, the program is responsive, but don’t quit the development module until you are done or else you are in for another hour’s wait. I use Lightroom because I find some of its editing features more convenient than Photoshop’s, which makes exploring the space of possibilities much more efficient and pleasant. Make the image look as good as you can with the tools available.
  8. Return to the library module and quit Lightroom. Relaunch Lightroom and export the file with adjustments in Tiff format (about 1 hour). More direct attempts to get the file out of Lightroom produced nasty square gaps in the image. So much for the promise of version 2 to overcome the earlier version’s file handling limitations.
  9. Reopen file in Photoshop, tune contrast and lightness, dodge, burn, save, sharpen, save. Some sharpening needs to happen before the retouching because it emphasizes dust. I mostly proceeded in three steps: first I used the NeatImage plugin to sharpen fine detail without introducing noise (NeatImage trained on a suitable file fragment, run on that fragment, and then called on the large file by command-F—it will get stuck otherwise; noise reduction at zero), then I moved to Smart Sharpen for general crisping up, and finally I ran Noise Removal to suppress most of the color noise emphasized in step 2. I didn’t touch either grain or luminance noise. The 3-step process took about 3 hours of computer time per file.
  10. Retouch. Most of this was done by a team of elves from Kentucky’s School of Architecture. Architecture students are the best photographer’s helpers on the planet. They are visually intelligent, technically savvy, and work indefatigably under deadline pressure. Many thanks to Anton Bakerjian, Ian McHone, Robert Nack, and Amy Westermeyer for taking time out right before their final reviews. Each image needed to be cut up into four parts in order to make the files digestible for the students’ computers. Person-hours of retouching time per image: well north of eight.
  11. File reassembly (involving interminable flattening) and final very gentle output sharpening after having had a test snippet of a file printed on the banner material. Save.
  12. CMYK and 8-bit Tiff conversion for the printer.
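Step 3's edge-mask desaturation can be approximated in code. The sketch below is my reconstruction, not the actual Photoshop layer stack: gradient magnitude stands in for Find Edges (it is already bright-on-dark, so the inversion step drops out), "desaturation" is a simple blend toward luminance rather than a Hue-Saturation layer, and `blur_sigma` and `strength` are made-up defaults.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def desaturate_fringes(rgb, blur_sigma=2.5, strength=0.9):
    """Tone down color fringes along high-contrast edges.

    rgb: float array, shape (H, W, 3), values in [0, 1].
    """
    # Luminance (Rec. 709 weights) doubles as the desaturation target.
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # Gradient magnitude as a stand-in for Photoshop's Find Edges.
    edges = np.hypot(sobel(lum, axis=0), sobel(lum, axis=1))
    # Blur so the mask covers the fringe on both sides of the edge, then
    # normalize to [0, 1] (a crude stand-in for the contrast adjustment).
    mask = gaussian_filter(edges, blur_sigma)
    mask = np.clip(mask / (mask.max() + 1e-12), 0.0, 1.0)
    # Blend toward gray where the mask is strong, i.e. desaturate the edges.
    out = rgb + strength * mask[..., None] * (lum[..., None] - rgb)
    return np.clip(out, 0.0, 1.0)
```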

Below are some crops at 100% before the final output sharpening. On a standard 100 dpi monitor, the details are twice as large as they appear in the 8'x10' prints.

detail 1
detail 2
detail 3

This should have been the end of my involvement. It turned out, though, that $500k industrial printing machines have different ideas about color than what we are accustomed to from our Epsons, Canons, or HPs: the proofs came out too dark and showed a distinct yellow-cyan cast. With hardly any time left and panic welling up, I set out to recreate the proofs’ look on my calibrated Cinema Display as best I could (not on the 20" iMac screen, which is not suited for image editing), then designed counteracting curves that made the image look normal again on screen. It was like guessing filtration values and exposure times in the old darkroom by looking at a paltry two miniature test prints. A second set of proofs with the counteracting curves applied came out much better, not perfect, but acceptable. I modified the corrections slightly and got the files for the entire job to the printer in the nick of time, eight images altogether, with eight working days left before the opening. Five days later, the shop’s IT department had finally coaxed their hardware into “ripping” the files (conversion of Tiffs into printer instructions), maxing out in the process a 4-processor quad-core 16GB RAM machine for more than 4 hours per file with constant concerns about heat buildup. Had there been any need for more color correction at this point, there wouldn’t have been enough computer time left. Fortunately, the prints came out beautifully. If I had to redo them, I would lighten up the deep shadows a little more and maybe tone down the extreme highlights by a smidgen, but this is nitpicking.
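The counteracting curves amount to inverting the press's tone response from a few eyeballed data points. Here is a sketch of the idea (my reconstruction, not the actual correction; the sample values are invented):

```python
import numpy as np

# Per channel: file values I intended, versus what the proof came back
# with -- too dark in the midtones, as described above.
wanted  = np.array([0.0, 64.0, 128.0, 192.0, 255.0])   # what I wanted on the wall
proofed = np.array([0.0, 48.0, 104.0, 168.0, 255.0])   # what the press delivered

def counter_curve(values):
    """File value to feed the press so that it prints `values`.

    Inverts the press response by swapping axes in the interpolation
    (np.interp needs ascending x, which `proofed` is here).
    """
    return np.interp(values, proofed, wanted)

lut = counter_curve(np.arange(256.0))   # 8-bit correction curve for one channel
print(counter_curve(104.0))  # 128.0 -- to make the press print 104, send it 128
```

Applying `counter_curve` to every pixel pre-distorts the file so that the press's own darkening lands the tones back where they were meant to be.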

Lessons Learned
  • Bargain for at least thrice the time you think you might need. If you can do the same for the budget and get the time, buy yourself some clean drum scans.
  • Live in a place where printers deliver the quality you expect. Be prepared to reverse-engineer their profiling by eyeballing five data points.
  • Be a merciless nag or employ one to impress upon the printer that you need to see proofs in time to make adjustments.
  • Keep notes; it will speed up things considerably.
  • Try running Photoshop with a dedicated scratch disk.

A glance at the installation at LOT gallery (in the foreground, furniture prototypes made out of fly ash from a power plant by UK architecture students in Rives Rash’s studio):

Show

Last update: May 2009