Geek Articles

Things You Didn’t Want to Know About Zoom Lenses

Published February 10, 2017

The intersection (perhaps collision is a better word) of art and science is interesting. The scientist says “your impression is not as important as my facts.” The artist says “my impression is all that matters.” Imaging is that way. The photographer or videographer getting the look he wants from a piece of equipment is all that really matters. So I completely understand when the artist tells me that all the testing in the world doesn’t influence his choice of equipment at all. I accept it when he or she says a lens is perfect for them. That’s the bottom line.

While I defer to the person taking the shots when they tell me what equipment works for them, I still don’t believe that general ignorance and disinformation are good things. With that in mind, I’m going to address something I see repeated online all the damn time that just sets my scientific teeth on edge: this zoom is just as good as a prime. (And its corollary: I want a great copy of this zoom.)

To do this, we’re going to do science, which means I have to show you my testing methods and what they mean. (If we weren’t going to do science, I’d just say this one rates 82.7 and this one 79.2 using our special rating system you can’t understand, and the article could be really short like our editors want. Editors pretty much hate me.)

It’s going to start with some MTF graphs, which I know a lot of people don’t understand and don’t want to learn, but that part will be mercifully brief, and then there will be pictures. So hang in there for a screen or two. To help make it easier, I’m also going to use our new experimental subliminal text feature to give you subconscious encouragement – you won’t even notice you’re getting positive messages to your subconscious, you’ll just have a feeling of well-being and accomplishment.

Here Comes the Sciencey Stuff, But Without Any Math, So It Isn’t Bad

You can handle this, I promise. 

OK, most of you have seen MTF charts. Even if you don’t understand them, you’ve got a general idea that higher is better, and you’ve probably compared one lens to another on at least that basis. The MTF charts you’re used to seeing show either the average of a lot of different lenses (if Zeiss, Leica, or I made them) or a computer-generated best-case scenario (if anyone else made them). They show how half of the lens should perform from the center (left side) to the edge of the image (right side).

If I show the MTF graphs for two different lenses, like this, you’d be able to conclude that the one on the left has better resolution than the one on the right. You could tell lots of other things, too, if you speak MTF, but I won’t push it. We’ll stick with “higher is sharper” for this.

Olaf Optical Testing, 2016

 

But that’s either an idealized computer generation or the average of lots of lenses. What if we actually test one single copy of the lens? Well, the first thing you’d notice is that now we show both sides of the lens; the center is now in the center instead of on the left side. The second thing you’d notice is that one side is a bit different than the other, because when you manufacture something, it’s never perfect. So here’s an MTF of a single copy of the lens whose average is shown above on the left.

Olaf Optical Testing, 2016

 

The left side is a little different than the right side, isn’t it? But wait, if one side is different than the other, what about top-to-bottom? Or corner-to-corner? So if we really want to test a lens, we have to do it several times, rotating it, so we test the different quadrants. So here’s that same lens, tested at four rotations.

Olaf Optical Testing, 2016

Hang on; we’re almost done with this part. Not much longer. You can do it!!

That’s four rotations. I could show you 8 and 12 rotations, but the charts would be really small, and you’re already really bored. You’re probably thinking “can’t you just tell me it’s a 79.2/100 instead of all this?”

So how about, instead of that, we just make a picture showing how the MTF maps out across the surface of the lens? Below is a map of the sagittal MTF, with blue showing where the lens is sharpest, yellow a bit less sharp, and red (there isn’t any here) not very sharp.

Olaf Optical Testing, 2017

That’s a lot more intuitive, isn’t it? You can see this lens is well centered (highest MTF in the middle) and just a tiny bit softer on the right side than the left. I’ll show you later, but you would not notice that tiny bit of softness in an at-home test. The MTF bench is more sensitive than any camera (so far).

We can also look at maps for other things. Below, for example, is the astigmatism map of the same lens.

Look at the lovely colors. You feel relaxed and at peace. The MTFey stuff is over now. 

You can see there’s a bit more astigmatism at the far edge on the right with this lens. My point in all this is that the maps are an easy way to evaluate a single copy of a lens at a glance. I’m going to show you a bunch of these pictures, so I wanted you to know how we got them.
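If you’re curious how those maps get made, the idea is simple: take the MTF measured along each rotation, treat each measurement as a point at its position in the image field, interpolate the scattered points onto a grid, and color the result. Here’s a minimal sketch of that general approach in Python; the lens data and color scale are invented for illustration, and this is emphatically not Olaf’s actual software.

```python
# Minimal sketch: turn MTF readings taken at several rotations into a 2D map.
# All values below are synthetic; a real bench produces far more points.
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

angles = np.deg2rad([0, 45, 90, 135])        # four rotations of the lens
heights = np.linspace(-20, 20, 21)           # image height along each rotation, mm
xs, ys, mtf = [], [], []
for a in angles:
    for h in heights:
        x, y = h * np.cos(a), h * np.sin(a)
        xs.append(x)
        ys.append(y)
        # Fake lens: sharp in the middle, softer at the edges, nudged off-center.
        mtf.append(0.82 - 0.00045 * ((x - 2.0) ** 2 + y ** 2))

# Interpolate the scattered measurements onto a regular grid and draw the map.
gx, gy = np.mgrid[-20:20:200j, -20:20:200j]
grid = griddata(np.column_stack([xs, ys]), np.array(mtf), (gx, gy), method="cubic")
plt.contourf(gx, gy, grid, levels=20, cmap="RdYlBu")   # red = soft, blue = sharp
plt.colorbar(label="Sagittal MTF (synthetic)")
plt.gca().set_aspect("equal")
plt.title("Illustrative MTF map from rotated measurements")
plt.show()
```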

So Can We Really See This in a Photo?

Well, I did say that the bench is more sensitive than your camera. Subtle differences the bench may see are going to be masked by the other variables a photo has – lighting, focus, framing, and on and on. But big differences are going to be obvious. How big? Well, let’s look at maps for two copies of a lens, one of which is excellent and one of which is not. (The weaker one is actually not awful; its only failing area is the red part at the bottom.) If you shot with it, you’d probably say it was an OK lens, or maybe a little soft. If you shot with the other one, you would say it was special.

(The map for this lens appears cut-off compared to the one above. That’s just because this lens has a built-in baffle to reduce light reflection, so its image is kind of rectangular, like a sensor, rather than round like the one above.)

Olaf Optical Testing, 2017

 

I know you like pretty photographs to make comparisons on, but a scenic shot has too many variables, and we’re trying to be all scientific. So you’ll have to make do with test chart photographs.

First, let’s compare the top center area, which was excellent on the right lens and OK on the left one. To fit 100% crops into this God-forsaken blog platform, I’ll have to stack them, so the right lens is on top and the left on the bottom. These are unsharpened crops from RAW images of our high-resolution test charts, taken with a 36-megapixel camera. The difference would be a bit more impressive at higher resolution, a bit less at lower resolution, but this is sufficient for our purposes.

Lensrentals.com, 2017

 

I can see a difference; I suspect you can too. Remember, test charts are more sensitive than a photograph. If I reshot this as a jpg with in-camera sharpening, the difference would be smaller. If you took actual pictures of things instead of lines, you would probably notice a slight difference if you compared the two lenses side by side. But if you had just bought the left-hand lens, you probably wouldn’t be screaming that the top is soft, especially after you did some post-processing and posted it as an 800-pixel jpg online.

Now let’s look at the bottom left quadrant. Again, the right lens is on top and the left on the bottom.

Lensrentals.com, 2016

 

The difference is greater now; you can probably tell that this lower corner isn’t OK. The tangential test lines (the ones going from top left to bottom right) are really gray-on-gray, so detail is being lost. OK, enough of this. My only point is that the MTF maps we’re using do reflect real-world images.

So What? Are We Getting to the Part About Zooms Yet?

Almost, my patient friends. And this won’t take long to show you now that we’ve gotten the concepts out of the way.

A lot of people are aware that while a zoom can be as sharp as a prime in the center of the image, it rarely is in the corners.

Few people, though, think about the fact that zooms are far more complex than primes. Where a prime usually has 6 to 12 elements, zooms often have around 20. And while primes have a single group that moves to focus, zooms have a moving focusing group, moving zoom groups (sometimes several), and possibly a compensating element. Increased complexity causes increased variability.
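To make “more parts, more variation” a little more concrete, here’s a toy Monte Carlo sketch. It assumes each element contributes a small, independent random penalty to sharpness, which is only a crude stand-in for real tolerancing (real, independent errors combine roughly root-sum-square, as Brandon Dube notes in the comments), and every number in it is made up. The only point is the trend: the 20-element lens ends up with both a lower average and a wider copy-to-copy spread than the 8-element lens.

```python
# Toy illustration: more elements with independent manufacturing errors means
# more copy-to-copy variation. All numbers are invented; only the trend matters.
import numpy as np

rng = np.random.default_rng(0)

def simulate_copies(n_elements, n_copies=20_000):
    # Each element chips away a small random amount of MTF on each copy.
    per_element_penalty = rng.normal(0.0, 0.045, size=(n_copies, n_elements)) ** 2
    return 0.85 - per_element_penalty.sum(axis=1)   # hypothetical design MTF of 0.85

for name, n in [("8-element prime", 8), ("20-element zoom", 20)]:
    copies = simulate_copies(n)
    print(f"{name}: mean MTF {copies.mean():.3f}, copy-to-copy spread {copies.std():.3f}")
```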

So let’s take a look at MTF maps for a group of good prime lenses. Here are 9 actual lenses, tested just like the ones we showed you above. I’ll go ahead and tell you (because someone will notice) that these are f/2.8 primes; no f/1.4 prime could resolve 30 line pairs this well. I’ll also add that one of these lenses had been dropped during a rental but ‘suffered no apparent damage.’ Want to guess which one?

Blue is the soothing color of razor sharp images. 

Olaf Optical Testing, 2016

 

You probably guessed that the center lens on the right looks bad on a test chart (it does when compared to the others). If you look carefully, you can see the lower left lens has an area that’s a tiny bit soft, and you could notice that if you looked closely enough. The others all perform identically; the small differences we see on the bench aren’t apparent even on the highest resolution test charts.

If you ask me to pick out a really good copy from this set of lenses, I’ll send you any of the three on the diagonal from top left to lower right. (Just to get it out of the way: if you’re wondering what it costs for me to test 9 lenses and pick out the best one, you can’t afford it.) But if I sent you one of the other three with no yellow in them, I’m confident you could not tell the difference in photographs.

Here Are the Zooms

Now let’s look at maps for several copies of a good, $2,000 zoom lens. You have probably already guessed that a good zoom is going to be more variable than a good prime. You probably tried to avoid thinking about the fact that we also have to look at several focal lengths. Everyone likes to think ‘good copy, bad copy’ like they would with a prime, but it doesn’t exactly work out that way with a zoom.

So here’s a set of eight 70-200mm f/2.8 zooms tested the same way, but with each one tested at three focal lengths.

Olaf Optical Testing, 2016

I told you in the title that you didn’t want to know. But it will be OK. Breathe.  

First, let me assure you this isn’t peculiar to this particular lens, to this zoom range, or anything else. We’ve tested thousands of zoom lenses, and this is how they are, with very few exceptions. Some are sharper overall. Some tend to do better at one end or the other. Good performance of a copy at one focal length doesn’t particularly predict good performance at a different focal length. I will add, though, that terrible performance at one focal length does predict really bad performance at others.

Remember, this is an optical bench, and it makes small differences seem big. As you remember from before, the yellowy-green areas will look a bit soft on the test chart, but won’t scream at you in a real photo. Red areas might scream at you, though. I’m comfortable that most of you who looked carefully would notice that #7 at 70mm is not as good on one side as the other. Still, the red areas we see above are at an edge or side and might never be noticed by a sports shooter or portrait photographer who usually centers the subject.

The point here is a good copy of a typical zoom will be a little tilted this way at one focal length, maybe a tiny bit decentered at another, then tilted a different way at the other end. Or some variation on the theme. If you look carefully, you’ll notice it.

For example, if you had both #6 and #4 and compared them side by side, there’s no doubt you’d like #6 better at 200mm. But if you just got #4, you’d probably say it was okay. The owner of #6 is going to say the lens is clearly sharper at 200mm than at 70mm; the owner of #4 would say it’s actually a little sharper at 70mm. The people who own #1 and #8 would get online and tell both of those folks that they were obviously bad photographers, because the lens seems about the same throughout the zoom range. The owner of #8 would probably be happy with his copy unless he went out shooting with the owner of #1.

Before you get too analytical about all this, remember this is just the sagittal graph. We’d also look at the tangential graphs (or the astigmatism graph, which shows the difference between sagittal and tangential). For example, looking just at the maps above, #3 looks like one of the better copies at 200mm, but if you look at its astigmatism graph, it’s one of the worst at that focal length.

Olaf Optical Testing, 2017

Don’t get me wrong. Zooms don’t suck. They’re excellent and very practical lenses. If you knew all the compromises that go into making one, you’d be as amazed as I am that they can be made this good at these prices. Let me add that if forum warriors posted 800- or 1200-pixel-wide images online, you’d probably barely be able to tell the difference between the primes and the zooms, much less the differences between the zooms.

My point is simply that zooms vary more than primes in general, and a given copy of a zoom will vary across focal lengths. The laws of physics and manufacturing tolerances told us it would be this way: put more variables into a lens, and the lens varies more. Can they still be very good? Absolutely. Can they be as good as the best primes? Nope. On the other hand, the best primes don’t zoom worth a damn. Horses for courses.

So What Does It Mean?

There are no stupid questions. But there are stupid comments on forums. I will try not to make those. 

For practical photography, not much, really, other than to make you more aware of reality. Here are the takeaway messages for photographers:

  1. A great zoom is not as good as a good prime at comparable apertures, but it’s plenty good, especially in the center of the image.
  2. Zooms have more variation, and most copies of a given zoom will vary at different focal lengths. If someone asks me for the best copy of a zoom, my first response would always be ‘at which focal length?’ In this case, the sharpest copy at 200mm is not the sharpest at 70mm.

But for measurebating, there is a very pertinent point that needs to be made: Measurebating zooms is a fool’s errand. These differences may not be huge in your photographs, but they are very significant on a test. The reviewer who got lens #6 is going to have different conclusions and present you with different numbers than the reviewer who tested #1 or #8.
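Here’s a small sketch to put numbers on that idea. It pretends copy-to-copy “scores” for one zoom follow a bell curve (the distribution and the 0-100 scale are invented, not anything we measured) and then has ten hypothetical reviewers each test a single copy. Their published numbers scatter widely around the run’s true average, which is exactly the problem with single-copy, single-number zoom reviews.

```python
# Toy sketch: single-copy reviews of a variable lens scatter around the truth.
# The score distribution is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(7)

production_run = rng.normal(loc=80.0, scale=4.0, size=1000)   # hypothetical copy scores
reviews = rng.choice(production_run, size=10, replace=False)  # ten one-copy reviewers

print("Published single-copy scores:", np.round(reviews, 1))
print(f"True run average: {production_run.mean():.1f} "
      f"(reviews span {reviews.min():.1f} to {reviews.max():.1f})")
```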

Some reviewer somewhere tested a single copy of a zoom lens and gave it their highest rating ever. Some people actually argued online about that, and then asked my opinion about that argument. So I wrote this post to explain why I thought it was all meaningless. When someone compresses something as complex as the multi-focal length performance of a zoom lens into a single number after testing a single copy, I don’t really care if their number is 3.1415926, 2.718281828 or 1.61803398; it doesn’t have any scientific value at all. Unless the rating is 42. Then it would have a meaning.

That was really funny. You should laugh now. And eat Avocados. 

 

Roger Cicala and Aaron Closz

Lensrentals.com

February, 2017

 

Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures I like using something different: a medium format camera, a Pentax K1, or a Sony RX1R.

Posted in Geek Articles
  • As always, a delightful read! This last year I’ve bought a Sigma 150-600 Sports, and the wife a PL100-400, giving us roughly the same reach, as I use a D3300, and she a GX8.

    But mine feels utterly sharp, not least at the long end, while hers feels not so outstanding. But is that due to normal variation, or did she get a bad copy? Is there any way to find out, some test or average you can take?!

  • Beaverman33

    Thanks for the kind words Roger, it’s much appreciated! I think part of the problem is that people obsess over the numbers too much. In reality, in real world shooting conditions under normal image usage 99% of photographers or their clients would not be able to tell the difference between a Canon 85mm f1.8 and a Sigma 85mm f1.4 if they were both shot at f2.0. But I love nerding out over the details more than most!!!

  • That’s a great review and a great read. And a good example of what I’m trying to say: several real world reviews by several dedicated reviewers give the information people need most when they’re considering a new lens. That’s always been part of the pleasure of the selection process for me. I don’t understand why people want to go ‘it’s a 72.4, it’s got to be good!’

  • Brandon Dube

    Unless the lens is diffraction limited, and really no consumer lens is, the image quality is governed only by aberrations. Changing them with decenters and tilts is the entire reason you see what you see in the plots in this blog post.

  • Beaverman33

    This is a really interesting piece, Roger, and ties in a little bit to a review that I’ve just done of the Sigma 24-35mm F2.0 DG HSM Art zoom lens, and I’ve got to say that I’m very impressed with it. It’s a lens that quite a few people aren’t aware of, and those that do know about it usually ask the same question . . . is it as good as the equivalent prime lenses? Since I also have the Sigma 24mm and 35mm f1.4 Art primes, I could try to find out.

    This is by no means as thorough or scientific a review as you guys do, but comes at it more from a real-world usage point of view . . .

    http://simonbrettellphotography.co.uk/sigma-24-35mm-f2-0-dg-hsm-art-lens-review/

  • Patrick Chase

    This is a fascinating discussion – Thanks!

    When we talk about offsetting errors in a production setting we implicitly mean “close enough to not be considered unacceptable”. I recognize that there are no mathematically offsetting aberrations as you describe, but I also think that may not be germane to the topic of real-world process control.

    With that said, most of my experiences where errors offset each other (such that introducing “good” parts broke the system) were non-optical, so I absolutely believe that such situations are a lot less common.

  • Brandon Dube

    You can match power (focal length). You can’t match power and aberrations. All solutions there are unique enough for any configuration that there are “no” twin solutions.

  • Patrick Chase

    I attempted to reply to this earlier but it seems to have vanished…

    Diffraction is perhaps a bit unique in that it significantly impacts mid frequencies, as with the aforementioned ~12% reduction in 30 lp/mm MTF, but still allows you to resolve fairly high frequencies. Again using an f/5.6 lens at 560 nm as an example, diffraction would reduce MTF by 50% at ~130 lp/mm and wouldn’t cause extinction (0 MTF) until 320 lp/mm.

    One way you could think of it is that it degrades micro-contrast more than it impacts ultimate resolution. In digital workflows you can correct modest micro-contrast loss by sharpening, but you can’t recover lost spatial information, so in that sense diffraction is benign provided you don’t stop down too far (at f/32 and 560 nm the extinction frequency is 56 lp/mm, so at that point you’re losing a lot of spatial information).

    That’s different from defocus, which has a more “compact” (short-tailed) PSF and therefore imposes a steeper rolloff in the frequency domain.

  • Patrick Chase

    I can think of contrived/degenerate examples where some of those things would at least partially cancel, but for real lenses I think you’re right.

    It’s important to recognize that modern lenses and cameras are very complex opto-mechanical systems, where customer-perceived quality is driven by more than just the traditional parameters you cite. Throwing something out completely off the cuff: Could you have offsetting errors between the optical power of a stabilizing group and the gain of the piezo actuators that move it? I realize that such an error in the stabilizing group would impact optical performance of the system formula as a whole, but then you have the question of sensitivity.

  • Brandon Dube

    OK. But optical tolerances (that is, in terms of the image) follow a root-sum-square relationship. Two decenters, tilts, radii errors, center thickness errors, etc., can only make the image worse; they can never cancel.

  • Patrick Chase

    Sure, and that’s an incredibly universal problem in process control.

    Overall system behaviors (what the customer cares about) may be very sensitive to some changes and relatively insensitive to others. You gave an example of the latter in the optical domain, but every engineer who’s ever done process control can come up with a litany of similar examples in other domains. Ideally you spec your components in a way that emphasizes parameters that matter, but it isn’t always feasible to do so. You often end up with specs that are “too tight” most of the time but have to be that way to prevent disaster.

    It’s also alarmingly common to have cases where two components are out of spec such that they cancel each other out, and introducing a batch of *good* parts to the line breaks the product. Such is life. That’s why companies that manufacture stuff (ideally) employ engineers to sort it all out.

    Capability-based controls such as the ones “showmeyourpics” described are useful precisely because they help us detect those cases.

  • Brandon Dube

    What does “out of spec” mean, and how does it influence the image formation? If you decenter the entire optical system with respect to the sensor nearly nothing changes.

  • Patrick Chase

    I agree that electronics and optics are very different, but you could say the same about just about any pair of product categories or industries you might care to pick. The causes and nature of variations are always unique to some degree, and engineers (including the optical sort) are remarkably good at “drawing distinctions” that make them seem more so.

    With that said, the statistics of processes and process control are remarkably universal, and that extends to optical systems. I say this having been intimately involved in all too many production issues where optics were implicated. We missed a LOT of issues with sampling inspection that were fairly easy to find with more modern process control methodologies. A classic example is when a batch of ever-so-slightly out-of-spec barrels suddenly shows up in one bin on the production line.

  • Patrick Chase

    Diffraction is somewhat unique in that it causes a significant drop in MTF at relatively low frequencies, but still allows you to resolve very high frequencies. One simplistic way to think about it is that it impacts microcontrast more than it limits ultimate resolution. That’s an important distinction in digital workflows, because we can sharpen to correct modest attenuation at medium frequencies (for example the ~12% loss that I previously referenced), but we can’t recover information that has been destroyed.

    Continuing with the example of 560 nm light at f/5.6, diffraction would reduce MTF by 50% at ~130 lp/mm. This means that even though there is a noticeable impact at 30 lp/mm, the image still contains useable spatial information even at much higher ones.

  • Carleton Foxx

    Good point. But I was just reading an author who said that the effects of diffraction are not always as devastating as the numbers tell us. I have no clue whether it’s true or not. What are your thoughts? Is there a predictable sweet spot where the benefits of aberration correction exceed the degradation from diffraction?

  • Michael Clark

    What part of the following statement is so hard to understand? He *does* talk to many of the lensmakers, seeing as how they are some of his testing service’s customers.

    “But you do realize that we do testing for several of the lens companies, consult for others, and in a few cases have been used to determine causes of unacceptable variability? We don’t just look at them and test them, we take them apart, adjust them, and put them back together.”

  • showmeyourpics

    Hi there, it is not a matter of specific products, it’s a matter of the universal behavior of manufacturing processes. This has been exhaustively covered beginning in the mid-1920s with Dr. Walter A. Shewhart’s development of the fundamentals of statistical process control. William Edwards Deming himself, who initially promoted the use of the sampling inspection on which MIL-STD-105 was based, later recognized the invalidity of the underlying statistical principles and moved to recommend either zero or 100% inspection. Zero inspection is also based on the premise that the manufacturer is doing ongoing process capability studies. Deeper knowledge of the main causes of normal variation in a manufacturing process is achieved through the use of a technique called “design of experiments.”

  • It’s all governed by the Gods of baseball. Take the absolute best player at each position. Take the absolute best 8 utility players (pitchers don’t count here). Test the skills of all. What do you have? Primes and zooms. All are very good … but under high scrutiny, it’s pretty easy to see the difference!

  • Carleton Foxx

    Provocative. Explain.

  • Patrick Chase

    The problem with that chart is that it’s obviously a geometric simulation that doesn’t include diffraction, and is physically impossible in the real world.

    The theoretical upper limit for visible light (using 560 nm as a nominal) at 30 lp/mm and f/5.6 is about 88% MTF IIRC. Any chart that shows an f/5.6 lens performing better than that is bogus.

  • Brandon Dube

    Unfortunately, electronics and optics are extremely different and the tolerancing and manufacturing do not behave at all the same.

  • Dave New

    I bought a 70-300 DO IS for my Canon camera based on a review on LuLa. It was OK, until I upgraded from my 8MP 20D to a 7D. Then I found that it was soft above 200mm, and I couldn’t stand to shoot with it any more. I finally got a 100-400mm L II soon after they came out, and the difference in the field was gratifying. I’m sure that if I go to a 50MP body in the future, all my glass will look like crap.

    There was also an article on LuLa many years ago, pointing out that a 13MP Canon 1DS would out-resolve most of the current crop of lenses. It took a while for the lenses to catch up with the sensors.

    So it goes…

  • Dave New

    A few years ago, the laptop/LCD display industry tried to convince folks that a few scattered ‘hot pixels’ were NOT defects, and that you couldn’t return a system based on that. The reaction from the user community was swift. Those that insisted on shipping crap soon found that no one would buy their stuff, and the problem magically went away (the manufacturers scrapped the scrap and shipped only good displays).

  • showmeyourpics

    Hi, as a former test and quality engineer in military and industrial electronics, I would like to add my 2 cents on the reality of sampling inspection of products. If not disturbed, processes make product within a certain tolerance (normal variation) for any one of their variables. Sampling inspection tries to measure these tolerances by testing a number of units out of a complete production run. A sample of 1 is meaningless because we have no idea where it fits within the tolerance range. Unfortunately, even larger samples fail to provide a clear picture because processes do not make product swinging diligently from one side of the tolerance range to the other and back. They hang out for a while making product in one place in the tolerance range, then move slowly to another, and so on, making clusters of product with similar “dimensions.” The result is that we are never sure whether our sample is truly random and how representative of the entire production run it really is. It is much better if the manufacturer performs a process capability study, where samples are drawn from production at regular intervals (e.g., every hour) and the test results are plotted on an ongoing graph. This approach is not only much more accurate in predicting the real tolerances of the product but also shows that something special is bothering the process if the measurements start running away.

  • Dave New

    Sounds like a good DFSS (Design For Six Sigma) project. DFSS is especially designed to root out variance in design and manufacturing that even ‘experts’ can’t figure out. It produces some very surprising results sometimes, especially pointing out that just ‘frobbing the knobs’ is not necessarily the way to optimize a design or manufacturing process. Instead, you concentrate on optimizing the ideal energy transfer (in the case of the lens, the ability to transfer the light energy with the least amount of ‘noise’ [meaning unwanted variances]), ultimately optimizing the signal-to-noise ratio of the ideal energy transfer function.

  • Dave New

    Heh. How about the lens manufacturer stop shipping sh*t and passing it off as good? In the industry I’m in, we wouldn’t be able to palm off crap on our customers.

  • Of course it is! The point is simply that you can’t expect a zoom to be equal at all focal lengths, and there will be more copy-to-copy variation than with a prime. And generally an equivalent prime will resolve better at the edges than an equivalent zoom. But lots of really good zooms are going to have higher resolution than lower-quality primes, like the examples given above.

  • Paul Lackey

    My apologies.. I figured it was too obvious, but I was just desperate to insert myself into the goings-on.. 8’P Carry on..

  • Paul M

    yummm, pi.

  • elkhornsun

    Quite a broad statement to make. I have shot with the Nikon 14mm f/2.8, the Nikon 18mm f/2.8, the Canon 14mm f/2.8, and the Canon 24mm f/1.4, and the Nikon 14-24mm f/2.8 zoom is noticeably sharper than any of these prime lenses when set at the same focal length. The difference was obvious when shooting with a 12MP full-frame Nikon DSLR and with a 10MP APS-H Canon DSLR. It would be even more apparent with sensors providing much greater resolution.
