Technical Discussions

How Autofocus (Often) Works

Published July 5, 2010

Clarke’s Law: Any sufficiently advanced technology is indistinguishable from magic. Arthur C. Clarke
Clark’s Law: Any sufficiently advanced cluelessness is indistinguishable from malice. J. Porter Clark

My Apologies

Nothing in online forums demonstrates the two laws above better than discussions involving camera autofocus systems. Clarke’s law because very, very few people have the slightest grasp of how autofocus works. Clark’s law because, since they don’t understand autofocus, when the topic comes up the Three Rules of Online Discussion (1 – Establish blame early, 2 – Repeat it loudly, 3 – Repeat it often) take hold. So most discussions that start with “My camera (or this lens) doesn’t autofocus well” quickly end up in “because you’re a bad photographer” or “because that equipment sucks” perseveration.

Not that I really understood autofocus. Sure, I knew some things that are obviously true: phase detection AF is fast, contrast detection is slow; f/2.8 lenses focus more accurately than slower lenses; third party lenses tend to have more problems than manufacturer’s lenses. But since I didn’t understand how autofocus really worked, I didn’t understand why those things were true.

So, I thought I’d just go read about it on the internet and educate myself. Except (and you can’t really imagine my shock here) there wasn’t diddly-squat on the internet to read. A couple of short articles and blogs, at least one of which had some major errors that even I could see. A couple of marketing pieces by camera manufacturers full of claims but with few specifics. Nothing else. I had reached the end of the internet, at least as far as SLR autofocus was concerned.

So there it was—a vacuum waiting to be filled. My chance to write the definitive article on a topic had finally arrived, since, by definition, the only article would be the best article. (I guess it would also be the worst article, but I try to keep a positive attitude). Never one to let lack of knowledge stop me from writing on a topic, I decided to proceed at all speed.

But all speed wasn’t so fast. The camera manufacturers, understandably, don’t advertise how they make their autofocus systems work; they just advertise that theirs is different, somehow, and much better than anyone else’s. So I spent a month collecting and reading books, journal articles, even patent filings, and then proceeded to write a comprehensive piece on autofocus.

This isn’t it. That ran about 14 pages and even I fell asleep halfway through proofreading it. This is the condensed, abridged, Cliff Notes version of that article. To get it to a reasonable length, I’ll have to leave out the optics and physics involved, but I’ve done my best to keep things accurate and included references at the end for those of you who want to see those things.

Let’s start with the simpler (and more accurate) of the two common types of autofocus systems used in SLRs: contrast assessment autofocus.

Contrast Assessment Autofocus

Contrast assessment is just that: the camera’s computer evaluates the histogram it sees from the sensor, moves the lens a bit and then reevaluates to see if there is more or less contrast. If contrast has increased, it continues moving the lens in that direction until contrast is maximized. If contrast decreased, it moves the lens in the other direction. Repeat as needed until contrast is maximized (which basically means moving slightly past perfect and then backing up once contrast has started to decrease again). A perfectly focused picture should be the one with the highest contrast.
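
To make the move-and-reassess loop concrete, here is a minimal hill-climbing sketch in Python. The two callbacks (measure_contrast and move_lens) are hypothetical stand-ins for the camera’s own contrast metric and focus drive, not anyone’s actual firmware; real implementations add variable step sizes, noise handling, and smarter stopping rules.

```python
def contrast_autofocus(measure_contrast, move_lens, step=1.0, min_step=0.05):
    """Hill-climb the lens position until contrast stops improving.

    measure_contrast(): hypothetical callback returning the contrast of the
                        AF area at the current lens position.
    move_lens(delta):   hypothetical callback nudging focus by 'delta'.
    """
    direction = +1                       # initial guess; may be wrong
    best = measure_contrast()
    while step >= min_step:
        move_lens(direction * step)
        current = measure_contrast()
        if current > best:               # contrast improved: keep going
            best = current
        else:                            # overshot the peak: back up,
            move_lens(-direction * step)
            direction = -direction       # reverse direction,
            step /= 2.0                  # and refine the step size
    return best
```

The back-and-forth at the end of each pass is exactly the “moving slightly past perfect and then backing up” behavior described above, and it is also why the method is slow: every step requires another read of the sensor.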

If your camera shows a Live View histogram, you can mimic contrast autofocus to some degree on the right type of image (a shot where everything is at a similar distance) by simply manually focusing until the histogram shows maximal contrast. In the camera’s contrast-detection autofocus, of course, only the small area marked as the autofocus detector is actually assessed, not the entire sensor. This both lets you choose the object you want to focus on and spares the camera’s computer from processing the entire image’s contrast; it only has to evaluate the autofocus points.

Disadvantages of contrast assessment

The major drawback to contrast assessment autofocus is that it is slow. The pattern of move – assess – move – assess takes time and the camera may well start by moving focus in the wrong direction and then have to reverse itself. Because it is slow and offers no predictive possibilities, contrast detection is inappropriate for action or sports shooting. The slowness can be aggravating even for stills and portraits. Contrast assessment also requires a bit better light than phase-detection autofocus, and obviously it requires an area of good contrast in the image.

Advantages of contrast assessment

Contrast assessment autofocus does have some advantages, though, which have not only kept it in use but are now increasing how much it is used. The first advantage is that it’s simpler. It doesn’t require the additional sensors and chips that phase detection autofocus requires. Simplicity lowers cost and (along with the fact that autofocus speeds are not as critical) is the main reason contrast detection is used in point and shoot cameras. (The other reason is that point and shoot cameras by nature have a large depth-of-field, so accurate focus is not as critical, either.)

Simplicity also reduces size. Mirrorless systems place a priority on small size, and the contrast detection system does not require the additional light paths, prisms, mirrors and lenses that a phase detection system does. This is a critical advantage for the small interchangeable lens cameras we’re starting to see enter the market, all of which use contrast detection.

The second advantage is that contrast assessment can use the image sensor itself to determine focus. There is no separate light path with prisms, mirrors, etc., all of which could be less than perfectly calibrated to the sensor. During contrast assessment autofocus, the sensor is evaluating the actual image it receives, not a separate image that should be (and “should be” doesn’t mean the same as “is”) accurately calibrated to the sensor.

For this reason contrast detection, when using the image sensor, provides more reliably accurate autofocus than does the more common phase detection system. The key words here are “when using the image sensor for contrast detection”. Olympus and Sony standard-sized (not mirrorless) SLRs use a second, smaller sensor in a separate light path to generate the Live View image and provide contrast assessment. Like any calibrated system, it’s possible for the second sensor to not be accurately calibrated to the image sensor.

In summary then, contrast detection is simpler, cheaper, smaller, and theoretically more accurate than phase detection autofocus. But it is much, much slower. Camera companies are working hard to speed up contrast detection autofocus, and are making some strides, but for the near future it will remain slower.

Phase Detection Focusing

Basic Principle

The basic design of phase detection (AKA phase matching) autofocus was originated by Honeywell in the 1970s but was first widely used in the Minolta Maxxum 7000 camera. Honeywell sued Minolta for patent infringement and won, so all camera manufacturers had to pay Honeywell for the rights to phase detection autofocus.

Phase detection uses the principle that when a point is in focus, the light rays coming from it will equally illuminate opposite sides of the lens (the light is ‘in phase’). If the lens is focused in front of or behind the point in question, the light rays at the edge of the lens arrive in a different position (out of phase).

There are different ways to determine if the light is in or out of phase, but most current systems use mirrors, lenses, or a prism (beam splitter) to split the rays coming from opposite edges of the lens into two rays and secondary lens systems to refocus these rays on a linear sensor (usually CCD). The autofocus sensor produces a signal showing where the light rays from the opposite edges of the lens strike. If the image is properly focused, the rays from each side strike the sensor a certain distance apart. If the lens is focused in front of or behind the object, light rays from opposite sides will strike too close together or too far apart (Figure 1).

Figure 1
how-auto-focus-works-1
Please note: the preceding paragraph and figure are a very superficial synopsis of how phase detection works. A full treatment would involve two pages of physics and formulas, plus alternative methods. But for practical, “how it works” purposes, it is accurate.

It’s obvious from Figure 1 that phase detection can immediately tell the camera whether the lens is focused too near or too far from the object of interest, so one of the disadvantages we saw with contrast detection (the camera doesn’t know which way to move the focus) is already overcome. Instead of moving back and forth and deciding which direction has more contrast, phase detection tells the camera: that way.
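
As a rough illustration of where that “that way” signal comes from, the sketch below cross-correlates the two one-dimensional intensity profiles an AF line sensor sees and compares their separation with the separation they have at perfect focus. The function names, the sign convention, and the in_focus_separation constant are all assumptions for illustration; real cameras use proprietary correlation algorithms.

```python
import numpy as np

def phase_offset(profile_a, profile_b, in_focus_separation):
    """Estimate defocus from two 1-D intensity profiles (numpy arrays)
    taken from the two halves of a phase detection sensor.

    Returns a signed offset: zero means in focus, and the sign tells the
    camera which way to drive the lens (the convention here is arbitrary).
    """
    a = profile_a - profile_a.mean()            # remove brightness offsets
    b = profile_b - profile_b.mean()
    corr = np.correlate(a, b, mode="full")      # slide one profile over the other
    separation = corr.argmax() - (len(b) - 1)   # lag of the best match
    return separation - in_focus_separation     # how far from 'in phase'
```

The camera then combines an offset like this with its per-lens data to decide how far, and in which direction, to drive the focus group in a single move.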

Less obvious is the computing that goes on. Each autofocus lens has a chip that has already told the camera, for example, “I’m a 50mm f/1.4 lens and my focus element is located at 20% less than infinity” or something similar. When you push the shutter button halfway down, several steps take place:

  • The camera reads the phase detection sensor, looks up a huge data array programmed in its chips that describes the properties of all of the manufacturer’s lenses, does some calculations and tells the lens something like “Move your autofocus this much toward infinity”.
  • The lens contains a sensor and chips that either measure the amount of current applied to the focusing motor, or actually measure how far the focusing element has moved, and send a signal to the camera, saying: “almost there”.
  • The camera rechecks the phase detection, may send some fine-tuning signal to the lens, may even recheck a 3rd or 4th time until perfect focus is indicated. If things don’t work as planned, the infamous “hunting” may occur, but usually not.
  • Once the camera has confirmed focus, it tells the lens not to move anymore and then sends that little ‘beep’ and light signal we all know and love, and we push the shutter.
The whole process takes a tiny fraction of a second. It’s fast.
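
A rough sketch of that closed loop, with every function standing in for proprietary camera or lens firmware we have no access to (read_phase_sensor, lookup_focus_move, and drive_lens are hypothetical names):

```python
def phase_detect_focus(read_phase_sensor, lookup_focus_move, drive_lens,
                       tolerance=1, max_tries=4):
    """Closed-loop phase detection AF, following the steps listed above.

    read_phase_sensor():        returns the signed defocus offset.
    lookup_focus_move(offset):  converts the offset into a lens drive
                                command using per-lens calibration tables.
    drive_lens(command):        moves the focus group; the lens reports back.
    Returns True when focus is confirmed (the beep), False if it gives up.
    """
    for _ in range(max_tries):
        offset = read_phase_sensor()
        if abs(offset) <= tolerance:         # focus confirmed
            return True
        command = lookup_focus_move(offset)  # "move this much toward infinity"
        drive_lens(command)                  # lens: "almost there"
    return False                             # didn't converge: hunting
```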

System Design

Obviously the autofocus sensors can’t be in front of the image sensor, so the manufacturers use a partially transparent area in the mirror to allow some light to pass through and reflect from a secondary mirror onto the autofocus assembly (Figure 2), which is usually located in the bottom of the mirror box (Figure 3) along with the exposure metering sensors.

Figure 2: light pathway to the autofocus sensor
how-auto-focus-works-2

Figure 3: Location of autofocus sensor (red arrow) in Canon 5D, reproduced from Canon, USA
how-auto-focus-works-3

Sensor Types

Each phase detection sensor can assess only a small, linear part of the image. Horizontal sensors detect vertical features best, and most images contain a preponderance of vertical features, so horizontal sensors predominate. There are also some vertical sensors, usually arrayed in a cross (Figure 4) or “H” shape with horizontal sensors. A few cameras even contain diagonal sensors.

Some of the autofocus sensors (almost always the center), by means of different refracting lenses and sensor size, are more accurate than other sensors, especially when wide aperture lenses are used. Many of these high accuracy sensors are only active when an f/2.8 or wider aperture lens is used. In Figure 4, for example, this sensor would be a more accurate cross-type sensor when an f/2.8 lens is mounted, but only a less accurate linear sensor with a lens of lesser aperture. As a general rule, higher end and newer bodies will have more sensors, and more of those sensors will be the higher accuracy type.

Figure 4: Cross-type sensor
how-auto-focus-works-4

In the very first autofocus systems (and some current medium format cameras) there was only one sensor in the center of the image. As processing power and engineering prowess have been applied, more and more sensors were added. Most cameras have at least seven or nine and some as many as 52 separate sensors. We can select one of them, all of them, a group of them, whatever is best for the type of images we’re shooting. We can tell the camera which sensor(s) we want to use. Or we can let the camera tell us. (The camera is always courteous enough to let us know which sensors it chose. My cameras are also telepathic and will immediately choose whichever ones I didn’t want to use, which is why I pick my own.)

how-auto-focus-works-5

These multiple sensors, along with the camera’s computer, can do some other remarkable things. By sensing which sensors are in focus on a moving subject and how that changes, both in distance from the lens and across sensors over brief intervals in time, the camera can predict where that moving subject will be in the future. This is the basis of AI Servo autofocus, which, I’m afraid, is a subject too complex for this article. I’m treading water as fast as I can just describing how the camera focuses on that pot of flowers on the porch.
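
AI Servo itself really is beyond this article, but the core idea (extrapolating where the subject will be from how its focus distance has been changing) can at least be hinted at with a toy example; actual predictive algorithms are far more elaborate and entirely proprietary.

```python
def predict_distance(samples, shutter_lag):
    """Toy linear prediction of subject distance at the moment of exposure.

    samples:     list of (time_seconds, distance) pairs from recent AF reads.
    shutter_lag: seconds between the last reading and the shutter opening.
    """
    (t0, d0), (t1, d1) = samples[-2], samples[-1]
    speed = (d1 - d0) / (t1 - t0)       # how fast the subject is approaching
    return d1 + speed * shutter_lag     # focus where it will be, not where it was
```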

The Effect of Lens Aperture

No matter what the sensor type, however, it will usually be more accurate with a wider aperture lens. Remember, during autofocus the camera automatically opens the lens to its widest aperture, only closing it down to the aperture you’ve chosen just before the shutter curtain opens. Phase detection autofocus is more accurate when the light beams are entering from a wider angle. In the schematic below, beams from an f/2.8 lens (blue) would enter at a wider angle than those of an f/4 lens (red), which in turn are wider than those of an f/5.6 lens (yellow). By f/8, only the most accurate sensors (usually only the center point on the more expensive bodies) can function at all, but even then focus may be slow and inaccurate. This is the reason our f/5.6 lenses stop autofocusing when we try to add a teleconverter, which changes them to f/8 or f/11 lenses.

autofocus-works-how-1
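
To put rough numbers on “a wider angle”: a simple geometric estimate of the half-angle of the cone of light a lens delivers is arctan(1/(2N)) for f-number N (this ignores pupil position and lens design, so treat it as an approximation):

```python
import math

# Approximate half-angle of the light cone for several maximum apertures,
# using tan(theta) = 1 / (2 * N).
for n in (1.4, 2.8, 4.0, 5.6, 8.0):
    theta = math.degrees(math.atan(1.0 / (2.0 * n)))
    print(f"f/{n}: ~{theta:.1f} degree half-angle")
```

The wider that angle, the larger the shift a given amount of defocus produces on the AF sensor, which is why the same misfocus is easier for the sensor to detect behind an f/2.8 lens than behind an f/5.6 lens.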

Advantages of phase detection autofocus

We’ve already covered the major advantages of phase detection autofocus:

  • It’s incredibly fast compared to contrast detection, fast enough for moving subjects.
  • The camera can use the sensor array to evaluate a subject’s movement and realize the moving object is the subject of interest, giving us the AI Servo autofocus all sports shooters complain about.

There are several less commonly used benefits. The sensor array can be used to assess the image’s depth of field giving an “electronic depth of field” preview. The camera can be set to take a picture when something enters the autofocus point (called trap autofocus, but few cameras offer this feature). If the sensors sense random movement in an otherwise static subject, they can flash a notice that camera shake is affecting the image. But speed and AI servo are what it’s all about.

Disadvantages of phase detection autofocus

I once heard the Porsche 911 described as “An interesting concept that through remarkably intense engineering became a superb automobile”. The description fits phase detection autofocus well. The systems are amazingly complex and require a remarkable amount of engineering to work as well as they do.

First and foremost, the system requires physical calibration. The light path to the imaging sensor must be calibrated with the light path to the autofocus sensor, so what is in focus for the autofocus sensor is also exactly in focus for the imaging sensor. Each lens contains chips that provide feedback to the camera, telling it exactly what position the focusing element is in and how far it moves for a given input to the lens motor. All of this must agree completely, so that the lens actually moves exactly where the camera told it to move, and so the camera knows exactly what position it’s in. If any of these systems isn’t calibrated perfectly, autofocus becomes inaccurate. Even if they are calibrated perfectly at the factory, if they expand and contract slightly with heat or cold they may become inaccurate, at least temporarily.

Second, the system requires software calibration. As mentioned earlier, the camera manufacturers have very complex algorithms and database tables programmed into each lens and each camera that provide this information, and which they protect from public knowledge. So, a Nikon D3, for example, knows exactly how much current must be applied for how long to the ultrasonic motor in a 70-200 f/2.8 VR lens to move the focus from 6 feet in front of the camera to 12 feet in front. And that a very different amount of current will be needed to move it from 12 feet to 30 feet, or to move the focus of a 50 f/1.4 the same distance, etc. etc. Because of this, autofocus can sometimes be improved by a firmware update, and firmware updates are often issued after a new lens is released. The update contains new algorithms for that lens.
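
As a purely hypothetical illustration of the kind of per-lens data that paragraph describes (the real tables and drive algorithms are proprietary and vastly more detailed, and these numbers are invented):

```python
# Hypothetical per-lens drive table: for a given lens and focus-distance
# change (in feet), how long to energize the focus motor. Real cameras
# store far richer data: motor type, gearing, direction, temperature, etc.
DRIVE_TABLE = {
    ("70-200mm f/2.8 VR", (6, 12)): 18,   # milliseconds of drive current
    ("70-200mm f/2.8 VR", (12, 30)): 9,
    ("50mm f/1.4",        (6, 12)): 35,
}

def drive_time_ms(lens_id, from_ft, to_ft):
    """Look up how long to drive the focus motor for this lens and move."""
    return DRIVE_TABLE[(lens_id, (from_ft, to_ft))]
```

The point is only that the camera body, not the lens, holds the final say about how a focus error gets translated into a drive command.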

Also because of this, third-party lenses may not be quite as accurate as manufacturers’ lenses at times, and third-party manufacturers sometimes have to ‘rechip’ their lenses to work with certain cameras. The big guys have yet to tell the little guys: “we’ll be happy to issue a firmware update so our camera works well with your lens”. Instead the third-party guys have to take the manufacturer’s cameras and lenses, decode the signals they send back and forth, and then encode a chip in their lens translating those signals in a way that makes their lens work properly. And they have to accept the data arrays the manufacturers have designed for their own lenses, which might not be ideal for their autofocus gearing and motors. I have no firsthand knowledge of this process, but I suspect this is why certain third-party lenses seem to autofocus well with cameras of one brand, but not as well with another. And, at least theoretically, this would explain why a change in a manufacturer’s autofocus system could render third-party lenses obsolete, or at least require that they get a new chip, such as recently occurred with the Sigma 120-300 f/2.8 and the Nikon D3x.

As mentioned above, lens aperture can also affect phase-detection autofocus accuracy. Usually this matters little, because the smaller aperture lens will have a greater depth of field. There is a point, however, where the aperture is too small for the sensor to autofocus accurately, usually at f/5.6 or f/8. (Remember that the camera automatically opens the lens aperture to maximum during autofocus, so the aperture you’ve set the lens at doesn’t matter, it’s the maximum aperture the lens can achieve that matters.) It also is the reason that f/2.8 lenses can sometimes autofocus in more difficult conditions than lenses of lesser aperture.

Since the autofocus sensors only receive light when the mirror is down, phase detection sensors stop working when you actually take the picture, and don’t start working again until the mirror has returned to its normal position. This is why phase detection autofocus doesn’t work during Live View and may contribute to why AI Servo autofocus can lose accuracy during a series of rapid-fire shots. When you listen to how fast a D3 or 1DMkIV is shooting at maximum FPS, it’s rather amazing that any of the images are in focus, really. But our expectations are high these days.

There are other issues, of course, but most of them we don’t think about. For example, linear (not circular) polarizing filters interfere with phase detection. You don’t see linear polarizing filters much these days, but every so often someone buys one because it’s so inexpensive and then wonders why their camera isn’t autofocusing accurately. Certain patterns in an image (checkerboards and grids, for example) can also cause a phase detection system to melt down, yet are handled easily by contrast detection.

Live View

I mention Live View focusing separately because it seems to be driving the camera manufacturers back to improving contrast detection AF, and into creating hybrid AF systems. As mentioned, contrast detection does have some advantages already and overcoming its disadvantages could improve autofocusing for all of us.

As mentioned above, Olympus and Sony have systems that split the light beam, sending part to the viewfinder and part to a secondary image sensor. This system allows phase detection autofocus to remain enabled even during Live View. But it adds the possibility that Live View focusing is not absolutely accurate, since the sensor being used to focus is not the imaging sensor.

Canon has described a system that uses phase detection to focus the lens initially, then uses contrast detection to fine-tune autofocus, which could have significant advantages for still and macro work (Ishikawa et al.). Nikon has apparently applied for a patent that designates certain pixels on the image sensor for use in what appears to be a phase detection type of autofocus (Kusaka). This may provide the best of both worlds.

We shall see. But what is apparent is that, for the first time in over a decade, changes to autofocus systems may be revolutionary rather than evolutionary.

References

  • Focusing in a Flash. Scientific American, August 2000, pp. 82-83.
  • Goldberg, Norman: Camera Technology. The Dark Side of the Lens. Academic Press, 1992.
  • Ishikawa, et al.: Camera System and Lens Apparatus. United States Patent 6,603,929 B2
  • Kusaka; Yosuke: Image sensor, focus detection device, focus adjustment device and image-capturing apparatus. United States Patent application 20090167927
  • Kabza, K. J.: Evolution of the Live Preview in Digital Photography. Digital Camera Info.
  • Kingslake, Rudolf: Optics in Photography. SPIE Optical Engineering Press, 1992.
  • Predictive Autofocus Tracking Systems. Nikon.
  • Ray, Sidney: Autofocus and Focus Maintenance Methods. In: Ray, Sidney: Applied Photographic Optics, 3rd ed. Focal Press, 2004.
  • Ray, Sidney: Camera Features. In: The Manual of Photography, 9th ed. Focal Press, 2008.
  • Wareham, Lester: Canon AF System
  • Understanding Autofocus. Cambridge in Colour.
  • Wikipedia: Autofocus
  • Wikipedia: Live Preview

Author: Roger Cicala

I’m Roger and I am the founder of Lensrentals.com. Hailed as one of the optic nerds here, I enjoy shooting collimated light through 30X microscope objectives in my spare time. When I do take real pictures I like using something different: a medium format camera, a Pentax K1, or a Sony RX1R.

Posted in Technical Discussions
  • Michael Clark

    In other words, Canon was – at the time you wrote this – the ONLY company with AF sensors that used wider baselines that require f/2.8 lenses to give more accurate information than is possible with the narrower baseline provided by f/5.6 lenses. The f/5.6 systems are pointed at the “f/5.6” portions of the light passing through an f/2.8 or even f/1.4 lens instead of being able to leverage the wider rays coming from the edge of the wider aperture lenses.

  • TomDibble

    The math is very precise, but the physical world is not quite so much so. There are two phenomena likely at play: calibration, and repeatability.

    The math is perfect for getting things 100% in focus at the AF sensor. Your camera was engineered so that if something is completely in focus at the PDAF sensor it will also be completely in focus at the imaging sensor. However, small things can make that no longer true – a small amount of dirt introduced in the lens mount, the sensor mounts having been shifted slightly over time for one reason or another, etc. You’ll find that an “AFMA” (Auto-Focus Micro-Adjustment) tool like Reikan’s FoCal is very useful here, able to define optimal AF micro-adjustments for each of your lenses (and separately for “near” and “far” ends of your zoom lenses).

    The second factor at play is what happens when the AF’s computer tells the lens “focus 1 cm more towards infinity”. The little motor inside the lens needs to actually do it, and those motors are amazing feats of engineering but still not 100% perfect. So when instructed to move the focal plane by 1cm, the lens might actually move it by 0.9cm. Then it is told to move it 0.1cm, and it moves it 0.2 instead, etc. Eventually the AF computer simply says “this isn’t getting any better” and gives up. Of course, how accurate the motor is is completely dependent upon the quality of motor and mechanism in the lens, so a cheaper lens will more likely have this problem than one an order of magnitude more expensive. Bumps and knocks to the lens can also wear out internal bits and make the movements more “jerky” as a result, so a poorly-cared-for lens may also develop a lack of reproducibility.

  • John Jackson

    I realize this post is many years old, but it is so helpful in understanding the tradeoffs camera companies make to achieve focus with various platforms. It also helps you understand the limitations of any particular system. Thank you.

    What I wondered is why I achieve such erratic focus at wide apertures if the systems are so precise. My Nikon d750 will focus about half the time in a location a few cm in front or behind the focus point. It seems like if this is a purely mathematical formula, the errors should be less erratic.

    Is there any play or margin of error associated with where the camera is actually focusing? For example could objects a few mm in front or behind the focus point produce the same value as far as the camera is concerned? How accurate are these systems if you’re trying to get the front eye tack sharp in a controlled studio setting with a stationary model?

  • Jacques Driscart

    I tested a 5DMkiii with the new EF 100-400mm II lens.
    I was trying to microadjust the lens to the camera body.
    I found the optimum value but to my surprise the best Phase detection picture never reached the sharpness of the pictures adjusted by contrast on the sensor.
    Can this be explained or am I doing something wrong? I turned the stabilization off because I used a tripod. I also took all the other precautions: mirror up, 2 sec delay and flex trigger.

  • Urban Domeij

    I am very glad that you do explain the origin of “phase detection”, which has always puzzled me much. As it does not have the slightest thing to do with the phase of light waves, I couldn’t fathom why such a misleading name would be minted for the simple triangulation that has been used for centuries and was built into many cameras of the past.

    I would be even happier if “Phase Detection Autofocus” were not used at all for that system, and from some time back, I refuse to use the term for the triangulating AF systems in cameras. Ever heard of the Leica rangefinder described as “phase detecting”? They are now available also on the image chip, also in Canon cameras, and it seems as Fujifilm is the vanguard here.

    I think the system is easier to understand if properly named “triangulation” and not “phase detection”, although coupled to the latter concept it is perhaps easier to fathom why it sometimes fails on repetitive patterns.

    The triangulating system uses sensors aimed at two spots of the exit pupil of the lens. Therefore two corresponding spots at its entrance pupil is the base for triangulation, and precision of the system depends on the distance between those spots in the entrance pupil. When the “PDAF” sensels are incorporated into the image chip, there is no discrepancy in distance from the lens between them and the actual image sensels. They can be used for split-image manual focusing, which I think is already implemented in some cameras.

  • Roger Cicala

    Hi Chris,

    I don’t know the answer, I’m afraid. By f/2.8 all the most sensitive sensors are active. In theory, a wider aperture could give a better angle for the AF sensor, but I don’t know if the sensor itself is capable of taking advantage of it. I would guess that if it cost more to manufacture an AF sensor that takes advantage, the answer would be no, it isn’t worth it to the manufacturer. But if the sensor can see it anyway, then yes, it would.

    My educated guess is if the sensor did take advantage of apertures wider than f/2.8, then the marketing departments would be talking about it.

    Roger

  • Useful compilation of info, thank you.

    Sort-of a trick question (but i am seriously asking, as i’ve seen different answers): as you say, often a wider aperture lens will pd af better in difficult situations. The question is: does the advantage top out when aperture exceeds max sensitivity of the af sensor? In other words, will an f/1.4 lens potentially af better/faster than an f/2.8 lens in, say, low light? My experience says oftentimes yes, but of course since one can’t directly compare the same lens in this respect, i don’t know if that may be down to other operational advantages of the faster lens. (also, i do have some expensive 2.8 zooms which beat /some/ cheap 1.4 primes.) And some say that since light further out than f/2.8 misses the pd sensor, it adds nothing, which does seem possible.

    Thanks again for the article.

  • Shreyas Hampali

    Hi Cicala, Fantastic article!! I just had a doubt. How does the effective focal length of the lens system change during this process of autofocusing? Can you give me the approximate range over which it may vary?

  • Very nice article! well written and explained despite the subject difficulty.

  • Paul

    I seriously doubt/hope they don’t just measure currents to determine how far the motor has traveled. This would be done way better with encoders.

  • Cool, AF was previously a black box for me, thanks for explaining both concepts and its really amazing how well AF works today.

  • Great article! Good for both layman and enthusiasts! Keep it up!

  • Philippe

    Thanks for a great article. I finally understand how it works. Can’t wait for the full version.

  • nic

    It might interest you that Olympus has a design (in their micro 4/3 mirrorless systems out this year) which avoids the lack of direction prediction in classic CDAF. Apparently every few pixels in the imaging sensor have an infra-red filter, rather than the usual red, green or blue filter above them. By checking whether the image formed by just the IR photo sites is more or less in focus than the visible light image the camera knows which way to move the focus.

    I supposed the same trick could be done with the red and blue pixels, but the achromatic design of good lenses would work against you.

  • bart

    Bart, we’re working on a setup using several LensAlign pros that will let us quantitate both focus shift and field curvature and will make it part of the specs for every lens we stock (at least the wide aperture lenses), but it will be a several months project.
    Roger

    That’s great news Roger, Thanks!

  • Hello Roger,

    I just wanted to let you know of an error in the article. Sony Live-View implementation actually uses the Phase Detect autofocus system, that’s why there is a second sensor over at the pentaprism housing. Light is directed to it AFTER it has passed through the PDAF system.

    Not sure if Olympus does this as well but my guess is no.

  • Greg

    Roger,

    Great article, as always. However, you’ve gotten it wrong regarding how the live view system works on the Sony (and some of the older Olympus) DSLRs.

    First off, these systems don’t “split the light beam, sending part to the viewfinder and part to a secondary image sensor”, as you stated. Rather, they redirect the light beam to go either to the viewfinder *or* to the secondary sensor, but never to both simultaneously. So you can’t use the optical viewfinder on these cameras at the same time as you have live view for the rear LCD display activated, any more than you could do on a live view-capable Canon, Nikon or Pentax.

    Secondly, your explanation above notwithstanding, the secondary sensor on these cameras is never used for contrast-detect AF. On the contrary, the very reason for existence of these systems is to allow fast phase-detect AF during live view, in a camera possessing a TTL optical viewfinder. The phase-detect AF system in these cameras operates *exactly* the same in live view mode as it does when using the optical viewfinder. These are the only cameras that accomplish that feat.

    Speaking for the Sonys, the first generation of live-view capable DSLRs (A300/350/330/380) had no main sensor LV capability whatsoever. Focusing during live view was either phase-detect AF or manual focus (not very good MF, as you had to use the image display from the small, low-res secondary sensor). The second generation of cameras using this system (A500/550) added the option of main sensor live view, but with manual focus capability only (still no contrast-detect AF). The third and present-day generation of cameras (A560/580) retained all the previous capabilities, but added the option of contrast-detect AF during main sensor live view, as well (and also added video capability for the first time).

    Sorry to run on for so long, but I hope this clears things up a bit.

    Thanks again for a great article,

    Greg

  • Roger Cicala

    Bart, we’re working on a setup using several LensAlign pros that will let us quantitate both focus shift and field curvature and will make it part of the specs for every lens we stock (at least the wide aperture lenses), but it will be a several months project.
    Roger

  • bart

    Thanks for the great article. But now we know certain lenses perform very well with phase detect as well as contrast AF at the widest aperture. But you might have a lens that has focus shift issues at narrower apertures…. What then? Wish there was a list that documented most used lenses and their possible focus shift issues. …

  • Martin

    Any chance that D-SLRs can use their slow-but-sure contrast detection auto-focus to calibrate their phase detection auto-focus?

  • Mim

    Canon are the ONLY company with cross points requiring f2.8. All other DSLRs cross points require only f5.6

  • Wilba

    Congratulations on correctly describing the closed-loop operation of PD AF. The open-loop myth is deeply embedded (at least in Canon circles) and very hard to shift. Well done.

  • stu

    i was searching for this answer for sometime. you explained it well. will u release the full version?
