Technical Discussions

Good Vibrations: Designing a Better Stabilization Test (Part I)

Published July 30, 2013

My name’s T.J. Donegan; I’m the Editor-in-Chief of DigitalCameraInfo.com and CamcorderInfo.com (soon to be just Reviewed.com/Cameras). We recently wrote about designing our new image stabilization test for our Science and Testing blog. I showed it to Roger and he asked for the “nerd version.” He was kind enough to let us geek out about the process here, where that kind of thing is encouraged.

 

DigitalCameraInfo.com’s latest image stabilization testing rig. (In beta!)

 

Since the beginning of DigitalCameraInfo.com and CamcorderInfo.com, we’ve always tried to develop a testing methodology that is scientific in nature: repeatable, reliable, and free from bias. While we do plenty of real-world testing during every review, the bedrock of our analysis has always been objective testing.

One of the trickiest aspects of performance to test this way is image stabilization. Things like dynamic range, color accuracy, and sharpness are relatively simple to measure; light goes in, a picture comes out, and you analyze the result. When you start introducing humans, things get screwy. How do you replicate the shakiness of the human hand? How do you design a test that is both repeatable and reliable? How do you compare those results against those of other cameras and the claims of manufacturers?


Our VP of Science and Testing, Timur Senguen Ph.D., shows our new image stabilization testing rig in action.

It’s a very complex problem. The Camera & Imaging Products Association (CIPA) finally tried to tackle it last year, drafting standards for manufacturers to follow when making claims about their cameras and lenses. We’re one of the few testing sites that has taken a crack at this over the years, attempting to put stabilization systems to the test scientifically. Our last rig shook cameras in two linear dimensions (horizontally and vertically). It did what it set out to do—shake cameras—but it didn’t represent the way a human shakes. Eventually we scrapped the test, tried to learn from our experiences, and set out to design a new rig from scratch.

Shake, Shake, Shake, Senora

Our VP of Science and Testing, Timur Senguen, Ph.D. (we just call him our Chief Science Officer, because we’re a bunch of nerds), wrote an Android application that uses the linear accelerometers and gyroscope in an Android phone to track how much people actually shake when holding a camera. This lets us see exactly how much movement occurs along the six possible axes—both linear (x, y, and z) and rotational (yaw, pitch, and roll).
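If you’re curious what the post-processing involves, it’s simple enough to sketch in a few lines of Python. The column layout below (timestamp in milliseconds, then yaw, pitch, and roll rates in radians per second) is simplified for illustration rather than the app’s exact log format; the core idea is just integrating gyro rates into angle traces.

    import csv
    import math

    def angular_excursions(path):
        """Integrate gyro rate samples into per-axis angle traces (degrees)."""
        times, rates = [], []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                t_ms, yaw, pitch, roll = (float(v) for v in row[:4])
                times.append(t_ms / 1000.0)       # seconds
                rates.append((yaw, pitch, roll))  # rad/s

        angles = [0.0, 0.0, 0.0]
        traces = [[0.0], [0.0], [0.0]]  # yaw, pitch, roll in degrees
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            for axis in range(3):
                # trapezoidal integration: rate (rad/s) -> angle (rad)
                angles[axis] += 0.5 * (rates[i][axis] + rates[i - 1][axis]) * dt
                traces[axis].append(math.degrees(angles[axis]))
        return traces

    yaw, pitch, roll = angular_excursions("shake_log.csv")
    print("peak-to-peak yaw: %.4f degrees" % (max(yaw) - min(yaw)))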

 

In three-dimensional space there are six possible axes of movement. Timur tried to account for time travel, but we had to hold him back.

 

Using the program on an Android phone, we tested the shaking habits of 27 of our colleagues, using a variety of grip styles and weights from across the camera spectrum. We tested everything from the smartphone alone up to a Canon 1D X with the new 24-70mm f/2.8L attached to see how people actually perform when trying to hold a camera steady. (The 24-70mm isn’t stabilized, but it was chosen to be representative of weight and grip style.)

You can actually play along at home if you like: Timur’s .apk is available here, and you can install it on any Android phone and run it yourself. You can hold the phone like a point-and-shoot, or you can do what we did and attach the phone to the back of a DSLR to get data on how much you shake when using a proper camera and grip (to keep the weight as close as possible, remove the camera’s battery and memory card). When you’re done, you can send your results to us with the name of your camera and lens and we’ll use your data for future analysis. Bonus points if you jump in the line and rock your body in time. (Sorry, I had to.)

Roger’s Note: One of the reasons I’m very interested in this development is this: my inner physician is very aware that the type, frequency, and degree of resting tremor vary widely between people, especially with age. A larger database of individuals’ tremors might help identify why some people get more benefit out of a given IS system than others – and hopefully someday you’ll be able to use an app like this to characterize your own tremor and then look at reviews to see which stabilization system is best matched to it. Or even adjust your system’s stabilization to best match your own tremor (we all have one), much as we currently adjust microfocus. So I encourage people to download the app and upload some data.

 

Timur designed an Android application that would track the linear and rotational movement produced by the human hand when holding a camera.

 

What we found is actually quite interesting. First, people are generally exceptional at controlling for linear movement (up and down, side to side, and forward and back). We are about ten times worse at controlling for yaw and pitch (the way your head turns when you shake your head “no” or nod your head “yes”), and roughly four times worse at controlling for steering wheel-type rotation (roll), which you can at least fix with any horizon tool on your computer.

We also found that weight is not actually a huge factor in our ability to control camera shake. The difference between how we shake a 2300 gram SLR like the 1D X and a 650 gram DSLR like the Pentax K-50 is actually very minimal. It passes the smell test: When you’ve got a hand under the camera supporting it, you’re going to have a limited range of motion; when you’re holding a point-and-shoot pinched between your thumbs and index fingers, you have no support and thus shake significantly more, even though the camera weighs significantly less.

Our findings also showed that, for yaw and pitch, we typically make lots of very small movements punctuated by a few relatively large ones. When we plotted the frequency of shakes against severity, we got a very nice exponential decay curve. All of our participants produced a similar curve, with experienced photographers (and the uncaffeinated) having a slight advantage.
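If you’d like to see how such a curve falls out of the raw numbers, a standard least-squares fit does the job. Here’s a minimal Python sketch; the binned counts below are invented purely for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented data: count of shake events per angular-amplitude bin (degrees).
    amplitude = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
    counts = np.array([950.0, 610.0, 240.0, 60.0, 12.0, 3.0])

    def exp_decay(x, a, k):
        return a * np.exp(-k * x)

    (a, k), _ = curve_fit(exp_decay, amplitude, counts, p0=(1000.0, 10.0))
    print("fit: count ~ %.0f * exp(-%.1f * amplitude)" % (a, k))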

 

Our data showed that the typical person produced a lot of very small movements (the left side of the graph) with a few very large movements (the right side).

 

Once we had our sample data, it was a simple matter of building the rig. (At least Timur made it look simple. I don’t know. He built it in two days. The man’s a wizard.) His rig is designed to accommodate all six axes of movement, though based on our findings we stuck with just yaw and pitch since they’re the only significant factors. While Olympus’ 5-axis stabilization is intriguing (and likely better if you have a condition that causes linear movement, such as a tremor), the limited linear movement we subject cameras to only really makes a difference at extreme telephoto focal lengths. We then tested the rig using the same Android application that we used for our human subjects and fine-tuned the software so that the rig accurately replicated the results.
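One way to put a number on “accurately replicated”: compare the distribution of gyro readings recorded on the rig against the pooled human recordings. Here’s a rough Python sketch of that kind of sanity check; the histogram-mismatch metric is an illustration of the idea, not a description of our exact calibration procedure.

    import numpy as np

    def histogram_mismatch(human_rates, rig_rates, bins=50):
        """RMS difference between the normalized yaw-rate histograms
        (samples in deg/s); lower means the rig matches people better."""
        lo = min(np.min(human_rates), np.min(rig_rates))
        hi = max(np.max(human_rates), np.max(rig_rates))
        h, _ = np.histogram(human_rates, bins=bins, range=(lo, hi), density=True)
        r, _ = np.histogram(rig_rates, bins=bins, range=(lo, hi), density=True)
        return float(np.sqrt(np.mean((h - r) ** 2)))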

With our rig built and calibrated, we then had to design a testing methodology. We first looked at the standard drafted by CIPA last year. They confirmed our primary findings—that yaw and pitch are the main villains of camera shake—but we took issue with some of their methods.

First, they use two different shaking patterns based on weight (one for cameras under 400 grams, one for cameras over 600 grams; both are used if a camera falls in between), even though we found weight isn’t a contributing factor in camera shake. We also have two shake patterns, but one of ours is reserved just for smartphones and gripless point-and-shoots, regardless of weight.

For actual analysis, they also use a very high-contrast chart and a rather obtuse metric of their own devising, which translates as “comprehensive bokeh amount.” You can read all about it here. It’s fairly convoluted, and we ultimately decided not to go that way.

The CIPA standard does actually have quite a bit going for it, especially for something that came out of a committee. (If there’s wisdom in crowds, then committees are the uncanny valley.) It’s certainly far better than CIPA’s convoluted battery test, which calls for fun things like always using the LCD as a viewfinder, turning the power off and on every ten shots, moving through the entire zoom range whenever you turn the camera on, and basically all the things you never actually do with your camera. This is what usually happens when 27 engineers from 19 different companies try to come to a consensus.

 

When you get a lot of very smart people in a room, they tend to make puzzling decisions.

 

We primarily use Imatest for our image analysis, so we’ve settled on a methodology that closely aligns with the one outlined here, by Imatest’s creator, Norman Koren. It involves looking at the detrimental effect that camera shake has on sharpness, using the slanted edge chart that we already use for resolution testing.
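To give a flavor of how a test like this can be scored, here’s a short Python sketch. The MTF50 numbers are invented, and the “stops of stabilization” scoring shown is a common convention rather than necessarily the exact metric we’ll settle on: find the slowest shutter speed at which sharpness holds up with stabilization on versus off, then take the log-base-2 ratio.

    import math

    # Invented MTF50 readings (line widths per picture height) from the
    # slanted-edge chart at each shutter speed while the rig shakes the camera.
    mtf50_is_off = {1/500: 1150, 1/250: 1100, 1/125: 900, 1/60: 600, 1/30: 320}
    mtf50_is_on = {1/500: 1160, 1/250: 1150, 1/125: 1120, 1/60: 1050, 1/30: 840}

    def slowest_sharp_speed(mtf50, threshold=0.8):
        """Slowest shutter time whose MTF50 stays within `threshold`
        of the best reading (i.e., the shot still counts as sharp)."""
        best = max(mtf50.values())
        return max(t for t, m in mtf50.items() if m >= threshold * best)

    t_off = slowest_sharp_speed(mtf50_is_off)
    t_on = slowest_sharp_speed(mtf50_is_on)
    print("stabilization advantage: %.1f stops" % math.log2(t_on / t_off))

With these made-up numbers the stabilized camera holds sharpness down to 1/60 versus 1/250 unstabilized, about a two-stop advantage.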

We’re still in the process of beta testing our rig, but we’ve begun collecting data on cameras we have in house. We’re not yet applying these results in our scoring, but we’ll be back soon to describe some of our findings in part II.

 

T. J. Donegan

July 2013

  • BTW that’s with a lens that resolves 1100 lp/ih, and he also may want to revise the CoC measure for his DoF calculations.

  • ^ Maybe the math will be easier if you forget about pixels and consider the lens to be the limiting factor (or at least the lens-camera combination): I would look at the imatest results for line pairs over image height.

    Also, instead of thinking about how much displacement we allow before we say the image is soft (a full line? half of that? a quarter?), we can take the usual rule as “truth for beginners” for, say, a camera+lens combination that delivers 550 line pairs per image height.

    And then it’s obvious that yes, a D800 user with a super sharp lens and who’s going to look at the image at the pixel level wants to use a 1/2f rule instead of the usual 1/f.

  • Scott McMorrow

    The probability density of the angular displacement frequency data in your first plot is interesting. This curve can be plotted as a function of degrees over the shutter interval. If we take the 100-count point, for instance, the angular displacement in degrees at various shutter speeds is:

    Shutter Speed    Degrees of Deflection
    1/8000           0.000214859
    1/4000           0.000429718
    1/2000           0.000859437
    1/1000           0.001718873
    1/500            0.003437747
    1/250            0.006875494
    1/125            0.013750987
    1/60             0.02864789
    1/30             0.05729578

    We can then compute the angle subtended by one pixel pitch on the sensor of a D800 camera:

    Focal Length    1-Pixel FOV (degrees)
    24 mm           0.00992
    28 mm           0.00890
    35 mm           0.00741
    50 mm           0.00538
    85 mm           0.00325
    105 mm          0.00265
    200 mm          0.00140
    300 mm          0.00094
    400 mm          0.00071

    Assuming that the sensor and lens can resolve down to the pixel pitch level after applying the appropriate sharpening, then these curves show the limits of hand holding the camera/lens combination.

    Conventional wisdom has said that 1/f shutter speed is the limit for hand-holding. However, most D800 owners would say 1/2f is more appropriate. Well, for a 50 mm lens, if you’re trying to hand-hold and keep full 1-pixel-pitch resolution, according to these tables the shutter speed would need to be faster than 1/319, or a 1/6f limit.

    However, no lens is thus far able to resolve anywhere near the 1-pixel-pitch limit. A better measure of hand-holdability might be the Bayer sensel pitch, which is 1.414 times the pixel pitch. This would be equivalent to obtaining 18 Mpix resolution out of a 36 Mpix camera. In that case, the hand-hold limit for a 50 mm lens would be 1/226 without VR, which is still a 1/4.5f ratio.

    Of course, I’m assuming the probability density curve presented above is correct. If it is, then it’s no wonder that pixel peepers complain about the sharpness of hand-held D800 photos. What we do not know from the plot is the probability distribution of the angular displacement during the time the shutter is open, which might very well change this assessment.

    The dynamic characteristics of camera shake and mirror slap could be evaluated using Imatest resolution testing. It would be interesting to plot effective resolution vs. shutter speed for the camera on tripod with and without exposure delay, and then for hand held single shot vs. burst.

    This stuff is fascinating!
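    For anyone who wants to check the arithmetic, here’s a short Python sketch that reproduces these tables. The sensor width (35.9 mm), horizontal pixel count (7360), and the ~1.72 deg/s rate implied by the 100-count point are my assumptions.

        import math

        RATE_DEG_PER_S = 1.718873   # angular rate at the 100-count point (deg/s)
        SENSOR_WIDTH_MM = 35.9      # assumed D800 sensor width
        PIXELS_ACROSS = 7360        # assumed D800 horizontal pixel count

        def deflection_deg(shutter_s):
            """Angular displacement accumulated during one exposure."""
            return RATE_DEG_PER_S * shutter_s

        def one_pixel_fov_deg(focal_mm):
            """Angle subtended by one pixel: total horizontal FOV / pixel count."""
            fov = 2.0 * math.degrees(math.atan(SENSOR_WIDTH_MM / (2.0 * focal_mm)))
            return fov / PIXELS_ACROSS

        def handhold_limit_s(focal_mm):
            """Slowest shutter time that keeps deflection under one pixel."""
            return one_pixel_fov_deg(focal_mm) / RATE_DEG_PER_S

        print("deflection at 1/60 s: %.8f deg" % deflection_deg(1.0 / 60.0))
        for f_mm in (24, 50, 200):
            print("%3d mm: 1-px FOV %.5f deg, limit 1/%.0f s"
                  % (f_mm, one_pixel_fov_deg(f_mm), 1.0 / handhold_limit_s(f_mm)))

        BAYER_FACTOR = 1.414  # sensel-pitch criterion discussed above
        print("50 mm Bayer-criterion limit: 1/%.0f s"
              % (1.0 / (BAYER_FACTOR * handhold_limit_s(50))))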

  • Dylan Sutton

    From a purely theoretical perspective, your paragraph regarding weight (quoted at the bottom of this post) makes little sense to me. The camera doesn’t shake itself. A heavier camera by definition requires more force to move it – a lighter camera is not going to be easier to shake “even though it weighs less,” but BECAUSE it weighs less. The reason I believe you found little difference between the DSLR cameras is that the “total system mass,” when the camera is held correctly, approaches the mass of the camera plus the head, torso, and arms of the photographer.

    The grip style is IMO a confounding factor, because when held in the typical ‘shoot from the LCD’ manner, the total system off-axis inertia includes less of the photographer’s torso and none of the head.

    Furthermore, more weight further in front of the sensor (i.e., a long lens) should significantly reduce the rate of yaw and pitch, although of course the increased magnification of the image on the sensor will have the opposite effect on image sharpness. Increasing the length of the lens (up to a point) also reduces pitch errors independent of mass, as it becomes an exercise in reducing vertical displacement of the front element rather than rotation around the sensor. Your choice of a short normal zoom is not good for showing whether reality matches theory on this.

    Interesting to know whether the rig (and rig-camera interface) has enough rigidity and power to properly test the heavier camera-lens combinations.

    As an additional note, I would like to see an evaluation of whether the shake pattern (frequency, amplitude, and yaw/pitch/roll/displacement) during the exposure is different from the pattern while holding the camera between exposures.
    Many photogs (at least those who learned to shoot with no IS, on KR64, at $20 a roll, with a two-week wait to get your 36 shots back!) use techniques similar to target shooters’ to steady the camera while shooting, e.g. an altered breathing pattern: exhale, hold breath, shoot. Look also at target bow shooters – their bows actually have off-axis weights attached to improve stability.

    —————
    QUOTE
    “We also found that weight is not actually a huge factor in our ability to control camera shake. The difference between how we shake a 2300 gram SLR like the 1D X and a 650 gram DSLR like the Pentax K-50 is actually very minimal. It passes the smell test: When you’ve got a hand under the camera supporting it, you’re going to have a limited range of motion; when you’re holding a point-and-shoot pinched between your thumbs and index fingers, you have no support and thus shake significantly more, even though the camera weighs significantly less.”

  • tjdonegan

    Hey Scott,

    Awesome questions and totally valid points; thank you for the comments! First, we’re definitely seeing a lot of issues with the app running on other devices. It was written specifically for a Nexus 4, without any fragmentation work done, so I’m not sure what would be required to get it working on other devices.

    As to the data we’re collecting: the app samples every 100 ms. The data is derived directly from the gyroscope and the accelerometers, which for yaw/pitch/roll gives us the rate of rotation, not the absolute degree of rotation. Android can give you absolute-ish numbers if you use the orientation commands, but you’re right that those numbers are heavily processed using the geomagnetic field sensor and the accelerometer (which is why we don’t use them). That processing is probably why you also see a much lower sampling rate than ours, since calculations are being done rather than a straight dump of the gyroscope data. Google deprecated the orientation sensors from Android 2.2 onward for these reasons.

  • Scott McMorrow

    TJ, since I could not get Timur’s program to work, I’ve been using a different Android program, Sensor Dump, to access the sensor orientation information in my Motorola Droid Razr. Looking closely at the data I’d say there are some issues.

    First, the sample interval for the sensor dump is around 40 ms, which limits its usefulness. Looking at the azimuth, pitch, and roll data, I’ve concluded that the azimuth data seems to be accurate. It’s a fairly smooth and continuous curve, which is what it should be, since it reflects direct measurements of the local magnetic field.

    Pitch and roll look to me to be heavily processed. Reading what Android documentation I can find, pitch and roll are created by post-processing the accelerometer and magnetic field strength data. These curves have an awful lot of noise in them, and because they are based on two different sensors they most likely contain a bunch of numerical noise from differentiating data taken at different time steps and with varying measurement lag. I would not trust this data. However, Timur may be accessing the sensor data in a different way that achieves a higher sample rate or better processing.

    Given the extremely low sample rate, I would not be at all comfortable with any derived results. If I were designing this experiment, I would want to use a much higher sensor sample rate, and confirm the measurements with a different sensor.

  • Scott McMorrow

    Same problem here with the Shaker program. After the app runs, the file “testing.csv” contains only the word “testing.”

  • Possible cause of the problem: the SD card in my phone is not called sdcard0, but sdcard.

  • Alan Smithee

    Mr. Donegan, I ran the .apk on an HTC One. But, as others have noted, the program doesn’t work properly on other devices either, which rather makes me suspicious: I can’t verify that the program captures *any* data, let alone how accurate it is. The accelerometers in phones are incredibly cheap, so I wouldn’t trust them to be accurate.

    I can certainly see that the writer is not a programmer: I teach programming, and if he had submitted that to me, I would have given him a fail for such sloppy work. Next time you’re doing something like this, use a properly trained programmer rather than someone whose talents seemingly lie elsewhere. There are plenty of good programmers out there, including on sites like Elance and GetACoder, who would have done a much better job.

  • I installed the app on my HTC Desire and ran the tests, but I can’t find the testing.csv file; it’s not in the root directory of the SD card, or anywhere else. Is it sent to you directly? That would be a problem, both because I wanted to look at the numbers and because some of the tests didn’t work right (I put the camera down before the test was over, for example).

  • Jose Bueno

    Hello T.J.: Here you have the data from a Samsung Galaxy Note II. I’ll try to send two more files. Sadly (happily), I started my summer holidays today, and it will be a few days before I’m back at the institute.

    Oops! I have the same problem as Alan Smithee (July 31, 2013 at 2:24 PM): “When it ran, it created a file, but all it contained was 7 bytes of data: the word ‘testing’. Does this actually capture any data at all?”

    Any suggestions?

    Salud

    Jose Bueno

  • David

    I also downloaded the app and tested it on my old Verizon Fascinate (a Galaxy S1 model). It runs, but it saves the data under the same name to a location that I don’t think exists on my phone. When installing, it only asked for permission to write to the SD card. Is it also sending out the data? If not, how would you like to get it? It would be good if the file name could be edited so the user could tell you what camera the phone was taped to and what lens was used.

  • tjdonegan

    Thanks for the comment, QSVM. The Olympus OM-D E-M5 was honestly the camera that led us to totally redesign how we were shaking cameras. I personally reviewed that camera and found the stabilization system was great in anecdotal use, but it showed only a marginal improvement in the lab. Once we decided that the test simply wasn’t doing what we needed it to do (it was all linear, no rotational, which we now know wasn’t much use at all), we scrapped it and took the stabilization score out of the equation for all cameras and camcorders (they used the same rig). It was a tough decision, but we didn’t want to be giving faulty ratings based on a test we weren’t 100% confident in.

    Those aren’t fun calls to make, but the result has been the new test, which we feel is far better. I’m very excited to get it out of beta testing and into the regular testing rotation.

  • tjdonegan

    Hi Alan, I’m sorry to hear that. What model of phone are you using? It’s possible there’s a fragmentation problem, Android being what it is. I can assure you that the app should be collecting plenty of data, but Timur is certainly not an app developer by trade so I wouldn’t rule out a snafu.

  • Quaker Shaker Video Maker

    The Camcorderinfo site’s attention to stabilization was unique and important. Too few reviews measure stabilization performance at all, except to cite the manufacturer’s claims for the f-stop advantage it allows for still photography. In the case of hand-held video, stabilization affects quality a whole lot more than resolution, frame rate, or sensor size.

    Why, then, do fewer and fewer of the Camcorderinfo or Reviewed articles furnish an efficiency chart or score for video stabilization? Of particular interest: the relative performance of the Sony “balanced optical steadishot,” the Panasonic 5-axis stabilization, and whatever Canon offers on its HF G30 or similar models. Another bewilderment: why the Olympus OMD EM5, which was first to introduce 5-axis stabilization, got such a low rating for stabilization.

  • Awesome.

    I expected linear shifts to have no effect anyway: rotation has a much bigger effect on the framing of your shot.

    Just a thing: your eyes help keep the camera steady too. I will make my measurements while actually looking at my target, as I do when I shoot.

    I will download the app, and once I’ve found a good way to attach the phone to my camera (with all the accessories it has, the phone will have to go on top), I will test myself standing up, standing against a wall or a pole, sitting down, sitting down and leaning against a pole or a wall, and also with my MagicRig video stabilizer. At a minimum, this app should work great as a training tool!

  • Alan Smithee

    Hmm. I just tried running this apk. It is *very* poorly designed, obviously by someone who is not a programmer, because it does not obey even the most basic UI rules. When it ran, it created a file, but all it contained was 7 bytes of data: the word “testing”. Does this actually capture any data at all?

  • tjdonegan

    @someone: Thanks for the comment! With most batteries coming in at between 75 and 190 grams and sitting roughly in line with the image sensor, we felt Timur’s 130-gram phone wouldn’t throw things off too much if we also removed the battery. We also felt that positioning the phone behind the LCD kept the center of gravity very close, as we typically looked at the camera as a lever being pulled by gravity with the image sensor as the fulcrum. With the phone sitting just a millimeter or two behind the sensor, we were comfortable we were getting the most accurate results we could. We also instructed people to shoot the way they normally would, and to use the shutter button and viewfinder as though they were shooting continuously, as you would in low light.

    When testing the rig itself, we used the exact same setup that we did in our human testing. Even though by necessity the camera is being moved while attached to the tripod mount, after some fine-tuning of the rig the accelerometer and gyroscopes gave the same readings as they did when the camera was being held by our test subjects, replicating both frequency and amplitude for yaw and pitch.

    On the weight question, we made sure to include the Canon 1D X with the 24-70mm f/2.8L because that’s the heaviest camera-and-lens combination we’ve ever put through a full review. We found that the shake readings from that combo were practically identical to the ones we saw with the Pentax K-50 and 35mm macro, which weighs a little more than a quarter of what the 1D X combo does. This contradicts CIPA’s results to a degree, but we actually found that shake was much worse when using the phone by itself or with a small point-and-shoot, because of the change in grip style. I would guess that a longer telephoto like the 70-200mm f/2.8L IS II would also necessitate a change in hand position and give different results. We don’t use lenses like that when reviewing bodies, so we didn’t want to include it in our standard test pattern, but we could always cook up a custom shake pattern were we to do a one-off review of a lens like that.

  • KyleSTL

    Any chance you’ll code the app for iOS? Would love to try it myself, and supply my data for a greater n.

  • Alan Smithee

    Interesting idea. However, as an Android developer, I would *never* install an unsigned .apk like this; that breaks one of the main rules of security. Why didn’t they put this app on the Play store, or release it as open source? They also don’t give any details about what happens with the data, the anonymization policy, etc. Smells a little fishy to me…

  • +1 on Siegfried’s comment 🙂

  • fahrertuer

    How does the mounting of the camera affect the measurement?
    I mean, with a longer lens the camera is held (slightly) differently than with a shorter lens (at least that’s what I’ve noticed myself). But on your test rig you can only mount the camera by its tripod socket or, with longer lenses – and if available – by the mounting ring on the lens. Either way, you have a large, unsupported mass swinging freely that would usually be supported by a hand.

    Maybe a good, simple solution would be to mount the camera on a board and use a beanbag to simulate a second hand stabilizing the lens barrel.

  • Nqina Dlamini

    The “uncanny valley” – I loved that one.
    Like (someone) above, I also tend to shake most when I depress the shutter button.
    I unwittingly follow my downward finger press with my hands (both hands), and then come up again after completing the shutter press movement.

    I really enjoyed reading; I can’t wait for part two.
    Thank You

  • someone

    After you attached the Android device to the camera, I hope you told people to actually press the shutter button; I’m sure that’s where you get most of the yaw movement. Also, I hope on your rig you compensated for the measurements being taken at the back of the camera while you’re introducing movement at the sensor’s axis. Finally, if the mass of the camera truly is negligible, did you try adding weight to the front of the camera to preserve the center of mass, to see if that has any meaningful effect? (I.e., removing mass from the center of the camera and adding it to the rear doesn’t necessarily preserve the user’s handling characteristics, as the center of mass shifts further back even if the total mass stays the same.)

  • tjdonegan

    Thanks for your comment, Aaron. I’m excited to write up the second part. It will definitely be more on the technical side and I think you’ll get a kick out of it. There will certainly be a great deal more math, but with stabilization it’s generally easy to show the performance differences between cameras because the defects are so obvious.

  • tjdonegan

    Thanks for the kind words, Zig. We had a lot of fun coming up with the rig and overcoming some of the obstacles we ran into. I think the app is the coolest part of the process. It helps having someone like Timur around who pretty much just taught himself how to code for Android to make the app. Again, the man’s a wizard.

  • tjdonegan

    Hey Tobi, thanks for the comment. Timur made that graph for our original post, so it was meant to be a little more approachable for a layperson. Certainly an FFT would get the frequency/amplitude information across just as well, and I’ll have Timur take a look at the data to get that info.
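    For anyone who wants to try it on their own recordings, a bare-bones version might look like the following (assuming evenly spaced rate samples at the app’s 100 ms interval, which caps the usable spectrum at 5 Hz):

        import numpy as np

        def shake_spectrum(rate_samples, dt=0.1):
            """Amplitude spectrum of a gyro rate trace sampled every dt seconds."""
            x = np.asarray(rate_samples, dtype=float)
            x = x - x.mean()  # drop the DC component
            spectrum = np.abs(np.fft.rfft(x)) / len(x)
            freqs = np.fft.rfftfreq(len(x), d=dt)  # Hz, up to Nyquist (5 Hz here)
            return freqs, spectrum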

    Edit: Just to clarify, despite the tag on my responses I’m not an employee of LensRentals.com, though we’re certainly fans of the blog. At Reviewed.com we have a very strict ethics policy that we treat as gospel, more so than at any other editorial organization I’ve ever belonged to. I didn’t receive any compensation for this post, either; we just thought this was the sort of technically minded crowd we should talk to about our new test!

  • Tobi

    I don’t quite get what you mean by ‘severity of shakes’ or ‘Total Frequency (count)’. Why not just use a standard FFT and have some comparable results?

    Tobi

  • Siegfried

    That’s the first time I’ve regretted not having an iPhone (or Android phone, whatever). Great work, gentlemen!

    Zig

  • Aaron

    I can’t wait for the followups on this. As you’ll find, a lot of Roger’s readers appreciate some technical, in depth research and explanations. I personally prefer more detail, even if I don’t understand it all and skip over very heavy math parts.
