Good Vibrations: Designing a Better Stabilization Test (Part I)

Posted by

My name’s T.J. Donegan; I’m the Editor-in-Chief of DigitalCameraInfo.com and CamcorderInfo.com (soon to be just Reviewed.com/Cameras). We recently wrote about designing our new image stabilization test for our Science and Testing blog. I showed it to Roger and he asked for the “nerd version.” He was kind enough to let us geek out about the process here, where that kind of thing is encouraged.


DigitalCameraInfo.com's latest image stabilization testing rig. (In beta!)


Since the beginning of DigitalCameraInfo.com and CamcorderInfo.com, we’ve always tried to develop a testing methodology that is scientific in nature: repeatable, reliable, and free from bias. While we do plenty of real-world testing during every review, the bedrock of our analysis has always been objective testing.

One of the trickiest aspects of performance to test this way is image stabilization. Things like dynamic range, color accuracy, and sharpness are relatively simple to measure; light goes in, a picture comes out, and you analyze the result. When you start introducing humans, things get screwy. How do you replicate the shakiness of the human hand? How do you design a test that is both repeatable and reliable? How do you compare those results against those of other cameras and the claims of manufacturers?

Our VP of Science and Testing, Timur Senguen Ph.D., shows our new image stabilization testing rig in action.

It’s a very complex problem. The Camera & Imaging Products Association (CIPA) finally tried to tackle it last year, drafting up standards for the manufacturers to follow when making claims about their cameras and lenses. We’re one of the few testing sites that have taken a crack at this over the years, attempting to put stabilization systems to the test scientifically. Our last rig shook cameras in two linear dimensions (horizontally and vertically). It did what it set out to do—shake cameras—but it didn’t represent the way a human shakes. Eventually we scrapped the test, tried to learn from our experiences, and set out to design a new rig from scratch.

Shake, Shake, Shake, Senora

Our VP of Science and Testing, Timur Senguen, Ph.D. (we just call him our Chief Science Officer, because we’re a bunch of nerds), wrote an Android application that uses the linear accelerometers and gyroscope in an Android phone to track how much people actually shake when holding a camera. This allows us to see exactly how much movement occurs in the six possible dimensions—both linear (x, y, and z) and rotational (yaw, pitch, and roll).


In three-dimensional space there are six possible axes of movement. Timur tried to account for time travel, but we had to hold him back.


Using the program on an Android phone we tested the shaking habits of 27 of our colleagues, using a variety of grip styles and weights across the camera spectrum. We tested everything from the smartphone alone up to a Canon 1D X with the new 24-70mm f/2.8L attached to see how people actually perform when trying to hold a camera steady. (The 24-70mm isn’t stabilized, but this was just designed to be representative for weight and grip style.)

You can actually play along at home here if you like. Timur’s .apk is available here, which you can install on any Android phone and run yourself. You can hold the phone like a point-and-shoot, or you can do what we did and attach the phone to the back of a DSLR to get data on how much you shake when using a proper camera and grip (to keep the weight as close as possible, remove the camera’s battery and memory card). When you’re done you can send your results to us with the name of your camera and lens and we’ll use your data for future analysis. Bonus points if you jump in the line and rock your body in time. (Sorry, I had to.)

Roger's Note: One of the reasons I'm very interested in this development is this: my inner physician is very aware that the type, frequency, and degree of resting tremor vary widely in different people, especially with age. A larger database of individuals' tremors might help identify why some people get more benefit out of a given IS system than others, and hopefully someday you'll be able to use an app like this to characterize your own tremor and then look at reviews to see which stabilization system is best matched to it. Or even adjust your system's stabilization to best match your own tremor (we all have one), much like we currently adjust microfocus. So I encourage people to download the app and upload some data.


Timur designed an Android application that would track the linear and rotational movement produced by the human hand when holding a camera.


What we found is actually quite interesting. First, people are generally exceptional at controlling for linear movement (up and down, side to side, and forward and back). We are about ten times worse at controlling for yaw and pitch (the way your head turns when you shake your head “no” or nod your head “yes”). We are roughly four times worse at controlling for steering wheel-type rotation (roll), which you can fix with any horizon tool on your computer.

We also found that weight is not actually a huge factor in our ability to control camera shake. The difference between how we shake a 2300 gram SLR like the 1D X and a 650 gram DSLR like the Pentax K-50 is actually very minimal. It passes the smell test: When you’ve got a hand under the camera supporting it, you’re going to have a limited range of motion; when you’re holding a point-and-shoot pinched between your thumbs and index fingers, you have no support and thus shake significantly more, even though the camera weighs significantly less.

Our findings also showed that, for yaw and pitch, we typically make lots of very small movements punctuated by a few relatively large ones. When we plotted the frequency of shakes against severity, we got a very nice exponential decay curve. All of our participants produced a similar curve, with experienced photographers (and the uncaffeinated) having a slight advantage.


Our data showed that the typical person produced a lot of very small movements (the left side of the graph) with a few very large movements (the right side).
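That frequency-versus-severity curve is easy to play with numerically. Below is a minimal sketch (not the team's actual analysis) that builds a synthetic histogram with exponential decay and recovers the decay rate via a log-linear fit; every bin value in it is made up for illustration.

```python
import numpy as np

# Synthetic frequency-vs-severity histogram: counts of shakes at each
# severity, following a * exp(-k * severity). All values are made up;
# real data would come from the app's accelerometer/gyroscope logs.
severity = np.linspace(0.1, 5.0, 25)   # hypothetical severity bins (deg/s)
true_a, true_k = 1000.0, 1.5
rng = np.random.default_rng(42)
counts = true_a * np.exp(-true_k * severity) * rng.normal(1.0, 0.03, severity.size)

# An exponential decay is a straight line in log space, so a linear fit
# of log(counts) against severity recovers the decay rate.
slope, intercept = np.polyfit(severity, np.log(counts), 1)
print(f"decay rate ~ {-slope:.2f}, amplitude ~ {np.exp(intercept):.0f}")
```

With real app logs, the fitted decay rate would give a single-number summary of how quickly large shakes become rare for a given shooter.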


Once we had our sample data, it was a simple matter of building the rig. (At least Timur made it look simple. I don’t know. He built it in two days. The man’s a wizard.) His rig is designed to accommodate all six axes of movement, though based on our findings we stuck with just yaw and pitch since they're the only significant factors. While Olympus' 5-axis stabilization is intriguing (and likely better if you have a condition that causes linear movement, such as a tremor), the limited linear movement we subject cameras to only really makes a difference at extreme telephoto focal lengths. We then tested the rig using the same Android application that we used for our human subjects and fine-tuned the software so that the rig accurately replicated the results.
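As a sketch of what verifying that calibration could look like numerically: compare the amplitude spectra (FFT) and RMS of the human and rig gyro traces. The 10 Hz sample rate, 1.2 Hz tremor frequency, and trace amplitudes below are illustrative assumptions, not the team's figures.

```python
import numpy as np

def amplitude_spectrum(rate_samples, fs):
    """One-sided FFT amplitude spectrum of a gyro rate trace."""
    n = len(rate_samples)
    spectrum = np.abs(np.fft.rfft(rate_samples)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectrum

# Made-up traces: yaw rate (deg/s) sampled at 10 Hz from a human subject
# and from a rig attempting to replicate that subject's tremor.
fs = 10.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
human = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
rig = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)

f_h, s_h = amplitude_spectrum(human, fs)
f_r, s_r = amplitude_spectrum(rig, fs)

# Crude calibration check: dominant frequencies and RMS amplitudes match.
print("human peak:", f_h[np.argmax(s_h)], "Hz | rig peak:", f_r[np.argmax(s_r)], "Hz")
print("RMS ratio (rig/human):", np.std(rig) / np.std(human))
```

If the rig's dominant frequency or overall RMS drifted away from the human recordings, that would flag the rig software for further tuning.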

With our rig built and calibrated, we then had to design a testing methodology. We first looked at the standard drafted by CIPA last year. They confirmed our primary findings—that yaw and pitch are the main villains of camera shake—but we took issue with some of their methods.

First, they use two different shaking patterns based on weight (one for cameras under 400 grams, one for cameras over 600 grams, and both if the camera falls in between), even though we found that weight isn't a contributing factor in camera shake. We have two shake patterns as well, but one is reserved solely for smartphones and gripless point-and-shoots, regardless of weight.

For actual analysis, they also use a very high-contrast chart and a rather obtuse metric they devised, which translates as “comprehensive bokeh amount.” You can read all about it here. It’s fairly convoluted, and we ultimately decided not to go that route.

The CIPA standard does actually have quite a bit going for it, especially for something that came out of a committee. (If there’s wisdom in crowds, then committees are the uncanny valley.)  It’s certainly far better than the convoluted battery test, which calls for fun things like always using the LCD as a viewfinder, turning the power off and on every ten shots, moving through the entire zoom range whenever you turn the camera on, and basically all the things you never actually do with your camera. This is what usually happens when 27 engineers from 19 different companies try to come to a consensus.


When you get a lot of very smart people in a room, they tend to make puzzling decisions.


We primarily use Imatest for our image analysis, so we've settled on a methodology that closely aligns with the one outlined here, by Imatest's creator, Norman Koren. It involves looking at the detrimental effect that camera shake has on sharpness, using the slanted edge chart that we already use for resolution testing.

We're still in the process of beta testing our rig, but we've begun collecting data on cameras we have in house. We're not yet applying these results in our scoring, but we'll be back soon to describe some of our findings in part II.


T. J. Donegan

July, 2013

42 Responses to “Good Vibrations: Designing a Better Stabilization Test (Part I)”

Aaron said:

I can't wait for the follow-ups on this. As you'll find, a lot of Roger's readers appreciate some technical, in-depth research and explanations. I personally prefer more detail, even if I don't understand it all and skip over the very heavy math parts.

Siegfried said:

That's the first time I've regretted not having an iPhone (or Android phone, whatever). Great work, gentlemen!


Tobi said:

I don't quite get what you mean by 'severity of shakes' or 'Total Frequency (count)'. Why not just use a standard FFT and have some comparable results?


LensRentals Employee

tjdonegan said:

Hey Tobi, thanks for the comment. Timur made up that graph for our original post, so it was meant to be a little more approachable for a lay person. Certainly an FFT would get the frequency/amplitude over time information across just as well, and I'll have Timur take a look at the data to get that info.

Edit: Just to clarify, despite the tag on my responses I'm not an employee of LensRentals.com, though we're certainly fans of the blog. At Reviewed.com we have a very strict ethics policy that we treat as gospel, more so than at any other editorial organization I've ever belonged to. I didn't receive any compensation for this post, either; we just thought this was the sort of technically minded crowd we'd want to talk about our new test with!

LensRentals Employee

tjdonegan said:

Thanks for the kind words, Zig. We had a lot of fun coming up with the rig and overcoming some of the obstacles we ran into. I think the app is the coolest part of the process. It helps having someone like Timur around who pretty much just taught himself how to code for Android to make the app. Again, the man's a wizard.

LensRentals Employee

tjdonegan said:

Thanks for your comment, Aaron. I'm excited to write up the second part. It will definitely be more on the technical side and I think you'll get a kick out of it. There will certainly be a great deal more math, but with stabilization it's generally easy to show the performance differences between cameras because the defects are so obvious.

someone said:

After you attached the Android device to the camera, I hope you told people to actually press the shutter button. I'm sure that's where you get most yaw movement. Also, I hope on your rig you compensated for your measurements being taken from the back of the camera while you're introducing movement at the sensor's axis. Finally, if the mass of the camera truly is negligible, did you try adding weight to the front of the camera to preserve the center of mass to see if that has any meaningful effect? (i.e., removing mass from the center of the camera and adding it to the rear doesn't necessarily preserve the user's handling characteristics, as the center of mass is shifted further back while possibly maintaining the same total mass.)

Nqina Dlamini said:

The "Uncanny Valley" line: I loved that one.
As above (someone), I also tend to shake most when I depress the shutter button.
I unwittingly follow my downward finger press with my hands (both hands), and then come up again after completing the shutter press movement.

I really enjoyed reading; can't wait for part two.
Thank you

fahrertuer said:

How does the mounting of the camera affect the measurement?
I mean, with a longer lens the camera is held (slightly) differently than with a shorter lens (at least that's what I've noticed myself). But on your test rig you can only mount the camera by its tripod socket or, with longer lenses (and if available), by the mounting ring on the lens. Either way you have a large, unsupported mass swinging freely that is usually supported by a hand.

Maybe a good, simple solution would be to use a board to mount the camera and use a beanbag to simulate a second hand stabilizing the lens barrel

Lasse Beyer said:

+1 on Siegfried's comment :)

Alan Smithee said:

Interesting idea. However, as an Android developer, I would *never* install an unsigned .APK like this, as that is breaking one of the main rules of security. Why did they not put this app onto the Play market, or release it as open source? They also don't give any details of what happens with the data, anonymous policy, etc. Smells a little fishy to me...

KyleSTL said:

Any chance you'll code the app for iOS? Would love to try it myself, and supply my data for a greater n.

LensRentals Employee

tjdonegan said:

@someone: Thanks for the comment! With most batteries coming in between 75 and 190 grams and sitting roughly in line with the image sensor, we felt Timur's 130 gram phone wouldn't throw things off too much if we also removed the battery. We also felt that positioning the phone behind the LCD kept the center of gravity very close, as we typically looked at the camera as a lever being pulled by gravity with the image sensor as the fulcrum. With the phone sitting just a millimeter or two behind the sensor, we were comfortable we were getting the most accurate results we could. We also instructed people to shoot the way they normally would, and to use the shutter button and viewfinder as though they were shooting continuously, as you would in low light.

When testing the rig itself, we used the exact same setup that we did in our human testing. Even though by necessity the camera is being moved while attached to the tripod mount, after some fine-tuning of the rig the accelerometer and gyroscopes gave the same readings as they did when the camera was being held by our test subjects, replicating both frequency and amplitude for yaw and pitch.

On the weight thing, we made sure to include the Canon 1D X with the 24-70mm f/2.8L because that was the heaviest camera and lens combination we've ever put through a full review. We found that the shake readings from that combo were practically identical to the ones we saw with the Pentax K-50 and 35mm macro, which weighs a little more than 1/4th what the 1D X combo does. This contradicts CIPA's results to a degree, but we actually found that shake was much worse when using the phone by itself or with a small point and shoot, because of the change in grip style. I would guess that a longer telephoto like the 70-200mm f/2.8L IS II would also necessitate a change in hand position and give different results. We don't use lenses like that when reviewing bodies so we didn't want to include it in our standard test pattern, but we could always cook up a custom shake pattern were we to do a one-off review of a lens like that.

Alan Smithee said:

Hmm. I just tried running this apk. It is *very* poorly designed, obviously by someone who is not a programmer, because it does not obey even the most basic UI rules. When it ran, it created a file, but all it contained was 7 bytes of data: the word "testing". Does this actually capture any data at all?

Samuel H said:


I expected linear shifts to have no effect anyway: rotation has a much bigger effect on the framing of your shot.

Just a thing: your eyes help keep the camera steady too. I will make my measurements while actually looking at my target, as I do when I shoot.

I will download the app, and once I've found a good way to attach the phone to my camera (with all the accessories it has, the phone will have to go on top), I will test myself standing up, standing against a wall or a pole, sitting down, sitting down and leaning against a pole or a wall, and also with my MagicRig video stabilizer. At a minimum, this app should work great as a training tool!

Quaker Shaker Video Maker said:

The Camcorderinfo site's attention to stabilization was unique and important. Too few reviews measure stabilization performance at all, except to cite the manufacturer's claims for the f-stop advantage it allows for still photography. In the case of hand-held video, stabilization affects quality a whole lot more than resolution, frame rate, or sensor size.

Why, then, do fewer and fewer of the Camcorderinfo or Reviewed articles furnish an efficiency chart or score for video stabilization? Of particular interest: the relative performance of the Sony "balanced optical steadishot," the Panasonic 5-axis stabilization, and whatever Canon offers on its HF G30 or similar models. Another bewilderment: why the Olympus OMD EM5, which was first to introduce 5-axis stabilization, got such a low rating for stabilization.

LensRentals Employee

tjdonegan said:

Hi Alan, I'm sorry to hear that. What model of phone are you using? It's possible there's a fragmentation problem, Android being what it is. I can assure you that the app should be collecting plenty of data, but Timur is certainly not an app developer by trade so I wouldn't rule out a snafu.

LensRentals Employee

tjdonegan said:

Thanks for the comment, QSVM. The Olympus OM-D E-M5 was honestly the camera that led us to totally redesign how we were shaking cameras. I personally reviewed that camera and found the stabilization system was great in anecdotal use, but was coming up with only a marginal improvement in the lab. Once we decided that the test simply wasn't doing what we needed it to do (it was all linear, no rotational, which we now know wasn't much use at all), we scrapped it and took the stabilization score out of the equation for all cameras and camcorders (they used the same rig). It was a tough decision, but we didn't want to be giving faulty ratings for a test we weren't 100% confident in.

Those aren't fun calls to make, but the result has been the new test, which we feel is far better. I'm very excited to get it out of beta testing and into the regular testing rotation.

David said:

I also downloaded the app and tested it on my old Verizon Fascinate (Galaxy S1 model) phone. It runs, but saves the data under the same name to a location that I don't think exists on my phone. When installing, it only asked for permission to write to the SD card. Is it also sending out the data? If not, how would you like to receive it? It would be good if the file name could be edited so the user could tell you which camera it was taped to and which lens was used.

Jose Bueno said:

Hello T.J.: Here you have the data from a Samsung Galaxy Note II. I'll try to send two more files. Sadly (happily) I've started summer holidays today, and it will take a few days before I'm back at the institute.

Oops! I have the same problem as Alan Smithee (July 31, 2013 at 2:24 PM): "When it ran, it created a file, but all it contained was 7 bytes of data: the word “testing”. Does this actually capture any data at all?"

Any suggestions?


Jose Bueno

Samuel H said:

I installed the app on my HTC Desire and ran the tests, but I can't find the testing.csv file, it's not in the root directory of the SD card, or elsewhere. Is it sent to you directly? That would be a problem because I wanted to look at the numbers, and because some of the tests didn't work right (I dropped down the camera before the test was over, for example).

Alan Smithee said:

Mr. Donegan, I ran the apk on an HTC One. But, as others have noted, the program does not work on other devices either, so that rather makes me suspicious: I can't verify that the program captures *any* data, let alone how accurate it is. The accelerometers in phones are incredibly cheap, so I wouldn't trust them to be accurate.

I can certainly see that the writer is not a programmer: I teach programming, and if he had submitted that to me, I would have given him a fail for such sloppy work. Next time you are doing something like this, use a properly trained programmer rather than someone whose talents seemingly lie elsewhere. There are plenty of good programmers out there, including on places like Elance and GetACoder, who would have done a much better job.

Samuel H said:

possible cause of the problem: the sd card in my phone is not called sdcard0, but sdcard

Scott McMorrow said:

Same problem here with the Shaker program. After the app runs, the file "testing.csv" contains only the word "testing".

Scott McMorrow said:

TJ, since I could not get Timur's program to work, I've been using a different Android program, Sensor Dump, to access the sensor orientation information in my Motorola Droid Razr. Looking closely at the data I'd say there are some issues.

First, the sample rate for the sensor dump is around 40 ms, which limits its usefulness. Looking at the Azimuth, Pitch, and Roll data, I've concluded that the Azimuth data seems to be accurate. It's a fairly smooth and continuous curve, which is what it should be, since it reflects direct measurements of the local magnetic field.

Pitch and Roll look to me to be heavily processed. Reading a bit of the Android information I can find, pitch and roll are created by post-processing the accelerometer and magnetic field strength data. These curves have an awful lot of noise in them, and because they are based on two different sensors they most likely have a bunch of numerical noise due to differentiation of data from two sensors with different time steps and varying measurement lag. I would not trust this data. However, Timur may be accessing the sensor data in a different way to achieve a higher sample rate or better data processing.

Given the extremely low sample rate, I would not be at all comfortable with any derived results. If I were designing this experiment, I would want to use a much higher sensor sample rate, and confirm the measurements with a different sensor.

LensRentals Employee

tjdonegan said:

Hey Scott,

Awesome questions and totally valid points; thank you for the comments! First, we're definitely seeing a lot of issues with the app running on other devices. It was written specifically for a Nexus 4, without any fragmentation work done, so I'm not sure specifically what would be required to get it working on other devices.

As to the data we are collecting, the sampling interval for the app is 100 ms. The data is derived directly from the gyroscopes and the accelerometers, which for yaw/pitch/roll gives us the rate of rotation, not the absolute degree of rotation. Android can give you absolute-ish numbers if you use the orientation commands, but you're right that those numbers are heavily processed using the geomagnetic field sensor and the accelerometer (which is why we don't use them). That processing is probably why you also see a much lower sampling rate than what we are using, since calculations are being done rather than a straight dump of the gyroscope data. Google deprecated the orientation sensors from Android 2.2 onward for these reasons.
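For readers wondering how a rate-of-rotation stream turns into an angular displacement: you integrate the rate over time. A minimal sketch with a made-up yaw-rate trace and the 100 ms sampling interval mentioned above; a real pipeline would also need to correct for gyro bias drift.

```python
import numpy as np

# Gyroscope output is angular *rate* (deg/s); to estimate how far the
# camera actually rotated, integrate the rate over time. The 100 ms
# interval matches the app; the rate values below are made up.
dt = 0.1  # seconds between samples

yaw_rate = np.array([0.2, -0.1, 0.5, 1.7, 0.3, -0.4, 0.1, 0.0, -0.2, 0.6,
                     0.4, -0.3, 0.2, 0.1, -0.1, 0.3, -0.2, 0.5, 0.0, 0.1])

# Simple rectangular integration of rate samples into a yaw angle trace.
yaw_angle = np.cumsum(yaw_rate) * dt
print(f"net yaw after {len(yaw_rate) * dt:.1f} s: {yaw_angle[-1]:.3f} deg")
```

The same integration applied over just the shutter-open window gives the displacement that actually blurs a given exposure.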

Dylan Sutton said:

From a purely theoretical perspective, your paragraph regarding weight (quoted at the bottom of this post) makes little sense to me. A heavier camera by definition requires more force to move it; a lighter camera is not going to be easier to shake "even though it weighs less", but BECAUSE it weighs less. The reason I believe you found little difference between the DSLR cameras is that the "total system mass" when held correctly approaches the mass of the camera plus the head, torso, and arms of the photographer.

The grip style is IMO a confounding factor, because when held in the typical 'shoot from the LCD' manner, the total system off-axis inertia includes less of the photographer's torso and none of the head.

Furthermore, more weight further in front of the sensor (i.e. long lens) should significantly reduce the rate of Yaw and Pitch, although of course increased magnification of the image on the sensor will have the opposite effect on image sharpness. Increasing the length (up to a point) of the lens also reduces pitch errors independent of mass as it becomes an exercise in reducing vertical displacement of the front element as opposed to rotation around the sensor. Your choice of a short normal zoom is not good for showing whether reality matches theory on this.

Interesting to know whether the rig (and rig-camera interface) has enough rigidity and power to properly test the heavier camera-lens combinations.

As an additional note, I would like to see an evaluation of whether the shake pattern (frequency, amplitude and yaw/pitch/roll/displacement) during the exposure is different than when holding the camera between exposures.
Many photogs (at least those who learned to shoot with no IS, on KR64, at $20 a roll, with a 2 week wait to get your 36 shots back!) use techniques similar to target shooters' to steady the camera while shooting, e.g. through an altered breathing pattern: exhale, hold breath, shoot. Look also at target bow shooters: their bows actually have off-axis weights attached to improve stability.

"We also found that weight is not actually a huge factor in our ability to control camera shake. The difference between how we shake a 2300 gram SLR like the 1D X and a 650 gram DSLR like the Pentax K-50 is actually very minimal. It passes the smell test: When you’ve got a hand under the camera supporting it, you’re going to have a limited range of motion; when you’re holding a point-and-shoot pinched between your thumbs and index fingers, you have no support and thus shake significantly more, even though the camera weighs significantly less."

Scott McMorrow said:

The probability density of the angular displacement frequency data in your first plot is interesting. This curve can be plotted as a function of degrees over the shutter interval. If we take the 100 count point for instance, the angular displacement in degrees at various shutter speeds is:

Shutter Speed Degrees of Deflection
1/8000 0.000214859
1/4000 0.000429718
1/2000 0.000859437
1/1000 0.001718873
1/500 0.003437747
1/250 0.006875494
1/125 0.013750987
1/60 0.02864789
1/30 0.05729578
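That table is simply (angular rate) times (shutter-open time). A sketch that reproduces it, with the rate back-computed from the 1/8000 row rather than measured:

```python
# Angular displacement during the exposure = angular rate * shutter time.
# The rate is derived from the 1/8000 row of the table above
# (0.000214859 deg * 8000 ~= 1.71887 deg/s); it is an assumption read
# back from those numbers, not an independently measured value.
rate_deg_per_s = 0.000214859 * 8000

for denom in (8000, 4000, 2000, 1000, 500, 250, 125, 60, 30):
    print(f"1/{denom:<4d} -> {rate_deg_per_s / denom:.9f} deg")
```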

We can then compute the angular displacement of 1 pixel pitch across the sensor of a D800 camera.

Focal Length 1 Pixel FOV
24 mm 0.00992
28 mm 0.00890
35 mm 0.00741
50 mm 0.00538
85 mm 0.00325
105 mm 0.00265
200 mm 0.00140
300 mm 0.00094
400 mm 0.00071
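These per-pixel values appear to follow from dividing the full horizontal angle of view by the horizontal pixel count. A sketch assuming a 36 mm sensor width and 7360 horizontal pixels (D800-like); it matches the table above to within about 1%:

```python
import math

# One-pixel field of view: full horizontal angle of view divided by the
# horizontal pixel count. 36 mm width and 7360 pixels are assumptions
# consistent with a D800-class sensor.
SENSOR_W_MM = 36.0
H_PIXELS = 7360

def one_pixel_fov_deg(focal_mm):
    aov = 2 * math.degrees(math.atan(SENSOR_W_MM / (2 * focal_mm)))
    return aov / H_PIXELS

for f in (24, 28, 35, 50, 85, 105, 200, 300, 400):
    print(f"{f:>3d} mm -> {one_pixel_fov_deg(f):.5f} deg/pixel")
```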

Assuming that the sensor and lens can resolve down to the pixel pitch level after applying the appropriate sharpening, then these curves show the limits of hand holding the camera/lens combination.

Conventional wisdom has said that a 1/f shutter speed is the limit for hand holding. However, most D800 owners would say 1/2f is more appropriate. Well, for a 50 mm lens, if you're trying to hand hold and keep full 1 pixel pitch resolution, according to these tables the shutter speed would need to be faster than 1/319, or a 1/6f limit.

However, no lens is thus far able to resolve anywhere near the 1 pixel pitch limit. A better measure of hand hold ability might be the square root of the Bayer sensel pitch, which is 1.414 times pixel pitch. This would be equivalent to obtaining 18 Mpix resolution out of a 36 Mpix camera. In that case, the hand hold limit for a 50 mm lens would be 1/226 without VR, which is still a 1/4.5f ratio.
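Combining the two tables gives the hand-holding limit directly: the longest shutter time whose shake blur stays within a chosen pixel budget. A sketch under the same assumptions (the ~1.7189 deg/s rate and 36 mm / 7360 px geometry), which lands near the 1/319 and 1/226 figures above:

```python
import math

# Fastest hand-held shutter speed that keeps shake blur within a pixel
# budget. The angular rate and sensor geometry are read back from the
# comments above (assumptions, not measurements made here).
RATE_DEG_S = 1.718872
SENSOR_W_MM, H_PIXELS = 36.0, 7360

def handhold_limit(focal_mm, blur_budget_px=1.0):
    fov_px = 2 * math.degrees(math.atan(SENSOR_W_MM / (2 * focal_mm))) / H_PIXELS
    t_max = blur_budget_px * fov_px / RATE_DEG_S  # seconds shutter may stay open
    return 1.0 / t_max                            # denominator of the 1/x speed

print(f"50 mm, 1 px budget    : 1/{handhold_limit(50):.0f}")
print(f"50 mm, sqrt(2) budget : 1/{handhold_limit(50, math.sqrt(2)):.0f}")
```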

Of course, I'm assuming the probability density curve presented above is correct. If it is, then it's no wonder that pixel peepers complain about the sharpness of hand-held D800 photos. What we do not know from the above plot is the probability distribution of the angular displacement frequency during the time that the shutter is open, which might very well change this assessment.

The dynamic characteristics of camera shake and mirror slap could be evaluated using Imatest resolution testing. It would be interesting to plot effective resolution vs. shutter speed for the camera on tripod with and without exposure delay, and then for hand held single shot vs. burst.

This stuff is fascinating!

Samuel H said:

^ Maybe the math will be easier if you forget about pixels and consider the lens to be the limiting factor (or at least the lens-camera combination): I would look at the imatest results for line pairs over image height.

Also, instead of thinking how much displacement we allow before we say the image is soft (a full line? half of that? a quarter?), we can take the usual rule as "truth for beginners", so, say, a camera+lens that deliver 550 line pairs per image height.

And then it's obvious that yes, a D800 user with a super sharp lens and who's going to look at the image at the pixel level wants to use a 1/2f rule instead of the usual 1/f.

Samuel H said:

BTW that's with a lens that resolves 1100 lp/ih, and he also may want to revise the CoC measure for his DoF calculations.

Scott McMorrow said:


Yes, using a resolution metric like line pairs per image height is absolutely useful. That does, however, eventually relate back to the pixels in the sensor. A D800 with an ideal lens is capable of resolving 2456 lp/ph. The Zeiss APO-Sonnar 135mm f/2 lens on the D800 is currently topping out at 1940 lp/ph resolution. That's about 80% of the maximum resolution of the sensor.

Camera shake that causes blur from one pixel to an adjacent pixel would essentially blur those two pixels together, halving the imaging resolution. In that case, the sensor quickly dominates, and potential resolution is reduced to around 1250 lp/ph. A sqrt(2) pixel shake reduces resolution to 880 lp/ph. A 2 pixel shake reduces resolution to 625 lp/ph. Tell me the final resolution you desire, and I'll tell you how much shake can be tolerated. Throw this sort of shake on a lens with 550 lp/ph resolution, and you end up with a final image resolution of around 400 lp/ph.
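Those figures can be sketched with two assumptions: a shake of s pixels limits resolution to roughly (sensor ceiling) / (2s), and independent blur sources combine in quadrature (1/R^2 = 1/R1^2 + 1/R2^2). The ~2500 lp/ph ceiling is inferred from the numbers above rather than stated:

```python
import math

# Assumptions inferred from the comment above: shake of s pixels limits
# resolution to roughly SENSOR_LIMIT / (2 * s), and independent blur
# sources combine in quadrature. Both are approximations, not the
# commenter's exact method.
SENSOR_LIMIT = 2500.0  # lp/ph, approximate ceiling consistent with the figures

def shake_limited(shake_px):
    return SENSOR_LIMIT / (2.0 * shake_px)

def combine(r1, r2):
    # quadrature: 1/R^2 = 1/r1^2 + 1/r2^2
    return 1.0 / math.hypot(1.0 / r1, 1.0 / r2)

for s in (1.0, math.sqrt(2), 2.0):
    print(f"shake {s:.3f} px -> {shake_limited(s):.0f} lp/ph")

print(f"550 lp/ph lens + 2 px shake -> {combine(550, shake_limited(2.0)):.0f} lp/ph")
```

The quadrature combination reproduces the "around 400 lp/ph" figure for a 550 lp/ph lens with a 2 pixel shake.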


Andre said:

So, inspired by this great article and Scott M's really interesting comments, I whipped up this spreadsheet to compute the numbers that Scott came up with. It's for the horizontal dimension of the D800 sensor. At cell F13, I compute the focal-length multiplier rule of thumb, and it looks like you need a 6x-7x focal length shutter speed in order to maintain per-pixel sharpness when handheld, for the count=100 datapoint in the chart above.

I've set it to read-only, but you can comment, and you should be able to download it in various formats to play with it:



Scott McMorrow said:


That's fantastic. I made the computations with a spreadsheet, and it's good to get confirmation that I didn't mess it up! Next step for me is to perform some testing with Imatest. I actually own Imatest with a 72" test chart and have a Sigma 35mm f/1.4 Art lens coming. Now I need the time away from my consulting business to perform some test experiments.

Once I started looking at the information that the spreadsheet produces, it became clear to me why hand held photos are sometimes such a hit or miss proposition for me. The curve that was originally presented by TJ is essentially a probability density plot of the rotational rate. I picked the 100 count bin because it appears to be close to the mean of the curve. Sometimes we press the shutter button and we're to the left of the mean, where the camera rotates more; sometimes we press the button and it rotates less.

Andre said:

Scott, I've updated the spreadsheet because a friend found a bug in cells D14-22 where I used a rounding function I shouldn't have, and I made it easier to compute the numbers for different-size sensors (just input the horizontal size as well as the horizontal pixel count).

So now I'm of two minds about this computation. On the one hand, something doesn't smell right, or it's super conservative. Inputting numbers for the Sony NEX-5N, Canon EOS 40D, and iPhone 4 (2592 horizontal pixels, 4.54 mm horizontal size) yields 6x, 5x, and an incredible 17x respectively for the iPhone 4. So either the iPhone 4 will improve tremendously if shot on a tripod and tripped remotely, or something else is going on ...

On the other hand, thinking about the physics of it, this number seems like a best case. For example, we assume that only a shift of a full pixel will affect sharpness. However, if the projected feature on the image plane shifts, say, half a pixel, but that is enough to change the value of its neighboring pixel, then that could affect sharpness as well. Also, I would think the shakiness is actually a vector sum (since we need to include the direction of the shake) of everything in the PD chart above, weighted by their frequency, and so could be much worse than 0.003 rad/sec.

But then I just saw a comment from one of my favorite photo bloggers, Ming Thein, who is testing an Arca-Swiss Cube C1 and has seen differences on his D800E between the Cube and his own Manfrotto 410, so maybe we are underestimating the amount of image quality lost to camera shake ...

So I'm not sure anymore of the conclusions. It certainly needs more thought.

Aaron Macks said:

I'd love to see some distinct shake info for macro (or at least macro-ish) distances. Canon claims their new Hybrid IS on the 100mm f/2.8L does a better job at macro distances than regular IS, but I've yet to see numbers...

stigy said:

Great work!! I would be really keen to know which method of stabilisation (lens vs. in-body) works out better.

Scott McMorrow said:


I found an Imatest measurement of the iPhone 4S, which has the same image size as the iPhone 5. Resolution was about 590 lp/ph, which is equivalent to 2.7 pixels. Based on that, the minimum shutter speed would be around 1/30th to obtain the full resolution possible from the camera. Looks like a pretty solid design to me.


Scott McMorrow said:


While sitting in the sun pondering, I realized that the mistake you made was in the FL rule-of-thumb calculation. It should be calculated with respect to the 35mm-equivalent focal length. This will give you reasonable numbers to compare with full-frame cameras.

Mark Turner said:

I'm late to the party as usual, but if anyone's still listening, I wanted to thank both TJ Donegan for writing, and Roger for hosting, such a great article, and all the commenters for the thought-provoking discussion. Regarding Scott and Andre's math, what excites me is that it's within a rough order of magnitude of the updated rule of thumb, even if it isn't a precise match.

I'm not quite sure why the rotational velocity at 100 is so interesting, though. If these points represent the full data set, I'd say the median is closer to the left-most data point (to the left of 100, anyway), and plugging something closer to 0.01 rad/sec into your equations gives something closer to 1/(2xFL). At that point you might have a 50% chance of a clear shot (and if you take two shots to hedge your bets, you still only have a 75% chance of capturing a clear one, statistics being what they are). If you want better odds, your 100-count data point of 0.03 rad/sec might be a better proxy. But even that doesn't guarantee zero blur; with this statistical distribution there's always a non-zero chance of a large excursion, so take a couple of shots if you really have to have the shot. My 18Mpx body gives me a little more leeway, heh.
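[Ed. note: Mark's multi-shot odds follow directly from independent trials; a minimal sketch, assuming a fixed 50% per-frame chance of a sharp shot.]

```python
def p_at_least_one_sharp(p_sharp, n_shots):
    """Chance that at least one of n independent frames is sharp."""
    return 1 - (1 - p_sharp) ** n_shots

print(p_at_least_one_sharp(0.5, 1))  # 0.5
print(p_at_least_one_sharp(0.5, 2))  # 0.75, as noted above
print(p_at_least_one_sharp(0.5, 4))  # 0.9375
```

The probability approaches but never reaches 1, which matches the observation that no number of shots guarantees zero blur.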

Another thing I really like about this is that, even with the previously mentioned questions about the Nexus's bandwidth and sensor accuracy, your scheme subtracts any error out by re-using the Nexus as the calibration tool. That means even if the data has an offset, it doesn't matter for the purposes of building your vibration jig.

I'm looking forward to the follow-on article. Thanks again!

Andre said:

Scott, yes, you are correct, and a friend pointed that out to me, too. I've since added the FL for the native iPhone lens, and while the multiplier is huge due to the iPhone's very short FL, it comes out to a very reasonable shutter speed of about 1/70th of a second to get less than 1 pixel of shift, so it should be handholdable in many situations, especially in daylight.
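[Ed. note: both of Andre's iPhone numbers, the startling "17x" and the tame ~1/70 s, can be reproduced with the same small-angle calculation. The ~3.85 mm native focal length for the iPhone 4 is an assumption here; the pixel count and sensor width are the ones Andre quotes above.]

```python
omega = 0.03                  # rad/s, as before
px = 2592                     # iPhone 4 horizontal pixels (from above)
sensor_mm = 4.54              # iPhone 4 horizontal size, mm (from above)
pitch_mm = sensor_mm / px     # ~0.00175 mm per pixel
native_fl_mm = 3.85           # iPhone 4 lens, approximate (assumed)

t_max = pitch_mm / (omega * native_fl_mm)   # shutter for <1 px shift
print(f"max shutter ~1/{1/t_max:.0f} s")    # ~1/66, i.e. "1/70ish"

# Quoting that as a 1/(k * FL) multiplier against the NATIVE focal
# length inflates k, which is where the "incredible 17x" came from:
k_native = 1 / (t_max * native_fl_mm)
print(f"k vs native FL: {k_native:.0f}x")
```

So the 17x was never wrong as arithmetic; it was just a multiplier expressed against a 3.85 mm lens rather than its 35mm equivalent, exactly as Scott diagnosed.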

Ilya Zakharevich said:

I’m sorry to say that what you are discussing here looks like it is completely bogus. There are several independent indications which raise the output of my bogosity detector:

A) You say "We are about ten times worse at controlling for yaw and pitch" (compared to linear movement).

Saying that measurements in incomparable units (different dimensions) differ by a factor of ten is not a very good sign…

B) “With the phone sitting just a millimeter or two behind the sensor, we were comfortable we were getting the most accurate results we could…”

In the cameras I have seen, the sensor sits deep inside the body. There is no way to get the center of mass of the phone close to the sensor plane.

C) Your video shows a very slow movement of your rig (a few Hz, I would say).

Basically, this means that the data you are going to collect is not relevant to hand-held cameras. How could this happen? I think the reason is the following:

D) “sampling rate for the app is 100ms”.

Bingo! Your cut-off frequency is 5Hz, and you do not collect ANY data on what happens at higher frequencies. Let me recall that the first generation of image stabilizers achieved about 8x damping of shake, concentrated mostly in the 8Hz-80Hz band, which led to 1.5-2.5-stop improvements. (The damping-vs-frequency curve for the 7D was published; I do not have a reference at hand. Hmm, this may be confusing: BTW, it is Minolta, not Canon, if that is not clear from what I wrote ;-].)

This means that the overwhelming majority of the contributing shake is at frequencies well above 5Hz. So your measurement-and-actuator loop throws away all the relevant information, and therefore cannot capture anything relevant to "real life" performance.
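[Ed. note: Ilya's sampling argument in two lines. A 100 ms sample period gives a 10 Hz sample rate, and the Nyquist criterion caps the observable band at half that; the 8-80 Hz band he cites falls entirely outside it. A minimal sketch:]

```python
sample_interval_s = 0.100        # the app's reported 100 ms period
fs = 1 / sample_interval_s       # 10 Hz sample rate
nyquist = fs / 2                 # 5 Hz: highest frequency resolvable

shake_band = (8.0, 80.0)         # band where early stabilizers worked, per Ilya
observable = [f for f in shake_band if f <= nyquist]
print(f"Nyquist limit: {nyquist:.0f} Hz; observable edges of {shake_band}: {observable}")
```

Any shake energy above 5 Hz is not merely missed; it aliases down into the measured band, corrupting the low-frequency data as well.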


Ron Miller said:

Hmmm 1/70th shutter speed is pretty nice. Looking forward to more info and feedback from app users.
