
Standard Drag Models and Ballistic Coefficients

The last post mentioned the standard ballistic drag curves. Here is a chart of them for speeds up to Mach 3:

Standard Ballistic Drag Curves for speeds from 0 to Mach 3.

This chart reveals a few quirks of the standard drag models. The first is that G1 is not very accurate in the transonic region (i.e., around Mach 1). The reality of the sound barrier is reflected in the other curves, where drag hits a steep cliff that doubles or triples the subsonic drag coefficient. The gentler slope of the G1 curve is an artifact of 19th-century ballisticians failing to adjust aggregated test data for variations in the speed of sound.

Looking at the low subsonic region we find another strange artifact: several of the curves show drag increasing as speed approaches zero. In reality, drag should be virtually constant across low subsonic speeds. I asked Jeff Siewert what is going on here, and he explained: back in the day, the test ranges reported what they observed, and at those low speeds the projectiles (especially G2) were likely encountering dynamic instability that increased their yaw. So those segments reflect an increasing aerodynamic cross-section, not the (constant) drag that would be seen in stable, nose-forward flight.

Example: .308 OTM

G7 vs 168gr .308 OTM profiles

Let’s look at a typical rifle bullet: the 168gr .308 BTHP (boat-tail hollow-point), a.k.a. open-tip match (OTM). The profile of this bullet looks very close to the G7 standard projectile, as shown in this image with scaled profiles of the two:

Manufacturers of this bullet still list a Ballistic Coefficient (BC) of 0.462 for use with the G1 drag model. A few decades ago, in an effort to improve trajectory predictions, Sierra published multiple G1 BCs for different velocity ranges: 0.462 above 2600fps, 0.447 above 2100fps, 0.424 above 1600fps, and 0.405 below that. Eventually Berger began to publish G7 BCs, and this bullet is often quoted with a G7 BC of 0.224. A decade ago Bryan Litz began publishing detailed drag models for rifle bullets. For Sierra’s version of this bullet he lists multiple G7 BCs: 0.226 above 3000fps, 0.222 at 2500fps, 0.214 at 2000fps, and 0.211 below 1500fps. Here is a chart of the drag curves resulting from each of these variations:

Drag curves for different ballistic coefficients for the Sierra 168gr .308 OTM bullet.
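To make the relationship between a BC and a drag curve concrete: the bullet’s drag coefficient is just the standard curve scaled by its form factor i = SD/BC, where SD is the sectional density in lb/in². Here is a minimal Python sketch of that arithmetic for this bullet (the single-value BCs quoted above; nothing here comes from a published drag table):

```python
# Form factor from a ballistic coefficient: Cd_bullet(Mach) = i * Cd_std(Mach),
# where i = SD / BC and SD = weight(lb) / diameter(in)^2 is sectional density.
weight_gr, diameter_in = 168, 0.308           # 168gr .308 OTM
sd = (weight_gr / 7000) / diameter_in**2      # 7000 grains per pound
for model, bc in [("G1", 0.462), ("G7", 0.224)]:
    print(f"{model}: form factor i = {sd / bc:.3f}")
# SD = 0.253 lb/in^2, so i(G7) = 1.13: this bullet shows about 13% more drag
# than the G7 standard projectile at each Mach number.
```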

Long-Range Consequences

How meaningful are these differences? I ran the trajectory for each model using a muzzle velocity of 3000fps, zeroed at 500 yards. Here are the drops on the baseline (G1) trajectory, and then the additional drop when using each of the other drag curves:

Distance   Baseline (G1)    Additional Drop vs. Baseline (inches)
(yards)    Drop (inches)    G1 Multi    G7      G7 Multi
 500           0              0.1        0.6       0.6
1000         233              8.7       10.3      16.3
1500         962             78.5       95.9     142.5

So there’s not a meaningful difference until we’re looking at ranges closer to 1,000 yards. At longer ranges, however, the vertical error from the inferior models is measured in feet!
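For readers who want to reproduce this kind of comparison, the trajectory math is just a point-mass integration of drag and gravity. Below is a minimal flat-fire sketch, assuming a BC in lb/in² and the standard retardation formulation; the Cd(Mach) samples are illustrative placeholders, not an official G7 table, so the printed drops (from the bore line, not from a zero) will only roughly resemble the numbers above:

```python
import math

# Illustrative Cd(Mach) samples shaped like a G7 curve -- placeholder values
# only; the real G7 table is much finer and differs in detail.
CD_SAMPLES = [(0.0, 0.120), (0.85, 0.120), (0.95, 0.180), (1.00, 0.380),
              (1.10, 0.400), (1.50, 0.340), (2.00, 0.300), (2.50, 0.270),
              (3.00, 0.240)]

def cd_std(mach):
    """Piecewise-linear interpolation of the standard drag coefficient."""
    if mach >= CD_SAMPLES[-1][0]:
        return CD_SAMPLES[-1][1]
    for (m0, c0), (m1, c1) in zip(CD_SAMPLES, CD_SAMPLES[1:]):
        if m0 <= mach < m1:
            return c0 + (c1 - c0) * (mach - m0) / (m1 - m0)
    return CD_SAMPLES[0][1]

def drops(bc_lb_in2, mv_fps, ranges_yd, dt=1e-4):
    """Flat-fire point-mass trajectory.  With the form factor folded into the
    BC, drag deceleration is a = pi * rho * v^2 * Cd(M) / (8 * BC), with BC
    converted from lb/in^2 to kg/m^2.  Returns bore-line drop in inches."""
    RHO, G, MACH1 = 1.225, 9.80665, 340.3     # sea-level ISA, SI units
    bc = bc_lb_in2 * 703.069                  # lb/in^2 -> kg/m^2
    vx, vy, x, y = mv_fps * 0.3048, 0.0, 0.0, 0.0
    out = []
    for target in (r * 0.9144 for r in ranges_yd):   # yards -> meters
        while x < target:
            v = math.hypot(vx, vy)
            a = math.pi * RHO * v * v * cd_std(v / MACH1) / (8 * bc)
            vx -= a * (vx / v) * dt
            vy -= (a * (vy / v) + G) * dt
            x += vx * dt
            y += vy * dt
        out.append(-y / 0.0254)               # meters -> inches of drop
    return out

# 168gr .308 with a single G7 BC of 0.224, 3000fps muzzle velocity:
print(drops(0.224, 3000, [500, 1000, 1500]))
```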

Friction and muzzle velocity variation

Minimizing muzzle velocity variance is a key to long-range shooting precision. The following chart (taken from my forthcoming book) shows the downrange effects of common sources of vertical error for a typical .30 caliber rifle shot. In this example, the change in vertical impact caused by a 1% change in muzzle velocity is negligible at shorter ranges, but beyond 1000 yards it dominates the other sources of error:

Vertical change by distance from various sources of ballistic variation, shown for a typical 168gr .30 caliber rifle shot.

What causes muzzle velocity variance, and what can we do to reduce it? One obvious source is variation in powder charge weight: within the range of safe charges, muzzle velocity increases roughly linearly with powder mass, so precision handloaders meter charge weights to hundredths of a grain. But even after eliminating any measurable variation in the weight and dimensions of cartridges, it is not uncommon to find the standard deviation of muzzle velocity running up to 1%.

What else is there? Jeff Siewert has identified variation in engraving force1 as a significant source of variation in muzzle velocity. What is engraving? Here are pictures of bullets before and after being fired through a rifled barrel:

As a bullet enters a rifled barrel it has to engage and follow the rifling so that it is spin-stabilized. Engraving refers to the raised steel rifling of the barrel cutting into the surface of the bullet, as well as the entire circumference of the bullet deforming to create a tight seal that keeps the propellant gases from blowing past the bullet. Siewert has shown that the friction involved in engraving varies enough from one shot to the next to cause the variations observed in muzzle velocity.

This engraving friction becomes even more significant in subsonic rifle rounds because (a) they are longer, which increases the surface area that must be engraved; and (b) they are fired at lower pressures and velocities, so the same variations in engraving force are a larger proportion of the total internal ballistic impulse.

Can we reduce friction?

There are two established ways of reducing friction between bullet and bore. One is to reduce the bearing surface area of the bullet that contacts the barrel. A notable example of this approach was Barnes Bullets’ addition of cannelures to its solid copper bullets, which markedly increased their precision.2 The following photo shows how these cannelures reduce the engraved surface area (as well as providing more relief for material displaced by engraving):

Barnes TSX solid copper bullet with cannelures to reduce and relieve engraved surface area

The other method of reducing friction is lubrication. Ten years ago I did extensive testing of various bullet and barrel lubricants on subsonic .30 caliber bullets. Some of these did measurably reduce friction and improve shooting precision: Rydol’s “polysonic” treatment and Dyna-Tek’s “Nano-Ceramic” coating. Impact-plating with hBN (hexagonal boron nitride) did not. This is described in detail here. I didn’t test the established MoS2 (“moly”) and WS2 (tungsten disulfide) bullet coatings, but would expect them to produce similar benefits.


1 See §Peak Engraving Pressure Variation, p.5 in November 2023 paper, “Using an Error Budget Approach to Estimate Muzzle Velocity Variation.”

2 See pp.4-6 in December 2020 paper, “What causes Dispersion.”

Universal Precision Test Fixture for Firearms

I have spent a lot of time testing the precision of common firearms. As I have been explaining for years, it takes a lot of shots to get statistically significant measurements of precision. And I wanted to be able to collect large samples without wondering whether I as a shooter was introducing variance in the results.

In order to remove the shooter from the equation, precision ballistics researchers going back as far as Franklin Mann in the late 1800s have used “V rests” to clamp barrels to benches.  I wanted to do the same, but with a wide variety of modern firearms, and without disassembling them. So ten years ago I built two “test rigs” with the following design objectives:

  1. Securely hold any gun with a scope rail;
  2. Allow the gun to be loaded, unloaded, and fired while attached;
  3. Precisely return the firearm to the same zero after every shot.

Here are photos of the first version:

With a gun clamped in the fixture, I can precisely align it with the same point on the target for every shot, and then fire it while touching only the trigger. Both versions consist of:

  1. A Lower Frame supported on the bench with three Screws to fine-tune the point of aim.  To maximize their turning precision, the inserts use class 3 threads, and the end of each screw is turned to a sharp point.  The Lower Frame is tall enough to hold any man-portable gun with a magazine inserted.
  2. Two chrome-plated 12mm linear rail Shafts clamped to the Lower Frame.
  3. Two linear ball bearing Bushings on each Shaft.
  4. Springs that fit over the Shafts to absorb recoil of the Upper Bracket.
  5. An Upper Bracket, H-shaped, that screws into the Bushings.
  6. A Scope Rail on top of the Upper Bracket, to which I fixed a scope with up to 32x magnification.
  7. In the center of the Upper Bracket are two side Clamp Plates with a V channel cut along their length for gripping the scope rail of any gun to be tested.  One of the Clamp Plates rides on pins with springs, and is pushed by a Camming Lever to quickly clamp and release the guns.

The second version of the fixture added threads on top of the Upper Bracket for attaching up to 20 pounds of weight to dampen recoil, as well as a tray in the front that held up to 40 pounds of lead to keep it from lifting off the bench when shooting .308 rifles.

Target data from David Bookstaber

Statistical Inference Example: Testing .22LR Ammunition

Competitive shooters work hard to find the ammunition that delivers the highest precision in their guns. This isn’t always straightforward: Even among premium ammunition lines any particular barrel can show a preference for one load that produces poor results in others. Using the latest statistical techniques, I plugged in some of the data I collected during this test of .22LR rifles. Shown here are the recorded groups of two types of ammunition – SK Plus and SK Match – shot through my KIDD 10/22.

Aggregated test targets shot through KIDD 10/22 at 50 yards. The SK Plus points are labelled in the order they were shot.

Match is SK’s higher-end ammunition, so our expectation going into the test is that it will produce better precision than Plus. The purpose of the analysis here is to see how well a controlled test supports that hypothesis.

I like to call this the Not So Fast! example. Remember that shooters never have enough time or ammunition, so they prefer to draw conclusions as fast as possible when running A/B tests like this. So imagine you have first fired the 25-round group of Match shown on the left. Running the calculations, you find its estimated sigma is 0.16″ at 50 yards, with a 90% confidence interval of [0.13″, 0.19″].
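For anyone who wants to replicate those calculations, here is a sketch of the standard Rayleigh estimator, assuming bivariate-normal impacts with the group center estimated from the sample. (The function name and the scipy dependency are mine, not from the linked workbook.)

```python
import numpy as np
from scipy.stats import chi2

def rayleigh_sigma(xs, ys, confidence=0.90):
    """Estimate sigma of a shot group plus a chi-square confidence interval.
    Estimating the group center from the sample costs two degrees of
    freedom, leaving 2n - 2 for sum(r_i^2) / sigma^2."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    r2 = ((xs - xs.mean())**2 + (ys - ys.mean())**2).sum()
    dof = 2 * len(xs) - 2
    alpha = 1 - confidence
    sigma_hat = np.sqrt(r2 / dof)
    ci = (np.sqrt(r2 / chi2.ppf(1 - alpha / 2, dof)),
          np.sqrt(r2 / chi2.ppf(alpha / 2, dof)))
    return sigma_hat, ci

# Feeding in the 25 Match impact coordinates (inches at 50 yards) would
# reproduce sigma = 0.16" with a 90% CI of roughly [0.13", 0.19"].
```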

Now we want to see how SK Plus stacks up. In the following table I show how the statistics evolve with each shot of Plus:

Running statistics after each shot of SK Plus, compared to the complete target data of SK Match

The first few shots aren’t very good: by the third shot of Plus we estimate it is only half as precise as Match: Sigma A/B = 0.50. (That’s the middle columns of the table: “Effect Size,” the ratio of the estimated sigmas of the two ammo types.) But three shots is a terribly small sample, which is reflected in the very wide 90% confidence interval on that estimated Effect Size: [0.33, 1.27].

So we keep shooting, and Plus starts to look better. After 10 shots our estimate of its dispersion (a.k.a. sigma, the left three columns) has gone from over 0.3″ to barely over 0.2″. But compared to our data on Match it’s still not looking good: by the 11th shot the 90% confidence interval on Effect Size no longer contains 1.0, suggesting that Plus falls short of the precision of Match. This is also reflected in the p-value (right-most column), which collapses to single-digit percentages at this point.

But wait! We planned to shoot 30 rounds, so let’s finish the test. By the time we’ve fired 20 rounds of Plus we are not as confident that it is so inferior to Match. By the end of the test our best guess is that, in this gun, Match will produce groups only 15% tighter than Plus (that’s 1/0.87). And the probability of these aggregated data if there were no difference between the two (i.e., under the null hypothesis, which is the p-value) has jumped to 33%.
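The Effect Size column and its confidence interval follow from the ratio of two such sigma estimates. Here is a sketch of the standard F-ratio machinery (my naming and organization; the spreadsheet may arrange it differently):

```python
import numpy as np
from scipy.stats import f as f_dist

def effect_size(r2_a, n_a, r2_b, n_b, confidence=0.90):
    """Ratio of estimated sigmas (A/B), an F-based confidence interval, and
    a two-sided p-value under the null hypothesis sigma_A == sigma_B.
    r2_* = sum of squared radii about each group's estimated center."""
    dof_a, dof_b = 2 * n_a - 2, 2 * n_b - 2
    F = (r2_a / dof_a) / (r2_b / dof_b)      # ~ F(dof_a, dof_b) under H0
    ratio = np.sqrt(F)
    alpha = 1 - confidence
    ci = (ratio / np.sqrt(f_dist.ppf(1 - alpha / 2, dof_a, dof_b)),
          ratio / np.sqrt(f_dist.ppf(alpha / 2, dof_a, dof_b)))
    p = 2 * min(f_dist.cdf(F, dof_a, dof_b), f_dist.sf(F, dof_a, dof_b))
    return ratio, ci, p
```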

What’s the point?

  1. This example shows that small samples can suggest things that are far from the truth. We drew the worst samples of Plus right at the start of the test.
  2. By running a more statistically significant test, we have learned that when Match is scarce or expensive, we probably don’t give up much performance by instead shooting Plus through this rifle.

Excel file with the complete data and analysis is here.

Checking Probability and Statistics via Simulation

Did I get all these formulas right?

David Bookstaber's statistical formulae for Gaussian and Rayleigh parameter estimates and confidence intervals.

I’ve published the details on my Ballistipedia GitHub. I’ll explain how Monte Carlo simulation techniques enable anyone to verify whether these are correct. But first, the motivation:

Many shooting sports require relentless pursuit of precision in guns and ammunition. Amateur sportsmen spend thousands of dollars and hours a year trying to tweak their gear for marginal improvements in accuracy. Sadly, a lot of this effort is wasted because they lack an adequate understanding of probability and statistics.

Here is a common example: A shooter is tuning a load for a rifle. This involves assembling lots of ammunition in which the powder charge is varied by fractions of a grain over a range of acceptable values, then shooting the lots and seeing if any are exceptionally precise. But powder and bullets have become increasingly expensive, and each load and shot takes time, so there is constant pressure to draw conclusions from as few samples as possible. Compound this with the reality that humans are pattern-seeking machines who are so easily fooled by randomness that superstition is a hallmark of our species. Now we have legions of shooters going through the motions of experimentation but in the end making decisions based on essentially random noise.

As part of my renewed effort to apply rigorous statistical inference to amateur ballistics, I have been compiling formulas for p-values and confidence intervals on parameter and effect-size estimates for the Gaussian, Exponential, and Rayleigh probability distributions.

When dealing with small samples there are bias-correction terms that become increasingly relevant: for example, when n=3, failing to correct for bias leads to an estimate of standard deviation that is on average 20% too low! But how many degrees of freedom are involved? For a Rayleigh parameter estimate, is it 2n, 2n-1, or 2n-2? Answer: when the samples are derived from bivariate normal coordinates and we have to estimate the center, we give up two degrees of freedom. So the Gaussian correction term, which would be calculated with 2n+1 for a pure Rayleigh sample, should be run with 2n-1, and the estimate itself is a chi-squared variate with 2n-2 degrees of freedom. How can I be certain I got that right?

This is one of the great things about fast computers and good (pseudo-) random number generators: We can actually resort to fundamental definitions and run simulations to verify statistical formulas. The definition of an x% Confidence Interval is a range of values that contains the true parameter in exactly x% of experiments. In the real world cases we care about we generally do not know the true parameter of a random variable with certainty. But when I programmatically generate a random number I know exactly what the parameter is, because I have to specify it. So with a random number generator we can simulate experiments in which we know the true parameter.

In the figure above I have formulas for confidence intervals. Are they correct? Here’s one way to check: I simulate many experiments using the random number generator, and in each experiment I use the formula to calculate a confidence interval. Then I count how often the confidence interval contains the true parameter. If my formula is correct, that observed coverage will converge to x%, and the more simulations I run, the closer it gets. (This is the Law of Large Numbers, a fundamental theorem of probability.) With modern computers I can run millions of simulations in a matter of seconds, which is enough to see these numbers converge to 4 or more significant figures.
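Condensed to its essentials, a coverage check for the Rayleigh sigma interval might look like the following sketch (my own variable names, not a copy of the notebook code):

```python
import numpy as np
from scipy.stats import chi2

def ci_coverage(n=10, sigma=1.0, trials=200_000, confidence=0.90, seed=1):
    """Monte Carlo check of confidence-interval coverage: simulate
    bivariate-normal shot groups with a KNOWN sigma, build the chi-square
    interval (2n-2 degrees of freedom, since the center is estimated)
    from each sample, and count how often it contains the true sigma."""
    rng = np.random.default_rng(seed)
    dof, alpha = 2 * n - 2, 1 - confidence
    c_lo, c_hi = chi2.ppf(alpha / 2, dof), chi2.ppf(1 - alpha / 2, dof)
    x = rng.normal(0.0, sigma, (trials, n))
    y = rng.normal(0.0, sigma, (trials, n))
    r2 = ((x - x.mean(axis=1, keepdims=True)) ** 2
        + (y - y.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    lo, hi = np.sqrt(r2 / c_hi), np.sqrt(r2 / c_lo)
    return np.mean((lo <= sigma) & (sigma <= hi))

print(ci_coverage())   # should converge to 0.90 if the formula is right
```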

My GitHub contains Jupyter notebooks where you can see and even re-run the code for these simulations. The statistics covered so far are not extraordinary, and the formulas are widely known; but the more advanced inference includes some formulas that I could not find anywhere else.

Muzzle Velocity is Normally Distributed (+Infographic!)

I’m putting together a book with a lot of statistics applied to ballistics, and trying to fill it with helpful examples since the pure math is hard to absorb. Below is one infographic I just finished. It’s an analysis of the muzzle velocities I measured shooting a full box of subsonic .22LR. The left column shows all 50 measurements, which I then sort into a histogram to illustrate how muzzle velocity is normally distributed.
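The summary statistics behind such an infographic take only a few lines in Python. Here is a sketch using hypothetical chronograph readings as stand-ins for the measured box (the numbers below are made up for illustration):

```python
import numpy as np
from scipy.stats import shapiro

# Hypothetical chronograph readings (fps) -- illustrative stand-ins, not
# the 50 measured velocities from the infographic.
v = np.array([1041, 1048, 1039, 1052, 1044, 1037, 1050, 1046,
              1043, 1049, 1040, 1045, 1047, 1042, 1051, 1038])

print(f"mean = {v.mean():.1f} fps, SD = {v.std(ddof=1):.1f} fps")
stat, p = shapiro(v)   # a large p-value is consistent with normality
print(f"Shapiro-Wilk p = {p:.2f}")
```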

Understanding Gun Precision

I’ve written a number of posts over the years in which I test the precision of various firearms. Some readers have asked about the particular methodology I use.

When testing guns for accuracy it is common practice to look at the Extreme Spread of a group of 3 or 5 test shots. I will explain why this is a statistically bad measure on a statistically weak sample. Then I will explain why serious shooters and statisticians look instead at some variation of circular error probable (CEP) when assessing precision.

It is easy to fool yourself with Extreme Spread, and it’s even easier to fool others.
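To see the problem quantitatively, here is a quick Monte Carlo sketch (my own, not from the full post) showing how noisy Extreme Spread is for small groups drawn from the very same underlying dispersion:

```python
import numpy as np

def extreme_spread(group):
    """Largest center-to-center distance between any two shots."""
    d = group[:, None, :] - group[None, :, :]
    return np.sqrt((d ** 2).sum(-1)).max()

rng = np.random.default_rng(0)
for n in (3, 5, 10):
    es = [extreme_spread(rng.normal(0, 1, (n, 2))) for _ in range(50_000)]
    print(f"{n}-shot groups: mean ES {np.mean(es):.2f}, "
          f"spread (CV) {np.std(es) / np.mean(es):.0%}")
# Even with identical sigma, individual 3- and 5-shot ES values routinely
# differ by a factor of two, which is why single small groups mislead.
```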

Buckmark .22LR Pistol Accuracy, continued

Following some review of the recent Buckmark accuracy test, some readers wondered if the suppressor was hurting precision. Others, noting that Buckmark barrels are well known for their accuracy, wondered how the stock 5.5″ barrel would fare.

Buckmark pistol with Tactical Solutions 4″ barrel

I ran the standard barrel through the same procedure as before. The aggregated data and analysis are in this Excel workbook, and the summary results with links to the 50-yard targets are here:

Ammunition            CEP Radius (MOA)   Average Velocity (fps)   Velocity SD (fps)
CCI SV                0.9                  930                     17.1
Eley Contact          1.0                  973                     17.8
Eley Club             1.0                  958                     13.5
Gemtech 42gr          1.0                  941                     13.8
Federal AutoMatch     1.2                 1053                     19.0
Federal Champion HV   2.1                 1073                     20.5

This pistol shot everything well. In fact, the best four loads tested would all be expected to hit inside a Bullseye 10-ring virtually 100% of the time.

Running the Tactical Solutions barrel without the suppressor seemed likely to improve its score as well, at least with some loads. At 4″, it gives up 40-50fps even to the 5.5″ factory barrel. But the accuracy of CCI SV went from a CEP of 2.0 MOA to 1.3 MOA, and groups of Eley Club (essentially the same performance as Eley Target) showed a CEP under 1.4 MOA. However, it did not tighten the performance of the Gemtech load. (Detailed data were added to the spreadsheet from the previous test.)

Buckmark .22LR Pistol Accuracy

How accurate is a typical rimfire pistol? Out of curiosity I mounted my Buckmark, with its 4″ Tactical Solutions barrel and 5″ AAC Element II suppressor, on my test stand and recorded shots simultaneously at 25 and 50 yards.

Buckmark pistol with suppressor in test stand

As usual, I digitized the paper targets using OnTarget TDS, and followed the statistical analysis outlined at ballisticaccuracy.com. The aggregated data and analysis are in this Excel workbook.

Ammunition            CEP Radius (MOA)   Average Velocity (fps)   Velocity SD (fps)
SK Plus               2.0                  904                     20.0
Eley Target           2.0                  952                     11.9
Gemtech 42gr          2.0                  910                     15.0
CCI SV                3.1                  914                     16.2
Federal Champion HV   4.7                 1037                     14.6

I was surprised at how much more dispersion this gun shows than the 10/22 rifles I have tested. It’s certainly nothing to brag about: the 10-ring on NRA Bullseye Pistol targets (shot competitively with autoloading rimfire pistols) typically has a radius of 3 MOA, and even with zero shooter error the good ammo here would only hit it 80% of the time.
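That 80% figure follows directly from the Rayleigh model: given a CEP (the median radius, CEP = σ·√(2 ln 2)), the probability of landing inside radius R is 1 − exp(−R²/2σ²). A quick check in Python:

```python
import math

def hit_probability(cep_moa, ring_radius_moa):
    """Probability a shot lands inside a circular ring of the given radius,
    assuming a symmetric bivariate-normal (Rayleigh-radius) impact
    distribution.  CEP is the median radius: CEP = sigma * sqrt(2 ln 2)."""
    sigma = cep_moa / math.sqrt(2 * math.log(2))
    return 1.0 - math.exp(-ring_radius_moa**2 / (2 * sigma**2))

print(hit_probability(2.0, 3.0))   # ~0.79: matches the ~80% quoted above
```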

Feddersen 10/22 Accuracy with Gemtech, CCI, Aguila, SK+

Test Ammo - Gemtech, CCI SV, SK+, Aguila

I haven’t been able to find any decisive reviews of Gemtech’s 42gr .22LR subsonic ammunition. I finally picked some up for under $4/box and decided to wring it through my Feddersen-barreled 10/22.

Since my last precision testing of 10/22 rifles, I have also refined a testing process capable of higher sample volumes, so I decided to compare the Gemtech to these other subsonic flavors presently abundant in my stockpile. (Of course, since Gemtech’s ammo is supposedly optimized for use with a suppressor, this test was conducted with an AAC Element II screwed to the muzzle of the 16″ Feddersen match barrel.)

Testing

One thing that has made precision testing much easier is this universal machine rest I developed: After every shot it returns the gun to the exact same position (which can be confirmed by the 32x scope on top), so it’s easy to shoot a string quickly and with zero shooter error.

I’ve also become a little more disciplined with respect to fouling the barrel: when shooting a clean barrel, or changing ammunition types on a fouled barrel, I ignore the first five shots. Different rimfire ammunition uses different lubricants, and it takes some number of shots before the bore is consistently coated with the new lubricant. Five shots isn’t really adequate to fully stabilize the bore. (A good bolt-action rifle will show that ten or twenty shots are required for it to settle in.) But at the level of precision one can get out of an autoloading rimfire, five shots seems “good enough.”

At 50 yards (the test distance shot here), muzzle velocity variance doesn’t really come into play. But it certainly does at 100 yards and beyond. It was easy to prop a chronograph in front of the machine rest and record the velocity of every round fired during the testing.

Analysis

Another thing that helped streamline analysis was OnTarget’s TDS software. It can’t (yet) auto-detect multiple shots on a single target, but it does auto-detect the points of aim, and it makes marking the shots and groups fast and easy.

I took advantage of the latest statistical tools available from ballisticaccuracy.com. The aggregated data and analysis are in this Excel workbook.

Summary results, here linked to TDS-marked targets, show that (in this gun) Gemtech’s ammunition is better than Aguila but worse than CCI SV:

Ammunition   CEP Radius (MOA)   Average Velocity (fps)   Velocity SD (fps)
SK Plus      0.37               1045                     14.7
CCI SV       0.50               1039                     15.2
Gemtech      0.58               1022                     14.5
Aguila       0.67               1015                     10.0