Forage-testing studies reveal
variation within and among labs
Hay growers, buyers and forage testing labs have been at odds over the accuracy and consistency of test results for years.
Last fall, two farmer groups, a forage-testing association and two university specialists decided to do something about it scientifically. They sent multiple blind alfalfa hay samples to – and received results back from – 21 forage-analysis laboratories.
The results show that less than half of the labs produced consistent results. The majority gave highly variable results or fell in a mid-range of variability, according to the forage specialists who ran the blind tests. If given letter grades, labs with consistent results would get As, mid-range labs would receive Cs and the labs with the most variable results would garner Fs, they add. Originally, the results of the lab comparisons – and lab names – were to be published by this magazine and on several Web sites. But because of miscommunications between the entities testing the labs and some of the labs involved, only general results have been made public here.
Although these results don’t let growers and buyers off the hook on sampling accurately, they show that labs also produce variability. Some labs are, in part, to blame for the problems buyers and sellers have when they use forage-test results to price the crop, the forage specialists add.
When hay buyers and sellers have the same lots of hay analyzed and test results are similar, both parties feel they’re getting what the hay is worth. But when results differ – by just a few points of relative feed value (RFV) or as many as 40 – hay sellers can lose customers or feel forced to sell at lower prices. Or buyers can lose feed value and also feel cheated.
Buyers and sellers then wonder about the validity of the samples the other party sends to forage-analysis laboratories. Or they question the accuracy of the results that one party’s lab provides to the other.
To add insult to injury, some buyers and sellers use labs that test lower or higher in order to get better deals. That raises the question of whether labs follow the same forage-analysis procedures.
At the same time, labs complain that growers and/or buyers don’t take and provide samples properly and consistently, and that this leads to the variation in test results. Growers retort that lab results aren’t consistent and vary too much within and among labs.
All of the above happen all too often, say extension forage specialists Dan Undersander, University of Wisconsin, and Bruce Anderson, University of Nebraska. They, along with the National Forage Testing Association (NFTA) and members of the National Hay Association (NHA) and Nebraska Alfalfa Marketing Association (N.A.M.A.), have grown tired of the three-way battles.
“N.A.M.A. has been investigating this problem for years because members were finding difficulties with testing samples, getting analyses and having their customers’ tests come up with different analyses,” says Anderson. “They did some sampling (in past years), sent the samples to labs, got results back, sent them (the samples) back to labs and didn’t get consistent results. And, quite frankly, the labs didn’t seem too terribly interested in adjusting. They wanted the hay growers to adjust,” he says.
NHA members were also frustrated and elected to work with Undersander and NFTA (a joint effort of NHA, the American Forage and Grassland Council and forage-testing laboratories to improve the accuracy of forage testing and build grower confidence in testing animal feeds).
So, in an NFTA-funded study, NHA and Undersander coordinated one type of blind alfalfa hay sampling test and Anderson, with N.A.M.A. funding and aid, developed a similar one. When the two forage specialists learned of each other’s efforts, they joined forces.
Results from 21 forage-analysis laboratories that received unground hay samples for testing were analyzed. Labs were chosen because growers or customers in NHA or N.A.M.A. frequently use their services, Undersander and Anderson say. All but one are currently NFTA-certified.
Because past blind-sample studies led labs to complain that samples sent were not identical and results not valid when compared to each other, the forage specialists say they worked to avoid that controversy. Each bagged one-cup samples they considered as close to identical as possible. (See accompanying story, “How ‘Identical’ Samples Were Created.”)
Samples were sent out in mid- to late 2007 and labs that Undersander tested received letters with results in January. Six of the tested labs had results that varied highly when compared to the results of other labs. One lab had consistent results in one university study and showed a mid-range in variability by the other university. Of four other labs tested by both universities, three gave consistent results and one had results that varied in the mid-range. Results were consistent for six additional labs and in the mid-range for four other labs.
Essentially, highly variable labs showed too much variation among the results of the several like samples sent to them, says Undersander. And those results varied too much from the results other labs reported for like samples, he adds. Labs with a mid-range in variability should also work to provide more consistent results, says Undersander.
“We did the normal procedure when you run a check-sample program. We decided not to do as fine a division as NFTA does” with its check sample program, Undersander says. (See “How NFTA’s Check Program Works.”)
He and Anderson looked at the analyses of each lab’s samples, calculated a standard deviation and threw out the labs with results that were outside that deviation. Those labs were considered highly variable.
Then the forage specialists calculated a mean of the remaining labs’ results and a new standard deviation. Labs with results outside of that deviation were considered in the mid-range and labs within were rated consistent.
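The two-pass procedure the specialists describe could be sketched roughly as below. The data layout (one summary RFV figure per lab) and the cutoff of one standard deviation are assumptions made for illustration; the published account gives only the general method.

```python
import statistics

def classify_labs(results):
    """Two-pass consistency rating, as described in the article.
    `results` maps a lab name to its summary RFV result across the
    blind samples (hypothetical layout for this sketch)."""
    labs = list(results)
    values = [results[lab] for lab in labs]
    mean = statistics.mean(values)
    sd = statistics.stdev(values)

    # Pass 1: labs outside the overall standard deviation are
    # rated highly variable and thrown out.
    highly_variable = [lab for lab in labs if abs(results[lab] - mean) > sd]
    remaining = [lab for lab in labs if lab not in highly_variable]

    # Pass 2: recompute the mean and standard deviation from the
    # remaining labs; outside = mid-range, within = consistent.
    values2 = [results[lab] for lab in remaining]
    mean2 = statistics.mean(values2)
    sd2 = statistics.stdev(values2)
    mid_range = [lab for lab in remaining if abs(results[lab] - mean2) > sd2]
    consistent = [lab for lab in remaining if lab not in mid_range]
    return highly_variable, mid_range, consistent
```

Note that removing the extreme labs before the second pass shrinks the standard deviation, so the second cut is tighter than the first.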
The blind check samples were unground to look as though they were bale-cored by growers or buyers. Undersander and Anderson packaged the samples, then sent them to NHA and N.A.M.A. members and customers to submit to labs. After getting results back from labs, the growers requested the samples back. Those samples, and the results, were sent on to the forage specialists to analyze and compare.
Although NFTA funded the Wisconsin study, the only NFTA board members involved in it were NHA representatives and Undersander. That was to help maintain the study’s integrity, explains Don Meyer, NFTA president and president of Rock River Laboratory, Watertown, WI.
“Some laboratories really did a poor job here and to continually protect those labs isn’t what we’re (NFTA) all about,” Meyer says. “But there’s a certain range of performance that labs can be in and you can still be confident in what they are doing. Laboratories have introduced error into this process, but we need not lose sight of the fact that sampling is probably as big an error as laboratory error, if not larger,” he adds.
For results reported by NFTA, visit www.foragetesting.org.
Most labs with highly variable results in the blind forage tests were called by this magazine just prior to publication of these stories. They questioned how the blind samples and testing were done, and many immediately called the extension forage specialist in charge of their results.
The names of those labs were not published because of miscommunications between the NFTA-University of Wisconsin partnership and NFTA-member labs. Nebraska results, given out just prior to the magazine’s publication, were not published for similar reasons.
“We’re already making some good changes and we’ve learned things that will help the whole industry,” says Dan Undersander, University of Wisconsin forage specialist, of the blind test experience.
A part of that “good” is that most of the labs with wide variation in results agreed to work with the university forage specialists and NFTA to check and/or improve their forage-testing methods and procedures.
Lab directors were also willing to have their labs blind-tested in the future and to have the results published.
Alfalfa hay samples sent to 16 forage-analysis laboratories last fall as a blind test were prepared by Dan Undersander, University of Wisconsin extension forage specialist.
Undersander received three bales of alfalfa hay from a National Hay Association (NHA) member and took samples, making them look like hay core samples. He then had the samples scanned unground by near infrared (NIR) for forage quality so he had something to compare lab results to.
The unground samples were then sent to NHA members to submit to labs they commonly use. Each lab received and tested three blind samples. Once the growers had the results, they requested the samples back from labs. Results and samples were then sent to Undersander.
Bruce Anderson, University of Nebraska extension forage specialist, gathered and submitted blind samples in much the same way with help from Nebraska Alfalfa Marketing Association members. But his samples weren’t scanned. He thoroughly mixed and subdivided samples.
“To ensure that the samples were identical, we tested our separation procedure. We took four other samples and mixed them thoroughly in the exact same manner and subdivided them in the same manner. Then we had all four analyses done in the same lab. We got very good agreement among the subsamples. The average deviation of the samples was less than 2.5 points of RFV,” he says.
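Anderson’s “average deviation” agreement check can be read as a mean absolute deviation of the subsamples’ RFV results from their average. A minimal sketch, with both the statistic and the example data assumed for illustration:

```python
import statistics

def average_rfv_deviation(rfv_values):
    """Mean absolute deviation of subsample RFV results from their
    average -- one plausible reading of the "average deviation"
    agreement check; the exact statistic used was not specified."""
    mean = statistics.mean(rfv_values)
    return statistics.mean(abs(v - mean) for v in rfv_values)

# Four hypothetical subsample RFV results from the same lab:
# if the figure comes in under 2.5 points, the mixing and
# subdividing procedure is considered to have produced
# effectively identical samples.
print(average_rfv_deviation([168, 170, 171, 169]))
```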
At least six samples were sent by N.A.M.A. members and customers to each of 10 labs that they use. Results and samples were returned to growers who sent them on to Anderson to evaluate. Anderson and Undersander then compared variability within and among labs.
Both were asked why fewer than a fourth of the 107 labs certified by NFTA were checked. “It’s simply a matter of time and finances as to how much we could do,” says Anderson.
“Blind tests like this are a lot of effort,” Undersander adds. “Even if we continue to do this, it will always be a random sampling of labs rather than all labs because of the costs. NFTA paid thousands of dollars in lab fees and spent a lot in labor to get the samples sent out to somebody from the region that a lab would normally receive samples.”
Six times a year, the National Forage Testing Association sends recognizable check samples to forage-analysis labs that want to receive or retain NFTA certification status. Currently, NFTA sends ground samples to labs, then compares the results. Its purpose is to help labs evaluate their testing procedures and, by doing so, reduce variability within and among labs, says NFTA president Don Meyer.
“But everyone is aware that it’s a check sample from NFTA and labs may treat that sample a little differently than they would a normal sample coming in,” says Meyer, also president of Rock River Lab.
Test results from each lab are graded against a narrow fixed standard deviation. Labs whose results fall within that deviation earn A grades; labs outside it can earn B or C grades and still be considered certified by NFTA. Any lab that gets a grade lower than a C is not certified. Those grades have not been made public, Meyer says, but customers can request lab grades from labs they are using.
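A grading scheme of the kind described might be sketched as follows. The one-, two- and three-deviation bands are purely illustrative assumptions; the actual NFTA cutoffs for A, B and C grades are not given here.

```python
def check_sample_grade(lab_result, reference_value, fixed_sd):
    """Hypothetical letter grade for a check-sample result:
    A within one fixed standard deviation of the reference value,
    B within two, C within three, F (not certified) beyond.
    The real NFTA cutoffs are not published in this article."""
    deviations = abs(lab_result - reference_value) / fixed_sd
    if deviations <= 1:
        return "A"
    if deviations <= 2:
        return "B"
    if deviations <= 3:
        return "C"
    return "F"
```

The point of a fixed (rather than recomputed) deviation is that every lab is graded against the same yardstick on every round of check samples.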
For a listing of NFTA-certified labs, visit: www.foragetesting.org
Reader Reaction: Forage Testing Experience
Forage testing is part of a quality-control process from end to end: hay growing, harvesting, forage tests, hay sampling, lab tests and accurate techniques, equipment and procedures.
Experience with hay testing can give a trained eye the ability to judge hay quality to within 20 points of relative feed value (RFV). Many buyers rely on color as a primary factor in hay quality. Most careful buyers and sellers also rely on forage testing to estimate hay value.
I had a buyer, who had no experience in forage testing, examine cores of hay from different stacks ranging from 100 to 200 RFV. We laid a core sample from each stack on the pickup bed. He had no trouble recognizing the differences in quality, and this was verified by my test results from earlier hay samples. The test results added confidence to the buyer’s judgment and to the seller’s basis of value.
In general, the hay seller and buyer have an obligation to verify that the forage testing is done by a certified laboratory. I interview the lab I use for testing and have had no negative experiences. I have been testing hay for approximately 20 years.
There is a very small percentage of alfalfa hay that has genetically engineered organisms from Roundup Ready alfalfa. Special harvesting and testing practices must be taken until this contamination or presence is eliminated.
Regarding lab-to-lab variations: I believe there may be a problem due to electronic accuracy, electronic hardware and standards – sensors and infrared spectrum processing. Different hardware manufacturers, system designs and electronic calibrations may cause significant differences between some labs’ test results. Other factors may be involved as well.
Chuck Noble (FAX/Phone: 425-747-7092)
South Dakota hay and seed producer