Arthropod Management Tests, Volume 1 (1): 6 – Jan 1, 1976


- Publisher: Oxford University Press
- Copyright: © The Entomological Society of America
- eISSN: 2155-9856
- DOI: 10.1093/iat/1.1.4

THE ROLE OF STATISTICS IN IMPROVING INSECTICIDE AND ACARICIDE TESTS THROUGH PLANNING, DATA ANALYSIS AND INTERPRETATION

Larry A. Nelson

Statisticians are asked for advice in the planning, analysis, and interpretation of experiments. Perhaps planning is the most important and yet most neglected phase. Too little time and effort are devoted to planning experiments, which are often expensive and long in duration. The researcher should sit down with a statistician and work out the design details of an experiment when it is first being planned, because the analysis and interpretation depend upon the design, layout, and method of conducting the experiment. The statistician often makes his most valuable contribution by asking questions which will cause the researcher to reexamine all aspects of the problem, including his reasons for conducting the experiment. All plans can then be tailored to the particular experimental setting. This should result in a well-designed, precise experiment. Analysis and interpretation of the resulting data should be straightforward, statistical tests of significance should be valid, and the conclusions drawn should focus directly upon the problem being studied.

PLANNING

Reason for Planning Experiments — Planning is designed to assure that the treatments selected for the experiment will most effectively provide the comparisons, estimates, and/or hypothesis tests which are of interest. Planning also increases the prospects of having the correct size of experiment for the particular situation. Under-replication often results in failure to detect differences which exist; over-replication can be costly. Planning involves choice of a randomization scheme which will assure both unbiased estimates and the validity of statistical tests, and will contain provisions for controlling error variation.

Steps in Planning — The scientific method suggests that planning of research be done in steps.
These include (1) defining the objectives, (2) formulating a set of detailed specifications for the test, and (3) determining specifically the method to be used in analysis of the data.

Step I: Importance of a Clear Definition of the Objectives. The first step in planning a test is to define the objectives. Objectives may be in the form of questions to be answered, hypotheses to be tested, specifications to be met, or effects to be estimated. Many researchers try to cover too much ground and thus have objectives which are too broad and diffuse. On the other hand, there are cases where the objectives are so limited that several experiments could be combined into a single test. The statement of objectives should be clear, concise, and specific. It should include information on the extent of the population over which the generalizations are to be made (e.g., one farm, 6 counties, poorly drained soils, etc.). The results of the research will usually be based on a sample (one or several experiments), and this sample is of value only if it furnishes information about the population to which the conclusions are to be applied.

Step II: Detailed Specifications for the Test. The detailed specifications of an experiment should be formalized in writing. The randomization plans may be shown in a diagram (drawn to scale). Some of the more important specifications are (1) sites, (2) treatments, (3) experimental material, (4) experimental design, (5) replication, (6) design and randomization in series of tests, (7) technique, (8) characters to be measured, and (9) supplementary variables.

Sites. The sites are drawn to represent the area about which inferences are to be drawn. Final selection of the sites to be used is made after careful screening of a group of prospective sites. Often these are deliberately selected to represent certain environmental characteristics; on other occasions, the sites may be selected in a random manner.

Treatments.
Selection of treatments involves consideration of the following: (a) the number of treatments to be used, (b) the function of the treatments in relation to the purpose of the experiment, (c) whether treatments are to be factorially arranged, (d) the definition of the levels of factors, and (e) whether an untreated check is needed.

The number of treatments (and replications) must be worked out carefully, subject to the desired degree of precision and the amount of experimental material available. Field experiments which have more than 15 treatments are apt to be imprecise because of the large areas involved.

The purpose of an experiment may be to compare chemicals and "spot the winner". Often the applications must be rate-specific to the chemicals; the treatments then consist of package combinations of chemical and rate, and the chemical-rate which performs best is then recommended. Other purposes of experiments may be to estimate the effects of treatments, determine the slope of response, explore a response surface, or find an inflection point.

Treatments may be arranged in a manner which provides clues as to why they behave as they do. Factorial experiments, in which treatments are combinations of levels of two or more factors, are sometimes useful for this purpose. An example is a 3 x 2 factorial in which there are six treatment combinations of three levels of insecticide and two methods of application. One benefit of the factorial arrangement is the possibility of estimating the interaction of factors. Another benefit is added precision due to "hidden replication" from one factor when studying the other factor(s). Response surfaces showing response relationships may be fitted to data from factorial experiments (quantitative variables). These facilitate estimation of optimal rates of the input factors. An example is a growth chamber study in which four rates of insecticide are varied with five rates of herbicide and the cotton growth response is measured.
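As a minimal sketch (not from the article), the full treatment set of a factorial such as the growth chamber study above, four rates of insecticide crossed with five rates of herbicide, can be enumerated directly; the rate values below are hypothetical.

```python
from itertools import product

# Hypothetical rate levels for a 4 x 5 factorial: every combination of
# one insecticide rate with one herbicide rate is a treatment.
insecticide_rates = [0.0, 0.5, 1.0, 2.0]        # kg/ha, made-up levels
herbicide_rates = [0.0, 0.25, 0.5, 1.0, 2.0]    # kg/ha, made-up levels

treatments = list(product(insecticide_rates, herbicide_rates))
print(len(treatments))  # 20 treatment combinations
```

Listing the combinations this way makes the "hidden replication" visible: each insecticide rate appears in five treatments, one per herbicide rate.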
The purpose of such a study is to examine the interaction of the two variables and draw isoboles for the response.

With many factors at several levels, the number of treatments is large. Consequently, incomplete factorials are used in some cases. Care should be taken in the selection of treatment combinations for incomplete factorials to assure that those comparisons which are of interest will be valid. Another approach to reducing the number of treatments is to use experiments with some treatments factorially arranged and others which are not part of a factorial. For example, the treatments might be: Rate A-Method 1, Rate A-Method 2, Rate B-Method 1, Rate B-Method 2, Rate C-Method 1, and Rate D-Method 2. The first four treatments are factorially arranged; the last two are not.

Whether factorial or non-factorial, the rates of quantitative variables and the specific classes of qualitative variables comprising the treatments must be specified. A decision also must be made as to the spacing(s) between levels (e.g., equal spacings of 20 kg/ha or variable spacings) and the correct total range in rates necessary to bracket the region of response (e.g., 0 to 80 kg/ha).

Controls are often needed as a basis of comparison, especially if it is not known whether the other treatments will be effective. In factorial experiments in which none of the treatment combinations is an untreated check, this control can be randomized in with the factorial treatments. There are very few situations which call for a separate control for each treatment. Controls should not be included if it is known that the control produces no results or if all insects die without the presence of the material being tested.

(Author's footnote: Professor of Statistics, North Carolina State University, Raleigh, North Carolina 27607.)

Experimental material. Treatments are applied to experimental units which collectively are called the experimental material.
Examples of experimental units are plots of land, trees, vials of insects, and pots of soil in a greenhouse. Selection of the experimental units for inclusion in the experiment depends upon the purpose of the experiment. If the experimental units (plots) are to be used as a medium on which treatments are to be compared, these units should be selected to be as homogeneous as possible. Homogeneous means that factors which might influence response (plant population, soil fertility and moisture, size of plant, etc.) are nearly constant. On the other hand, if the object is to evaluate some property of the experimental units themselves (e.g., average weights of two different insect strains), less selection and more randomness is needed in the choice of units. It should be understood that selection may narrow the population to which the results apply.

In the field, the units are plots of land. Decisions must be made concerning the size and shape of plots as well as the need for border rows, etc. References which deal with these aspects of field plot technique are (7, 8, and 12). Smith (9) reported some quantitative techniques for estimating optimal plot size from uniformity trial and cost data. He reported that within the range of 1/4 to 4 times the optimal plot size, the efficiency of the experiment was not greatly different. Local conditions often call for modifications of the optimal size estimated by his procedure. In particular, the mechanical restrictions imposed by the equipment and techniques used in the experimentation often dictate the size and shape of plot. In general, it may be said that plots in most field experiments should be long and narrow, with the elongation in the direction of the gradient. Plots with this configuration provide minimum error and yet are convenient to handle with row crop equipment.
Usually experience with a particular crop and/or pest species in one locality enables a researcher to determine the plot size and shape which are convenient to use and yet produce the precision required. The purpose of the experiment is very important: breeding experiments may require completely different plot configurations than experiments involving comparisons of the effects of chemical treatments.

Border areas (or guard rows) are used in cases where the treatment imposed upon one plot is expected to influence the adjacent plot. A correct judgment of the degree of this influence is important; otherwise an excessive portion of the experimental area may be devoted to border area. It is not uncommon to use as much as 50 to 60 percent of the experimental area for border area.

Obtaining and analyzing the data by individual sample (e.g., plant, section of row, etc.) instead of by entire plot allows the estimation of the relative sizes of experimental and sampling errors. Hence information is available for improving techniques in future experiments. Systematic sampling (e.g., every tenth plant in the row, or three-foot segments of the row spaced equal distances apart) should produce the representative samples needed. Intervals in systematic sampling must be carefully chosen if clustering is present (e.g., sampling of certain gregarious insects). Changes in population levels with time should be given consideration when sampling insects within plots. This implies multiple sampling during the crop growing season. Sampling strategy throughout the season will depend on (1) stage of insect, (2) mobility, (3) reproductive potential, and (4) weather parameters. A discussion of various aspects of plot sampling in rice experiments is given in (7). It is advisable to take a constant number of samples per plot to maintain balance, which is very important in the analysis of variance of the data. More will be said about the importance of balance under Experimental designs.
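The "every tenth plant" systematic sampling scheme mentioned above can be sketched as follows; this is an illustration, not the article's procedure, and the plant positions are made up.

```python
import random

# Systematic sampling within a plot: pick a random start within the
# first sampling interval, then take every tenth plant thereafter.
plants = list(range(1, 101))     # plant positions 1..100 along a row
start = random.randrange(10)     # random offset within the first interval
sample = plants[start::10]       # every tenth plant from that offset
print(len(sample))               # 10 plants sampled
```

The random starting offset keeps the systematic sample unbiased on average, while the fixed interval spreads the sample evenly along the row; a fixed interval is exactly what must be reconsidered when clustering (e.g., gregarious insects) is present.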
If the experimental or sampling unit is a group of insects, the behavior of the target test species (hopper, flyer, jumper, etc.) should be studied to determine an optimal unit size. In this case, the traditional plot size and shape concepts would not necessarily apply.

Experimental designs. There are several experimental designs which are used extensively for experimentation in many fields of research. The choice of an appropriate design for a specific situation is based on a number of considerations, but one general rule is that the plan should be kept as simple as possible. There is a tremendous advantage in analysis and interpretation if the design is balanced. By balance is meant that all treatment combinations have the same number of observations (usually replications). Without this equal replication, considerable difficulty can be encountered in analysis, and the resulting estimates can also be very poor. There is no imbalance problem in the simple designs to be discussed below if they are used as described in the experimental design textbooks. However, if the plots of the experiment are sampled and the number of samples varies from plot to plot, the balance will be lost and problems will arise. Missing data due to environmental causes, mortality, etc. can also cause imbalance.

Given information on the variability pattern of the experimental material, the number and nature of treatments, and the experimental techniques employed, one particular design will usually emerge as the logical choice. The commonly used designs differ mainly in the way in which treatments are randomly assigned to the experimental units. By randomization is meant the process of assigning treatments to plots in such a way that all treatments have an equal chance of being assigned to a particular plot. Randomization provides assurance that a treatment will not continually be favored or handicapped in various replications by some extraneous source of variation, known or unknown.
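A minimal sketch (not from the article) of the randomization just described: each treatment label appears once per intended replication, and a random shuffle assigns the labels to plots so every treatment has an equal chance of landing on any particular plot. The treatment names are hypothetical.

```python
import random

def randomize(treatments, reps_per_treatment, seed=None):
    """Randomly assign treatments to a run of plots."""
    rng = random.Random(seed)
    plots = treatments * reps_per_treatment   # one entry per plot
    rng.shuffle(plots)                        # equal chance for every plot
    return plots

# Four hypothetical treatments, three replications each -> 12 plots.
assignment = randomize(["A", "B", "C", "D"], reps_per_treatment=3, seed=11)
```

A table of random numbers serves the same purpose as `rng.shuffle` here; the essential point is that no treatment is systematically favored by its position.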
Randomization, like an insurance policy, protects against disturbances that may or may not occur. Tables of random numbers or computers may be used as sources of random numbers.

The commonly used experimental designs allow one to utilize knowledge of the experimental site to provide for control of known extraneous sources of variation by blocking. The basic difference between these designs is the number of restrictions on randomization, the restrictions being necessary because of the blocking.

With the Randomized Complete Blocks Design (RCB), which is by far the most commonly used design, one restriction is placed on randomization: a complete set of treatments is randomized within each block. The object is to block in such a way that, although the blocks may differ from one another considerably, the units within each block are relatively uniform. Environmental sources of variation such as moisture and the natural fertility of the soil often are a basis for blocking. In other cases, each run of a set of operations is a block, the runs being done at different times. Nearly square, compact blocks are preferred to long, narrow blocks in the field. In addition to the precision attributable to its blocking, the RCB is a simple design. By simplicity is meant that treatments are readily assigned to the experimental plots at random and that the field layout and analysis of data are simple. Furthermore, missing plot values may be readily estimated when using this design. It accommodates a wide range of numbers of treatments and replications, although practical considerations place limits on both.

The Latin Square Design (LS) provides error control (blocking) in two directions. It is useful for high-precision experiments having variation due to extraneous factors in two directions. Usually it is used for experiments having from two to 10 treatments because of the requirement that the number of replications must equal the number of treatments.
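One simple way to obtain a Latin square layout, sketched below under stated assumptions rather than as the article's procedure, is to start from a cyclic square and then shuffle its rows and columns; the result still contains every treatment exactly once in each row and each column. (Full LS randomization would also permute the treatment labels; that step is omitted here for brevity.)

```python
import random

def latin_square(treatments, seed=None):
    """Build a randomized Latin square from a cyclic base square."""
    rng = random.Random(seed)
    n = len(treatments)
    # Cyclic base: row i is the treatment list rotated by i positions.
    square = [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]
    rng.shuffle(square)                  # shuffle rows
    cols = list(range(n))
    rng.shuffle(cols)                    # shuffle columns
    return [[row[c] for c in cols] for row in square]

sq = latin_square(["A", "B", "C", "D"], seed=3)
```

Row and column shuffles preserve the Latin property because they only reorder complete rows and complete columns, never the cells within them.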
For the smaller numbers of treatments, e.g., two and three, several squares are usually needed to provide a reliable estimate of experimental error. The LS Design is not as simple to randomize as the RCB because it has two restrictions on randomization (each row and each column must contain a complete set of treatments), and the field layout can be more complex. Also, missing data pose more of a problem than in the case of the RCB Design.

The Split-Plot Design (S-P) is used in cases (1) where the nature of the experimental material or the mechanical aspects of the research require different plot sizes for the various factors, or (2) where more precision is desired on the test of certain effects (a sub-plot factor such as acaricide, and the interaction) than on others (a whole-plot factor such as crop variety). It is also used for perennial experiments; years in this case are the sub-plots. This design is very commonly used in plant science experimentation. In designing S-P experiments, the factor requiring a more sensitive test is usually assigned to the sub-plot. Two separate randomizations are required, one for the whole-plot treatments and one for the sub-plot treatments within each whole plot. If sampling is done within sub-plots, it is important to have constant numbers of samples in order to maintain balance.

The split-plot principle may be extended to more than one split (e.g., split-split plot designs). In addition, there are some variants of the S-P design which involve stripping the plots for one factor across the plots for a second factor (e.g., the split-block design). These are usually used in cases where mechanical factors prevent forming smaller plots within the whole plots (e.g., spraying in one direction, plowing in another). Consultation with a statistician in planning a complex stripped design should assure that the data from the experiment will be analyzable.
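The two separate randomizations of a split-plot design can be sketched as follows; the factor names (varieties as whole plots, acaricides as sub-plots) are illustrative, not taken from a particular experiment in the article.

```python
import random

def split_plot_layout(whole, sub, n_blocks, seed=None):
    """Two-stage randomization: whole plots within blocks,
    then sub-plots within each whole plot."""
    rng = random.Random(seed)
    layout = []
    for _ in range(n_blocks):
        wp = whole[:]
        rng.shuffle(wp)                  # first randomization: whole plots
        block = []
        for w in wp:
            sp = sub[:]
            rng.shuffle(sp)              # second randomization: sub-plots
            block.append((w, sp))
        layout.append(block)
    return layout

# Two hypothetical varieties (whole plots), three acaricides (sub-plots).
layout = split_plot_layout(["V1", "V2"], ["a1", "a2", "a3"], n_blocks=2, seed=7)
```

Because the sub-plot factor is re-randomized inside every whole plot, its comparisons are made on smaller, more uniform units, which is why the sub-plot factor gets the more sensitive test.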
The Completely Randomized Design (CR), although very simple and flexible with regard to the number of replications within each treatment, does not control error variation through blocking. Therefore it is not precise enough for most field experiments. However, it is used for field and greenhouse experiments.

For an excellent detailed description of the above designs and a discussion of their use, see Cochran and Cox (1). This text also discusses more complicated designs (e.g., lattice designs for experiments involving large numbers of treatments). Two comprehensive experimental design bibliographies are (4 and 5).

Faulty experimental design often involves improper (or no) randomization and/or the estimation and use of error terms which are not of the proper size. An example of improper randomization (and an incorrect error term) is the aerial spraying of insecticide treatments in long strips, with the "replications" laid out as segments within each treatment strip. [Figure: in the improper design, each treatment (A, B, C) occupies one long strip subdivided into Reps. I through IV; in the proper design, the treatments are re-randomized within each of Blocks I through IV, although this arrangement is more difficult and expensive to run.] Use of the non-randomized version results in the lack of a valid test for treatments and also an underestimation of experimental error.

Randomization can be a problem in split-plot experiments involving growth chambers. Often there are not enough chambers available for replication of the temperature-humidity treatment (the whole-plot factor). It is also difficult to change the temperature-humidity settings so as to randomly assign these treatments to the various chambers. As a result there is not a valid test for temperature-humidity. Multiple runs using different temperature-humidity combinations in a chamber should serve as a source of replication for the whole-plot factor, thus alleviating the problem.
Many confuse sampling error with experimental error. Samples in close proximity within a plot will have far less variability than plots from block to block. One clue that an underestimate of error has occurred is when all sources in the analysis of variance test significant with very large F-values. In these cases, the randomization scheme should be reviewed. Much of the invalidity of the results of statistical tests arises not from faulty design but rather from data which are not reliable. Some plots (and even entire tests) are subject to so many variations due to uncontrolled and unmeasured factors that it is wishful thinking to suppose that the inferences drawn will be valid.

Replication. Replication assures an estimate of experimental error. Moreover, it is a simple means of increasing the precision of estimates of means and the sensitivity of tests of significance. Beyond a certain number of replications, the precision benefits do not offset the cost of another replication. Single-replication experiments are not recommended for field use. Some workers have tried using two replications for field experiments, but they usually increase the number to three or four in subsequent years because of the poor precision obtained initially. Four replications may not be adequate for some experiments on highly variable materials. Table 2.1 in Cochran and Cox (1) may be used as a guide to the choice of an appropriate number of replications if some idea of the coefficient of variation of the experimental material is available from previous experiments. The degree of precision required, the magnitude of the differences to be measured, and the significance level must also be specified. The table is accompanied by examples of its use. Regardless of the calculated number of replications, the amount of land or other experimental material, time, and money available for the study usually place an upper limit on the number used.
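The kind of calculation underlying replication tables such as Cochran and Cox's Table 2.1 can be sketched with a normal-approximation formula; this is a rough illustration under stated assumptions, not the table's exact method. Here `cv` is the coefficient of variation (percent), `d` the true difference to detect (percent of the mean), `z_alpha` the two-sided significance point, and `z_beta` the point fixing the desired power.

```python
import math

def replications_needed(cv, d, z_alpha=1.96, z_beta=0.84):
    """Approximate replications r to detect a difference d (% of mean)
    given coefficient of variation cv (%), via r = 2(z_a + z_b)^2 (cv/d)^2."""
    r = 2.0 * (z_alpha + z_beta) ** 2 * (cv / d) ** 2
    return math.ceil(r)

# e.g., cv = 10% and a 20% true difference at the 5% level, ~80% power:
print(replications_needed(10, 20))  # -> 4
```

The formula makes the trade-offs in the text concrete: halving the detectable difference quadruples the required replication, which is why practical limits on land, time, and money so often cap the number used.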
One practice which has been used in research for extension purposes is to systematically arrange the treatments in one replication in the manner which best makes it a demonstrational trial. For example, insecticide rates might be progressively increased from low to high from one side of the replication to the other. Or, in another type of experiment, chemicals from each company might be placed together in groups within the replication. Treatments are then randomly assigned to the plots within each of the other replications. The increased facility for making visual comparisons of effects would thus seem to offset the statistical disadvantages of this practice.

Repetition of experiments over several locations and years is another form of replication. Conclusions are usually better founded if based upon data replicated over space and time. Interactions of treatments with environmental effects are also estimable using repetition of this kind.

Design and randomization in series of tests. Parallel design (common treatments and number of replications) and independent randomization of the individual tests in a series of experiments facilitate the combined analysis of variance. Uniform approaches to the management of the experiment and the collection of data must be developed.

Technique. Detailed procedures for conducting the various phases of the experiment and a time schedule for their execution should be spelled out in writing. All personnel dealing with the treatments, plots, and data should be aware of the various sources of error and the need for good technique. The principal goals of good technique are to: (1) secure uniformity in the application of treatments, (2) exercise sufficient control over external influences so that every treatment produces its effect under comparable and desired conditions, (3) devise suitable unbiased measures of the effects of treatments, and (4) prevent gross errors.
An example of good technique is to have each of three men harvest an individual replication rather than to have the three harvest all replications as a team. Still more preferable is to have one person harvest all three replications. Statisticians have made one of their most important contributions to research programs by suggesting ways to improve techniques. One example is the study of variation patterns in data from previous years to obtain clues as to what changes in technique might improve precision. Another approach is to help the researcher design preliminary experiments which focus upon key but limited aspects of technique. A third method is to superimpose a sampling and measurement study upon existing experimental plots to facilitate the estimation of the relative sizes of various sources of variation (e.g., plant-to-plant and leaf-to-leaf as compared with plot-to-plot variation).

Characters to be measured. The variables which are submitted to statistical analyses are called the characters. Examples are the number of live insects per ten feet of row and the proportion of the insects which are female. In some areas of research, certain standard characters have been established and information on them is obtained routinely. In other cases, the researcher is not certain which characters best reflect the effects of the controlled variables. Therefore, he measures a number of them in order to select one or more which will be useful for future studies. The time at which characters are measured is important. In some cases, multiple measurements should be taken on characters throughout the season to adequately sample climatic effects.

Supplementary variables. Throughout the course of the experiment, the researcher should try to determine whether environmental factors not controlled by the experimental design are affecting the results of the experiment. Readings on these supplementary variables should be recorded for possible use in statistical control.
These variables are called "covariables". They are used in an analysis of covariance for (1) increasing the precision of an experiment and (2) adjusting treatment means to a constant level of the covariable. One common use of covariance is adjusting total plot yields for the plant population of the plot in cases where the stand is uneven. Another use is the adjustment of the final weight of groups of insects fed different diets in mass-rearing experiments by their initial weight. It may be necessary to estimate some of these covariables on a rather subjective scale (e.g., an index of wind damage). Even with such subjectivity, there may be an improvement in precision.

Step III: Determining the Method to Be Used in Analysis of the Data. A description of how the data will be analyzed should be written out in the planning phase. Statistical methods references may be cited for details of the analyses. Also useful is an outline of the sources of variation (e.g., Blocks, Treatments, Harvest, etc.) and the degrees of freedom for the analysis of variance.

THE DATA AND THEIR ANALYSES

The Data — The data reflect not only treatment effects but also variation due to a number of other causes, known and unknown. Some plots may not produce reliable data because they have been subjected to unusual environmental effects. Values are obtained for these missing plots by missing plot formulas (10) and inserted in the analysis to complete the balance. Alternatively, least squares techniques may be used for the entire analysis. It is important to distinguish situations where the values for a plot are zero from those where data are missing. Data variation patterns should be studied very carefully prior to the analysis of variance to find unusual values which might bias the conclusions about treatment effects.
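The missing plot formulas referred to above have a classical closed form for a Randomized Complete Blocks design with a single missing value: x = (rB + tT - G) / ((r - 1)(t - 1)), where r is the number of replications, t the number of treatments, B the total of the remaining values in the affected block, T the total of the remaining values for the affected treatment, and G the grand total of all observed values. The sketch and the numbers below are illustrative, not taken from the article's data.

```python
def missing_plot_estimate(r, t, block_total, treatment_total, grand_total):
    """Classical RCB single-missing-plot estimate:
    x = (r*B + t*T - G) / ((r - 1)*(t - 1))."""
    return (r * block_total + t * treatment_total - grand_total) / (
        (r - 1) * (t - 1)
    )

# Hypothetical experiment: 4 blocks, 5 treatments, one missing plot.
x = missing_plot_estimate(
    r=4, t=5, block_total=38.0, treatment_total=27.0, grand_total=160.0
)
```

The estimate completes the balance so the usual analysis of variance can proceed, though one degree of freedom should be deducted from error for each value estimated this way.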
It has been estimated that from 5 to 10 percent of the numbers in many data sets are in error due to misreading of instruments, copying errors, misplaced decimals, misidentification of treatment and replication numbers, etc. After aberrant observations are identified, questions should be raised as to their cause. If there is biological justification, the aberrant observations may be omitted and replaced by missing plot estimates.

Many sets of data are obviously not homogeneous. Some treatments may have zero values in all replications. Common sense and good judgment should be used in excluding data which have zero (or otherwise different) variance from the remainder of the data. Sometimes the untreated check should be omitted from the overall analysis of variance because it has a higher (or lower) variance than the remainder of the treatments; there are often biological reasons for expecting this variance to be different. A statistician should be consulted if there is a question about the homogeneity of variance in a set of data. Otherwise, the analysis of variance may not reflect a true picture of the data if an error heterogeneity problem exists and is not rectified.

The presence of zeros or negative numbers in a data set bothers some researchers. It is not the magnitudes or signs of the numbers in a set which are important; it is the sizes of the differences among the numbers, and how they vary over the data set, which are paramount.

If the choice arises between conducting an analysis of variance on the actual numbers of insects controlled and on percentage control, it is usually better to analyze the actual numbers. The exception is that percentages may be used if all are based upon the same divisor; the percentage calculation is then equivalent to data coding. Percentages based upon widely varying denominators are apt to have different variances.
A good way to handle the problem is to analyze and report the actual numbers controlled and then include in the table of means percentage control figures calculated from the means of the control data. No standard error would be reported with the percentage control figures.

Data transformation is done when it is necessary for statistical purposes. A good discussion of the various transformations and their use is given in Chapter 8 of Steel and Torrie (11). Researchers should consult a statistician on questions concerning the need for a transformation and the choice of an appropriate one. The disadvantage of using a transformation is that the comparisons are made and reported on a new scale which may not be familiar to the reader (e.g., log, square root, etc.), because it is usually not valid to convert the means and their standard errors back to the original scale for reporting and comparison purposes. Data which consist of averages (e.g., over several samples) are less apt to need transformation than individual observations, because averages more often follow the assumptions required for statistical analyses. When transformations are made, it is common practice to perform an analysis of variance on the original data as well as on the transformed data. Over a period of years, this writer has found a good correspondence between the results obtained in both cases from an interpretational point of view, except in data sets having extreme deviations from the statistical assumptions.

Analyses of Data — Biologists sometimes report data which have not been subjected to statistical analysis. Statisticians insist that replication, randomization, and other design principles are necessary for good experimentation. Data from well-designed experiments should then be analyzed to separate the treatment effects (important) from the random effects (of little interest).
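Two transformations of the kind discussed above for count data, the square root and the log, can be sketched as follows. The small offsets (0.5 and 1) are common conventions for handling zeros, and the count values are made up; this is an illustration, not the article's recommendation for any particular data set.

```python
import math

# Hypothetical insect counts, including a zero.
counts = [0, 3, 12, 45, 130]

# Square-root transform, often used for small counts: sqrt(x + 0.5).
sqrt_scale = [math.sqrt(x + 0.5) for x in counts]

# Log transform, often used for counts spanning orders of magnitude,
# with +1 so that zero counts remain defined: log(x + 1).
log_scale = [math.log(x + 1) for x in counts]
```

Note how both transforms compress the large values relative to the small ones, which is why means and standard errors computed on the transformed scale cannot simply be converted back to the original scale for reporting.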
Statistical analyses also have the effect of reducing the bulk of the data so that their essential features may be presented in a concise fashion. Statisticians now place more emphasis on the estimation of effects (e.g., the slopes of dosage-response curves) than on the routine performance of tests of hypotheses. This shift in emphasis has come about due to the need to further specify the magnitude of the differences and to shed light upon the mechanism of the treatment effects.

The appropriate analysis varies according to the nature of the test and the data. Student's "t" (10) is appropriate for a simple comparison of the means of two groups. There are a number of applications of chi-square tests in entomological work. They are used with insect count data to test independence of factors. Chi-square is also used in insect breeding work to test hypotheses of specific genetic ratios. For further information on the uses of chi-square, Chapters 17, 18, and 19 of Steel and Torrie (11) should be consulted. Analysis of variance methods for commonly used designs are reported in several texts (2, 3, 6, 10, 11) and should be routine to most researchers.

Many of the practical decisions related to the data and their interpretation are not routine. A thorough study of the data should be made to provide a basis for these decisions. Too often, data are placed on cards and analyzed by a computer before their patterns have been studied carefully. The false impression that a computer has some magic effect on data and will produce better results than a desk calculator seems to persist. On the contrary, the results from a more leisurely desk calculator analysis which permits close scrutiny of the data may be better. Computers do have their place for data analysis, however. They perform the routine calculations used in statistical analyses accurately and efficiently.
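The Student's t comparison of two group means mentioned above can be sketched in its equal-variance, two-sample form; the data values below are made up for illustration.

```python
import math

def two_sample_t(a, b):
    """Equal-variance two-sample t statistic for comparing two means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)    # sum of squares, group a
    ssb = sum((x - mb) ** 2 for x in b)    # sum of squares, group b
    sp2 = (ssa + ssb) / (na + nb - 2)      # pooled variance estimate
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical counts from two treatment groups of four plots each.
t = two_sample_t([10, 12, 11, 13], [14, 15, 13, 16])
```

The computed t would then be referred to the t distribution with na + nb - 2 degrees of freedom (6 here) to obtain the significance probability.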
Computers are invaluable where designs are complicated or where large numbers of characters measured at one or more locations need to be analyzed. If the computer route is taken, it is important that the researcher understand the analysis which the computer is performing.

Regardless of the method of performing calculations, results should be checked for accuracy by independent calculations. Programming and key-punch errors can cause large biases in the results of some computer runs. Spot checks of computer output should be made using a desk or pocket calculator. If analyses are to be made solely on the desk calculator, they should be checked by an independent operator. Numbers from a mechanical or electrical device should not be assumed to be accurate; some modern pocket calculators do not give accurate results when batteries are low.

At large computing centers, statistical system packages such as BMD, GENSTAT, IMSL, SAS, and SPSS may be used with a minimum knowledge of programming and of instructions to the computer. Again, it is important that the user understand what the computer is doing when a particular procedure within the package is used. At small facilities the researcher may need to devote considerable effort to writing computer programs for his own needs, and these programs will need to be checked for accuracy before being used routinely.

Two problems which continually arise in data analysis are reporting more digits than are biologically measurable and accurate and, on the other hand, rounding data too much in the analysis. The former can give the reader a false impression that the data were measured with a great deal of accuracy. A good discussion of the number of figures which should be carried in data is given in Chapter 3 of Cochran and Cox (1). Rounding errors, on the other hand, can cause extremely erroneous results in regression analyses.
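The danger of premature rounding in regression, noted above, is easy to demonstrate: when the predictor values are large, the textbook sums-of-products formula subtracts two nearly equal numbers, and rounding even one of those sums beforehand destroys the difference. The data below are hypothetical.

```python
# Slope of a least-squares line by the textbook formula,
# once at full precision and once with a prematurely rounded sum
x = [1001, 1002, 1003, 1004, 1005]
y = [2.1, 2.3, 2.2, 2.5, 2.4]
n = len(x)

sx = sum(x)                               # 5015
sy = sum(y)
sxy = sum(a * b for a, b in zip(x, y))
sxx = sum(a * a for a in x)               # 5030055

slope_full = (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Rounding the sum of squares to the nearest hundred *before* the
# subtraction changes the denominator from 50 to 275
sxx_rounded = round(sxx, -2)              # 5030100
slope_rounded = (n * sxy - sx * sy) / (n * sxx_rounded - sx * sx)

print(f"full precision: slope = {slope_full:.4f}")
print(f"rounded sums:   slope = {slope_rounded:.4f}")
```

Centering x about its mean before forming sums of squares avoids the cancellation entirely, which is what well-written library routines do.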
The general rule when a series of calculations is to be performed is to round only after all the calculations have been completed. The statistician should be consulted for advice on the appropriate analysis regardless of how the computations are to be performed. He is a specialist in the study of data variation, and he often has insights into the data which the researcher might miss. He will base his recommendation for an analysis upon a knowledge of the way in which the experiment was designed. The results are apt to be better if he was involved in the choice of the design at the planning stage.

INTERPRETATION AND REPORTING OF RESULTS

Interpretation — The interpretation of the results of an experiment is one of the most important phases. It has theoretical aspects which are based upon the mathematical laws of probability. Experimental observations are limited experiences which are carefully planned in advance and designed to form a secure basis for new knowledge. In the interpretation phase, the results of these planned experiences are extrapolated to the underlying population (e.g. a five-county area). There is a degree of uncertainty attached to this induction, but probabilities may be attached to our uncertainties; examples are the probabilities associated with the calculated F and t statistics in tests of significance.

Interpretation of the results of statistical analyses is also an art. Because the statistical methods books do not treat data interpretation in depth, some individuals have developed an intuitive faculty for this interpretation. This would account for the fact that two researchers might interpret the analysis of a set of data in different ways. Whether an intuitive or a more structured approach is used, much of the interpretation process involves careful study of variation patterns in the data. The results of statistical analyses and the conclusions should reflect trends seen in the data before the analysis was made.
The actual techniques used in interpretation vary with the purposes of the experiment and the nature of the treatments. Main effects and interactions should be tested as part of the analysis of data from factorial experiments. If the treatments are a series of levels of a quantitative factor (e.g. dosage of acaricide), curve fitting will show the overall relationship between the response means and the levels. Statisticians should be asked to provide standard errors for the specific contrasts which are made in curve fitting (e.g. linear, quadratic). Use of Duncan's New Multiple Range or LSD tests does not make sense in this case.

If treatments are primarily qualitative, or are combinations of levels of quantitative and qualitative factors (but not factorially arranged), detailed information about the treatments should be used in constructing comparisons among means. The comparisons should be of interest from a biological point of view. Selection of these comparisons can usually be accomplished best by the cooperative efforts of a researcher and a statistician. The biological basis for choosing the set of comparisons overrides the desire to keep all comparisons orthogonal. (By orthogonality is meant that the sums of squares of t-1 independent comparisons add to the total sum of squares for treatments with t-1 degrees of freedom.) Again, a statistician should be asked to provide standard errors and make the tests of significance for the comparisons chosen. It is incorrect to use Duncan's New Multiple Range Test in the situations described above, because the nature of the treatments implies specific comparisons.

In some cases there is interest in comparing the mean of the untreated check with each of the other means individually; Dunnett's LSD clearly is appropriate for this purpose (see Steel and Torrie (11)). Because of its simplicity, the LSD is a comparison procedure for pairs of means which has been widely used. It is designed for planned comparisons.
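The contrast-and-standard-error approach recommended above can be sketched for three equally spaced dosage levels using the usual orthogonal polynomial coefficients. The means, replicate count, and error mean square below are hypothetical, and NumPy is assumed to be available.

```python
import numpy as np

# Hypothetical treatment means at three equally spaced dosages,
# with replicate count and error mean square taken from the ANOVA
means = np.array([12.0, 20.0, 24.0])
r, ems = 4, 6.0

contrasts = {
    "linear":    np.array([-1.0, 0.0, 1.0]),
    "quadratic": np.array([ 1.0, -2.0, 1.0]),
}

for name, c in contrasts.items():
    estimate = float(c @ means)
    # Standard error of a contrast of means: sqrt(EMS * sum(c_i^2) / r)
    se = float(np.sqrt(ems * (c @ c) / r))
    print(f"{name:9s}: estimate = {estimate:6.2f}, SE = {se:.2f}")
```

A meaningful contrast would be reported with its estimate and standard error; for isolated pairs of means, simpler pairwise procedures such as the LSD are often used instead.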
The LSD is, however, a poor test for comparing the largest and smallest of a set of means. Functionally it is a sensitive test and therefore has a high probability of rejecting false hypotheses. Its errors are expressed as a percentage of the total number of comparisons made. The LSD has a fixed range, meaning that the same criterion is used for all comparisons regardless of how close the means are in rank. In practice the LSD is usually "protected" by requiring that it be used only when the F for treatments is significant. A less sensitive fixed-range test used in much the same manner as the LSD is Tukey's w procedure; its error rate is the percentage of experiments with at least one falsely rejected hypothesis.

When virtually no basis is available for making logical comparisons or for subdividing treatments into groups biologically (e.g. crop variety trials), multiple comparison procedures such as Duncan's New Multiple Range Test (or the Student-Newman-Keuls procedure) may be used. This is the only situation in which use of these multiple comparison procedures is valid.

Reporting of Results — The description of materials and methods should include all statistical aspects of the test, such as experimental design, number of replications, plot size, data transformation, etc. An example is: "The test was randomized according to a Randomized Complete Block Design with four blocks. Plots consisted of four 20-ft rows, the middle two of which were harvested. Row spacing was 36 inches. A square root transformation was made prior to the analysis of the data." Most journals accept tables of means but not analysis of variance tables. Consequently, the author of a technical article must interpret the data by the techniques given above and then summarize the interpretations in the narrative portion of the paper or report. Tables of means should be accompanied by standard errors provided by a statistician for valid and meaningful comparisons.
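The standard errors that should accompany a table of means come directly from the analysis of variance. A minimal sketch, with a hypothetical error mean square and replicate count:

```python
import math

# Hypothetical values from an analysis of variance
ems = 6.0   # error mean square
r = 4       # replicates per treatment mean

se_mean = math.sqrt(ems / r)       # standard error of a single mean
se_diff = math.sqrt(2 * ems / r)   # standard error of a difference of two means

print(f"SE(mean) = {se_mean:.3f}")
print(f"SE(difference) = {se_diff:.3f}")
```

Footnoting these beneath the table lets the reader judge any comparison of interest without seeing the analysis of variance table itself.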
Figures should also help in the interpretation. Some writers overuse the term "significance" in reporting results, which makes reading their papers tedious. Results and conclusions should be reported in positive terms, using the terminology of the scientific field. For example, it is better to say, "The number of insects killed is greater for Insecticide A than for Insecticide B," than to say, "Insecticide A and Insecticide B are significantly different at the five percent level."

SUMMARY

Statistics may be used to improve insecticide and acaricide tests in many ways. Statistical assistance should be sought from the planning stage through the data-reporting stage. Well-designed experiments are expected to be efficient and economical, and analysis and interpretation of the resulting data should be straightforward.

A number of factors enter into "good" design. Each experiment calls for a unique set of design choices to be made cooperatively by the researcher and statistician. Statisticians use a few basic plans or designs for most experiments. These usually involve the design principles of blocking, randomization, and replication. Routine data analysis procedures for these designs are reported in statistics books, but many analysis and interpretation questions call for statistical assistance.

Data should be studied carefully for reasonableness. There may be a need to omit certain treatments or to perform data transformations for statistical reasons; otherwise data should be analyzed in the original scale for ease of interpretation.

It is correct to use multiple comparison procedures such as Duncan's New Multiple Range Test only when logical comparisons are not suggested by the nature of the treatments (e.g. crop variety tests). Most data are from experiments in which considerably more information about the treatments is available. This information should be used by the researcher and statistician to construct logical comparisons.
The statistician should then calculate standard errors for these comparisons. The researcher has the responsibility of interpreting and reporting test results. Tables of means (accompanied by standard errors for logical comparisons) and figures may be used to support the findings summarized in the narrative.

REFERENCES CITED

1. Cochran, W. G. and G. M. Cox. 1962. Experimental Designs, 2nd Ed. Wiley, New York.
2. Cox, D. R. 1958. Planning of Experiments. Wiley, New York.
3. Federer, W. T. 1955. Experimental Design, Theory and Application. Macmillan, New York.
4. Federer, W. T. and L. N. Balaam. 1972. Bibliography on Experiment and Treatment Design Pre-1968. Published for the International Statistical Institute by Oliver and Boyd, Edinburgh.
5. Federer, W. T. and A. J. Federer. 1973. A study of statistical design publications from 1968 through 1971. The American Statistician 27:160-163.
6. Fisher, R. A. 1951. The Design of Experiments, 6th Ed. Hafner, New York.
7. Gomez, K. A. 1972. Techniques for Field Experiments with Rice: Layout, Sampling, Sources of Error. International Rice Research Institute, Los Banos.
8. LeClerg, E. L., W. H. Leonard and A. G. Clark. 1962. Field Plot Technique, 2nd Ed. Burgess Publishing, Minneapolis.
9. Smith, H. F. 1938. An empirical law describing heterogeneity in the yields of agricultural crops. Journal of Agricultural Science 28:1-23.
10. Snedecor, G. W. and W. G. Cochran. 1967. Statistical Methods, 6th Ed. Iowa State Univ. Press, Ames.
11. Steel, R. G. D. and J. H. Torrie. 1960. Principles and Procedures of Statistics. McGraw-Hill, New York.
12. Wishart, M. A. and H. G. Sanders. 1958. Principles and Practice of Field Experimentation, 2nd Ed. Technical Communication 18, Commonwealth Bureau of Plant Breeding and Genetics, Cambridge.

Arthropod Management Tests – Oxford University Press

**Published:** Jan 1, 1976
