How do scientists usually design experiments?
Investigators can subconsciously 'fudge' their data if they know what result they want to find. The answer is to do the experiment 'blind', so that the investigators (and the subjects, if humans are being studied) do not know which treatment's effect they are observing. This can make the logistics of doing the experiment more complex: for example, when determining the haemoglobin concentration of male and female class members, the person making the measurement should not know whose sample is being measured.
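One practical way to achieve this is to give every sample an anonymous code, keep the key to the codes away from the person doing the measuring, and decode only after all the measurements have been recorded. The sketch below (in Python) is a minimal illustration of that idea; the sample labels are hypothetical.

```python
import random

# Hypothetical blood samples, labelled by subject; the measurer must not see these labels.
samples = ["subject_01_M", "subject_02_F", "subject_03_M", "subject_04_F"]

# Assign each sample a random anonymous code; the key is held by someone
# who is not involved in making the measurements.
codes = [f"S{i:03d}" for i in range(1, len(samples) + 1)]
random.shuffle(codes)
key = dict(zip(codes, samples))          # code -> real identity

# The measurer works only from the anonymous codes.
for code in sorted(key):
    print(f"Measure haemoglobin concentration for sample {code}")

# Only after all results are recorded is the key used to match
# each measurement back to its subject.
```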
There is a story about a professor who devised a maze for measuring the intelligence of rats. One day he gave his technicians, who actually made the measurements, three groups of rats. He told them one group had been specially bred for intelligence, one for stupidity and the third was average. The technicians assessed the rats' intelligence and confirmed that the 'bright' group performed the best and the 'stupid' group the worst.
The point is, of course, that the professor had put animals into the three groups at random. They did not differ in intelligence. Good experiments often, though not always, involve measuring something: a weight, say.
When you make measurements, it is important to know both the accuracy and the precision of your measuring system. These two terms are not synonymous: 'accuracy' means the ability of the method to give an unbiased answer on average, whereas 'precision' is an index of the method's reproducibility. Ideally your method should be both accurate and precise. Sometimes one is more important than the other. For example, if you were looking for small changes with time in a quantity such as an athlete's haemoglobin concentration, you would need a precise measure of it rather more than an accurate one.
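One way to see the distinction is to compare repeated measurements of a sample whose true value is known. The readings below are invented purely for illustration: method A is precise but not accurate (small scatter, consistent bias), while method B is accurate but not precise (little bias, large scatter).

```python
from statistics import mean, stdev

true_value = 140.0  # hypothetical 'true' haemoglobin concentration, g/L

# Method A: precise but inaccurate (readings cluster tightly, about 5 g/L too high).
method_a = [145.1, 144.8, 145.3, 144.9, 145.0]
# Method B: accurate but imprecise (readings centre on the truth but scatter widely).
method_b = [133.0, 147.5, 138.2, 144.9, 136.8]

for name, readings in [("A", method_a), ("B", method_b)]:
    bias = mean(readings) - true_value   # accuracy: how far the average is from the truth
    spread = stdev(readings)             # precision: how reproducible the readings are
    print(f"Method {name}: bias = {bias:+.1f} g/L, spread (SD) = {spread:.1f} g/L")
```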
Accuracy and precision together help you to judge the reliability of your data. They also help you to judge to how many significant figures you should quote your results. For example, if you use a balance reading to the nearest gram, you should give the results to the nearest gram and not, say, to the nearest tenth of a gram.
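For instance, a mean calculated from several weighings should be reported at the resolution of the balance, not with whatever string of decimals the calculator produces. A minimal sketch with invented readings:

```python
readings_g = [412, 409, 415, 411]             # balance reads to the nearest gram
raw_mean = sum(readings_g) / len(readings_g)  # 411.75 g: more digits than the balance justifies
reported = round(raw_mean)                    # report to the nearest gram, matching the instrument
print(f"raw mean = {raw_mean} g, reported = {reported} g")
```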
Some experiments are very difficult to do because it is not obvious what can be measured. This is a real problem in animal behaviour: for example, there is no obvious unit or measure for 'emotional state'.
It is usually necessary to isolate measurable components of behaviour. Thus the speed at which a tiger paces up and down a cage can give some indication of the internal state of the animal but can never give a full picture of it.
Many of these points are rather abstract, but they should become clearer when you think about the following examples. Example 1: Forty bean plants, growing in pots, were covered one afternoon by individual glass containers and left in the laboratory overnight. Next morning, the inside of the lid of each container was found to be covered in droplets of a fluid which proved to be water.
The water could have come from the plants, the soil, the pots, or the air in the jar. Control experiments should have been set up to test for these possibilities. Example 2: Is your supermarket's 'own brand' of washing powder as good as a nationally-advertised one? Eric Triton bemoaned the fact that his wife Ariel insisted on washing his clothes with their local supermarket's own brand of powder. He was sure the well-known brand he saw performing miracles on television most evenings would do better.
He therefore set out to prove as much. Mr Triton decided to compare the effectiveness of the two products on what his wife called 'difficult' dirt: grass stains on white linen handkerchiefs. He followed the instructions on the packets exactly, weighing out the same amount of powder and using their washing machine's programme for white linens.
Mr Triton was aware of the need for an index of 'cleanliness' and therefore devised a subjective scale, ranging from 10 ('whiter than white') down to 0 (the starting level of dirtiness).
Mr Triton's belief was substantially confirmed. He scored the handkerchief cleaned by the national brand an impressive 8, whereas the own-brand powder only managed 7. Triumphantly, he reported the outcome to his wife. Mrs Triton, however, was unimpressed. She pointed out to her husband that there were several flaws in his experiment and convinced him that the outcome was 'not proven'.
There is a story about an eminent Professor at Cambridge who gave a paper at a scientific meeting and was asked by a questioner, "What statistical test did you use to verify your results?" The professor replied: "I draw a histogram of my results, pin it to the notice board, then walk to the other end of the corridor. If I can still see a difference between the treatments, then it's significant."
The relevance of this story lies in what it does not say! If an experiment is designed and executed properly - as we would expect of an eminent scientist - then the results often speak for themselves. For example, this might be true of experiments in which mutants are generated or genes inserted in an organism, giving a clear change of behaviour such as resistance to an antibiotic or expression of a new trait.
Such "all or nothing" effects seldom need to be backed by statistical tests, but they still need good experimental design. However, in many areas of biology we work with variable effects - differences in the growth rates of organisms, quantitative differences in antibiotic resistance or in size or in rates of biochemical reactions, etc. Then we not only need statistical tests to analyse those differences but we also need good experimental design to ensure that we haven't biased our results in some way, without realising it.
Good experimental design is the key to good science. But it's not as easy as it might seem. In many cases good experimental design involves having a clear idea about how we will analyse the results when we get them.
That's why statisticians often tell us to think about the statistical tests we will use before we start an experiment. There are three important steps in good experimental design. First, define the objectives: record precisely what you want to find out. Second, devise a strategy: record precisely how you can achieve the objective. This includes thinking about the size and structure of the experiment - how many treatments? How many replicates? Third, set down all the operational details: how will the experiment be performed in practice? In what order will things be done? Should the treatments be randomised or follow a set structure? Can the experiment be done in a day? Will there be time for lunch? If all this sounds trivial or obvious, then read on - it's not as easy as you think! Example 1: experiments that yield no useful results because we did not collect enough data. Suppose that we want to test the results of a Mendelian genetic cross.
We start with two parents of genotype AABB and aabb, where A and a represent the dominant and recessive alleles of one gene, and B and b represent the dominant and recessive alleles of another gene. We know that all the F1 (first-generation) progeny of these parents will have genotype AaBb and that their phenotype will display both dominant traits. This F1 generation will produce four types of gamete (AB, Ab, aB and ab), and when we self-cross the F1 generation we will end up with a variety of F2 genotypes.
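The sixteen equally likely gamete combinations, and the phenotypes they produce, can be enumerated directly. Here is a short Python sketch that derives the phenotype classes from first principles, using the gamete notation above:

```python
from collections import Counter
from itertools import product

gametes = ["AB", "Ab", "aB", "ab"]  # the four gamete types produced by AaBb

def phenotype(g1, g2):
    """Classify an F2 individual by whether it carries at least one dominant A or B."""
    alleles = g1 + g2            # e.g. "AB" + "ab" -> "ABab"
    has_A = "A" in alleles       # at least one dominant A allele?
    has_B = "B" in alleles       # at least one dominant B allele?
    if has_A and has_B:
        return "double dominant"
    if has_A:
        return "single dominant A"
    if has_B:
        return "single dominant B"
    return "double recessive"

# All 16 equally likely combinations of one gamete from each F1 parent.
counts = Counter(phenotype(g1, g2) for g1, g2 in product(gametes, repeat=2))
print(counts)   # 9 double dominant : 3 single dominant A : 3 single dominant B : 1 double recessive
```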
All these genotypes fall into four phenotypes: double dominant, single dominant A, single dominant B and double recessive. And we know that in classical Mendelian genetics the ratio of these phenotypes is 9:3:3:1. Most people also know that we use a chi-squared test to analyse the results of genetic crosses: we do our experiment, count the number of F2 progeny that fall into the different categories, and test to see if our results agree with an expectation.
In this case, the expectation would be that the results fit a 9:3:3:1 ratio. But what you might not know is that a chi-squared test is only valid if the expected count in every category is 5 or more: it does not matter what the actual count is in each category, but the expected count must be 5 or more.
In other words, we MUST have at least 80 F2 progeny in order to use the chi-squared test for this experiment, because only then would the smallest category - double recessive - have an expected 5 individuals in it (one-sixteenth of 80 being 5). A similar minimum applies whenever we use a chi-squared test to compare counts.
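Putting the whole procedure together, the validity check and the test itself might look like the sketch below. It assumes scipy is available and uses invented F2 counts; only the 9:3:3:1 expectation comes from the text.

```python
from scipy.stats import chisquare

observed = [95, 29, 33, 7]   # hypothetical F2 counts: double dominant, A only, B only, double recessive
ratio = [9, 3, 3, 1]         # classical Mendelian expectation
total = sum(observed)
expected = [total * r / 16 for r in ratio]

# Validity check: every expected count must be at least 5.
if min(expected) < 5:
    raise ValueError("Too few progeny: the smallest expected count is below 5.")

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")  # a large p gives no evidence against the 9:3:3:1 ratio
```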
As a different example along the same lines, we might want to compare the biomass produced by plant callus cultures growing in flasks containing different nutrient solutions. We know that we need more than one flask of each nutrient solution, i.e. we need replicate flasks. So, how many replicates should we use? This is a matter of judgement and the available resources, but if we look at a t-table we can make some rational decisions.
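For instance, once the biomass of every replicate flask has been measured, the two nutrient solutions can be compared with a two-sample t-test; the more replicates we have, the more degrees of freedom the test has and the smaller the critical t value becomes. The sketch below assumes scipy is available and uses invented biomass values.

```python
from scipy.stats import ttest_ind
from statistics import mean, stdev

# Hypothetical fresh-weight biomass (g) of callus per flask, 5 replicate flasks per solution.
solution_1 = [4.2, 3.9, 4.5, 4.1, 4.4]
solution_2 = [3.4, 3.6, 3.1, 3.8, 3.3]

t, p = ttest_ind(solution_1, solution_2)   # two-sample t-test (equal variances assumed)
print(f"solution 1: mean {mean(solution_1):.2f} g (SD {stdev(solution_1):.2f})")
print(f"solution 2: mean {mean(solution_2):.2f} g (SD {stdev(solution_2):.2f})")
print(f"t = {t:.2f}, p = {p:.4f}")         # a small p suggests the mean biomasses really differ

# More replicates give more degrees of freedom, a smaller critical t value,
# and therefore more power to detect a difference of a given size.
```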
Consider now a different kind of experiment: a farmer wants to know whether no-till farming reduces soil erosion compared with traditional ploughing, and so farms comparable hillside plots by each method while keeping other conditions as similar as possible. These matched conditions are the experimental controls. The amount of erosion is the dependent variable: it is what the farmer is measuring. During an experiment, scientists make many measurements. Data in the form of numbers is quantitative. Data gathered from advanced equipment usually goes directly into a computer, or the scientist may put the data into a spreadsheet. The data can then be manipulated. Charts and tables display data and should be clearly labeled. Statistical analysis makes more effective use of data by allowing scientists to show relationships between different categories of data.
Statistics can make sense of the variability in a data set. Graphs help scientists to visually understand the relationships between data. Pictures are created so that other people who are interested can see the relationships easily.
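As an illustration, a clearly labelled chart of the kind described above might be produced as in the sketch below. It assumes matplotlib is available, and the erosion figures are invented for the purpose of the example.

```python
import matplotlib.pyplot as plt

# Invented monthly erosion measurements (mm of topsoil lost) for two plots.
months = list(range(1, 13))
traditional = [0.8, 0.9, 1.4, 1.6, 1.2, 1.0, 0.9, 1.1, 1.5, 1.7, 1.3, 1.0]
no_till = [0.5, 0.5, 0.8, 0.9, 0.7, 0.6, 0.5, 0.6, 0.8, 0.9, 0.7, 0.6]

plt.plot(months, traditional, marker="o", label="traditional ploughing")
plt.plot(months, no_till, marker="s", label="no-till")
plt.xlabel("Month")                      # axes and legend clearly labelled
plt.ylabel("Soil lost (mm)")
plt.title("Erosion on two hillside plots (hypothetical data)")
plt.legend()
plt.savefig("erosion.png", dpi=150)
```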
In just about every human endeavor, errors are unavoidable. In a scientific experiment, this is called experimental error. What are the sources of experimental errors? Systematic errors may be inherent in the experimental setup so that the numbers are always skewed in one direction. For example, a scale may always measure one-half ounce high.
The error will disappear if the scale is re-calibrated. Random errors occur because a measurement is not made precisely. For example, a stopwatch may be stopped too soon or too late. To correct for this type of error, many measurements are taken and then averaged.
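Both kinds of error can be seen in a small simulation: a fixed offset stands in for a systematic error (which recalibration removes), and random timing error shrinks when many readings are averaged. The sketch below uses invented values.

```python
import random
from statistics import mean

random.seed(1)

true_time = 12.00                      # hypothetical 'true' time in seconds
systematic_offset = 0.25               # e.g. a stopwatch that always reads 0.25 s high

# A single reading suffers from both kinds of error.
one_reading = true_time + systematic_offset + random.gauss(0, 0.3)

# Recalibration removes the systematic offset; averaging many readings shrinks the random error.
readings = [true_time + random.gauss(0, 0.3) for _ in range(50)]   # after recalibration
print(f"single raw reading: {one_reading:.2f} s")
print(f"mean of 50 recalibrated readings: {mean(readings):.2f} s (true value {true_time:.2f} s)")
```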
If a result is inconsistent with the results from other samples and many tests have been done, it is likely that a mistake was made in that experiment and the inconsistent data point can be thrown out. Scientists study graphs, tables, diagrams, images, descriptions, and all other available data to draw a conclusion from their experiments.
Is there an answer to the question based on the results of the experiment? Was the hypothesis supported? Some experiments completely support a hypothesis and some do not.
If a hypothesis is shown to be wrong, the experiment was not a failure. All experimental results contribute to knowledge. Experiments that do or do not support a hypothesis may lead to even more questions and more experiments. After a year, the farmer finds that erosion on the traditionally farmed hill is noticeably greater than on the no-till plots.
The plants on the no-till plots are taller and the soil moisture is higher. The farmer decides to convert to no-till farming for future crops. The farmer continues researching to see what other factors may help reduce erosion. As scientists conduct experiments and make observations to test a hypothesis, over time they collect a lot of data.
If a hypothesis explains all the data and none of the data contradicts the hypothesis, the hypothesis becomes a theory. A scientific theory is supported by many observations and has no major inconsistencies.
A theory must be constantly tested and revised. In practice, scientists rarely march through the scientific method as a fixed sequence of steps. Instead, they continually draw inferences and make generalizations, finding patterns as their research proceeds. Scientific reasoning is more complex than the scientific method alone suggests.
Learning objective: discuss hypotheses and the components of a scientific experiment as part of the scientific method.
Key points: In the scientific method, observations lead to questions that require answers. The hypothesis is a testable statement proposed to answer a question. Experiments, often with controls and variables, are devised to test hypotheses. Analysis of the results of an experiment leads to the hypothesis being accepted or rejected.