18. Standard Error
Standard Error (SE) is a statistical index that indicates how accurately a sample statistic represents the true population parameter. It measures the extent to which sample means would vary if we repeatedly took multiple samples from the same population. SE is essentially the standard deviation of the sampling distribution of a statistic (usually the mean). A smaller SE means the sample mean is a reliable and precise estimate, whereas a larger SE indicates more variability between samples.
Key Points:
- SE shows the precision of an estimate.
- It decreases as sample size increases, because larger samples represent the population better.
- SE is widely used to compute confidence intervals, which help determine the range within which the population mean is likely to fall.
- SE is also essential in hypothesis testing (e.g., t-test, z-test) for evaluating statistical significance.
- Formula for the standard error of the mean:
$SE = \frac{SD}{\sqrt{N}}$
This shows that SE is directly related to the standard deviation and inversely related to the square root of sample size.
Overall, SE helps researchers understand how much sample-based results might differ from true population values, improving reliability and interpretation of research findings in psychology.
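To make this concrete, here is a minimal Python sketch of the SE computation (the scores are made-up values; scipy.stats.sem is shown only as an optional cross-check):

```python
import numpy as np
from scipy import stats

scores = np.array([12, 15, 14, 10, 18, 16, 13, 17])  # hypothetical test scores

sd = scores.std(ddof=1)          # sample standard deviation (uses N - 1)
se = sd / np.sqrt(len(scores))   # SE = SD / sqrt(N)

print(f"SD = {sd:.3f}, SE = {se:.3f}")
print(f"scipy check: {stats.sem(scores):.3f}")  # same result
```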
19. Variance
Variance is a fundamental measure of dispersion that indicates how far the individual values in a dataset deviate from the mean. It is calculated by taking the average of the squared differences between each score and the mean. Squaring the deviations ensures that both positive and negative differences do not cancel out and gives greater weight to larger deviations.
Key Points:
- Variance reflects the degree of spread in the data. High variance indicates that scores are widely scattered, while low variance shows that scores cluster closely around the mean.
- It is expressed in squared units, as the deviations are squared during calculation.
- Variance is critical for understanding behavioural data, as many psychological variables show individual differences.
- It is a foundational component of several major statistical procedures such as ANOVA, correlation, regression, and standard deviation.
- Formula for sample variance:
$s^{2} = \frac{\sum (X - \bar{X})^{2}}{N - 1}$
Here, N − 1 (degrees of freedom) ensures an unbiased estimate of the population variance.
Thus, variance provides a complete picture of variability within a dataset and forms the basis for many statistical analyses in psychological research.
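As a quick illustration, the sample-variance formula can be computed step by step in Python (the five scores are invented for the example):

```python
import numpy as np

scores = np.array([4, 8, 6, 5, 7])   # hypothetical scores

mean = scores.mean()                                          # 6.0
variance = ((scores - mean) ** 2).sum() / (len(scores) - 1)   # N - 1 denominator

print(variance)               # 2.5
print(scores.var(ddof=1))     # numpy gives the same sample variance
```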
20. Kurtosis
Kurtosis is a statistical measure used to describe the shape of a frequency distribution, particularly the heaviness of its tails and the sharpness of its peak. It helps in understanding whether the distribution has more extreme values (outliers) than a normal distribution. A distribution with high kurtosis (leptokurtic) has a sharp peak and thick tails, meaning extreme scores occur more frequently. A distribution with low kurtosis (platykurtic) has a flatter peak and thin tails. When a distribution’s kurtosis is similar to the normal curve, it is called mesokurtic.
Key Points:
- Indicates concentration of scores in tails or peak.
- Leptokurtic → tall peak, more extreme values.
- Platykurtic → flat peak, fewer extreme values.
- Mesokurtic → same peakedness as the normal distribution.
- Important for understanding psychological data where extreme responses (e.g., very high anxiety or very low scores) may appear.
Kurtosis helps researchers judge whether their data meet the assumptions of parametric tests and whether extreme values influence results.
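A small simulation can show the three shapes side by side; note that scipy reports excess kurtosis (0 for a normal curve), and the distributions below are chosen only for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

samples = {
    "normal (mesokurtic)":   rng.normal(size=10_000),
    "t(5) (leptokurtic)":    rng.standard_t(df=5, size=10_000),  # heavier tails
    "uniform (platykurtic)": rng.uniform(size=10_000),           # thin tails
}

for name, data in samples.items():
    print(f"{name:24s} excess kurtosis = {stats.kurtosis(data):.2f}")
```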
21. Skewness
Skewness is a measure of the asymmetry of a distribution. In a perfectly symmetrical distribution, the mean, median, and mode are equal, and skewness is zero. When the distribution tail extends more towards the right, it is positively skewed, indicating that many scores cluster on the lower side and a few high scores stretch the tail. When the tail extends to the left, it is negatively skewed, meaning most scores are high with some extremely low scores pulling the distribution leftward.
Key Points:
- Describes asymmetry in data distribution.
- Positive skew → tail on the right, many low scores.
- Negative skew → tail on the left, many high scores.
- Helps identify the shape of psychological data, which is often not perfectly normal (e.g., reaction times are usually positively skewed).
- Influences choice of statistical tests; highly skewed data may require non-parametric methods.
Understanding skewness helps psychologists interpret score patterns more accurately and decide on appropriate analytical methods.
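For instance, a positively skewed variable such as reaction time can be simulated and its skewness computed (the log-normal model below is just one common, assumed choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Reaction times are often modelled as log-normal: most responses are fast,
# with a long right tail of slow responses.
reaction_times = rng.lognormal(mean=-1.0, sigma=0.5, size=10_000)

print(f"skewness = {stats.skew(reaction_times):.2f}")  # positive => right tail
```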
22. Sampling Error
Sampling error refers to the natural difference between a sample statistic (such as the sample mean) and the true population parameter. It occurs because a sample is only a subset of the population and therefore cannot perfectly reflect it. Sampling error is not a mistake; it is unavoidable in research. Larger samples tend to reduce sampling error because they represent the population more accurately.
Key Points:
- Represents difference between sample and population values.
- Occurs due to chance variations.
- Reduced by increasing sample size.
- Affects accuracy and reliability of research findings.
- Basis for understanding confidence intervals, standard error, and significance testing.
In psychological research, awareness of sampling error ensures better interpretation of sample-based findings and prevents overgeneralization.
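The link between sample size and sampling error is easy to demonstrate by simulation (the IQ-like population below is artificial):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=100, scale=15, size=100_000)  # hypothetical population

for n in (10, 100, 1000):
    sample_means = [rng.choice(population, size=n).mean() for _ in range(2_000)]
    # The spread of sample means around the true mean shrinks as n grows
    print(f"n = {n:4d}: SD of sample means = {np.std(sample_means):.2f}")
```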
23. Degrees of Freedom
Degrees of freedom (df) represent the number of independent values that can vary when computing a statistical estimate. It usually equals N minus the number of parameters estimated. For example, when calculating variance, once the mean is fixed, only N−1 values can vary freely, so df = N−1. Degrees of freedom are essential because many statistical distributions (like t-distribution and chi-square) are defined based on df.
Key Points:
- Number of independent values free to vary.
- Usually calculated as df = N − 1 or based on number of groups.
- Required in t-tests, chi-square tests, ANOVA, etc.
- Determines shape of sampling distribution for hypothesis testing.
- Ensures unbiased estimation (e.g., N−1 for variance).
Degrees of freedom help in accurate computation of probability values and ensure correct interpretation of statistical tests.
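The idea that only N − 1 values are free to vary once the mean is fixed can be verified directly (using the same invented scores as in the variance example above):

```python
import numpy as np

scores = np.array([4.0, 8.0, 6.0, 5.0, 7.0])
deviations = scores - scores.mean()

# Given the first N - 1 deviations, the last one is forced:
# all deviations from the mean must sum to zero.
print("first four deviations:", deviations[:-1])
print("last deviation must be:", -deviations[:-1].sum())
print("actual last deviation: ", deviations[-1])
```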
24. Histogram
A histogram is a graphical representation used to display the frequency distribution of continuous or grouped data. It consists of adjacent bars, where each bar represents a class interval, and the height represents the frequency. Unlike bar graphs, histogram bars touch each other because the data are continuous. Histograms are useful to visualize the shape of the distribution—whether it is symmetric, skewed, bimodal, or uniform.
Key Points:
- Represents continuous data with touching bars.
- Shows frequency distribution of intervals.
- Helps visualize skewness, kurtosis, mode, spread, and outliers.
- Useful for initial data inspection before statistical analysis.
- Commonly used in psychology for test scores, reaction times, error rates, etc.
Histograms provide an immediate visual understanding of how data are distributed, supporting better decision-making in analysis.
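A minimal matplotlib sketch (with randomly generated, hypothetical test scores) shows how a histogram is drawn:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
test_scores = rng.normal(loc=65, scale=10, size=300)  # hypothetical exam scores

plt.hist(test_scores, bins=15, edgecolor="black")  # adjacent bars, no gaps
plt.xlabel("Test score")
plt.ylabel("Frequency")
plt.title("Distribution of test scores")
plt.show()
```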
25. Scatter Plot
A scatter plot is a graph that displays the relationship between two continuous variables using dots. Each point represents a pair of values for an individual. Scatter plots show the direction, strength, and form of the relationship—whether it is positive, negative, linear, nonlinear, or absent. They are often used before running correlation or regression analyses.
Key Points:
- Plots pairs of scores on X and Y axes.
- Shows positive, negative, or no correlation.
- Helps detect outliers, clusters, and patterns.
- Indicates whether a linear regression is appropriate.
- Essential in psychological studies involving relationships (e.g., stress vs. performance, age vs. cognition).
Scatter plots offer a simple visual method to understand how two variables are associated before applying statistical tests.
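Here is a minimal sketch of a scatter plot for an assumed negative stress-performance relationship (all values are simulated):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
stress = rng.uniform(0, 10, size=80)                    # hypothetical stress scores
performance = 8 - 0.5 * stress + rng.normal(0, 1, 80)   # built-in negative trend

plt.scatter(stress, performance)
plt.xlabel("Stress")
plt.ylabel("Performance")
plt.title("Stress vs. performance")
plt.show()

print("r =", np.corrcoef(stress, performance)[0, 1])    # direction and strength
```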
26. Point & Interval Estimation (5 Marks)
Point and interval estimation are two methods of estimating population parameters using sample data. Point estimation provides a single best estimate—for example, the sample mean estimating the population mean. Interval estimation provides a range of values, known as a confidence interval, within which the true parameter is likely to fall (e.g., 95% CI). Interval estimates are more informative because they convey both the estimate and its precision.
Key Points:
- Point estimate → single value estimate.
- Interval estimate → range of likely values (confidence interval).
- Interval estimates use standard error, making them more reliable.
- Confidence intervals reflect precision and uncertainty.
- Widely used in hypothesis testing and reporting of psychological research.
Together, point and interval estimations improve the accuracy, interpretability, and credibility of statistical findings.
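Both kinds of estimate can be computed from one sample; the sketch below uses invented scores and a t-based 95% confidence interval:

```python
import numpy as np
from scipy import stats

scores = np.array([72, 68, 75, 71, 69, 74, 70, 73])  # hypothetical sample

point_estimate = scores.mean()          # point estimate of the population mean
se = stats.sem(scores)                  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(scores) - 1,
                                   loc=point_estimate, scale=se)

print(f"Point estimate: {point_estimate:.2f}")
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```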
27. Interaction Effect (5 Marks)
An interaction effect occurs when the effect of one independent variable on a dependent variable changes depending on the level of another independent variable. This means the variables combine in such a way that their joint influence is different from their individual effects. Interaction effects are typically examined in factorial designs, especially in Two-Way ANOVA.
Key Points:
- Shows combined influence of two independent variables.
- Occurs when one variable’s effect depends on another variable.
- Essential in Two-Way ANOVA and complex experimental designs.
- Helps identify patterns that single-variable analysis cannot detect.
- Often represented visually through interaction plots.
Interaction effects are important in psychology because real-life behaviour is influenced by multiple interacting factors—not single variables in isolation.
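A Two-Way ANOVA on simulated data illustrates the idea, assuming pandas and statsmodels are available; the 2×2 therapy-by-severity design and all scores below are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Hypothetical 2 x 2 design: therapy type x symptom severity
df = pd.DataFrame({
    "therapy": np.repeat(["CBT", "medication"], 40),
    "severity": np.tile(np.repeat(["mild", "severe"], 20), 2),
})

# Build in an interaction: CBT helps mild cases more than any other combination
boost = np.where((df.therapy == "CBT") & (df.severity == "mild"), 3.0, 0.0)
df["improvement"] = 10 + boost + rng.normal(0, 2, len(df))

model = ols("improvement ~ C(therapy) * C(severity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # interaction row: C(therapy):C(severity)
```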
