The correct answer is Option 2 (B and C only)
Test reliability refers to the consistency and dependability of a test’s scores over repeated administrations. It indicates that a test measures a concept consistently, although consistency alone does not guarantee accuracy.
-Option B is correct: the split-half method is widely used to assess a test's internal consistency, and the alternate-form method to assess the equivalence of different versions of the test.
-Option C is correct: inter-rater reliability is assessed when multiple raters evaluate the same performance or behavior, quantifying the level of agreement between them.
Thus, the correct choice is B and C only (Option 2).
Information Booster:
Test Reliability Definition: Reliability refers to the consistency of a test’s results over time or across different evaluators.
Types of Reliability:
-Test-Retest Reliability: Measures stability by administering the same test twice over time.
-Split-Half Reliability: Assesses internal consistency by dividing the test into two halves and comparing scores (see the sketch after this list).
-Alternate-Form Reliability: Compares different versions of the test to ensure consistency.
-Inter-Rater Reliability: Evaluates consistency between multiple raters scoring the same performance.
Importance of Reliability: High reliability ensures reproducible test results; it is necessary, but not sufficient, for accurate measurement.
Difference Between Reliability and Validity: Reliability measures consistency, while validity measures accuracy (i.e., whether a test measures what it intends to).
Factors Affecting Reliability: Test length, test conditions, subject variability, and rater training impact reliability.
Statistical Measures of Reliability: Cronbach’s alpha (for internal consistency), Cohen’s kappa (for inter-rater reliability), and correlation coefficients are commonly used.