Correct options are A and D
Internal consistency reliability refers to the extent to which items on a test measure the same construct and are correlated with each other. It assesses whether different parts of a test yield consistent results.
-Split-half reliability (A): This method evaluates internal consistency by dividing a test into two equal halves (for example, odd- and even-numbered items) and correlating the scores from the two halves. The half-test correlation is usually stepped up with the Spearman-Brown formula to estimate full-test reliability; a higher correlation indicates greater reliability.
-KR-20 (Kuder-Richardson 20) (D): This is a statistical measure of internal consistency, specifically used for dichotomous (right/wrong) scored items. It determines how well items within a test assess the same construct.
Thus, A (Split-half) and D (KR-20) are correct answers, as both are measures of internal consistency reliability.
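As a rough illustration, both estimates can be computed by hand from a small item-response matrix. The sketch below uses invented dichotomous (1 = correct, 0 = incorrect) scores; the split-half estimate uses an odd/even item split with the Spearman-Brown correction, and KR-20 follows the standard formula (k/(k-1)) * (1 - Σpq/σ²).

```python
# Toy item-response matrix: rows = examinees, columns = items (1/0 scoring).
# All data are invented for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):  # population variance of a score list
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half(data):
    # Odd/even item split, corrected with the Spearman-Brown formula:
    # r_full = 2r / (1 + r)
    odd = [sum(row[0::2]) for row in data]
    even = [sum(row[1::2]) for row in data]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

def kr20(data):
    # KR-20 = (k / (k-1)) * (1 - sum(p*q) / total-score variance)
    k = len(data[0])
    totals = [sum(row) for row in data]
    p = [mean([row[i] for row in data]) for i in range(k)]
    pq = sum(pi * (1 - pi) for pi in p)
    return (k / (k - 1)) * (1 - pq / variance(totals))

scores = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
]
print(round(split_half(scores), 3))
print(round(kr20(scores), 3))
```

In practice one would use an established psychometrics package rather than hand-rolled formulas, but the arithmetic above is all either method requires.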
Information Booster: Reliability Testing in Psychological Assessment
Reliability refers to the consistency and stability of a psychological test over time, across different situations, or among different raters. A reliable test produces similar results under consistent conditions.
1. Types of Reliability
1.1 Test-Retest Reliability
-Measures the stability of test scores over time.
-A test is administered to the same group twice with a time gap.
-High correlation between scores indicates strong reliability.
-Example: If an intelligence test gives similar IQ scores when taken twice in a month, it has high test-retest reliability.
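The IQ example above reduces to a Pearson correlation between the two administrations. A minimal sketch, with invented scores for six examinees:

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same test. Scores are invented for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

time1 = [102, 98, 115, 120, 88, 107]   # first administration
time2 = [100, 101, 113, 118, 90, 105]  # same examinees, one month later
print(round(pearson(time1, time2), 3))
```

A coefficient near 1 indicates that examinees keep roughly the same rank order and level across the two occasions, i.e., high test-retest reliability.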
1.2 Inter-Rater Reliability
-Measures the consistency of scores given by different examiners or raters.
-Used for subjective assessments like essay grading or behavioral observations.
-Example: Two psychologists independently rating a patient’s anxiety level should give similar scores.
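For categorical judgments like the anxiety-rating example, inter-rater agreement is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A sketch with invented ratings:

```python
# Cohen's kappa for two raters assigning categorical labels.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
# Ratings below are invented for illustration.

def cohens_kappa(r1, r2):
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal label proportions
    expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (observed - expected) / (1 - expected)

rater_a = ["low", "med", "med", "high", "low", "high", "med", "low"]
rater_b = ["low", "med", "high", "high", "low", "high", "med", "med"]
print(round(cohens_kappa(rater_a, rater_b), 3))
```

For continuous ratings, a Pearson correlation or an intraclass correlation coefficient (ICC) would be used instead.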
1.3 Parallel-Forms Reliability (Alternate-Forms Reliability)
-Measures the consistency between different versions (forms) of the same test.
-Two versions of a test are administered to the same group, and scores are compared.
-Example: A researcher designs two equivalent IQ tests and ensures both produce similar scores.
1.4 Internal Consistency Reliability
-Measures how well different parts of a test measure the same construct.
-Ensures that all items within a test are correlated and assess the same concept.
-Types of Internal Consistency:
---Split-Half Reliability
---Cronbach's Alpha (α)
---KR-20 (for dichotomously scored items)
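Cronbach's alpha generalizes KR-20 to items that are not scored right/wrong: α = (k/(k-1)) * (1 - Σ(item variances)/total variance). A minimal sketch with invented Likert-style (1-5) item scores:

```python
# Cronbach's alpha for polytomous items (e.g., Likert 1-5 ratings).
# Item scores below are invented for illustration.

def variance(xs):  # population variance
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(data):
    k = len(data[0])  # number of items
    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

likert = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(likert), 3))
```

When every item is scored 0/1, this formula reduces exactly to KR-20.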
2. Factors Affecting Reliability
-Test Length → More items generally increase reliability.
-Test Conditions → Noise, stress, or unclear instructions can reduce reliability.
-Time Gap in Test-Retest → A long gap may lead to memory effects or actual changes in ability.
-Scoring Subjectivity → Unclear scoring criteria lower inter-rater reliability.
-Test-Taker’s Mood and Motivation → Fluctuations can impact test consistency.
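The test-length effect listed above can be quantified with the Spearman-Brown prophecy formula, which predicts the reliability of a test lengthened by a factor n (assuming the added items are comparable to the originals):

```python
# Spearman-Brown prophecy formula: predicted reliability when a test
# is lengthened by a factor n, r_new = n*r / (1 + (n - 1)*r).

def spearman_brown(r, n):
    return n * r / (1 + (n - 1) * r)

# Doubling (n = 2) a test whose current reliability is 0.60:
print(round(spearman_brown(0.60, 2), 3))  # prints 0.75
```

This is the same correction applied to the half-test correlation in split-half reliability (there, n = 2 restores the full-length test).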
Additional Information:
(B) Test-Retest Reliability: Measures the consistency of a test over time by administering it to the same group at different points. It assesses stability rather than internal consistency.
(C) Scorer Reliability (Inter-Rater Reliability): Evaluates the consistency between different raters or scorers when judging subjective responses, such as in essay scoring or behavioral observations.