Assessing the reliability and validity of the Revised Two Factor Study Process Questionnaire (R-SPQ2F) in Ghanaian medical students.
ABSTRACT: PURPOSE: We investigated the validity and reliability of the Revised Two Factor Study Process Questionnaire (R-SPQ2F) in preclinical students in Ghana. METHODS: The R-SPQ2F was administered to 189 preclinical students of the University for Development Studies, School of Medicine and Health Sciences. Both descriptive and inferential statistics, with Cronbach's alpha and factor analysis, were computed. RESULTS: The mean age of the students was 22.69 ± 0.18 years; 60.8% (n=115) were male and 42.3% (n=80) were in their second year of medical training. The students had higher mean deep approach scores (31.23 ± 7.19) than surface approach scores (22.62 ± 6.48). The R-SPQ2F findings supported a two-factor solution comprising deep and surface approaches, accounting for 49.80% and 33.57% of the variance, respectively. The deep approach scale (Cronbach's alpha, 0.80), the surface approach scale (Cronbach's alpha, 0.76), and their subscales demonstrated good internal consistency. The factorial validity was comparable to that reported in other studies. CONCLUSION: Our study confirms the construct validity and internal consistency of the R-SPQ2F for measuring approaches to learning in Ghanaian preclinical students. The deep approach was the dominant learning approach among the students. The questionnaire can be used to measure students' approaches to learning in Ghana and in other African countries.
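Nearly every study collected here reports Cronbach's alpha as its reliability statistic. As background (not any one study's computation), the coefficient can be calculated directly from an item-response matrix using the standard formula α = k/(k−1) · (1 − Σ var(itemᵢ)/var(total)); the sketch below uses plain NumPy and made-up toy data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 6 respondents answering 4 five-point Likert items
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(6, 1))                      # latent tendency
scores = np.clip(base + rng.integers(-1, 2, size=(6, 4)), 1, 5)
print(round(cronbach_alpha(scores), 3))
```

When every item is a perfect copy of the others, the formula returns exactly 1.0, which is a handy sanity check for an implementation.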
Project description:PURPOSE:Different students may adopt different learning approaches: namely, deep and surface. This study aimed to characterize the learning strategies of medical students at Trinity School of Medicine and to explore potential correlations between a deep learning approach and the students' academic scores. METHODS:The study was a questionnaire-based, cross-sectional, observational study. A total of 169 medical students in the basic science years of training were included in the study after giving informed consent. Biggs's Revised Two-Factor Study Process Questionnaire was distributed in paper form to subjects from January to November 2017. For statistical analyses, the Student t-test, 1-way analysis of variance followed by the post-hoc t-test, and the Pearson correlation test were used. Cronbach's alpha was used to test the internal consistency of the questionnaire. RESULTS:Of the 169 subjects, 132 (response rate, 78.1%) completely filled out the questionnaires. The Cronbach's alpha value for the items on the questionnaire was 0.8. The score for the deep learning approach was 29.4 ± 4.6, whereas the score for the surface approach was 24.3 ± 4.2, a significant difference (P < 0.05). A positive correlation was found between the deep learning approach and students' academic performance (r = 0.197, P < 0.05, df = 130). CONCLUSION:Medical students in the basic science years at Trinity School of Medicine adopted the deep learning approach more than the surface approach. Likewise, students who were more inclined toward the deep learning approach scored significantly higher on academic tests.
Project description:Academic engagement describes students' involvement in academic learning and achievement. This paper reports the psychometric properties of the University Student Engagement Inventory (USEI) with a sample of 3992 university students from nine different countries and regions from Europe, North and South America, Africa, and Asia. The USEI operationalizes a trifactorial conceptualization of academic engagement (behavioral, emotional, and cognitive). Construct validity was assessed by means of confirmatory factor analysis and reliability was assessed using Cronbach's alpha and McDonald's omega coefficients. Weak measurement invariance was observed for country/region, while strong measurement invariance was observed for gender and area of graduation. The USEI scores showed predictive validity for dropout intention, self-rated academic performance, and course approval rate while divergent validity with student burnout scores was also evident. Overall, the results indicate that the USEI can produce reliable and valid data on academic engagement of university students across the world.
Project description:This study aimed to test the validity and reliability of a validated team-based learning student assessment instrument (TBL-SAI) for assessing United Kingdom pharmacy students' attitudes toward TBL. The TBL-SAI, consisting of 33 items, was administered to undergraduate pharmacy students at two schools of pharmacy, at the University of Wolverhampton and the University of Bradford; analyses were conducted on the data, along with a comparison between the two schools. The student response rate for completion of the instrument was 80.0% (138/173). Overall, the instrument demonstrated validity and reliability when used with pharmacy students. Sub-analysis between the schools did, however, show that four items in the Wolverhampton data had factor loadings of less than 0.40; no item in the Bradford data had a factor loading below 0.40. Cronbach's alpha was reliable at 0.897 for the total instrument: Wolverhampton, 0.793 and Bradford, 0.902. Students showed a preference for TBL, with Bradford's scores being statistically higher (P<0.005). This validated instrument demonstrated reliability and validity when used with pharmacy students. Furthermore, students at both schools preferred TBL to traditional teaching.
Project description:Clinical experience is an essential component of nursing education, since it provides students with the opportunity to construct and develop clinical competencies. Instructor caring is a pivotal facilitator at the forefront of clinical education, playing a key and complex educating role in clinical settings. For these reasons, the aim of this study was to assess the validity and reliability of the Italian version of the NSPIC (I-NSPIC). A multicentre validation study was conducted in three Italian universities. A total of 333 nursing students were enrolled in the 2014/2015 academic year. Exploratory factor analysis (EFA) with oblique rotation was performed to test the construct validity of the I-NSPIC. Cronbach's alpha and test-retest analysis via the intraclass correlation coefficient (ICC) were used to assess the internal consistency and stability of the scale. Spearman's correlation with another scale (CLES-T) was used to examine concurrent validity. Four factors (control versus flexibility, supportive learning climate, confidence through caring, and appreciation of life meaning and respectful sharing) were identified in the EFA. Cronbach's alpha showed that the I-NSPIC was a reliable instrument (α = 0.94), and the ICC was satisfactory. The I-NSPIC is a valid instrument for assessing the perception of instructor caring in Italian nursing students. It may also prove helpful in promoting the caring ability of nursing students and in increasing caring interactions in the relationship between instructor and nursing students. The knowledge that emerged from this study provides important insights for developing effective training strategies in the clinical training of undergraduate nursing students.
Project description:Method:The psychometric characteristics of the MIN-SC were assessed using college freshman students at King Saud University in Riyadh, Saudi Arabia. Reliability was examined using Cronbach's alpha coefficient, and construct validity was examined through principal component analysis. Results:The MIN-SC instrument was shown to be internally consistent, with reliable scoring (Cronbach's alpha = 0.910). Exploratory factor analysis resulted in 42 items loading on three main components (estimative, production, and transitional), each with an eigenvalue > 2. The final model explained 38% of the variance. Conclusion:The Arabic version of the MIN-SC was shown to be a valid and reliable tool for assessing attitudes toward nutrition among adolescent students.
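Several abstracts above report the percentage of variance explained by extracted components (38% here; 49.80% and 33.57% in the R-SPQ2F study). As a rough illustration of where such figures come from, the sketch below computes per-component explained-variance ratios from the eigenvalues of the item covariance matrix. The simulated two-factor data and all names are illustrative assumptions, not taken from any of these studies.

```python
import numpy as np

def explained_variance_ratio(X: np.ndarray) -> np.ndarray:
    """Proportion of variance explained by each principal component."""
    Xc = X - X.mean(axis=0)                    # center each item
    cov = np.cov(Xc, rowvar=False)             # item covariance matrix
    eigvals = np.linalg.eigvalsh(cov)[::-1]    # eigenvalues, largest first
    return eigvals / eigvals.sum()

# Toy example: 200 respondents, 6 items driven by two latent factors
rng = np.random.default_rng(1)
f = rng.normal(size=(200, 2))                                  # latent factors
loadings = np.array([[1, 0], [1, 0], [1, 0],
                     [0, 1], [0, 1], [0, 1]], dtype=float)     # simple structure
X = f @ loadings.T + 0.3 * rng.normal(size=(200, 6))           # add noise
ratios = explained_variance_ratio(X)
print(np.round(ratios[:2], 2))  # first two components dominate
```

With a clean two-factor structure, the first two ratios account for most of the total variance, mirroring the two-factor solutions reported above.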
Project description:BACKGROUND:Performing a psychiatric interview and documenting the recorded findings in the form of a brief psychiatric report is one of the main learning goals in the psychiatric curriculum for medical students. However, observing and assessing students' reports is time consuming and there are no objective assessment tools at hand. Thus, we applied an integrative approach for designing a checklist that evaluates clinical performance, as a tool for the assessment of a psychiatric report. METHODS:A systematic review of the literature yielded no objective instrument for assessing the quality of written reports of psychiatric interviews. We used a 4-step mixed-methods approach to design a checklist as an assessment tool for psychiatric reports: 1. Development of a draft checklist, using literature research and focus group interviews; 2. Pilot testing and subsequent group discussion about modifications resulting from the pilot testing; 3. Creating a scoring system; 4. Testing for interrater-reliability, internal consistency and validity. RESULTS:The final checklist consisted of 36 items with a Cronbach's alpha of 0.833. Selectivity of items ranged between 0.080 and 0.796. After rater-training, an interrater-reliability of 0.96 (ICC) was achieved. CONCLUSIONS:Our approach, which integrated published evidence and the knowledge of domain experts, resulted in a reliable and valid checklist. It offers an objective instrument to measure the ability to document psychiatric interviews. It facilitates a transparent assessment of students' learning goals with the goal of structural alignment of learning goals and assessment. We discuss ways it may additionally be used to measure the ability to perform a psychiatric interview and supplement other assessment formats.
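The checklist study above reports an interrater reliability of 0.96 (ICC). The specific ICC form the authors used is not stated; as a hedged sketch, a one-way random-effects ICC(1,1) can be computed from a subjects-by-raters matrix via the between- and within-subject mean squares, as below.

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an (n_subjects, n_raters) matrix."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    # Between-subjects and within-subjects mean squares (one-way ANOVA)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy example: 3 reports scored by 2 raters with slight disagreement
print(round(icc_oneway(np.array([[1, 2], [2, 1], [3, 3]])), 3))
```

Perfect agreement between raters yields an ICC of exactly 1.0, and disagreement pushes the value below 1, which makes the degenerate case a useful unit test.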
Project description:BACKGROUND: Many studies have explored approaches to learning in medical school, mostly in the classroom setting. In the clinical setting, students face different conditions that may affect their learning. Understanding students' approaches to learning is important for improving learning in the clinical setting. The aim of this study was to evaluate the Study Process Questionnaire (SPQ) as an instrument for measuring clinical learning in medical education and to show whether learning approaches vary between rotations. METHODS: All students involved in this survey were undergraduates in their clinical phase. The SPQ was adapted to the clinical setting and was distributed in the last week of the clerkship rotation. A longitudinal study was also conducted to explore changes in learning approaches. RESULTS: Two hundred and nine students participated in this study (response rate 82.0%). The SPQ findings supported a two-factor solution involving deep and surface approaches, which accounted for 45.1% and 22.5%, respectively, of the variance. The relationships between the two scales and their subscales showed the internal consistency and factorial validity of the SPQ to be comparable with previous studies. The clinical students in this study had higher scores for deep learning. The small longitudinal study showed small changes in approaches to learning across rotation placements, but these were not statistically significant. CONCLUSIONS: The SPQ was found to be a valid instrument for measuring approaches to learning among clinical students. More students used a deep approach than a surface approach. Changes of approach did not clearly occur across different clinical rotations.
Project description:Assessment environment, synonymous with climate or atmosphere, is multifaceted. Although there are valid and reliable instruments for measuring the educational environment, there is no validated instrument for measuring the assessment environment in medical programs. This study aimed to develop an instrument for measuring students' perceptions of the assessment environment in an undergraduate medical program and to examine the psychometric properties of the new instrument. The Assessment Environment Questionnaire (AEQ), a 40-item, four-point (1 = Strongly Disagree to 4 = Strongly Agree) Likert scale instrument designed by the authors, was administered to medical undergraduates from the authors' institution. The response rate was 626/794 (78.84%). To establish construct validity, exploratory factor analysis (EFA) with principal component analysis and varimax rotation was conducted. To examine the internal consistency reliability of the instrument, Cronbach's α was computed. Mean scores for the entire AEQ and for each factor/subscale were calculated, and mean AEQ scores of students from different academic years and of each sex were examined. Six hundred and eleven completed questionnaires were analysed. EFA extracted four factors: feedback mechanism (seven items), learning and performance (five items), information on assessment (five items), and assessment system/procedure (three items), which together explained 56.72% of the variance. Based on the four extracted factors/subscales, the AEQ was reduced to 20 items. Cronbach's α for the 20-item AEQ was 0.89, whereas Cronbach's α for the four factors/subscales ranged from 0.71 to 0.87. The mean score for the AEQ was 2.68/4.00. The 'feedback mechanism' factor/subscale recorded the lowest mean (2.39/4.00), whereas the 'assessment system/procedure' factor/subscale scored the highest mean (2.92/4.00). Significant differences were found among the AEQ scores of students from different academic years. The AEQ is a valid and reliable instrument. Initial validation supports its use to measure students' perceptions of the assessment environment in an undergraduate medical program.
Project description:PURPOSE: Learning-style instruments assist students in developing their own learning strategies and outcomes, in eliminating learning barriers, and in acknowledging peer diversity. Only a few psychometrically validated learning-style instruments are available. This study aimed to develop a valid and reliable learning-style instrument for nursing students. METHODS: A cross-sectional survey study was conducted in two nursing schools in two countries. A purposive sample of 156 undergraduate nursing students participated in the study. Face and content validity was obtained from an expert panel. The LSS construct was established using principal axis factoring (PAF) with oblimin rotation, a scree plot test, and parallel analysis (PA). The reliability of the LSS was tested using Cronbach's α, corrected item-total correlations, and test-retest. RESULTS: Factor analysis revealed five components, confirmed by PA and a relatively clear curve on the scree plot. Component strength and interpretability were also confirmed. The factors were labeled as perceptive, solitary, analytic, competitive, and imaginative learning styles. Cronbach's α was >0.70 for all subscales in both study populations. The corrected item-total correlations were >0.30 for the items in each component. CONCLUSION: The LSS is a valid and reliable inventory for evaluating learning style preferences in nursing students in various multicultural environments.
Project description:To reduce the incidence of hypoxic brain injuries among newborns, a national cardiotocography (CTG) education program was implemented in Denmark. A multiple-choice question test was integrated as part of the program. The aim of this article was to describe and discuss the test development process and to introduce a feasible method for written test development in general. The test development was based on the unitary approach to validity. The process involved national consensus on learning objectives, standardized item writing, pilot testing, sensitivity analyses, standard setting, and evaluation of psychometric properties using Item Response Theory models. Test responses and feedback from midwives, specialists and residents in obstetrics and gynecology, and medical and midwifery students were used in the process (proofreaders n = 6, pilot test participants n = 118, CTG course participants n = 1679). The final test included 30 items, and the passing score was established at 25 correct answers. All items fitted a loglinear Rasch model, and the test was able to discriminate levels of competence. Seven items revealed differential item functioning in relation to profession and geographical region, which means the test is not suitable for measuring differences between midwives and physicians or differences across regions. In the pilot-testing setting Cronbach's alpha equaled 0.79, whereas in the setting of the CTG education program it equaled 0.63. This indicates a need for more items, and for items with a higher degree of difficulty, and illuminates the importance of context when discussing validity. Test development is a complex and time-consuming process. The unitary approach to validity was a useful and applicable tool for the development of a CTG written assessment. The process and findings supported our proposed interpretation of the assessment as measuring CTG knowledge and interpretive skills. However, for the test to function as a high-stakes assessment, higher reliability is required.
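The CTG study above relied on Rasch (Item Response Theory) modelling, which requires dedicated tooling. As a much simpler classical-test-theory companion (not the authors' method), the sketch below computes the two most common MCQ item statistics, difficulty (proportion correct) and point-biserial discrimination, from a 0/1 response matrix; the data are toy values, not the study's.

```python
import numpy as np

def item_stats(responses: np.ndarray):
    """Classical item difficulty and discrimination for 0/1 MCQ responses.

    responses: (n_examinees, n_items) matrix of correct (1) / incorrect (0).
    Returns (difficulty, discrimination) arrays, one value per item.
    """
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)          # proportion correct per item
    total = responses.sum(axis=1)                # total score per examinee
    # Point-biserial: correlation of each item with the total score
    discrimination = np.array([
        np.corrcoef(responses[:, j], total)[0, 1]
        for j in range(responses.shape[1])
    ])
    return difficulty, discrimination

# Toy example: 4 examinees answering 3 items
r = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [1, 1, 1]])
diff, disc = item_stats(r)
print(np.round(diff, 2), np.round(disc, 2))
```

Very easy items (difficulty near 1) and items with low or negative discrimination are the usual candidates for revision, which connects to the study's observation that the operational test needed more, and harder, items.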