Validating surveys: how we sharpen tools in the Great Teaching Toolkit

We know that teachers’ professional development can be a powerful driver of better student outcomes (Coe et al., 2020; Ventista & Brown, 2023), and that feedback is one of the most powerful professional development tools for improving goal-directed performance. The Great Teaching Toolkit has prioritised the development of more efficient, practical and powerful feedback and professional development tools. In her role as EBE’s research statistician, Dr Ourania Ventista continually reviews the validity of the survey tools and the ways in which they can be improved. Ourania has recently conducted a statistical analysis of the student survey data for the validation process, which she discusses in this blog.

What we were looking for

We aimed to ensure that the survey is actually measuring teaching quality as intended. The surveys were developed to reflect the Model for Great Teaching, our curriculum for teacher learning and professional development. To examine this, I ran a series of statistical analyses (i.e., factor analysis and structural equation modelling) to check that there is alignment between the measurement tool (i.e., the student surveys) and the dimensions and elements of the Model for Great Teaching.
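To give a flavour of what this kind of alignment check involves, here is a minimal, purely illustrative sketch in Python. The data, the six items, and the two-dimension structure are all invented for the example, and scikit-learn's `FactorAnalysis` stands in for the fuller analysis described above; it is a sketch of the idea, not EBE's actual procedure:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 500 students answer 6 survey items. Items 0-2 are written
# to tap one dimension of the model, items 3-5 another.
rng = np.random.default_rng(0)
n = 500
dim_a = rng.normal(size=n)                     # latent "dimension A" per student
dim_b = rng.normal(size=n)                     # latent "dimension B" per student
noise = rng.normal(scale=0.5, size=(n, 6))
X = np.column_stack([dim_a] * 3 + [dim_b] * 3) + noise

# Fit a two-factor model with a varimax rotation for interpretability.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(X)
loadings = fa.components_.T                    # shape: (items, factors)

# If the items align with the intended dimensions, each block of three items
# should load most strongly on the same factor, and the two blocks should
# load on different factors.
block_a = [abs(loadings[i]).argmax() for i in range(3)]
block_b = [abs(loadings[i]).argmax() for i in range(3, 6)]
print("factor per item (A-block):", block_a)
print("factor per item (B-block):", block_b)
```

In a real validation, the loading pattern would be compared against the dimensions and elements the items were written to measure, and structural equation modelling would then test the hypothesised structure more formally.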

The interpretation of the results is another crucial aspect of the validation process. Examining how the results of these tools are interpreted is important, because bigger claims require more evidence to support them (Kane, 2013). Therefore, it was important to ensure that the claims made are aligned with the evidence; we want to avoid over-interpretation of the feedback reports during the professional development cycle. To do this, we spoke with teachers and school leaders who used these tools and discussed how they interpret the feedback reports produced. It is worth mentioning that when reports are produced, support is offered to schools to interpret their reports in a valid way. By this, I mean that the conclusions are supported by the relevant evidence and aid professional development.

What we found in the data

As part of the statistical analysis, we considered factors that may affect a teacher’s scores in the student surveys. For example, research suggests that younger students (e.g., 7-8 years old) are more likely to respond positively to agree/disagree-style scales than older students (Rubie-Davies & Hattie, 2012). In the Great Teaching Toolkit, different student survey versions are available for different age groups, so it was essential for us to explore whether the scores of different age groups were comparable. Ultimately, we examined whether a teacher’s scores are affected by factors that are not related to teaching (e.g., student age, survey version, mode of completing the survey). Our investigation of these factors is ongoing, and the findings will feed into our continual refinement of the student surveys; our hope is that this will lead to better interpretations and stronger professional development tools.
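A simple way to probe whether two survey versions produce comparable scores is to compare the score distributions from the two groups. The sketch below is hypothetical: the simulated 1-5 item scores, the group sizes, and the assumed positivity of younger respondents are invented for illustration, and a Mann-Whitney U test stands in for the fuller analysis:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 agree/disagree item scores from two survey versions.
# The younger group is simulated as more positive, per Rubie-Davies & Hattie (2012).
rng = np.random.default_rng(1)
younger = np.clip(rng.normal(4.2, 0.6, 200).round(), 1, 5)
older = np.clip(rng.normal(3.6, 0.8, 200).round(), 1, 5)

# A non-parametric test is a reasonable choice for ordinal scale data.
stat, p = mannwhitneyu(younger, older, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4g}")
if p < 0.05:
    print("Score distributions differ: raw scores from the two versions "
          "may not be directly comparable without adjustment.")
```

A finding like this would not mean the surveys are flawed; it would mean that scores from different versions need to be interpreted relative to the appropriate age group rather than compared directly.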

We also conducted class-level statistical analysis. Since the questions are about teaching quality, and students in the same class are taught by the same teacher, we would expect classmates’ responses to be more similar to each other than to the responses of students in other classes. This relatedness of the data can serve as an indicator of the quality of the student surveys: if the instrument measures teaching quality well, the responses of students who share a class and teacher will be more correlated with each other than with those of students in a different class. Indeed, our analysis confirmed that responses from students in the same class were more correlated with each other.
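One common way to quantify this within-class relatedness is the intraclass correlation coefficient, which measures the share of score variance that sits between classes rather than within them. The sketch below is an illustration with invented data (the class sizes, teacher effects, and rating scale are all assumptions), using the one-way random-effects ICC(1) formula; it is not a description of EBE's exact analysis:

```python
import numpy as np

def icc1(scores_by_class):
    """One-way random-effects ICC(1): proportion of variance between classes."""
    groups = [np.asarray(g, dtype=float) for g in scores_by_class]
    k = len(groups)                                  # number of classes
    n = sum(len(g) for g in groups)                  # total students
    grand = np.concatenate(groups).mean()
    # Between- and within-class mean squares, as in a one-way ANOVA.
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    n0 = np.mean([len(g) for g in groups])           # balanced-design class size
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

# Hypothetical ratings: classmates share a teacher effect, so they agree more
# with each other than with students in other classes.
rng = np.random.default_rng(2)
classes = [rng.normal(loc=teacher_effect, scale=0.5, size=25)
           for teacher_effect in rng.normal(3.5, 0.7, size=12)]
print(f"ICC(1) = {icc1(classes):.2f}")
```

An ICC well above zero, as in this simulated example, indicates that responses cluster by class, which is the pattern we would expect if the survey is picking up something about the shared teacher rather than only individual student idiosyncrasies.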

Final thoughts for professional development

To conclude, we found evidence that supports the validity of the student survey tools: they can effectively support teacher professional development. However, the validation work does not stop here! We plan to continue this work with the video observation tool and the other tools included in the Great Teaching Toolkit. We are doing all of this to ensure that teachers can use valid and trustworthy professional development tools to improve their practice and support their students.

We know the value of effective feedback for professional development, and our aim is to keep improving the feedback from the most important people. You can learn more about the feedback tools included as part of the Great Teaching Toolkit here.


References

Coe, R., Rauch, C.J., Kime, S., & Singleton, D. (2020). Great Teaching Toolkit: Evidence review. Evidence Based Education. https://evidencebased.education/great-teaching-toolkit-evidence-review/

Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1-73. https://doi.org/10.1111/jedm.12000

Rubie-Davies, C. M., & Hattie, J. A. C. (2012). The dangers of extreme positive responses in Likert scales administered to young children. The International Journal of Educational and Psychological Assessment, 11(1), 75–89.

Ventista, O. M., & Brown, C. (2023). Teachers’ professional learning and its impact on students’ learning outcomes: Findings from a systematic review. Social Sciences & Humanities Open, 8(1), 100565. https://doi.org/10.1016/j.ssaho.2023.100565
