Forum

Please use the forum to leave questions and comments for the authors of talks and posters.


Poster 35: Whole-class questions in the active classroom: digging deeper  


VickyMason
(@vickymason)
Eminent Member
Joined: 9 months ago
Posts: 32
 

A really interesting poster, thanks. I wonder whether you are planning to make changes to your questions now, based on these results, and how you will go about this?


RossGalloway
(@rossgalloway)
New Member
Joined: 9 months ago
Posts: 1
 

Thanks!

Yes, I'm planning to revise the questions in light of this. The top priority will be the questions with a negative discrimination index (where more able students, as measured by the rest of the questions, are more likely to get them wrong). These questions are likely to be broken in some way: potentially misleading, or making unspoken assumptions that trip up more able students (who often get very hung up on 'rigour' when an experienced physicist would actually be happy to make an appropriate simplifying assumption). Classical Test Theory is very important for identifying these, as they only show up in the context of the behaviour of the class across the entire question set. Just eyeballing individual question data tends not to highlight them, as they might otherwise be entirely innocuous-looking questions.
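
For concreteness, here is a minimal sketch (not the analysis behind the poster) of how a negative discrimination index can be flagged from a students-by-questions matrix of right/wrong marks. The item-rest correlation used here is one common CTT choice, and all data and variable names are placeholders:

    import numpy as np

    # Hypothetical 0/1 score matrix: rows are students, columns are questions.
    # (Placeholder random data; the real input would be the class's responses.)
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 2, size=(120, 20))

    def discrimination_index(scores, item):
        """Point-biserial correlation between one question and the 'rest score'
        (the total over all other questions). Negative values flag questions
        that students who do well on the rest of the set tend to get wrong."""
        item_scores = scores[:, item]
        rest_scores = scores.sum(axis=1) - item_scores
        return float(np.corrcoef(item_scores, rest_scores)[0, 1])

    flagged = [q for q in range(scores.shape[1])
               if discrimination_index(scores, q) < 0]
    print("Questions with negative discrimination index:", flagged)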

The other thing will be to increase the challenge level of the easier questions generally, as measured by the difficulty index. This might also help to promote consistent student engagement. (In principle you can see this by eyeballing individual question data, but I hadn't quite appreciated the magnitude of the issue until seeing the CTT output.)
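
A similarly hedged sketch for the difficulty (facility) index, again on placeholder data: in this convention it is simply the proportion of correct answers per question, so values near 1 identify the easy questions mentioned above.

    import numpy as np

    # Same kind of hypothetical 0/1 score matrix as in the sketch above.
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 2, size=(120, 20))

    # Classical difficulty (facility) index: the proportion of students who
    # answer each question correctly. Values near 1 mark very easy questions,
    # i.e. candidates for an increased challenge level.
    difficulty = scores.mean(axis=0)
    too_easy = np.flatnonzero(difficulty > 0.85)  # 0.85 is an arbitrary cut-off for illustration
    print("Very easy questions:", too_easy)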

