ZOMG! I have so many thoughts for you. I’ll try to be concise with the essential ones. They are in order of descending importance.
Remember that by Friday at 11:59pm you should submit your Project Update/Deliverables/Rubric. You will have a chance to change and update in October, but be as precise as you can now about what you might do. You need to do this on time because the subsequent peer assessment depends upon it. If you are in a group, every member of the group should submit an identical proposal, so that everyone gets asked to complete two peer evaluations.
An Important Clarification about the November 19 Presentations
I have not yet adequately described the format of the presentations on November 19. I’ll add it to the syllabus, but for now, let me describe it here. Basically, the presentation format will be a kind of poster fair. We’ll break into two groups: half of the class will wander about talking to colleagues, and the other half will staff tables with posters, laptops, and so forth. You will have a chance to have lots of short interactions, one-on-one or one-on-few, with the teaching team, classmates, and guests. So you won’t get up and do a 5-minute presentation in front of everyone; rather, you’ll get a chance to share a nugget with lots of small groups. After an hour, we’ll switch who is presenting and who is wandering. That may be helpful as you think about what your presentation criteria should look like. Think about something that people can do or see.
Then, on Saturday at midnight, you should be able to log into Canvas (canvas.harvard.edu) and access the two peer projects to give feedback on. The rubric for giving feedback should automagically pop up for you, but if you have questions, please let me know. Please complete this feedback by Wednesday.
The rest of the readings and assignments should be both modest and straightforward. Thanks for experimenting with the peer assessment system with us.
Follow up from last week’s class
1) An Apology – First, I’m very grateful to those of you who described your difficulties in finding and completing the quizzes. I apologize for making light of the fact that some people didn’t do them, when it was in fact my fault for not being clearer in both the description of the activity and the syllabus. You should be able to go back and do the questions. Hopefully, it’s a lesson in how different people interpret and experience the same platform.
2) Why did we study IRT – I’m not sure I explained to my satisfaction why we studied IRT, and there are several reasons. The first is simply that I’m trying to design the course so that we immerse ourselves in a variety of modalities, and IRT lent itself well to a video-quiz-discussion-class-followup format. It was a good topic to explore in an xMOOC kind of way. Second, I wanted you to understand that when you hear people talking about “adaptive testing” in computer-aided instruction, it’s not techno-babble beyond the realm of human comprehension. There is a logic underneath it that isn’t so hard to understand conceptually, and if you understand how it works and what it’s doing, you can understand what it might be good for and what some of the problems might be. You might understand better, for instance, why the opportunity to more “efficiently” test student proficiency might be a good fit with a vision of personalization that tries to optimize individual paces and pathways through content; or you might understand better why a teacher can’t just build their own adaptive testing system, since it depends upon large item banks normed on many students. Third, I also wanted you to ask a question that Lucas asked: “Is modern technology necessary for IRT? It seems like an old idea that intelligent tutors have just made more accessible.” IRT is indeed older than some of you, with roots going back decades, and it’s been used (and adapted and improved) in intelligent tutors like Cognitive Tutor for many years. Khan Academy’s use of adaptive testing is a continuation of past efforts, not a new initiative.
I hope you understand IRT well enough to articulate something like: “IRT is a statistical toolkit that characterizes the qualities of test items. By creating measures of the difficulty and discrimination of test items, we can compare test items, compare test takers who take different items, and be more efficient in using items to precisely characterize student proficiency.” But as we’ll discuss next week, there are certain kinds of items that lend themselves better to large-scale assessment than others!
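For those of you who like to see the machinery, here is a minimal sketch of the standard two-parameter logistic (2PL) model that underlies this kind of analysis. The item parameters below are invented purely for illustration; they are not drawn from any real item bank, and real IRT work involves estimating these parameters from large numbers of student responses.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: the probability that a test taker
    with proficiency theta answers correctly an item with
    discrimination a and difficulty b. (Parameters here are
    hypothetical, for illustration only.)"""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two made-up items: an easy, weakly discriminating item
# and a hard, sharply discriminating one.
easy_item = {"a": 0.8, "b": -1.0}
hard_item = {"a": 2.0, "b": 1.0}

# Probability of success rises with proficiency for both items,
# but the curves differ in location (difficulty) and steepness
# (discrimination).
for theta in (-2.0, 0.0, 2.0):
    print(theta,
          round(p_correct(theta, **easy_item), 2),
          round(p_correct(theta, **hard_item), 2))
```

Notice that when a test taker’s proficiency exactly matches an item’s difficulty (theta equals b), the probability of success is exactly 0.5; the discrimination parameter a controls how sharply that probability changes around the difficulty point. An adaptive test exploits this by repeatedly choosing items whose difficulty sits near its current estimate of the student’s proficiency, which is why it can pin down proficiency with fewer items than a fixed test.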
We will continue these discussions into next week, as we think more about self-assessment, peer-assessment, and machine-learning.
3) Personalization – If you were thinking, “Gosh, this weekend I totally wish I could listen to Justin natter on for an hour about Personalization in education,” you’re in luck: I added a talk to the Rabbit Hole viewings for last week, Personalized Learning, Backpacks Full of Cash, Rockstar Teachers, and MOOC Madness.
4) Thank you – I’m having a lovely time getting to know you, getting challenged by your thinking, and hearing about your projects and interests. Thank you! See you Wednesday.