Spotlight

--Originally published at Learning Education

Anonymous surveys are a common occurrence in classrooms. Yet they may not be very anonymous. Through a process known as re-identification, it is quite possible for instructors to tie your responses back to your name. Which survey tools have this problem? And what can instructors do to reassure students that this is not happening?

Anonymous surveys are an important part of a course, from gauging student backgrounds at the start to gathering feedback on teaching near the end. For the instructor, they can provide invaluable feedback on how the course has been running. For the student, they provide an opportunity to be candid and to help improve how the course will be taught in the future. At some institutions, they even play important roles in faculty promotions:

Student ratings are high-stakes. They come up when faculty are being considered for tenure or promotions. In fact, they’re often the only method a university uses to monitor the quality of teaching.

(Read more at npr.org)

But a critical feature of an anonymous survey relies heavily on the assumption of actual anonymity. Take, for example, a scenario at NYU Abu Dhabi where an instructor asked for potentially too much personal information in a survey:

The survey originally asked respondents to provide information about three friends’ and a significant other’s sex, major, religion, ethnicity and home country. In addition, it requested that respondents rate their identified friends’ and significant other’s political and religious views, and rate their attractiveness on a scale of 1 to 10. Respondents were supposed to provide the same information about themselves and about whom they considered the most popular person in the university. Senior Alexander Peel said he felt uncomfortable with the amount of intimate information students in the Gender and Society class would have when examining responses to the survey.

(Read more at thegazelle.org)

The instructor defended her actions by saying that personally identifiable information, like names, was removed from the survey responses and that the responses were kept confidential to her and a small research team.

Overall, the survey worried the students, so what’s to stop them from answering untruthfully? My belief is that a student will stop answering truthfully when their concerns about the privacy of their responses outweigh the benefits they expect from the survey’s results. Unfortunately, that threshold is quite hard to infer, for both the student and the instructor.

So how does one effectively draw the line? In an ideal world, an instructor would want all students to answer truthfully, and a student would want assurance that their responses cannot be linked back to them. Let’s take a look at a popular survey tool to see where it nets out.

Socrative
Socrative is a popular interactive online survey tool that is especially useful for live feedback during a class. One of the ways Socrative provides anonymous surveys is by hiding names (similar to the mechanism the instructor at NYU Abu Dhabi used):

Socrative Instructor View

When students answer, the instructor can then view the results (with the names removed):

Viewing Student Responses

So there are no student names, and everything looks quite OK at first glance. But one could imagine the same survey given to the NYU Abu Dhabi students being administered via Socrative. The problem here is that the survey questions themselves may elicit clues that point to particular students in the classroom, which is exactly what the NYU Abu Dhabi students were concerned about:

England responded … by eliminating the name items from the survey … However, some students remained worried at the possibility of deductive identification. When consulted about this, England explained that although a possibility, this is not the objective of the exercise.

(Read more at thegazelle.org)

So it seems that Socrative provides no more assurance to students than the NYU Abu Dhabi instructor could, since both rely on the same privacy mechanism: removing personally identifiable information.

So where does that leave us? Do the students even have a valid concern? The answer from privacy researchers is a resounding yes.

Re-Identification

Re-identification is a technique that takes data that looks anonymous and reattaches the identifying information that was stripped from it. It has been used to identify Netflix users and their movie ratings from an anonymized data set, to link names to anonymous genome project data sets, and even to find the medical records of the governor of Massachusetts in an anonymized data set. Re-identification is possible because other databases can supply the missing information: by joining two databases together, one can learn more than either database reveals by itself.

Re-Identification: Linking Two Data Sets Together

So how might re-identification work in a classroom survey? The answer revolves around linking external information with the anonymized responses. This might sound tough at first, but think about who has access to the survey results: someone who has had close contact with each respondent multiple times a week. Add to that that the responses are probably related to the class material, something the instructor has multiple data points on for each student (emails, questions in class or office hours, submitted essays, homework, etc.).

Take an example from the Socrative screenshot. What if we knew that one student in the class loved to talk about candy (even if they did not realize it)? We would then have a strong reason to believe which student gave the response “Too much candy is bad.” In this case a strong suspicion may be harmless, but what if the survey asked how well the instructor taught the class? A student may be quite embarrassed to find that their candid review of the instructor was linked back to them.
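To make this concrete, here is a toy sketch of that kind of linkage attack. Every name, field, and response below is hypothetical; the point is only to show how someone with background knowledge can join “anonymized” answers back to names:

```python
# A toy linkage attack on entirely hypothetical data. Names were stripped
# from the survey, but quasi-identifiers (major, home country) remain,
# and the instructor already knows those for every student.

roster = [
    {"name": "Student A", "major": "Economics", "home_country": "Spain"},
    {"name": "Student B", "major": "Biology", "home_country": "Spain"},
    {"name": "Student C", "major": "Economics", "home_country": "Japan"},
]

anonymous_responses = [
    {"major": "Economics", "home_country": "Japan",
     "answer": "Too much candy is bad."},
]

for response in anonymous_responses:
    # Join the "anonymous" response against the roster on the
    # quasi-identifiers it still carries.
    matches = [student["name"] for student in roster
               if student["major"] == response["major"]
               and student["home_country"] == response["home_country"]]
    if len(matches) == 1:
        # A unique match re-identifies the respondent.
        print(f'{matches[0]} likely wrote: {response["answer"]!r}')
```

The more questions a survey asks, the more quasi-identifiers each response carries and the more likely a unique match becomes, which is exactly the deductive identification the NYU Abu Dhabi students worried about.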

What Can We Do?

In my mind, students are already implicitly aware of the issues surrounding re-identification, but may not appreciate its power. On the other side of the classroom, I believe that instructors who give anonymous surveys are not actively trying to link responses back to the students who made them (even though they might be able to).

Yet this does not seem to be a very fulfilling conclusion. To me, we have a gaping hole in the way anonymous surveys are presented: they can be disingenuous, and not very useful to either the student (whose privacy could be compromised) or the instructor (who may receive fake responses from students fearing compromised privacy).

Luckily, there are techniques that can still provide strong privacy guarantees. Differential privacy is a particularly strong notion of privacy that prevents users from being re-identified. Many differentially private mechanisms could be incorporated into classroom survey tools like Socrative, resulting in a survey where students do not have to worry about their responses being linked back to them. The increase in privacy would come at the cost of accuracy: instead of seeing “exactly 2 out of 17 people said A,” you might see something like “roughly 10-22% of people said A.”
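To illustrate, here is a minimal sketch of randomized response, one of the oldest differentially private mechanisms. This is my own toy example, not a feature of Socrative or any other tool: each student reports the truth only with some probability, so every individual report has plausible deniability, yet the class-wide fraction can still be estimated:

```python
import random

P_TRUTH = 0.5  # probability a student reports their true answer

def randomized_response(truthful_answer: bool) -> bool:
    """With probability P_TRUTH report the truth; otherwise report a
    uniformly random answer, giving the respondent plausible deniability."""
    if random.random() < P_TRUTH:
        return truthful_answer
    return random.random() < 0.5

def estimate_true_fraction(reports):
    """Debias the noisy reports. Since
    E[reported yes-rate] = P_TRUTH * f + (1 - P_TRUTH) * 0.5,
    we can solve for the true fraction f."""
    reported = sum(reports) / len(reports)
    return (reported - (1 - P_TRUTH) * 0.5) / P_TRUTH

# A hypothetical class of 17 students, 2 of whom truly answer "yes".
truth = [True] * 2 + [False] * 15
reports = [randomized_response(t) for t in truth]
print(f"Estimated fraction answering yes: {estimate_true_fraction(reports):.0%}")
```

With only 17 students the estimate is quite noisy (it can even come out negative), which is exactly the accuracy cost described above; the noise washes out as the number of respondents grows.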

Wrapping It Up 

I hope you think twice about participating in an anonymous survey. It might be safer to assume that the survey creator can see your name together with the responses you made.



--Originally published at Rachel's Ramblings

This course is awesome. No, really – I find myself talking about it all the time to my friends, family, work colleagues … They’re probably tired of hearing me talk about “that Massive class”, but I find the material incredibly compelling, thought-provoking, and simply interesting. In many ways, I wish I could put my other responsibilities on hold and just be a “full-time student” digging into this learning. But alas, that’s only a dream. In reality, I’m squeezing in as much reading and research as possible – even cutting into an hour or two of sleep some nights (which, if you know me, is a really big deal). Totally worth it.

In any case, below is my updated participation rubric. A few things have changed from the original posting, but not many. The biggest change is that I’d like to make it a priority to produce my own content (blogs, tweets, etc.) AND actively read / engage with what my colleagues are producing. To date, I’ve only focused on the first … and again, speaking to being that full-time-student-of-just-T509, I’m missing out by not considering the thoughts of my peers.

One side note – now that I’ve gotten into the online world in a new way, I’m not as timid about writing my ideas as I initially was. I’m still quasi-fearful of what others might post / respond to, but my main concern is having the time to write my thoughts down. (Working a full-time job, covering another full-time job for a colleague on maternity leave, and being a part-time student will do that.) I’ve been keeping a notepad log of ideas – most days, I have 3-4 themes to consider writing about.

  • BLOG POSTS: I’d like to (a) find a way to write honestly about my thoughts / opinions without fear, and (b) post something meaningful at least once a week.
    • I’d exceed expectations if: I felt committed to what I was writing, regardless of what others might think about my opinions.
    • I’d meet expectations if: I posted weekly, but some of my posts were seemingly PC and generic because I couldn’t find a way to address my fear and/or write about what was on my mind that week.
    • I’d fall short of my expectations if: I failed to post weekly and most of my posts weren’t engaging in any way.
  • TWEETING: My goal for using Twitter is to find a way to take in information and let go of the need to consume all the information out in the Twittersphere.
    • I’d exceed expectations if: I sign on to Twitter daily, reading a variety of interesting content and engaging with the Twitterverse in some way – tweeting myself, retweeting another, or tweeting as part of an online dialogue.
    • I’d meet expectations if: I achieve the goals above, yet only do so 3-4 times a week.
    • I’d fall short of my expectations if: I only signed on to Twitter and posted once weekly.
  • #t509 CLASS CONTRIBUTIONS: I find myself saying this a lot around HGSE, but I mean it every single time – that is, this is one of the coolest classes I’ve ever taken! After five weeks, I wish I could be a “full-time student of T509” and do nothing else but read, research, and go down rabbit holes of learning. It’s just that engaging. Because of that passion, I’d like to commit to helping to stimulate the #T509massive group thinking.
    • I’d exceed expectations if: (1) I find a way to verbally contribute in some of our remaining classes; (2) I make it a priority to read / comment on other students’ blogs weekly; and (3) I’m able to share ideas and information that are of interest to my classmates.
    • I’d meet expectations if: I achieved the three goals above, but only did so minimally (3-4 times over the remaining weeks).
    • I’d fall short of my expectations if: I contributed towards the goals listed above only 1-2 times in the remaining weeks.
  • ONLINE EXPLORATIONS: Since this is the first time that I’ve committed to being active in and exploring online forums, I’d like to go down as many rabbit holes as possible to dig into new ideas, opportunities, and forums that contribute to online learning experiences.
    • I’d exceed expectations if: I was able to consistently share and/or raise awareness about articles, news items, forums, and/or online platforms that could in some way contribute to the class learning.
    • I’d meet expectations if: I contributed only a few (4-5) outside ideas throughout the remainder of the semester.
    • I’d fall short of my expectations if: I didn’t make this exploration a priority of my learning at all.

Week 5 Wrap; Week 6 Preview


This one should be pretty brief.

Catching Up: If you have missed any work from the past two weeks – the IRT quizzes, the peer review, or (gasp) the project proposal – get it submitted ASAP or reach out to me.

Syllabus Update: I’ve updated the syllabus from the past weeks with a few additional lecture videos and resources. I did notice walking around yesterday that a few of you seem to take terrific notes during class. If you ever want to post them or scan them into the syllabus, just send me the link (or add it to the doc in a comment) and I’ll add it. A few course videos seem to be lost at HGSE IT, but we are trying to get them published.

I also added a few rabbit-hole readings for next week. Unlike previous weeks, the required readings for our discussion on blended learning trend towards the celebratory, and our guest speaker is an advocate. So put your hackles up. The readings I’ve added, many of them my own posts, are more critical.

Due Wednesday:  Revisions to your participation rubric are due on Wednesday Oct. 8, and should be delivered by tweet or email.

In Class: We’ll be joined by Julia Freeland of the Clayton Christensen Institute for Disruptive Innovation, and I provided a link in the syllabus for you to read some of her stuff. In the latter part of class, we’ll self-organize into some discussion groups to think about takeaways as we wrap up our under-the-hood section.

Have a wonderful weekend!


--Originally published at Ed Tech Wannabe


David’s blog post on EdForward took the words right out of my mouth regarding Connectivist learning. In addition to his points on meta-learning, I feel that the mere fact that we are struggling through this is a learning experience. How are we to design learning environments if we haven’t completely immersed ourselves in each type?

While I struggle to always motivate myself to post something, I am seriously benefiting from reading my peers’ thoughts and findings. I wouldn’t be writing this blog post if I didn’t get something out of what David and Chris posted. Last week, I wasn’t sure what I thought about IRT. I went to the blog hub and read several different viewpoints and subsequently was able to shape my own perspective.

I see this class as an opportunity to experience the benefits and problems of Connectivism while simultaneously learning about different types of learning environments. By the end, I hope to apply my experiential learning to each of the environments to gain a full understanding of Connectivism.

We need to define the problems to develop the solutions. By experiencing and learning what works and what doesn’t, we can design better learning environments in the future (whether that includes Connectivism or not!)

Originally posted on EdForward:

Chris Swimmer, a fellow student in Massive: The Future of Learning at Scale, recently wrote a great piece on some of the pitfalls of connectivism, as we experience it in this course.  If you’re reading this and aren’t part of the course, our class follows a connectivist model, in which students contribute different things and take away different things, depending on their skill sets and interests.  In theory, the differentiation strips away much of the “busy work” that people often experience in formal education, and what’s left is a community of students whose collaboration mirrors that of “real-world” workplaces.  (You can read more in-depth about culture and connectivism here.)

I wanted to respond to Chris’s comments not because I disagree, but because I think they highlight an important element of many HGSE courses that focus on new (or unfamiliar) pedagogies, and I’m trying to make sense of the purpose behind it…




--Originally published at Kezirian's Quandries

This week in my world of graduate studies, we examined the effectiveness of peer evaluation. Having peers evaluate each other’s work is just one of many options Massive Open Online Courses (MOOCs) are exploring to handle their large enrollment numbers. However, as we study the pros and cons of peer evaluation, it is important to think about how we would feel with a peer we didn’t know examining our work.

There are plenty of things I really dislike, including birds and heights, but one of my prominent fears is receiving scrutiny from my peers. Having your writing reviewed by anyone, especially by a peer who you have never met, is a pretty intimidating thing. How do you know they are taking it seriously? How do you know they are not talking trash about your work with others? All these are fears I have had my entire life.

Although I see the possible value of effective peer evaluation systems (as mentioned in Online Learning Insights), I am afraid to fully commit to it. We live in a society that generally looks down upon failure, and therefore we are less open to being critiqued by our peers, or anyone else for that matter. I saw it each and every day in my own 8th grade classroom, where my students would be hesitant to experiment but were eager for me to “just tell them the answers”. Even though I am not praising this aspect of society, it is part of the world we live in and shouldn’t be ignored when designing MOOCs.

At this moment in time, Automated Essay Scoring (AES) is a valid option for grading essays in MOOCs. As Justin Reich mentioned in his blog on EdTechTeacher, scores assigned by AES correlate with those assigned by professors. AES appears to be highly accurate and a good option for participants in MOOCs.

This feeling needs to be remedied before peer evaluation is ever going to become an effective grading tool. (Source)

Don’t get me wrong, peer evaluations definitely have a place in the learning process and should be utilized much more frequently, but they should be a formative assessment and not a summative one. Our educational system and society at large would be greatly enhanced if we were able to truly accept experimentation and failure. Until we foster this at a young age, risk taking will continue to be limited among teens and adults in society.

Peer evaluation places a large responsibility on the grader and the grade recipient, neither of whom has ever really been trained for these roles. The grader was probably rarely taught how to provide effective feedback and constructive critiques. The grade recipient was probably never trained to value their peers’ opinions, nor taught that it is okay to have areas for improvement. Forcing participants to be at the whim of a peer evaluation system for their summative grade seems illogical, especially when AES systems are in place that align with the social norms we are more familiar with.


--Originally published at Allison Goldsberry

I teach web design at Medford High School. Web design is not considered an academic subject and is not part of Massachusetts’ extensive standardized testing regimen. In that regard, I have plenty of flexibility in terms of how I build my curriculum, how I deliver my instruction, and how my students’ skills are evaluated.

Honestly, I could give every student an A but that does not mean he/she knows how to build a website, understands the difference between HTML and CSS, or can incorporate jQuery into a project. What skills students learn and their own motivations for taking my class are far more important to me. I make this very clear throughout the year as we learn new things.

I also emphasize and encourage their own goal setting and self-evaluations. My students will evaluate themselves throughout the year on a variety of things, such as their progress toward their goals (e.g., “I want to build a website” or “I want to make a game”) and certain important skills. I have them rate themselves on things like creativity, trying hard and not giving up, being organized, etc. This helps them evaluate themselves on things that are truly important and hopefully makes them more reflective about what they learn and their key role in the learning process.

I have also found making my own evaluation rubrics in this class has been very helpful. It has forced me to really think about what I want to get out of the class and how I’m going to make that happen.