
Monthly Archives: September 2014


--Originally published at Follow My Never-Ending Learning Adventure

I have always thought that to learn something (a skill, an instrument, a language, you name it), you needed to be methodical about how to approach it. I always envied those who could learn to play an instrument by simply doodling with it or playing “by ear.” The same goes for languages. I could never really grasp those skills organically. I always need to have an empirical approach.

Part of my learning journey is to come out of that comfort zone and simply try to learn by doing…not over-thinking.

As part of a Harvard class, “Designing for Learning by Creating,” taught by MIT alum Karen Brennan, my class had the pleasure of hearing Mitch Resnick speak about the Scratch program he helped develop at MIT. This was on the heels of our class creating and presenting our own Scratch projects. Here’s a link to mine…don’t expect to be impressed, but at least I tried.

I wondered how children were so adept at learning this program when it was taking some of us hours to create a 30-second game or simulation to present. Then I heard Dr. Resnick speak about the learning that takes place in children and the mode in which they learn. He attributed the ease with which children learn to their approach: lots of experimentation, color, activity, creation, and a lack of fear of failure. We should all go back to kindergarten!

 

 

Mitch Resnick and Karen Brennan at Harvard Graduate School of Education

 

Maybe my problem is that I never went to kindergarten: I started first grade at age 6 in England, not speaking a word of English. Note: a moment of epiphany…this explains so much about me. I did not build with blocks, draw, or play. Perhaps I missed out on one of life’s great rites of passage?

Time for redemption!

If you missed Dr. Resnick’s TED talk, it’s available on YouTube. 


--Originally published at Follow My Never-Ending Learning Adventure

Some say you can never go back. Well, today I tried to prove them, whoever they are, wrong. I tried my hand at making something technical, not creative.

In a previous post, I cited Mitch Resnick’s kindergarten learning philosophy, and mentioned that I never went to kindergarten and perhaps missed an opportunity for a stimulating, immersive, hands-on learning experience. I suppose I still turned out alright :)

It’s never too late, I reminded myself, so today I went to a Maker Space that Dr. Brennan hosted, complete with Makey Makey (yes, that’s the company’s name), a craft table, Circuits in Seconds, and more. I was not sure if I could even work with any of the materials. Anything related to engineering and/or mechanics is just not something I can wrap my head around. I considered it a triumph when I changed all the hardware on my cabinets and doors (including locks!) in my home. I’m no Bob Vila, so I will take what I can get.

So I dove into the Makey Makey circuit “toy” and read the instructions, complete with pictures for those of us who are technically challenged. Think the IKEA instruction booklet, but better.

 

A Makey Makey Circuit Kit

The premise was to connect one end of a USB cable to the computer and the other to the circuit board, then attach an alligator clip to the contact marked “space.” While I held the other end of the alligator clip, my body conducted electricity through to the circuit, so each time I touched the “space” contact on the board, the cursor on my screen moved one space, in effect like hitting the space bar on the keyboard.

The exercises continued to build in complexity, and I use that term loosely: adding more alligator clips to different contacts and finding other conductors of electricity, a pair of scissors, for example, or a carrot. Yes, a carrot, which contains water, a weak electrolyte and conductor of electricity. Now touching the carrot would trigger the command and elicit a computer response to move the cursor a space.
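For the technically curious: the Makey Makey essentially behaves as a USB keyboard, so closing a circuit on one of its contacts is reported to the computer as a key press. Here is a minimal Python sketch of that idea, purely illustrative; the contact names and the contact_closed helper are hypothetical, and the real board does all of this in hardware, with no software needed on the computer.

# Illustrative sketch only: the real Makey Makey is a USB keyboard device.
# This toy loop just shows the idea of mapping closed circuits to key events.

CONTACT_TO_KEY = {          # hypothetical board contacts -> keys they emulate
    'space': 'SPACE',
    'left': 'LEFT ARROW',
    'click': 'MOUSE CLICK',
}

def contact_closed(contact):
    # Hypothetical stand-in: on the real board this is hardware sensing
    # whether current flows through the contact (e.g., via you and a carrot).
    return False

def poll_board():
    for contact, key in CONTACT_TO_KEY.items():
        if contact_closed(contact):
            print('circuit closed on %r -> sending %s' % (contact, key))

poll_board()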

Finally, the end product: playing the bongos on the computer using two carrot sticks. Check out the video below to see my bongo skills in action.

[Video: playing the bongos with two carrot sticks]

I was surprised to find a video from Hackidemia and TEDxYouth with some amazing students creating a house of sounds, complete with rooms and objects that you play as instruments, in an “interactive play space.” They did NOT have this when I was a kid!

 


--Originally published at Carli's T509 Blog

Helpouts by Google, sometimes referred to as Google Helpouts, makes use of the tools that Google developed for the Hangouts platform to offer classes. As such, it is an interesting and in some ways unusual example of online education. The idea behind the tool is that anyone can teach about any topic online using a platform that is essentially Google Hangouts with a few additional features. Interested students can browse through available teachers and topics by subject area or they can search for a specific topic. Topics taught run the gamut from balloon sculpture (a topic on which I participated in a Helpout for a class last spring) to computer programming. Helpouts can be offered for free or instructors can charge whatever they believe is a fair price and the platform will facilitate payment. Individual Helpouts vary greatly in length and teaching approach as well, which makes sense since anyone can request an invite to create and offer a class on the platform. The video below provides some background on the platform.

The question of whether Helpouts would qualify as “massive” education is one that I find interesting and it has made me reconsider exactly what we mean by “massive” in this context. At first glance, Helpouts seem to clearly not comport with our definition of the word “massive” because each interaction on a Helpout is between a single instructor and a single student (though it would be theoretically possible for several people to congregate in front of a single screen to watch a Helpout). However, the platform does help instructors who would otherwise have difficulty reaching a large number of students to offer their instruction worldwide. Prior to considering this, I had solely thought of “massive” as describing platforms that allowed for simultaneous participation by a large number of students, but thinking about this platform has made me realize that there are other possible ways to think about this topic.

The question of whether Helpouts are open also depends on the exact definition of the term that we adopt. The platform itself is not open as it is controlled exclusively by Google and the content created by individual instructors remains locked in the platform except for the limited number of Helpouts that allow students to record and save their sessions. However, this level of control is also exerted over many MOOC platforms and, at the level of individual classes, some Helpouts are free, which does offer a specific type of openness.

Helpouts are clearly online, but the final question of whether they are courses is also debatable. As each instructor designs their own materials, Helpouts vary in terms of teaching approach and level of formality. Helpouts are assumed to be single sessions, but there is nothing to prevent instructors and students from continuing to work with one another over the course of a number of Helpouts to create more of a traditional course. Without more support for a curriculum and the sharing of other course materials such as readings and other media, it does not seem that Helpouts support traditional, structured courses. In fact, many of the available Helpouts are more along the lines of exactly that, help for discrete problems, questions or issues. Ultimately, I think that Helpouts are significantly different from the types of massive education that we have been considering in class, but they nevertheless help us to consider the boundaries of our definitions and also show another possible approach to sharing education online.

The table below, which I will add to throughout the semester, will sum up the features of each of the platforms that I discuss on this blog.



--Originally published at t509blog - Dylan Erb

I found myself wondering what alternatives there are to item response theory (IRT). I soon discovered that IRT falls under the umbrella of psychometrics along with a number of other theories, and so I thought it might be interesting to map them out.

Chart informed by https://en.wikipedia.org/wiki/Psychometrics

Once again, I wrote a Python script to generate this chart. You can take a look at the code below:

import pygraphviz as pgv

textfile = 'diagram3'
title = 'Psychometrics: IRT in context'

# Nodes of the concept map; '\n' splits long labels onto two lines.
nodes = ['Psychometrics',
         'Field of Study',
         'Psychological\nMeasurement',
         'Skills',
         'Knowledge',
         'Abilities',
         'Attitudes',
         'Personality\nTraits',
         'Educational\nAchievement',
         'Classical\nTest Theory',
         'Item Response Theory',
         'The Rasch\nModel',
         'Latent\nTrait Theory',
         'Strong True\nScore Theory',
         'Modern Mental\nTest Theory']

# Directed edges as [source, target, label]; the trailing spaces in the
# labels nudge where graphviz places them relative to the arrows.
edges = [['Psychometrics', 'Field of Study', 'is a              '],
         ['Psychological\nMeasurement', 'Skills', 'evaluates      '],
         ['Psychological\nMeasurement', 'Knowledge', 'evaluates      '],
         ['Psychological\nMeasurement', 'Abilities', 'evaluates      '],
         ['Psychological\nMeasurement', 'Personality\nTraits', 'evaluates      '],
         ['Psychological\nMeasurement', 'Attitudes', 'evaluates      '],
         ['Psychological\nMeasurement', 'Educational\nAchievement', 'evaluates   '],
         ['Classical\nTest Theory', 'Psychological\nMeasurement', 'is a technique for                             '],
         ['Item Response Theory', 'Psychological\nMeasurement', 'is a technique for   '],
         ['The Rasch\nModel', 'Psychological\nMeasurement', 'is a technique for            '],
         ['Item Response Theory', 'Latent\nTrait Theory', 'is also known as         '],
         ['Item Response Theory', 'Strong True\nScore Theory', 'is also known as        '],
         ['Item Response Theory', 'Modern Mental\nTest Theory', 'is also known as         '],
         ['Item Response Theory', 'Classical\nTest Theory', 'outperforms'],
         ['The Rasch\nModel', 'Item Response Theory', 'is a special case of     '],
         ['Psychometrics', 'Item Response Theory', 'encompasses               '],
         ['Psychometrics', 'Classical\nTest Theory', 'encompasses'],
         ['Psychometrics', 'The Rasch\nModel', 'encompasses              ']]

# Weed out duplicate edges, preserving order.
unique_edges = []
for x in edges:
    if x not in unique_edges:
        unique_edges.append(x)

# Weed out duplicate nodes, preserving order.
unique_nodes = []
for x in nodes:
    if x not in unique_nodes:
        unique_nodes.append(x)

print(unique_nodes)
print(unique_edges)

# Build the directed graph (strict=False allows parallel edges).
G = pgv.AGraph(strict=False, directed=True)

G.add_nodes_from(unique_nodes)
for edge in unique_edges:
    G.add_edge(edge[0], edge[1], label=edge[2])

G.graph_attr['label'] = title
G.graph_attr['labelloc'] = 't'
G.graph_attr['overlap'] = 'false'
G.node_attr['shape'] = 'oval'

# Save the DOT source, then lay out and render the chart.
G.write(textfile + '.dot')
G.layout(prog='dot')
G.draw(textfile + '.png')
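If you want to run the script yourself, note that it depends on pygraphviz (which in turn needs the Graphviz tools installed on your system); it writes the finished chart to diagram3.dot and diagram3.png in the working directory.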


--Originally published at Language, Literature, & Inspiration

Automated Essay Graders – In Part 1 of this blog post, I discussed how Turnitin.com can improve the essay-grading process for teachers. In addition to letting teachers easily add comments on digital copies of essays, it provides an automated plagiarism and grammar check. It does not, however, include an automated grading option. Before last week, I would have said that was a good thing, since I was against the principle of automated graders. However, I now have a new understanding of automated graders that may lead me to consider using them in the future.

  • With automated grading systems, teachers must first upload a set of graded essays to “train” the program (Reich, 2012).
  • When the system “grades” new papers, it predicts what score the teacher would have given the essay, based on patterns and information it collected from the sample set (Reich, 2012). A toy sketch of this train-then-predict workflow follows below.
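The readings do not spell out a particular implementation, but the train-then-predict workflow can be sketched in a few lines of Python. This is a toy illustration only, assuming scikit-learn and crude bag-of-words features; the essays and scores below are invented, and real systems use far richer features.

# Toy sketch of "train on teacher-graded essays, then predict new scores."
# Assumes scikit-learn; real automated graders are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Hypothetical training set: essays the teacher has already scored (1-5).
graded_essays = [
    'The novel develops its central theme through layered, vivid imagery...',
    'This book was good. I liked it. The end.',
    'The author uses symbolism effectively, though the analysis stays shallow.',
]
teacher_scores = [5, 2, 3]

# "Train" the grader on the teacher's scores.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(graded_essays)
model = Ridge()
model.fit(features, teacher_scores)

# Predict the score the teacher would likely have given a new essay.
new_essay = ['Vivid imagery and symbolism develop the central theme...']
predicted = model.predict(vectorizer.transform(new_essay))
print('Predicted score: %.1f' % predicted[0])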

The authority behind the grading, then, still comes from the teacher, who sets the standards for the grades. And as the teacher, I would of course still provide extended comments on the essays and check the validity of the scores.  Therefore, the benefits of the automated grading system would include the following:

  • Students would have the opportunity to see immediate feedback about what their score would likely be. If they see that their essay will probably get a 3 out of 5, then this may provide some motivation for them to spend more time revising.
  • An automated grading system could help teachers’ scores be more consistent and would give teachers more time to focus on qualitative feedback.

At this point, I am not familiar with any automated grading systems that my colleagues are using in the classroom, but I am grateful that I have a better perspective about how they work and how they might be used. I may even seek out automated grading resources to use in my classroom next year, so stay tuned for a possible Part 3 to this post.

Source: Reich, J. (2012). Grading automated essay scoring programs. Education Week: EdTech Researcher. Retrieved from http://www.edweek.org



--Originally published at Language, Literature, & Inspiration

Confession: I hate grading. While I enjoy reading my students’ work and providing feedback to help them improve as writers, I do not appreciate spending entire evenings or weekends watching stacks of essays dwindle ever so slowly. Last year, I had only 100 students, so grading was bearable, but I remember all too well my year with 180 students and what seemed like a mountain of work to grade. And I often think of my colleagues who are still facing hundreds of essays to grade every week.

Despite this odious, time-consuming challenge, my blood still curdles a bit at the thought of automated essay graders. Can computers actually grade essays and provide the kind of thoughtful, qualitative feedback that teachers spend so much time providing to students? I believe that the answer is no. But what computers can do is make the grading process easier for teachers and more effective for students. I have always been a pen-and-paper girl, but I am gradually making the transition to online grading.

Turnitin.com – If your school has not used Turnitin.com, then I highly encourage your English department to make a case for acquiring this valuable resource, as it has many benefits:

  • Teachers can highlight sections of writing, leave quick comments, type extended comments, and even voice-record their response to an essay.  All of these options make the grading process more efficient.
  • Teachers no longer need to carry around stacks of papers or worry about a stray paper wandering away.
  • Students can use Turnitin.com to submit multiple drafts, complete peer response activities, and review teachers’ feedback.
  • Turnitin.com provides a plagiarism check that highlights potentially plagiarized text and lists the online sources and essays where that text was found (a toy sketch of the matching idea appears after this list).
  • Turnitin.com provides an automated grammar check.
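Turnitin.com’s actual matching technology is proprietary, but the core idea of flagging stretches of text that a submission shares with a source can be illustrated with a toy n-gram overlap check in Python (the texts below are invented for the example):

# Toy illustration of overlap detection: flag any run of n consecutive
# words that a submission shares with a source document. Turnitin's real
# matching is proprietary and far more sophisticated than this.

def ngrams(text, n=6):
    words = text.lower().split()
    return {' '.join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(submission, source, n=6):
    # Return the n-word sequences found in both texts.
    return ngrams(submission, n) & ngrams(source, n)

source = ('It was the best of times, it was the worst of times, '
          'it was the age of wisdom')
essay = ('As Dickens wrote, it was the best of times, '
         'it was the worst of times, indeed')
for passage in sorted(shared_passages(essay, source)):
    print('Possible match:', passage)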

Turnitin.com does not have an automated grading feature, but it does make online grading more convenient for teachers and for students. For more information about its grading tools, you can visit Turnitin.com. To read more about my thoughts on automated grading systems, please see Part 2 of this blog post.



--Originally published at Allison Goldsberry

Wiggins said:

“It is immensely difficult to create a massive open online course in the humanities and social sciences that approximates a traditional brick-and-mortar offering. This should come as no surprise given MOOCs’ origin (or to anyone reading Jonathan’s site). All three major MOOC platforms (Udacity, Coursera, and EdX) have their roots in computer science departments. And even their connectivist predecessors were created by academics in computing, too.”

And: “MOOCs emerged from the sciences because the sciences are scalable.”

I do find some truth to this, but I don’t think it means MOOCs lend themselves only to technology or science courses. I also disagree with Wiggins’ somewhat limited view of peer assessments, and the narrow set of circumstances in which he thinks they are useful.

As with the introduction of any technology, teachers need to really be clear about the educational outcomes and goals, and only then can a technical tool be introduced to help reach those goals. Before anyone decides to dismiss or embrace an xMOOC or cMOOC type of platform, the learning outcomes of a particular course need to be completely considered and understood. From there, a successful learning experience can be built, whether it is one that happens in a so-called brick-and-mortar setting, a blended environment, or totally online.

I’ll go further and say that the whole idea of online learning and MOOCs threatens people because not every teacher is intentional and mindful and respectful of his/her audience. The rise of MOOCs is forcing everyone to consider their classroom practices, including their pedagogy and learning outcomes. Weak teaching in person will also be weak teaching online. We all benefit from examining our practices and learning from others.

It is for these reasons I feel strongly that MOOCs can be used to teach anything and don’t necessarily lend themselves only to technology or science courses. If the learning outcomes are clearly understood, then any platform or technology can be used in an effective and intentional way to help students get there.


--Originally published at Innovation Injection

From the short readings this week concerning peer review, I was secretly hoping the articles would denounce this practice and sentence it to a quick, painful death. The consensus, from my takeaway anyway, was that there are benefits, especially in specific contexts and under the right conditions, as laid out in Debbie Morrison’s blog post titled “Why and When Peer Grading is Effective for Open and Online Learning.” Now don’t get me wrong, I still have reservations and feelings of dread because I immediately think about obstacles such as language barriers, cultural differences, non-constructive “nice nice” feedback given because you don’t want to come off as a jerk, insufficient knowledge, lack of peer review practice, and so forth. However, I do recognize the value of the collective, crowdsourcing, and community collaboration. After all, in most work environments, teamwork and collaboration are the norm (or should be). Plus, peer review is just another activity of the Connectivism that we read about in session 3.

The exercise of having to do peer reviews this week did make me want to stare blankly at the chalkboard and whisper “really?” What was funny is that I really appreciated the feedback I received. It showed me perspectives and things I missed, or thought I had vocalized but had not communicated adequately. The feedback sparked ideas. I might have gained more than I was able to offer to my fellow classmates on the exercise. Still, I like to tell myself that the learning process was more important than the actual feedback I gave (my classmates might disagree). Maybe the medicine was not as bad as I had thought after all.