Whither Identity: Reclaiming your templated self
by Richard White
Part III in the “Whither” series
Who owns your online data? Who owns the content that makes up the digital you? Is your digital identity locked into Facebook as a series of uploaded photos, status updates, and comments on others’ posts? Do you have a copy of your tweet timeline? If and when you decide to migrate from Facebook, will your digital identity travel with you?
Audrey Watters refers to the “templated self,” a digital “you” that is described and defined via the features and constraints of any given platform. As Cyborg Anthropology points out, Facebook and Twitter are strongly templated, with structures and policies that tightly confine interactions. WordPress and Google+ are less confining, but still require working within a templated (pre-structured) space. MySpace pages—to their own detriment—had much less in the way of structure (hence the obnoxious and hard-to-read backgrounds that some users delighted in presenting).
Creating one’s own website is the least restrictive platform of all, of course. User-created content is not shipped off to be stored in someone else’s silo, but is maintained and managed in one’s own domain.
How significant is this to you? Is your Digital Self hosted somewhere that’s “too big to fail?” “Software as a service” relies on a company maintaining support for that service. Some services/platforms that have gone away in the past year or two:
- Google’s social networking service Orkut
- Twitter’s image hosting service TwitPic
- Video streaming service Justin.tv
- Apple’s MobileMe
- Yahoo!Blog
- Everpix photo hosting service
- Google Wave
It’s no surprise that businesses and product lines occasionally fail or are discontinued, and that possibility is especially prevalent in technology, with boom-bust cycles akin to a bucking bronco. It’s all the more important, then, to give serious thought to how much of one’s identity one wants to invest in an organization’s template.
Closer to home—in our classrooms—it’s also the case that educational platforms enforce templated identities. Learning Management Systems, almost by necessity, structure content and data in such a way that it’s difficult to move that data anywhere else. Even something as simple and local as a classroom wiki doesn’t typically provide much in the way of data portability. The online grading program that I use for tracking my own students’ progress provides an Export utility that creates a CSV-based backup file for instructors, but provides no such option for students. Data that goes into these systems very rarely finds its way out.
Another educational feature, perhaps peripherally related to the templated self, is the Digital Portfolio, which purports to provide some means of collecting, storing, and presenting a student’s electronic information over a longer period of time. I understand the desire for such a record–I have both digital and non-digital portfolios of work that I’ve done, assignments in school, artistic pieces, etc.–and I think schools are wise to be considering ways to implement these collections. (My school is in the process of discussing these possibilities right now.) I have to wonder, however, at the wisdom of paying for a storage/presentation service that places student assignments in a proprietary silo, with access controlled by a business that may or may not be around five years from now. Are there significant advantages here over the simple and expedient solution of having students place their most important work in a network folder?
The Internet began with a decentralization of access; anyone could access information from anywhere on the network. If Facebook and WordPress have given us templates, and in so doing forced a proprietary, siloed, centralization of our data, Watters encourages us to consider a “re-decentralization of the Web.”
If you’re interested in having an honest, long-term presence on the Internet, reclaim your self. Get your own domain. Be who you are, rather than a Facebook status update that may or may not actually be seen by your friends, depending on whether Facebook decides to show it to them.
The original promise of the Internet was a democratization of voice: everyone had access, and everyone could be heard. Increasingly, however, voices are siloed behind paywalls, registration requirements, and licensing agreements.
Register your own domain, at hover.com or any one of hundreds of other registrars.
Reclaim your voice.
Reclaim your identity.
Whither Data?: Dude, where’d my content go?
by Richard White
Part II in our series.
In a previous post, Whither Media?, we explored the ongoing transition away from physical media, and what implications this transition might have. The related question is Whither Data?: What happens when your content—your written documents, photos, email, music, etc.—is all stored on somebody else’s computer?
The Cloud is a term that has a number of definitions, but typically it refers to a collection of servers run by a company that offers users access to data and services via the Internet. In addition to Internet access, a cloud-based service typically implies multiple servers hosting redundant copies of the data, providing faster access to the user and backups of a user’s data.
If you use Google’s Gmail, your email is stored on their servers, “in the cloud.” If you use Google Docs, your documents are stored on servers, “in the cloud.” Microsoft’s Office 365 stores your Word, Excel, and PowerPoint documents “in the cloud.” And although you may not think of it this way, many social networking sites such as Facebook also provide content and services “in the cloud,” so that your conversations, photos, status announcements, comments, and Likes are stored where you and others can view them.
There are a number of powerful advantages to using cloud-based services, and most of these are self-evident, especially to teachers. At my school, which provides Google Apps for Education (GAFE) to teachers and students, we’ve been able to offload our email services to Gmail and provide Google Docs and Calendars to the entire community, allowing for teaching strategies and communication workflows that simply weren’t possible before. Content Management Systems (CMS) and their educational offspring Learning Management Systems (LMS) provide a structure—usually a proprietary one—in which a teacher’s information can be delivered and a student’s interactions with that information can be tracked.
I love the fact that sharing data from user to user and machine to machine has become easier. Without cloud services, teachers would be forced to a) try to manage an endless and non-linear flow of emailed attachments (something some of us still do, I’m sorry to say), or b) implement and manage our own servers to which students can upload documents, and from which they can download them. (Actually, I do do this, but it’s in the context of a computer science course in which those processes are part of the curriculum.) Cloud services allow for shared files, shared folders, and drag-and-drop functionality that “just works” (most of the time).
There are two caveats here, however. The first concern is security. Unless students are encrypting their documents before uploading them, there’s the possibility that the information in those documents—perhaps confidential, private information—may be visible to others, either in transit or on those servers. The reality for most teachers, I think, is that the documents that students are sharing with us—book reports, essays, lab reports, homework assignments—don’t require a high degree of security, and so maybe this is just fine. If you were having students email Word documents to you before, having them work on a GoogleDoc on Google’s servers is at least as secure, and almost certainly more so, unless they’ve elected to make the document’s contents publicly available.
I am not a doctor or lawyer and am not aware of the specific legal requirements concerning the secure storage of patient or client information, but I would investigate that carefully before using cloud services for these purposes.
Perhaps a more significant concern for teachers and students, however, is retaining access to cloud-based content over the long term. Low-priority content like quizzes or in-class essays may not be of much concern to students, but more significant essays, research papers, or portfolio work has a higher value, and may even be submitted to colleges as part of an application. Ideally, a student would be able to retain access to their work—and it is their work, isn’t it?!—for some indeterminate time into the future. Which cloud-based services allow for that?
The notorious offenders here are the providers of online books—where online notes and marginalia disappear when your one-year access license expires—and the various Content Management and Learning Management Systems, with password-protected access that may not extend beyond the current year. Students who create or store documents in these systems are at high risk of losing access to them when the end of the school year comes around, or the next school year begins (depending, of course, on the administration of the system).
The same may happen with Google Apps For Education, although it is much easier to export this data onto a student’s own computer or data storage device, assuming he or she has access to something more than a Chromebook. Here, a personal Google account may come in handy, although questions about privacy of these documents may be relevant.
I don’t think we’ve yet reached the point where lost access to data is a broad concern, although some are wrestling with this issue already (as mentioned previously here, at 34:20 in the show). As we ask our students to create more and more of their work in digital form, however, it’s fair that we keep these questions in mind: ‘Should students have access to the data that they’re submitting to me?’ and ‘How do I go about facilitating that access?’
Boys Like to Break Things.txt
by Richard White
It’s no secret that Technology Education has something of a gender problem. For reasons that are still unknown (at least to me), I have far fewer female students in my Computer Science courses than in my Physics classes, where the ratio is close to 50-50. I encourage young women to join my classes, and even had the honor of advising an all-girl group of “Technovators” in an app-design competition the year before.
It’s a curious thing, and although I’d like to find a way to improve the girl:boy ratio, it’s not entirely clear what I should do to do so.
An interesting thing happened in the AP Computer Science class the other day however, and it broke down along gender lines. We’d taken a few moments earlier in the period to examine an old PC running Linux that I’d placed at the front of the room. I’d taken one side of the tower off, and we were looking at the motherboard, graphics card, disk drive, CPU, etc.
And a bit later in the class we were going over shell commands that students might find useful. After introducing the “remove” command (rm), I mentioned the dangers of inadvertently typing sudo rm -rf /, a command which will recursively remove every file on the computer… usually not something that one intends to do.
It’s an interesting concept, and the thrill of a dangerous command like that holds some inherent appeal, perhaps, to geeks. And then one of the boys raised his hand and asked, “How long will it take before the rm command eventually removes some critical file that is necessary to the ongoing operation of the computer?”
It was an interesting question, and one that I hadn’t given much thought to up to this point. The rm command is working on the hard drive, while the operating system mostly resides in RAM, but there are swap operations in which the OS interacts with the hard drive, so… this was a great thing to ponder. At what point does the snake eating its own tail become unable to continue?
You know where this is going. “That’s a great question, Cyril. Let’s find out.” So I had one of the students type the requisite command on the computer that, moments ago, we’d used to demonstrate hardware. They hit the Enter key and took great delight in the list of files scrolling by on the screen, a live-action view of what was being deleted as the rm command slowly destroyed the OS.
This singular opportunity was of such interest that the students crowded around the machine to watch the spectacle, and one young man recorded the event on his phone…
Here’s the other part that you can almost certainly guess. Of the four young women in the class, not a single one came up to the front to watch the event up close. Were they intimidated by the gathered crowd? Was the destruction just less interesting to them? Were they thinking about heading off to their next class? Are boys just more interested than girls in testing things, even (especially!) to the point of breaking them?
I don’t have the answer, but I think about these things. We all need to be thinking about these things, and encouraging women who express interest to explore the idea of learning more technology.
Whither Media?
by Richard White
I was out walking with a friend of mine the other night along Hillhurst and happened to stumble into High Fidelity records. I don’t know if you’ve heard, but vinyl is making a comeback, and the stacks of records in this place had attracted a few evening shoppers. And even though I don’t own a record player anymore, I worked for a number of years in the early 80s at a southern California record store chain called Licorice Pizza, and I still have a soft spot in my heart for music stores of any kind.
I was a bit surprised to see a sign in the store advertising that their CDs, already in short supply, were all 50% off. I grabbed a couple of discs that I’d been meaning to buy—the soundtrack to The Life Aquatic with Steve Zissou and the White Stripes’ White Blood Cells, in case you’re curious—and chatted with the girl at the register who was ringing me up. “So, you guys aren’t going to carry CDs anymore?”
“Yup,” she said, flatly, and the logic of that decision didn’t need to be explained to me. The store has a higher mark-up, greater volume, and better profit selling rare vinyl than trying to compete with Amazon, Wal-mart, and Target selling CDs. The rack space where their dwindling supply of discs remained was losing them money.
I did what I do with all the CDs I buy these days: I scurried home and ripped them into flac files on my Linux box using the abcde utility. Because that CD isn’t going to last forever…
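For the curious, abcde reads its settings from a configuration file; a minimal sketch of a ~/.abcde.conf for FLAC ripping (the output directory is just an example, and options vary by version, so check man abcde) might look like:

```
# ~/.abcde.conf — minimal settings for ripping CDs to FLAC
OUTPUTTYPE=flac
OUTPUTDIR=~/music
```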
And neither is the CD player, right? My primary computer, the one I’m using to type right now, certainly doesn’t have a CD player, and Apple retired their “Rip. Mix. Burn.” iTunes advertising long ago. Most of my students with Apple laptops no longer have access to anything with a CD drive in it (with the possible exception of a parent’s car), and instead are more than happy to listen to and share music via iTunes, Spotify, and Pandora.
All of which leaves us with an interesting question. In a world that no longer provides physical media, does one even need to keep media? If so, where? And if not, why not?
If you ask Netflix, the answer is clear: their DVD delivery business has been in decline for a while now, even as their streaming business grows. I’m one of the many people who eventually ditched my delivery subscription in favor of curling up in bed with my iPad and their streaming software, even if the selection of movies available to stream is inferior to their stock of physical disks.
And if you ask my students, the whole idea of physical media is almost foreign to them. I recently conducted an activity in my computer science class where students were required to provide evidence that they had three copies of their computer science files: one on their laptop, one on a secondary storage device like an external drive, and one “in the cloud.” (It’s a great activity, and students who are budget-constrained can satisfy the requirement simply by using the 16GB flash drive I provide them at the beginning of the year, and signing up for a free Dropbox account.)
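One low-tech way to verify that the three copies really do match is to compare checksums. A minimal sketch in Python (the paths in the comment are made-up examples, not part of the actual assignment):

```python
import hashlib

def sha256_of(path):
    # Hash the file in chunks so large files don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# If all three hashes agree, the three copies are byte-for-byte identical, e.g.:
# sha256_of("project.py") == sha256_of("/Volumes/flashdrive/project.py")
```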
The idea that one needs backups is nothing new, but I had one student who took exception to the requirements. “Look,” he explained, “I have a copy in the cloud, and a copy on my machine. Why do I need a second copy locally?”
It took a bit of explaining for him to understand that it was entirely possible for him to have his local hard drive crash, or for him to drop his computer, or for his logic board to fry, or for someone to spill coffee into the keyboard… all of which have happened to students and colleagues of mine in the last year. “I’ll have a copy in the cloud!” he responded, and that’s certainly one of the points of having a backup in the cloud.
“Do you have a backup of your entire hard drive in the cloud?” It’s a trick question that I, the instructor, win either way. Either he doesn’t, in which case he’s lost enormous amounts of data, or he does, in which case he’s going to find out how long it takes to restore hundreds of gigabytes of data over his home Internet connection.
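How long is that, exactly? A quick back-of-the-envelope calculation (the drive size and connection speed here are assumptions for illustration, not his actual numbers):

```python
# Rough time to restore a full-drive backup over a home connection.
drive_bytes = 500e9          # assume a 500 GB drive
speed_bits_per_sec = 50e6    # assume a 50 Mbps downstream link
hours = drive_bytes * 8 / speed_bits_per_sec / 3600
print(round(hours, 1))  # roughly 22 hours, assuming the link stays saturated
```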
But I digress. The interesting discovery for me was that this young man, articulate and well-spoken, didn’t seem to be able to appreciate the concept of “losing one’s digital stuff.” And while it’s possible that he has simply had the good fortune to never undergo that experience—this is a kid, after all—I think it’s more likely that he doesn’t “have” any digital stuff to lose, at least in the traditional sense. Somebody else already keeps his stuff.
From his class assignments stored in GoogleDocs to his browser-based email to his Spotify playlists, his data is completely out of his hands, but typically available just about anywhere he can find an Internet connection… and he’s just fine with that.
Are the days of “owning” media over, then? Will I come to rely on Netflix streaming for being able to watch my favorite movie? And will I accept its disappearance with reluctant understanding when their licensing agreement with the studio runs out? Will we no longer be able to share a favorite book with a friend (or will we be bound to a particular e-book platform/distribution channel if we do)?
Can you even make a mixtape anymore for that guy or girl you like? I tried the other day using iTunes, and it was a complete catastrophe.
In a world where all we have is digitized, what happens to the media that isn’t?
Three copies of your data, people. One on the computer, one on a local backup, and one in the cloud.
And take care of those precious books and records, lest they disappear forever.
P.S. A couple of days after writing this, Apple discontinued their venerable iPod Classic line. This hard drive-based music player had been in production in one form or another ever since it was originally introduced in 2001, and there’s a wonderful eulogy by Mat Honan online at Wired.
For ten years my iPod—in various incarnations—was my constant companion. It went with me on road trips and backpacking through the wilderness. I ran with it. I swam with it. (In a waterproof case!) I listened to sad songs that reminded me of friends and family no longer with me. I made a playlist for my wife to listen to during the birth of our first child, and took the iPod with us to the hospital. I took one to a friend’s wedding in Denmark, where they saved money on a DJ by running a four hour playlist, right from my iPod. And because the party lasted all night, they played it again.
Everyone played everything again and again.
And now it’s dead. Gone from the Apple Store. Disappeared, while we were all looking at some glorified watch.
In all likelihood we’re not just seeing the death of the iPod Classic, but the death of the dedicated portable music player. Now it’s all phones and apps. Everything is a camera. The single-use device is gone—and with it, the very notion of cool that it once carried. The iPhone is about as subversive as a bag of potato chips, and music doesn’t define anyone anymore.
Preach on, Brother.
DANIEL’S SEARCH HACK
by Richard White
On one of the last days of the AP Computer Science class, I met with a few students who had been participating in the High School Capture the Flag hacking contest, a competition created by a high school in New Jersey in which teams of high school students use digital tools to solve various types of puzzles.
One of the problems involved a text file with just two lines in it. The first line explained that students would need to search the file for a series of English words in sequence, although those words wouldn’t be separated by spaces. The second line consisted of over 10 million letters, mostly scrambled, but with words occasionally found within them.
Here are 200 letters from that file:
You can see the word “hiss” in there, as well as “rips”, “fee”, “call”, “has”, as well as a number of 1- and 2-letter words, but clearly nothing identifiable as a sequential series of words.
So how do you go about looking for those needles in that haystack?
I had some ideas, and I’d been working on the problem for a day or two. I wrote a Python program to read all 10 million characters into a string so that I could search through it. I Googled and found a couple of lists of English words, arranged in order of popularity, and inserted those into my program as a list of words.
But now what? How do you start trying to find a sequence of words in a line of ten million letters?
My first algorithm looked like this:
- Get a word (call it word1) from the word list, and another word (word2) from the word list.
- Look in the text for an occurrence of the first word.
- If you find that word, look to see if the second word is near it. If it is, print it out. If not…
- Keep looking for additional occurrences of the first word and the second word, until you can’t find any more.
- Get a different word2 from the list and check that against word1.
- Repeat this until you’ve tried every word in the list as word2.
- Set word1 to the next word in the list and repeat the whole process.
It didn’t work. It was a first attempt at trying to work through the file, and this particular strategy was much too slow—my search was going nowhere fast.
Time to alter the strategy.
My second try concentrated on reducing the number of text interactions that had to take place. This time, I would:
- Take the first word in the word list and identify all the locations (indexes) where that word existed in the string.
- Do this again with every word in the word list until I had a “dictionary” of word locations.
- Now start at the first word, and look at its first location. Go through the other words and their locations, and if a second word was found within 10 characters of this one…
- Look for a third word with a location within 10 characters of this one…
- … and then a fourth word. If I could find four words all within 30 characters or so, perhaps this would identify the flag phrase.
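A sketch of that second strategy, again with placeholder data rather than the real 10-million-character string:

```python
# Hypothetical sketch of the second strategy: index every word's
# locations first, then look for chains of nearby word-locations.
def build_index(text, word_list):
    index = {}
    for word in word_list:
        locations = []
        start = text.find(word)
        while start != -1:
            locations.append(start)
            start = text.find(word, start + 1)
        if locations:
            index[word] = locations
    return index

def find_chains(index, window=10, depth=4):
    # Return runs of at least `depth` word-locations, each within
    # `window` characters of the previous one.
    spots = sorted((loc, word) for word, locs in index.items() for loc in locs)
    chains = []
    for i, (loc, word) in enumerate(spots):
        chain = [(loc, word)]
        prev = loc
        for loc2, word2 in spots[i + 1:]:
            if loc2 - prev > window:
                break  # next word is too far away; chain ends here
            chain.append((loc2, word2))
            prev = loc2
        if len(chain) >= depth:
            chains.append(chain)
    return chains
```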
Here’s what the results look like for that strategy:
It didn’t work. I mean, the program worked fine, producing a seemingly endless string of lines that matched the specifications I had, but none of the lines produced were what I was looking for. Here’s the partial output from that second algorithm.
I still was getting “words” in each of those lines that didn’t correspond to the flag I was searching for. This was starting to get frustrating.
I went into class the next day and explained to the students my frustrations, and they were happy to brainstorm different strategies. At one point, Daniel said, “Why don’t you just look through each line in the file and count the number of words in it? Maybe the line with the most words will be the one we’re looking for.”
“That’s a good idea, Daniel, but the file isn’t split up into lines. It’s just one long line of characters.”
He thought about it for a minute, and his partner Ezra said, “Well, let’s just split it up into lines. 140 character lines. That’s good enough for Twitter.”
So that’s what we did. The students have been studying Java for this past year and I was coding in Python, so they watched while I coded their strategy:
for line in lines:
    num_of_words = 0
    for word in searchTerms:
        if line.find(word) != -1:
            num_of_words += 1
    if num_of_words > 25:
        print num_of_words, line
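The splitting step that Ezra suggested isn’t shown in that loop; a minimal version of it (a sketch, not my exact code from class) might look like this:

```python
# Chop one very long string into Twitter-sized 140-character lines.
def split_into_lines(text, width=140):
    return [text[i:i + width] for i in range(0, len(text), width)]
```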
Because it was a relatively simple strategy, the two decided that they could afford to search for any words contained in a 10,000-word dictionary. We watched as the results appeared on the screen:
Last login: Sat May 24 18:03:42 on ttys002
MotteRouge:~ rwhite$ /var/folders/6x/vklj_pls5215szrp_k2qcjcc0000gn/T/Cleanup\ At\ Startup/find3-422672661.132.py.command ; exit;
File has been split into lines! Proceeding…
And there it is, on the line with 33 matches, just a few moments after starting the program running:
these letters represent the cloth marker that unlocks rewards
We submitted the flag to the contest and saw our score total immediately jump up 400 points—it was our single biggest success of the competition to that point.
I like this story because it reminds me of a few things that we sometimes forget. Working on digital problems like this is not always about coding; sometimes it’s about strategies. Also, there are trade-offs in strategies that require considering constraints: how much memory, power, time, and data do you have? How do your results vary based on the trade-offs you make?
Computer Science teachers talk about these things, but I’m always pleased to see a situation where the students get to actually experience that exploratory process themselves.
Thanks to Daniel, Ezra, Adam, and Stephanie for being willing to play with this problem on the last day of school!
LEGACY VS. TRANSITION
by Richard White
Hello, and apologies for the long absence. You know how things can get busy…
Actually, I’ve got the best excuse of all for being away—I’ve been extra busy teaching the new AP Computer Science class this year. Organizing, developing materials for, and teaching that class has taken up just about every spare work moment I’ve had. I’m not unhappy about that at all—having the opportunity to work on any new course, and especially that one, is an exciting experience, and I’ve learned a lot of lessons this year, lessons that I’ll tell you about soon.
In the meantime, let’s talk briefly about Legacy vs. Transition.
Here, I’m referring to legacy in its modern digital sense: legacy software is software that is not the most current, but which is still supported to some extent, perhaps by virtue of the fact that it was very popular at one point, and its use is still widespread. (The adjective legacy may be extended to other uses as well, but the software context is a common one.)
Last year Microsoft announced the new version of its venerable Office suite, now called Office 365. Along with whatever features that new software includes, it also comes with a new licensing strategy. Under this new system, a one-time license to use the software is not purchased outright; rather, the user pays a monthly fee for the right to use the software. It’s a classic example of the software as a service model for software distribution, and it’s certainly within Microsoft’s rights to transition to such a model. Google has been doing it for years, and if Google doesn’t charge cash money for the service, I’ve certainly paid for their services in other ways (including my privacy every time I send or receive an email from someone with a GMail account).
After losing the ability to read ClarisWorks (and then AppleWorks) documents a number of years ago, I made the decision to transition to using Microsoft Office products, with the intention of avoiding the kind of data loss that comes from using products that have a shorter lifespan. I have gigabytes of Microsoft Office documents on my hard drive, from my own handouts, worksheets, tests, and letters of recommendation to documents that have been shared with me by practically every person with whom I have a professional relationship.
Microsoft’s new licensing plan, however, was just the impetus I needed to start thinking about transitioning to a new system. I still have the Microsoft Office suite on my computer and occasionally still edit legacy documents using those applications. New documents, however, are being created using LibreOffice, which is a serious attempt at providing free software to support the creation of OpenDocument (.odt) files.
You can see a screenshot of the LibreOffice word processing interface above, and the similarities between it and Word are such that you shouldn’t find yourself too disoriented.
LibreOffice includes translation features to bring Word documents (.doc and .docx) over, and how well your files will be translated depends in part on how hard you’ve pushed Word’s feature-packed capabilities—I haven’t explored those capabilities much yet.
For the moment, I’m just enjoying adapting myself to the new system, and creating a new series of documents that, going forward, won’t require an ongoing investment with the powers that be at Micro$oft.
In the grand scheme of things, worrying about the long-term viability of your electronic documents might not be something that you want to think about… but it merits some consideration. I have songs made with music mixing software that I no longer have access to. (I have final mixes of the music, but the software itself no longer works; I am unable to create new mixes of the music.)
JPG graphics images, carefully edited and compressed fifteen years ago when dial-up connections were still a thing, look terrible on the high-resolution Retina Display of an iPad. (At least the colors of those images haven’t faded with time, which is more than I can say for the paper-based photos in an old photo album of mine.)
Hardware legacy is something to consider too. I have FireWire hard drives but my laptop doesn’t have a FireWire port. I have Zip disks from the 90s, too, and no Zip drive to put them into. Fortunately, I copied everything from those drives onto a USB external hard drive a few years ago when I did have access to those machines. I was either smart or lucky to have anticipated the transitions that would have to happen down the road.
I don’t think there’s a one-size-fits-all answer to the challenges posed by aging software and hardware. Some people spend enormous time and energy making sure that they always have copies of everything digital—I tend to lean towards that end of the spectrum, as you might imagine—and others don’t want or need to keep anything of their digital life.
I don’t know any of those people, though, so I can’t really speak to that.
Computer Science in Schools
by Richard White
Happy Holidays everybody!
The holidays are no time to get any rest. Oh, no, there’s too much going on–parties, holiday shopping, out-of-town visitors–to actually get any down time. No, to actually get a chance to relax, you have to resort to more drastic measures… like getting sick.
That’s my genius plan, and it’s working just great.
While I’m sitting around waiting for my body’s defense mechanisms to do their thing, I’ll just include a quick year-end pointer here to one of Audrey Watters’s year-end Trend posts, this one on Computer Science in schools:
Despite the proliferation of these learn-to-code efforts, computer science is still not taught in the vast majority of K–12 schools, making home, college, after-school programs, and/or libraries places where students are more likely to be first exposed to the field.
There are many barriers to expanding CS education, least of which is that the curriculum is already pretty damn full. If we add more computer science, do we cut something else out? Or is CS simply another elective? To address this particular issue, the state of Washington did pass a bill this year that makes CS classes count as a math or science requirement towards high school graduation. Should computer science – specifically computer science – be required to graduate? In a Google Hangout in February, President Obama said that that “made sense.” In the UK, computing became part of the national curriculum.
She has a bit more to say on the subject, but her thoughts echo many of my own. Does everyone really need to “Learn to Code”? How important is Computer Science in the midst of an already bulging academic curriculum? How can educators and the tech industry best reach out inclusively to students on behalf of an industry that is not only famously non-inclusive, but downright hostile to some demographics?
It’s a problem that merits discussion at all levels, and there are certainly institutional responses that might be pursued. As I expand my role as a computer science educator I may even become involved in some of those—that’s certainly my intention.
In the meantime, I consider myself on the ground doing the front-line work without which nothing else matters. “For this assignment, students, we’re going to…”
If you’re not doing something cool with your computer science, well… what’s the point, really? ;)
Merry Christmas and Happy Holidays, everybody. See you in the New Year!
HOUR OF CODE
by Richard White
You may have heard about the Hour of Code this past week, a 5-day educational technology event sponsored by Code.org that is meant to inspire future generations of computer scientists and computational thinkers: by spending just an hour on a computer science related project—playing with a coding simulation, building a game, solving an algorithmic puzzle—students of any age will come away with a better understanding of computer science, and perhaps be inspired to study it further, either in school or on their own. As a computer science teacher, I’d had it on my radar for a few months, and it sounded intriguing, so I proposed the idea to our school directors, who were immediately excited about the possibilities.
Fast forward two months, lots of meetings, some curriculum development, and a website, and I’m happy to report that Hour of Code was a rousing success at Poly. We decided early on to target fifth and seventh grades at the school, and I decided early on to create a curriculum—part coding, part computational thinking discussion—that would work with our students. It certainly helps that we had an entire Apple iMac computer lab that I was free to install a user-friendly text editor on.
As I write this, we’ve finished working with the two classes of fifth graders, who thoroughly enjoyed the experience. We talked, we coded, and they walked away with an official and personalized Code.org Certificate of Completion as well as a printout of their code and corresponding Python turtle-graphics art. (Little Marco enjoyed the experience so much that he was quite put out when the lab had to be vacated before he’d put the finishing touches on his masterpiece. I learned later that the first thing he did when he got home from school that day was to plop down in front of the computer and finish his program.)
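To give a flavor of the kind of turtle-graphics art the fifth graders produced, here’s a minimal sketch of my own devising—not the actual Poly curriculum—that builds the drawing steps for a five-pointed star. The command list makes the geometry easy to talk about before any drawing happens:

```python
# A minimal turtle-style exercise, the kind a fifth grader might tackle
# during an Hour of Code session. (Illustrative only; the real Poly
# materials are linked at the end of this post.)

def star_commands(size, points=5):
    """Return the (move, amount) steps that trace a star outline."""
    turn = 180 - 180 / points      # 144-degree turns for a 5-pointed star
    return [("forward", size), ("right", turn)] * points

commands = star_commands(100)
print(commands)   # 5 edges and 5 turns, 10 steps in all

# Feeding the steps to Python's turtle module draws the star on screen:
# import turtle
# t = turtle.Turtle()
# for name, amount in commands:
#     getattr(t, name)(amount)
```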
Crucial to the success of the day was the support of a large number of people, including our division Ed Tech coordinators, our Director of IT, the teachers who gave us class time to work with their students, and three of my own Upper School students who came down to assist the younger students. We had teacher visitors from other schools in attendance as well, including a professor from Caltech’s Center for Advanced Computing Research. (I don’t think he was scouting our fifth graders for prospective students, but you never know…)
The participation of all these people was vital: advancing technology use in schools is not just about getting new hardware. As a gentle reminder of this fact, our seventh grade sessions—tentatively scheduled for this week—had to be postponed due to some scheduling conflicts. All is well, though, and we’ll be running a more sophisticated Hour of Code session—one that delves into recursion—with our seventh graders at the end of January.
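For a sense of what a recursion-flavored session might look like—this is my own guess at the flavor, not the actual January materials—here are two tiny self-calling functions a seventh grader could experiment with, one numeric and one that builds a fractal-ish dash pattern:

```python
# Two small recursion exercises of the sort a seventh-grade Hour of Code
# might explore. (Hypothetical examples, not the actual Poly curriculum.)

def countdown(n):
    """Return the numbers n, n-1, ..., 1 by calling ourselves."""
    if n == 0:
        return []                  # base case: nothing left to count
    return [n] + countdown(n - 1)  # recursive case: n, then the rest

def cantor(width, depth):
    """Build one line of a Cantor-set-style dash pattern, recursively."""
    if depth == 0 or width < 3:
        return "-" * width         # base case: a solid dash segment
    third = width // 3             # recursive case: dash, gap, dash
    return cantor(third, depth - 1) + " " * third + cantor(third, depth - 1)

print(countdown(5))    # [5, 4, 3, 2, 1]
print(cantor(27, 3))
```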
For further information about Poly’s Hour of Code, including code examples, the presentation slides, or a zipped file containing both, see Polytechnic Hour of Code.
ONLINE PRESENTATION STRATEGIES
by Richard White
I blame it on the fact that I’m teaching a new course.
As I’m teaching AP Computer Science—developing curriculum, assignments, and lessons for that class, and trying to figure out what works and what doesn’t—there are lots of mid-course adjustments to make. Not every assignment needs to be perfect, perhaps, but if I don’t manage to address every concern in a given lesson, it’s hard to have very high expectations for the work that students will do that evening.
And in an AP course, time needs to be used wisely. I can’t afford to be expanding units when there’s a certain amount of material that must be covered by the end of the year.
Fortunately I’ve been able to leverage YouTube and GoToMeeting videoconferencing software to take up some of the slack while I get my act together. A 3-minute follow-up to a lesson, emailed to students, can help to proactively clear up a lot of confusion. Likewise, being available for online office hours, during which students can share their screens with me and we can debug their programs… that’s invaluable.
And although I’ve usually worked on the computer in the past, with a voiceover that describes what I’m doing, it’s often useful to “do a Khan” (as in Sal Khan, of Khan Academy), and just write some stuff out. I don’t have any evidence to back me up here, but my gut says that there’s an enormous cognitive benefit to developing things progressively, and by hand.
Here’s an example of a combination of drawing and computer analysis, done not for the AP Comp Sci class but in preparation for an Hour of Code unit that I’ll be using with some students. See what you think:
Do you see any advantage to demonstrating things in long form, as opposed to doing voiceovers with slides or computer displays?
More on the Hour of Code in a future post…
DIFFERENTIATED INSTRUCTION in AP COMPUTER SCIENCE
by Richard White
We’ve just completed the first quarter of the school year, and I’m loving (and for the moment surviving) the opportunity to teach a new course: AP Computer Science.
I actually began my teaching career in 1986 as the instructor of a computer programming class, first using BASIC, and then Pascal, on IBM XTs–the original beige PC. This was well before you crazy kids had access to the InterWebs, but we loved our computing machines just the same.
So it’s funny, and fun, to be teaching Computer Science again, and it’s exciting to be participating in that daily experiment we call “teaching,” in which the instructor hypothesizes about what might be an effective tool or strategy for working with a class, tries it out, and then goes home to clean up the mess of those experiments that–wonderfully or tragically–failed.
I’m finding out that my students this year have a wider range of abilities than I’m used to seeing in the AP Physics class I teach. The possible reasons for that wide range don’t really matter; I’m there to teach the students who are in the class, meet them all wherever they are, and see what I can do to help guide them in learning the subject.
How do you actually do that, though? How, practically, do I provide instruction and lessons for a classroom full of students, some of whom are going home and programming their own Blackjack programs just for fun, while others are having profound difficulties applying concepts that they appeared to have understood well just the day before?
The act of providing these varying levels of support in a single class has earned the buzzphrase differentiated instruction, and here’s what I’ve developed for a typical lesson:
- a whiteboard-based overview
- whiteboard-based pseudocode
- freestyle coding for advanced students
- template-based support for intermediate students
- solution-based support for students who need the most help
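To make the middle tiers concrete, here’s a hedged sketch—my own invention, not the actual class handout—of how the same averaging exercise might look as a template versus a worked solution. (I’ve used Python here for illustration; AP Computer Science itself is taught in Java.)

```python
# Template-based support: the structure is provided, and the handout leaves
# the marked lines blank for the student to fill in. Shown here with the
# student's answers already written in so the example runs.
def average_template(scores):
    total = 0
    for s in scores:
        total = total + s            # (left blank: accumulate each score)
    return total / len(scores)       # (left blank: divide by the count)

# Solution-based support: a complete, worked version the student can
# study line by line and compare against their own attempt.
def average_solution(scores):
    return sum(scores) / len(scores)

print(average_template([80, 90, 100]))   # 90.0
print(average_solution([80, 90, 100]))   # 90.0
```

Advanced students get neither file and write the whole thing freestyle, which keeps all three tiers working on the same problem at the same time.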
Wanna see it in action? Here’s a 7-minute documentary-style rundown, complete with footage of the kids hard at work.