
Richard Presley's Meandering Missives

Occasional thoughts on learning, training, and the business of both

All I Need Is a Pinch

When I was working for the Red Cross, the Reference Lab Manager asked me to design a unit that would help SBB (Specialists in Blood Banking) students prepare training materials for their staff. The chief skill in preparing effective training (and also one of the most difficult) is creating sound performance objectives. My goal was to provide classroom training and practice on Writing Effective Performance Objectives. So I scoured the internet for resources and put them together in a document that included links to websites for more information, along with exercises to practice in class.

Recently, I received Lee Lefever’s newsletter and started surfing his Common Craft site, looking at some of his explainer videos, and discovered this one on creating Instructional Objectives. See: https://www.commoncraft.com/video/instructional-objectives He breaks the process down to the point of near-oversimplification.

This alerted me to one of the dangers of training design.

If my child comes in and says, “What’s for lunch?” they get a different answer depending on context. If they ask while we are in the middle of setting up a grand reception for their sister’s wedding it’s going to be different from what they get when I’m standing in the kitchen staring at an open refrigerator. Same question. Same person. Completely different answer depending on the context.

Common Craft is the right answer for introductory/overview material. It introduces the concept, provides a basic cognitive structure for further learning, and gives both neophytes and experts the most essential information in a memorable way in the briefest time possible. It achieves a great deal with the narrow format constraints under which it is operating.

The long-form is the right answer (or at least a stab at it) for a classroom setting where people get to practice and evaluate their skills as they progress from simple to complex problems.

Our challenge as instructional designers is to identify the best form for presenting the information based on the topic, the anticipated end results, the audience, and the way the information is going to be used. We need to be acutely aware of how the information will be employed by the end user so we design our training at the level appropriate to the need. My natural tendency is to include as much information as possible. I am an instructor, after all, and fairly compulsive about my desire to instruct. It’s in my blood. Sometimes, however, the best instruction is the least instruction, because all the audience needs is a conceptual understanding of the topic. Knowing the difference is what sets the good IDs apart from the not-so-good ones.


Less Attentive than a Goldfish?


 

I just saw this article: http://www.marketplace.org/topics/business/goldfish-have-longer-attention-spans-americans-and-publishing-industry-knows-it

 

It contained this absurd quote: “The average American attention span in 2013 was about 8 seconds. The average attention span in 2000 was 12 seconds. And then get this kicker – the average attention of a goldfish is 9 seconds.”

 

Without knowing whether “average” means mean, median, or mode, let’s think about this statement for a moment. If that “average” is a median, then half of the surveyed population fell below 8 seconds; if it is a mean, we can’t even say that much, since a few long-attention outliers can drag the figure around (the set {1, 1, 1, 29} has a mean of 8 with three of its four values below it). Let me suggest that my own personal experience (and yours may differ) inclines me to believe that “attention” is something that exists on a continuum. How do I know this? All my educational life I have heard teachers requesting my “complete and undivided attention,” as opposed to our normal state of incomplete and divided attention. So were the experimenters measuring “complete and undivided attention” or only our lesser attention?

 

And that’s the question – who was doing the experiment and under what circumstances? 

 

It is quotes like this that really spark my ire because they lead to Educational Urban Legends akin to “learning styles” and the “Seven Plus or Minus Two” sorts of things. Next thing you know we will be hearing everywhere that the average human attention span is 8 seconds. And people who normally exercise common sense will blurt this out as if it were a meaningful statement and tell us what we need to do as IDs to boost attention. Don’t believe me?

 

See: http://morningnewsbeat.com/News/Detail/43792/2014-02-13/

And: http://www.paceco.com/snapchat-immediate-content-marketing-brand/

And it just won’t die: http://whosyourgladys.com/blog/pay-attention-for-longer-than-a-goldfish/

And now it has moved from attributed quote to received wisdom: http://www.theguardian.com/commentisfree/2014/aug/19/satire-tag-internet-killing-facebook-tag in the third paragraph from the bottom.

 

Come to think of it, I may want to shed my ethics and come up with a consultancy based on “boosting attention spans” with all sorts of corroborating “research,” like Daniel Pink does in his presentations. I bet I could make a fortune with “Guaranteed Ways To Grab Attention In Training” that spoons out reconstituted pabulum in the form of educational wisdom.

 

But I’m too tired, so I will leave it to one of you to do that. I could easily write the course for you since it would consist of little more than educational cliches, but I have no energy to market the thing.

 

What I would rather see than another How To Do It Right course is for people to develop a soundly critical eye and ear for nonsensical statements ripped from the context of their supporting research environment and spouted as universal dictums.

So what I did was write to the professor and ask her what she meant by the statement and what the context of her research was. I haven’t heard back from her yet. But then, she may have better things to do on a Friday than respond to a curmudgeonly instructional designer with a burr under the saddle.

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson
My rating: 4 of 5 stars

This is a fascinating book with an interesting premise. The one takeaway from this book, for me, was that we are living through a massive economic shift, akin to the Industrial Revolution, only a lot faster. It’s nice to be a participant in the revolution, but my heart really goes out to those in the “service industry” who will be left behind. The WSJ reported not too long ago that wealth is concentrating at the top while the disparity between the top and the bottom grows wider. Activists feel it’s time to do something, but exactly what remains to be determined. It would seem to me that if the Industrial Revolution and rural electrification are any clue, the “do something” that needs to be done is to bring more people into the new economy and let them innovate their way to success. Mark Zuckerberg is exceptional, but only with regard to the degree of his success. There are hundreds of innovators who have made merely millions instead of billions. I don’t think the shift will be quick or easy, but I think I can see where it is going.

Brynjolfsson and McAfee recommend a trivium that improves the skills of:
– Ideation
– Large-frame pattern recognition
– Complex communication
In a word – the Arts. I am a strong STEM proponent, but I also feel just as strongly that one is not fully educated as a STEM practitioner if they cannot also read music and understand it when it’s played (complex communication), identify musical themes across genres (large-frame pattern recognition), or suggest ways to create new mash-ups or compositions (ideation). The same is true for prose, poetry, theater, painting, sculpture, and all the rest.

View all my reviews

My Goodreads review as I’m about two-thirds of the way through the book. I will update as I read more.

Friday Missive – Pixar’s Braintrust

I ran into a FastCompany article this week on Pixar’s Braintrust. It was an excerpt from Ed Catmull’s book, Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration. Since even Pixar’s worst-performing releases were better than many of the best releases from other studios, I thought the excerpt might have something beneficial for those of us in creative endeavors. And I was not wrong. Here is the opening paragraph:

A hallmark of a healthy creative culture is that its people feel free to share ideas, opinions, and criticisms. Our decision making is better when we draw on the collective knowledge and unvarnished opinions of the group. Candor is the key to collaborating effectively. Lack of candor leads to dysfunctional environments. So how can a manager ensure that his or her working group, department, or company embraces candor? By putting mechanisms in place that explicitly say it is valuable. One of Pixar’s key mechanisms is the Braintrust, which we rely on to push us toward excellence and to root out mediocrity. It is our primary delivery system for straight talk. The Braintrust meets every few months or so to assess each movie we’re making. Its premise is simple: Put smart, passionate people in a room together, charge them with identifying and solving problems, and encourage them to be candid. The Braintrust is not foolproof, but when we get it right, the results are phenomenal.

Two of the big takeaways I got from the article are:

  1. Start from the premise that first efforts invariably suck. No matter how good the initial inspiration, the initial execution always sucks.
  2. No matter the project, eventually participants get lost. Candor is necessary in order to move to clarity.

As designers, we invest a bit of who we are in our work. So naturally, we feel a sense of ownership and maybe even a little pride at the brilliant idea we’ve come up with. So when we face the brutal honesty of candid feedback, we are sometimes discouraged if not a little miffed. According to Catmull, ALL initial ideas suck on first execution. If we start from this premise, then we are less likely to let our feelings get hurt when we seek honest feedback on our early efforts.

This is one advantage to the successive approximation model of development. It begins with the premise that first efforts are tentative and the expectation is that they will be improved on subsequent revisions. I like this approach because it provides me with a healthy way to process feedback.

The second lesson is that no matter how good I am, how good the project is, or how stellar the leadership on the team may be, we will WITHOUT FAIL get lost. It may be tall grass. It may be meticulous attention to minutiae. It may be overwhelming administration. It may be the unending backlog of recommended edits. There seems to be no end to the reasons why a project bogs down. But it does. The function of the Braintrust is to break the logjam and get things moving again. In essence, candor brings clarity.

So how do we create or foster a Braintrust? Here are their points:

  • Appoint people who have a deep understanding of the subject.
  • The Braintrust has no authority – it’s just a collection of informed opinions and ultimate responsibility rests with the project team or director.
  • The Braintrust’s job is to provide insight into sources of problems, not solutions. (I really love this one because the assumption is that the creative individual or team is fully equipped to come up with more and better solutions than the Braintrust can on its own – what a powerful form of affirmation!)
  • Headline Braintrust findings.

While a Braintrust session may be a painful experience, I find it much less painful than the alternative.

Cognitive Load Theory

I like reading Chris Pappas the same way I like eating sour candy – if it doesn’t make me wince, it hasn’t done its job. Fortunately (or not), Chris’s latest article, Cognitive Load Theory and Instructional Design, does the trick. It reminds me of a Dilbert cartoon where the pointy-haired manager says, “In order to be profitable, we need to cut costs and increase sales.” The engineers agree that this is a good idea and ask how, exactly, they are supposed to do that. The manager replies, “Don’t ask me. I’m an ‘idea’ person.”

True to form, like the manager, this article provides information that is completely accurate and utterly unhelpful.

Chris avers that Cognitive Load Theory rests on principles that should be kept in mind when designing an eLearning (and presumably, traditional) course:

1. You can reduce the amount of load that is being placed upon the learners’ working memory by integrating the various sources of information, rather than giving them the various sources individually.

2. In tasks or lessons that require problem solving skills, avoid using activities that require a “means-ends” approach, as this will place a load upon the working memory. Instead, use goal-free problems or examples to illustrate the point.

3. Reduce the amount of redundancy in eLearning course design in order to reduce the amount of unnecessary repetition-induced load that is put upon the working memory.

4. Use visual and auditory instruction techniques to increase the learners’ short term memory capacity, particularly in situations where both types of instruction are required.

 

So what does this look like? 

How do we integrate various sources of information in eLearning? 

What are some goal-free problems we can use to illustrate a point in eLearning? 

If repetition is the key to learning, how do we reduce redundancy? In other words, where is the dividing line between necessary repetition (in whatever form) for reinforcement and unnecessary redundancy? 

And let’s just ignore the statement of the obvious –“use visual and auditory instruction techniques…where both types of instruction are required” – since it is hardly more than a tautology, particularly when he doesn’t identify examples of good and bad application of the various stimuli.

So what does he provide as advice?

Here are some tips for how you can reduce cognitive overload in your eLearning course design:

  1. Keep it simple 
    Remove all content that isn’t absolutely necessary for the learning process. For example, if you are designing a slide show to provide information, try to reduce the amount of extraneous graphics you use throughout. 
  2. Use different instructional techniques
    Present information in different ways. For instance, offer some data verbally and other data visually, such as through images or graphs. This will allow the learner to absorb information using different processing methods, which will reduce cognitive overload. 
  3. Make learning “bite sized” 
    Divide content up into smaller lessons and encourage them to only move forward with the course when they have fully grasped the current material. This will insure that they do not overload their working memory and can effectively move the information to their long term memory. 

Puh-leez. I miss the low-key wisdom of my colleagues like Andrea Mitchell who not only did a presentation on cognitive load, but had far more helpful words of advice than this. Please indulge me while I deconstruct (contradict) what Chris says.

Simplicity

Despite the fact that learner engagement and memory are enhanced by more complex graphics than by simple ones, Chris advises designers to keep things simple. Note the lack of nuance or any qualified treatment of how excess simplicity can actually impede learner retention. Sometimes things need to be complex to be engaging. For instance, which pie chart is more engaging and memorable:


http://24.media.tumblr.com/27fe9500bb2a3541991c31d6925c6553/tumblr_mr556qJXjH1qgam07o1_500.jpg

Or


http://blog.psprint.com/wp-content/uploads/2011/10/PieCookieChart_Wired.png

Well, that’s not entirely fair, but it does make a point. Which one do you want to look at longer and which one has more useful information that you are likely to recall later on?

Simplicity is not always good, just as complexity is not always bad or hard to remember. How many of you can remember how to pronounce that ridiculously long word from Mary Poppins? It was the complexity that made it memorable (and the repetition through music), not simplicity.

Different Techniques

As referenced at the beginning, this is advice that sounds helpful but tells us nothing. Might as well tell us to reduce overhead and increase sales. Ya’ think?

How about we use APPROPRIATE instructional techniques? What a concept! Here are some recommendations from the real world:

  • When the performance objective is for learners to state a policy or describe a practice, use a verbal instructional method.
  • When the performance objective is to “Identify the parts on the following piece of equipment,” it is best to use either a visual guide of the piece of equipment or a hands-on treatment of the equipment in question.
  • When the performance objective is to “Identify the third movement of Beethoven’s Fifth Symphony,” it is best to provide an auditory instruction method so learners can hear when the movement begins and identify transitional musical phrases.

Are you getting the picture? Some things are obvious. Maybe he should have said to match your instructional method to performance expectations. We don’t train students to drive cars by methods different from the way they will be tested. Why would we do workplace training any differently? Okay, there are reasons why we would do it differently relating to what senior management mandates, but you understand my point.

Bite size

Ripley’s Believe It Or Not was bite sized and often very memorable. My kids frequently spout little factoids, so I know those bites persist in long-term memory. However, Chris never defines what a “bite” is, and he certainly doesn’t tell us what to do with the bites.

IDs know that contextualizing and providing a cognitive framework for organizing the information is far more important than bite sizing. Let me elaborate, particularly in the context of cognitive load.

Pappas is fond of citing Miller’s Law (memory can hold 7 +/- 2 bits of information) without having actually read Miller’s paper, which says nothing of the sort. See the original paper for yourself: http://psychclassics.yorku.ca/Miller/ Point being, Miller argued that the span of immediate memory is not a hard ceiling, and he proposed a number of ways to work around it.

Miller pointed out that his experiment was limited to unidimensional stimuli. Read what he says about recoding. Also read what he says about multidimensional stimuli. Here’s an illustration of the two different kinds of stimuli with relation to short-term memory, and presumably cognitive load.

  • The game “Simon” is based on the ability of people to remember a growing series drawn from four tones played in random order. Most people fall in the 7 +/- 2 category (or less) for the number of random tones they can remember in the short term. I know of no one who can duplicate the pattern of tones from any game they’ve played in the long term. In other words, the learning is not persistent over time.
  • The song “Twinkle, Twinkle, Little Star” is based on six pitches played as a phrased series of 42 musical notes that many people can memorize after hearing it just one time. Additionally, the song can be sung or played decades after hearing it the first time, demonstrating its persistence.

Not only are there more notes played, but the duration of the notes also varies, and yet folks find it far easier to duplicate this more complex string than the random, uniformly timed notes of Simon. When it comes to music, it’s often easier to remember longer, more complex pieces than it is to recall short, random ones. So, despite “Miller’s Law,” it is really difficult to find an upper limit to what humans can remember when it comes to certain stimuli. For instance, we can remember a startling number of faces, and some individuals show phenomenal performance at this. See: https://www.sciencenews.org/article/familiar-faces
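Miller’s own remedy for the span limit – recoding – is easy to make concrete. Here is a minimal sketch (my own illustration in Python, not anything lifted from Miller’s paper or Pappas’s article) that regroups a run of binary digits into octal chunks, so the same information rides on six items instead of eighteen:

    # Recoding in Miller's sense: regroup 18 binary digits into 6 octal chunks,
    # so the same information occupies far fewer slots in immediate memory.
    # The bit string is arbitrary; the binary-to-octal regrouping follows the
    # kind of example Miller discusses.
    bits = "101000100111001110"

    def recode_binary_to_octal(bits: str, group: int = 3) -> str:
        """Group the bit string into 3-bit chunks and name each chunk 0-7."""
        assert len(bits) % group == 0
        chunks = [bits[i:i + group] for i in range(0, len(bits), group)]
        return "".join(str(int(chunk, 2)) for chunk in chunks)

    print(len(bits), "binary digits ->", recode_binary_to_octal(bits))
    # prints: 18 binary digits -> 504716

The stimulus doesn’t shrink; the learner’s representation of it does – which is the part of the paper the “7 +/- 2” soundbite leaves out.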

CONCLUSION:

In summary, his three bits of advice are unhelpful without context or caveats, and in some cases downright wrong when presented as unqualified “rules” for instructional designers.

However, in the interest of providing candles instead of just cursing the darkness, I refer you to this helpful little bit from Jon Matejcek: http://www.dashe.com/blog/elearning/improve-learner-retention-forgetting/  He offers a singular solution to the “forgetting problem” from Don Clark – Spaced Practice. In other words, a quick and easy solution for increasing learner retention is to build in “regular rehearsal and practice…over a period of time.” Instead of trying to fix short-term memory with clever tricks, as the eLearning blog suggests, instituting a performance support system that offers practice over time will be far more effective for long-term retention. I would recommend following the links in Jon’s blog post for more helpful treatments on the topic of forgetting.
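To make “regular rehearsal and practice…over a period of time” concrete, here is a minimal sketch of what a spaced-practice schedule could look like. The expanding gaps (1, 3, 7, 14, and 30 days) and the function name are my own illustrative assumptions, not a prescription from Clark or Matejcek:

    from datetime import date, timedelta

    # Expanding review intervals, in days after the initial lesson.
    # These particular gaps are an assumption for illustration only.
    REVIEW_GAPS_DAYS = [1, 3, 7, 14, 30]

    def spaced_practice_schedule(lesson_date, gaps=REVIEW_GAPS_DAYS):
        """Return the dates on which the learner should rehearse the material."""
        return [lesson_date + timedelta(days=g) for g in gaps]

    for review in spaced_practice_schedule(date(2014, 8, 22)):
        print(review.isoformat())

The specific gaps matter less than the fact that rehearsal is built into the performance support system rather than left to chance.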

Scheming About Schemata


If you’ve not already done so, I would encourage you to subscribe to eLearning Industry’s blog (http://elearningindustry.com). They will spam you relentlessly with all sorts of info bits, with the most notoriously prolific being Christopher Pappas. I suspect he spends a good part of his day surfing the internet for ID related topics and then tries to see how many he can post in a week. But it’s good spam. Sort of.

 

Today’s post, Instructional Design Models and Theories: Schema Theory, is part of a series on ID theories. This is kind of a meta blog post since it involves thinking about thinking. And to make it even more meta, I’m writing about writing about thinking about thinking.

 

Does your head hurt yet?

 

First off, thinking about thinking. Since the article is woefully penurious about defining what conceptual schemata are, I refer you to the wiki: http://en.wikipedia.org/wiki/Conceptual_schema Take note of how this is tied to computer data models. I believe it is both liberating and limiting when we compare brain functions to machine functions, but my primary concern is that we will become a tad too reductive if we take this approach. So I’m not going to. If you want to compare cognitive function with computer function, feel free to do so and hit “reply all” when you discover some insights. No, really. We’d be interested. I’m just not going to go there for lack of time and space (relatively speaking).

 

The article defines schemata as, “Psychological concepts that serve as a form of mental representation for selected chunks of complex knowledge which are then stored in long term memory.” Huh? Read that again and see if it tells you anything substantial. Without meaning to be too flippant, can I just say “analogies” and be done with it? Well, the educational psychologists will say “No, you can’t. That’s too simplistic,” and they would be right. With a wink, let me say that it represents a schema that may be a source of errors, but it’s the one I’ve chosen to use.

 

Anyone up for some passive-aggressive cognitive theory?

 

So, my schema for schema is “analogy,” and I use it flagrantly in my training. I always have and always will. One of the principles of adult education is that learners relate new knowledge to existing knowledge, and my ID strategy is to find a shared base of existing knowledge that we can use as an analogy (or metaphor, or mental model, or whatever you want to call it) to frame the new knowledge. Once they get adept at the new knowledge, it then becomes an analogy (or schema) for more new knowledge.

 

Bottom line: Take advantage of the human tendency to employ schemata by providing plenty of analogies for learners to use. Even if we don’t do that, engaged learners will analogize or schematize on their own (I always do, even though it looks like I’m daydreaming in class), so it’s best to make sure we use a shared schema to avoid misunderstanding.

 

Now, on to writing about writing about thinking about thinking.  I enjoyed this article, not for what it said, but for how frustrated it made me in not providing nearly enough information to be useful. And you may wonder how I was able to write so much already on schemata if the article had little helpful information in it. It’s because I already knew what it was talking about and had already formed opinions and employed strategies around it. If you read the article carefully – or even carelessly, it makes no difference really – you will find no practical strategies for employing schemata in your instructional design strategy. And this is my gripe. If you are going to title an article “Instructional Design Models…” it would be helpful to relate the contents of the article to the practice of instructional design.

 

Just a thought. 

The New Bloom

Just spotted this article in eLearning Industry dot com:

How To Write Multiple-Choice Questions Based On The Revised Bloom’s Taxonomy

Here’s what they suggest:

  1. Always use plausible incorrect answers in the questions
  2. Integrate charts into the exam
  3. Transform the verb
  4. Create examples or stories to test their understanding abilities
  5. Use multilevel thinking

Check out the article for the details as well as an overview of the revised Bloom’s Taxonomy.

Words With Friends – The Ideal Online Learning Component?


Just for fun, I was perusing “The Science of Training and Development in Organizations: What Matters in Practice” by Salas, Tannenbaum, Kraiger, and Smith-Jentsch (available for download here). I came upon their list of characteristics of well-designed training that enhances learning and transfer. They list the characteristics as:

a) Trainees understand the objectives, purpose, and intended outcomes

b) The content is meaningful and examples, exercises, and assignments are relevant to the job

c) Trainees are provided with learning aids to help them learn, organize, and recall training content

d) Trainees can practice in a relatively safe environment

e) Trainees receive feedback on learning from trainers, observers, peers, or the task itself

f) Trainees can observe and interact with other trainees

g) The training program is coordinated effectively

Nothing new here, right? This is all textbook stuff that we get in Instructional Systems Design 101 courses. The tricky part is putting it into practice. So, as a compulsive instructional designer, I’m always on the lookout for examples to reinforce lessons and for new ways of viewing old information so it sticks better. With all the talk of “gamification,” it only makes sense to look at online games as a model for how some of these characteristics might look. One of the most popular games online right now is Words With Friends by Zynga. It is a Scrabble-like game using letter tiles with varying point values on a board whose spaces have different multipliers (Double Letter value, Double Word value, Triple Letter value, and Triple Word value). Players draw 7 tiles and take turns using them to spell words that must connect with existing words. They earn a 35-point bonus if they manage to use all 7 letters in a single word.
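To see how tight the feedback loop on scoring is, here is a minimal sketch of how a single play might be scored under the rules just described. The letter values, the square labels, and the function name are my own illustrative assumptions, not Zynga’s actual data or code:

    # Scrabble-style scoring as described above. Letter values and the
    # multiplier labels (DL/TL/DW/TW) are assumptions for the example.
    LETTER_VALUES = {"Q": 10, "U": 2, "I": 1, "Z": 10, "E": 1, "S": 1}

    def score_word(tiles, multipliers, bingo_bonus=35):
        """Score one play: tiles is a list of letters; multipliers is a parallel
        list of 'DL', 'TL', 'DW', 'TW', or None for each square used."""
        total = 0
        word_multiplier = 1
        for letter, mult in zip(tiles, multipliers):
            value = LETTER_VALUES.get(letter, 1)
            if mult == "DL":
                value *= 2
            elif mult == "TL":
                value *= 3
            elif mult == "DW":
                word_multiplier *= 2
            elif mult == "TW":
                word_multiplier *= 3
            total += value
        total *= word_multiplier
        if len(tiles) == 7:  # used the whole rack
            total += bingo_bonus
        return total

    # e.g. playing QUIZZES with one Z landing on a triple-letter square
    print(score_word(list("QUIZZES"), [None, None, None, "TL", None, None, None]))

Every move returns a score immediately – feedback from the task itself, one of the characteristics in the list above.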

So what makes this game so addicting (even more addicting than the actual online version of Scrabble)? I’m convinced that part of its success is due to a skilled application of the characteristics of well designed training enumerated above. I would like to take a detailed look at them in the context of the game in this week’s post.

Trainees understand the objectives, purpose, and intended outcomes

Words with Friends (WWF) installs with concise instructions on game play. It explains that it is intended as a social game between friends and that you win the game by scoring more points than your opponent. Clear, concise, and simple. There really is no ambiguity in why someone would want to play WWF.

Training needs an equally unambiguous reason for being. It need not be an enumeration of the course objectives in a bulleted list, but it could easily be a simple statement of, “At the end of this training you will be able to (simple statement of task).” I’m not sure a list of enabling objectives is actually helpful to the learner at this point. Perhaps an agenda helps for classroom training, but online training hardly needs one.

The content is meaningful and examples, exercises, and assignments are relevant to the job

If you look at the Help folder or follow along with the tutorial, you will discover that it does NOT provide:

  • A History of Word Games
  • A Comparison to Existing Word Games
  • A Rationale for Why You Want to Play This Game
  • A Long List of Terms That Require Definition
  • Credits and Acknowledgements
  • Pretty Pictures or Graphics Not Directly Related To The Game

I could go on with a long laundry list of things that WWF does not include that we have all seen in online training. For some reason, IDs feel compelled to include many of the items listed above in their training, all with the best of intentions, but counterproductive to the main reason for taking the course. WWF provides clear explanations of how to play, lets you practice a bit, and then turns you loose to do the bulk of your learning in the harsh, cruel world of game play.

For some things, OJT is the best place to reinforce learning of a skill. For others, simulations are the way to go. If we are designing a simulation, we may best serve our learners by providing very little in the way of background at the outset and getting their hands dirty as soon as possible with the nitty-gritty of what they are supposed to be learning.

Trainees are provided with learning aids to help them learn, organize, and recall training content

The Help Folder is available on demand through a menu and provides:

  • “Learn to Play” which includes the basic rules
  • “Support” for help with issues related to the software or installation
  • “Help Videos” which provide demonstrated answers to the more commonly asked questions
  • “Feedback” to suggest improvements in the game

Context-appropriate help is the best way to provide relevant instruction on topics of immediate interest. Learners are intrinsically motivated to learn, particularly if we have designed a tricky bit of instruction or simulation that leaves them baffled and frustrated until they figure out what they are doing wrong. And that doesn’t mean a pop-up that says, “Sorry. The correct answer is…” or even multiple attempts at the same question until they click the right guess.

Trainees can practice in a relatively safe environment

Using the resources listed above, learners can learn about the game without exposing their ignorance, but this is not actually where the most learning takes place. This is only the initial introduction to the game.

For simulations, the training environment is already a safe place; on the job is where the danger lies. We should encourage learners to make their mistakes in the training environment and learn from them. Put the mistakes to good use. And do it where failure is cheap and doesn’t involve the loss of products or revenue.

Trainees receive feedback on learning from trainers, observers, peers, or the task itself

This is the beauty of WWF. It has a chat function that allows you to converse with your opponent. I started playing WWF with one of my old biology students, and he was deplorable at the game. Whenever he made a play that gave me the opportunity to score big, I would text him a message that he needed to play better defense. When he asked me what I meant by that, I said he shouldn’t give me opportunities to capitalize on bonus spaces with high-scoring tiles. Since then, he has improved tremendously and has even beaten me on occasion.

A recent feature WWF added is a “Leaderboard” that lets you see how well you are doing overall compared to the friends you’ve played. It assigns you a ranking based on your scores. Feedback thus ranges from immediate (you see what your play scores) to mid-term (you can track your progress against an opponent through the game via the score) to long-term (you can check your ranking on the Leaderboard).

I’m not sure if this is helpful, but if CBT tracked scores and posted high and low scores, creating a sense of competitiveness in learning, it might enhance performance. Or if it allowed for live online help. Just a couple of random thoughts that need to be fleshed out.

Trainees can observe and interact with other trainees

WWF allows you to play multiple games with as many friends (or even random strangers) as you like. It is fairly easy to see what level of play your opponent is operating at, and you can be either a learner or an instructor depending on whom you choose as an opponent.

One of the things we fail miserably at is empowering learners to become instructors. This happens on its own all the time when folks whisper back and forth in class to see what page we’re on and get clarification on various points. In webinars this happens through the chat window, sometimes in private chat. We should encourage learners to ask one another questions and answer one another’s questions. Sometimes, they have a better insight into how learners think than we do.

The training program is coordinated effectively

WWF is distributed most often over Facebook. People are able to invite their Facebook friends to play or if they don’t have (or want) a Facebook account, Zynga allows users to create a WWF account. For the most part, it is a web-hosted app that is easily accessible through your Facebook homepage or as a stand-alone app on your smartphone, tablet, or computer. You can start a game on one platform and complete it on another. While users may complain about it, the fact is, it runs quite smoothly across most platforms with a minimum of interruption.

So what does that have to do with instructional design? It illustrates how well these principles work when they are applied correctly. I wish that my training were as compelling as WWF or the newest addiction from King games, Candy Crush Saga (which is a whole different concept and one worth exploring at a later date). Until then, I think it is worthwhile for us as IDs to look at these games to see what makes them so addicting, especially when similar games are not, and see how to incorporate the principles into our training.

Game-based training seems to be all the rage nowadays. So what is a quick and easy way to create a game?

A tried and true strategy some of us have used is a board game. But it is not quick and easy. Was the training effective? Yes. Did it help break down silos? You bet. But quick and easy it was not.

I discovered Wheeldo: http://www.wheeldo.com/#/games_revised It’s limited, much like Branch Track, in that you don’t have a lot of choices, but it allows for a more interactive game format. Instead of the ID creating all the questions, it allows participants to create content as well. This looks like fun, if it survives and expands.

As games come more to the fore, I expect to see more of these sites coming on line with more products.

For those still looking for help with the career transition, I still recommend Emprove Group’s webinars: http://www.emprovegroup.com/career-search-strategies-20/4577553892 They have one coming up next week, and if you have not yet attended one, they are free. They do link to services that cost money, but that doesn’t mean their free webinar isn’t packed full of helpful info. Stop in and enjoy seeing what happens when an Instructional Designer/CLO goes into career counseling full time.
