rockidscience.com Instructional Design Basics

18 Oct 2010

Show Me the …

Growing up in Missouri (The Show Me State) has made me a skeptical type of fellow; that is, if you tell me something I’ll often ask for clarification or proof before I just accept it as law. Now this attitude may come from living in small-town America or the blue-collar background of my parents, but I don’t think it is exclusive to me—Missourians, and Midwesterners in general, know what is up in this regard. And two of Missouri’s native sons have eloquently summed up our thoughts:

There are three types of lies: lies, damned lies, and statistics.
Quote popularized by Mark Twain

If you can't convince them, confuse them.
Harry S. Truman

So yes, we are aware of the tricks people use when selling snake oil, and as a trainer you may be tempted to try some of these tricks yourself. Here we know that training, as a function, is often required to justify its existence:

  • Why should the company fund this training project?
  • Why should the company invest in this technology to deliver training?
  • Why do you need an instructional designer when Shirley over there is a subject matter expert—can’t she just make something up?

Yes, there are many obstacles we must face as learning professionals; however, you should avoid selling snake oil—you may run into someone from Missouri, and more importantly, these tricks are not needed. Today I’ll help you demonstrate your value by talking about the levels of assessment and how they can make believers out of everyone.

Levels of Assessment

A key to justifying your existence is figuring out the data that people care about and value. Some folks may be pure numbers types (cost saved, number of people trained, hours of training offered), while others may be more concerned with returns (decrease in the number of help desk tickets, increase in patient satisfaction scores, ...), and still others may want something else. All of this is a lot to figure out, and this is why assessment has been an important area of study for some time. Much of this work is based on Kirkpatrick’s levels of assessment.

According to Kirkpatrick there are basically four purposes for assessments:

Level 1—Employee Reactions: This type of assessment will tell you what the students think about the training.  These are the basic smile sheets that you have students fill out once the class/course is complete. This type of assessment generally gives you information on what the students think of the content, materials and instructors; however, some questions here can provide you with valuable information about the quality of the content as well as the potential for use/implementation.

Level 2—Employee Learning: This is your typical knowledge test that you have students complete once training is done. It is usually a closed-ended assessment that can be tracked and measured. This type of assessment identifies what students have learned as a result of the training. In addition, these assessments can identify areas in the training that need further refinement/development.

In this instance, if you have a question that only 30% of students got right, that may indicate an area in the training that needs additional content refinement or it may indicate a question that needs to be rewritten.
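
If your LMS can export raw attempt data, spotting these problem questions is easy to automate. Here’s a minimal sketch in Python, assuming a simple export of (question, correct/incorrect) records—the function and field names are just illustrations:

    # Minimal item-analysis sketch; assumes attempts arrive as
    # (question_id, was_correct) pairs from some hypothetical export.
    from collections import defaultdict

    def question_correct_rates(attempts):
        totals = defaultdict(int)
        correct = defaultdict(int)
        for question_id, was_correct in attempts:
            totals[question_id] += 1
            if was_correct:
                correct[question_id] += 1
        return {qid: correct[qid] / totals[qid] for qid in totals}

    def flag_for_review(rates, threshold=0.30):
        # Questions at or below the threshold need a content or wording review.
        return [qid for qid, rate in rates.items() if rate <= threshold]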

Level 3—Employee Behavior: This is a follow-up assessment that occurs after the training (30 days, 6 months, …). It will identify whether the students actually changed their behaviors as a result of the training. This assessment is more open-ended and will focus on interviews and observations. This information is important as it helps determine whether the training is effective—that is, did the users change their behavior? In addition, this type of assessment can help identify organizational problems that might hinder the training efforts.

Here we know that training isn’t always successful, and the reasons for these failures often have less to do with training and more to do with organizational problems. For instance, users may not change their behavior because it isn’t supported, rewarded, or required.

Level 4—Return On Investment: This is another follow-up assessment that is aimed at measuring whether the training has achieved its intended results. Traditionally in Kirkpatrick’s levels, this is tied back to the goals and objectives within the training. So if the goal was to improve patient safety, one thing you might measure/assess here is the number of falls before and after the training.

This type of assessment is often difficult to do and as such is rarely practiced in full. Data from the previous levels and other data sets are often used in its place as a means to support the return-on-investment concept. Typically this is represented in a scorecard and contains a variety of data types. Industry benchmarks are often used here to highlight how your organization compares to others. Scores such as cost per hour of instruction, cost per student, Net Promoter Score, and others can provide powerful information on your training efforts.
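
To make this concrete, here’s a rough sketch of how a couple of these scorecard numbers fall out of basic figures; every number below is made up for illustration:

    # Hypothetical scorecard math; all figures here are example values.
    development_cost = 24000.00    # total cost to build the course
    delivery_cost = 6000.00        # hosting/delivery costs
    students_trained = 400
    hours_of_instruction = 8

    cost_per_student = (development_cost + delivery_cost) / students_trained
    cost_per_hour_developed = development_cost / hours_of_instruction

    print(f"Cost per student: ${cost_per_student:.2f}")                 # $75.00
    print(f"Cost per hour developed: ${cost_per_hour_developed:,.2f}")  # $3,000.00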

Regarding the levels of assessment, understanding what information people value and care about is a key to reporting your results.  This understanding will help shape the story you need to tell to demonstrate your value.

So if you are reporting to someone who cares about “learning” then you might want to report:

  • Differences in pre- and post-test scores (level 2),
  • Follow-up results with managers (level 3),
  • Hours of training used (level 4) and
  • Hours of training available (level 4).

If you are reporting to someone who cares about “cost” then you might want to report:

  • Cost per student (level 4)
  • Development cost per hour of instruction (level 4)
  • Industry benchmarks for cost per student and cost per hour developed (level 4) and
  • Student reactions to the training (level 1).

Using Kirkpatrick’s levels in this regard should give you the base data that you need to tell your story.
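
If you want to systematize this, one option is to keep your base data in one place and define a view of it per audience. A quick sketch—the metric names and values here are hypothetical placeholders for data pulled from your LMS and other systems:

    # Hypothetical base metrics; each is tagged above with its Kirkpatrick level.
    metrics = {
        "pre_post_score_gain": 18.5,                      # level 2
        "manager_follow_up": "72% applying new skills",   # level 3
        "hours_used": 3200,                               # level 4
        "hours_available": 4000,                          # level 4
        "cost_per_student": 75.00,                        # level 4
        "smile_sheet_average": 4.2,                       # level 1
    }

    # One view of the data per audience.
    report_views = {
        "learning": ["pre_post_score_gain", "manager_follow_up",
                     "hours_used", "hours_available"],
        "cost": ["cost_per_student", "smile_sheet_average"],
    }

    def build_report(audience):
        return {name: metrics[name] for name in report_views[audience]}

    print(build_report("cost"))  # only the metrics that audience values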

Use Your LMS

Gathering and reporting all of this data can be a big task if you are the one responsible for it. To help with this task, you should research your LMS and get to know its capabilities as well as its base data structures.  This information will enable you to create meaningful data for your reporting needs.

Some standard reports available in our LMS are:

  • Level 1
  • Level 2—Test Attempts
  • Level 2—Question Analysis

In addition, there are several fields and values within our LMS data structures that can be used to develop key tracking metrics. Here we can set up processes that will help with your back-end reporting needs and develop custom reports for you. If you are interested in expanding your normal reporting efforts, please contact me so I can help you tell your story.
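
As an example of this kind of back-end work, here’s a sketch of rolling your own Level 2 summary from a hypothetical CSV export of test attempts—the column names are assumptions, so check what your LMS actually produces:

    # Roll-your-own "Test Attempts" summary from a hypothetical export
    # with columns: course_id, student_id, score.
    import csv
    from collections import defaultdict

    def test_attempt_summary(path):
        scores = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                scores[row["course_id"]].append(float(row["score"]))
        return {
            course: {"attempts": len(vals), "average": sum(vals) / len(vals)}
            for course, vals in scores.items()
        }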

While telling your story, though, I encourage you to tell it like it is—that is, don’t sell snake oil. In doing so you should be able to hold your head up high while still demonstrating your value. And if this doesn’t work, just think back to good old Harry:

To hell with them. When history is written they will be the sons of bitches—not I.
Harry S. Truman

27 Sep 2010

Stuck on You

One of my first real experiences with kids occurred several years ago when I went to visit my buddy Scott. Here I got to spend a few days with him, his wife, and their son Jeffrey. And I must say, Jeffrey was a pretty cool little kid—he was smart, creative, well-mannered, and everything else you could hope for in a son.

It wasn’t all good though; Jeffrey did have one small episode while I was there. It occurred one night while he and I were sitting at the table playing with his dinosaurs. When it was time for his bath, the little guy calmly ignored his mom’s request and kept on playing. At this point his mom tried to reason with him, telling him that he would get to play after the bath, but he wasn’t having any of it.

This standoff ended shortly after his dad got involved—Jeffrey got a spanking, had to take his bath, and then went to bed without finishing his dinosaur adventure. Things could have gone much more smoothly for the little guy if he could have just let go for a bit instead of getting stuck. In his mind the dinosaurs had important things to do—the bath had to wait. This, however, did not jibe with reality—that being, the need to listen to his parents.

In a way, this is a situation I have encountered in many of the projects I have worked on. Here I have often seen people getting stuck on an idea of how something should be presented rather than the reality of the project. Today I’ll talk about some of these experiences and hopefully present you with some ways to get unstuck.

Approaches

I often see people getting stuck in this regard when a solution calls for a non-traditional approach.

Traditional Approaches
Traditional in this sense is a reference to behaviorist approaches that tend to follow a standard Tell/Show/Do model.  Here a course is very controlled and systematic—topic A then topic B and then C—and probably has a lecture type feel to it.

One key component of these types of approaches is the need for closed-ended interactions and assessments. Here definitive correct/incorrect behaviors must be demonstrated and assessed. In these approaches, objectives are often covered as distinct content pieces and assessed as such. And the overall training goals of the solution are realized when the users demonstrate specific behaviors.

Non-traditional Approaches
Non-traditional in this sense is a reference to constructivist and social learning approaches that tend to have their own project specific models.  Here a course is less controlled and may not be systematic— part of topic B and C unless the student wants A first and then maybe the rest of topic B—and may take the form of stories, scenarios, games, simulations and collaborative problem solving activities.

These approaches use open-ended assessment activities that require a rubric of some kind. In addition, objectives are usually covered together and wrapped around a real-world story/problem. The goals of these courses are usually not defined around changing specific behaviors; rather, they center on the user producing a product.
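
For reference, a rubric here can be as simple as named criteria with performance levels. A minimal sketch with made-up criteria and levels:

    # Bare-bones rubric sketch; criteria and level names are made-up examples.
    rubric = {
        "problem_analysis": ["missing", "partial", "complete"],
        "solution_quality": ["unworkable", "workable", "strong"],
        "collaboration": ["none", "some", "consistent"],
    }

    def score(submission_ratings):
        # submission_ratings maps criterion -> chosen level index (0-2);
        # higher totals mean stronger performance on the open-ended product.
        return sum(submission_ratings[c] for c in rubric)

    print(score({"problem_analysis": 2, "solution_quality": 1, "collaboration": 2}))  # 5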

Obviously there is a lot more to behaviorism, constructivism, and social learning, but we’ll save that for later posts. For now just realize that each approach has some fundamental differences in what is required, and these differences are often what get people stuck.

Getting Stuck—Creating Content

A few years ago I worked on a project that had a simulation at the end of the course. Here I worked with another ID who was a hardcore, “A then B then C” type of fellow. So as we worked on the simulation we quickly ran into some roadblocks.

As stated above, interactions for constructivist approaches tend to be more open-ended in nature and revolve around real-world situations. How this might play out in your assessment is that in the real world there often isn’t one clear right/wrong way to address a problem. In addition, sometimes there may not even be a good choice for a problem—only the lesser of two evils. So it may make sense for your interactions to reflect these types of situations.

Knowing this I wrote the storyboards around a series of events that a user may experience on their job. The storyline that emerged here included a few of these challenging situations. In these instances, I limited the user choices to ones that didn’t have definitive correct answers.

I finished my storyboards and passed them on to the other ID for a review cycle. As this other ID reviewed these she immediately started rewriting the interactions.  Here she edited the situations and options so that there were clear right/wrong choices.  And by doing this, she altered the storyline to such an extent that it no longer followed a real world situation. In addition the final product that emerged was something that felt less like a simulation and more like a series of multiple choice questions that were loosely tied together.

This was an instance of the ID being stuck on the nature of assessment. In her mind these situations had to be clear and definitive—your classic closed-ended question types. The reality of the project, though, called for something else.

Now in her defense, I can understand her motives for the changes—she wanted the assessment to hit the exact content that was covered previously in the course. In the course things were clear—there were specific answers and steps to follow. In addition, the content had a high level of detail to it. Here the training focused on a small set of parameters, variables, and conditions. Anything outside of this may seem like a trick question or something that is unfair to the user.

These are valid concerns, but there are better ways to address them in your simulations. The nice thing about these types of assessments is that you have more feedback options available to you. With simulations you have the following options:

  • Storyline—this is how the characters react or what changes to the environment happen based on the user choices.
  • Image—visual display of the changes to the environment.
  • Assessment—specific feedback you can give the user about their choice.

Each one of these options presents an opportunity to tie back to your clear and specific course content.  So in this regard you could have a character react in a way that forces a new interaction.  This new interaction could contain the expanded detail content that was covered in the course.  Your feedback offers the easiest way to address this though as you can tell users exactly how it connects back to the course content.
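
One way to keep all three channels straight while storyboarding is to treat each choice as a little record that carries its own storyline, image, and assessment feedback. A rough sketch—the structure and names below are just one way to organize it, not a standard:

    # Illustrative structure for a simulation choice; all field names and
    # content are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class Choice:
        text: str        # what the user selects
        storyline: str   # how characters/environment react
        image: str       # visual change shown after the choice
        assessment: str  # specific feedback tying back to course content
        next_scene: str  # the interaction this choice leads to

    choice = Choice(
        text="Escalate the complaint to the charge nurse",
        storyline="The patient calms down, but the charge nurse asks why "
                  "you didn't try the de-escalation steps first.",
        image="charge_nurse_arrives.png",
        assessment="Escalation works here, but recall the de-escalation "
                   "steps covered in Topic 3 before involving others.",
        next_scene="scene_04_followup",
    )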

The important thing to remember with simulations is not to get stuck on the questions you are asking. Focus on the story and use it to clear up and connect to any existing course content. Some tips on developing simulations can be found here. I’ll have more on simulations in future posts.

Getting Stuck—Navigation and Interface Needs

This next project was quite a few years ago and focused on a new orientation course.  Here the sponsor wasn’t so concerned with users demonstrating mastery skills, rather they wanted a resource that would introduce users to their new world.  Engagement was a key focus for this sponsor as they wanted users excited about their jobs.

So after meeting with the sponsors I developed the appropriate design/scoping information and passed it on to the instructional designers (IDs) on the project. Here the IDs started developing the storyboards around our standard interface template. In addition to this template, they started reusing our other normal navigation controls (Table of Contents, Next, Back,…) and interactions within the course.

Shortly after they started the storyboards we all met up to see how the course was going. At this point, I discovered that what was emerging was basically our old A then B then C type of course. I knew this wasn’t what the sponsor had in mind, so I stepped in to help the IDs on the project. Here we had to start over, as they were stuck on trying to make the content fit into existing templates and frameworks.

To start fresh we defined the common elements and themes in the content. With these themes we then talked about how they could be represented in menu structures that users could interact with. For example, one theme was that users would have to interact with a lot of different people in their new job. So we created a visual representation of these people that users could click on to interact with and access the content. This graphical menu structure was much more engaging than our normal navigational structures. We did this with several themes and created a hierarchical content framework for the course.

We also found ways to create new interactions within the content. For example there were several tables in the content that contained various statistics about their jobs. Here we introduced a slider bar that users could interact with to view the different values. We followed this up with questions on the data contained in the new animated tables.

This process wasn’t easy though, as the IDs were seriously stuck on their old methods and strategies. To get them comfortable with the new approach, collaboration was key—here I needed to create an environment that didn’t constrain possibilities. Next we had to take these ideas and represent them online with prototypes. In this regard, I encouraged the shitty-first-draft concept: I just wanted them to build out the content—we would refine it and fix it as we went. And slowly but surely the course that emerged was very open and focused around discovery interactions. More importantly, our client was very satisfied with the end result.

What To Do if You Are Stuck

If you haven’t developed a non-traditional course yet and you are tasked to do so, you will probably get stuck at some point. Developing this type of course is a paradigm shift—the standard Tell/Show/Do models and your regular interface templates will not work here. You have to go back to your project needs and examine how those relate to constructivist and social learning activities. Here, don’t be afraid of creating a terrible first draft. Also plan on using prototyping and collaboration to help you get your content into shape. After a few painful tries, you’ll learn to let go and just go with it. This is important, as you don’t want to end up like little Jeffrey here—getting a spanking is no fun at all.

31 May 2010

Assessing Your Assessment—Part II

Before getting into Instructional Design, I started off as a high school social studies teacher. While prepping for that field, a main area of study for me was history, and in one of my classes a professor once asked me, “Why is the study of history important?”

Up until that point I had never really considered the why—for me, it was just something we had to do.  So in answer, I gave the usual, “you should study history so that you don’t repeat the problems encountered in the past.”  I thought that sounded pretty good, after all, there is a famous quote similar to what I said so it must hold some relevance.

However, my professor had a very different reason:

“The study of history is about cause and effect—this event led to this result, which then led to this...  So in essence, the purpose behind studying history is that it improves your critical thinking skills, and that is why history is taught in our schools.”

Hmmmm, now that actually sounds like a real reason for the study; unfortunately, it doesn’t match up with how history is taught. For instance, think back to your old high school history class—what kind of tests did you take? Were the questions something like:

  • When did Columbus discover the new world?
  • Who conquered the Aztec empire?

I bet that’s pretty close to the truth—most of the tests you took probably centered on dates, names, and places.  And for the most part, these tests assessed your ability to recall and memorize facts and figures.  What they didn’t do was assess your critical thinking skills.

Now, as may be common with my posts, you may be asking yourself what this has to do with you. Well, it’s simple: this is a common problem associated with test construction—creating test questions that are not focused on the right thing. Often the tests we create focus on recall and memorizing facts and figures rather than the real purpose behind our training.

Knowledge Types
When constructing a test, a way to avoid this problem is to step back a moment and reexamine your objectives; specifically, you should check to see how your objectives match up to your content knowledge types.

By using knowledge types you can break down any content into the following categories:

  • Fact
  • Concept
  • Rule/Principle
  • Procedure
  • Interpersonal
  • Attitude

These categories are important because they provide you with information on how each objective should be covered and assessed in your course.

In this light your test question needs become a little clearer, for instance:

  • If you have an objective that is a procedure, your assessment for that objective should be focused around testing the procedure. “Demonstrate how to …”
  • If you have an objective that is a rule/principle, you should assess that rule/principle,  “Calculate the ….”
  • If you have an objective that is a concept, you should assess that concept, “Distinguish between …”
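
A handy way to keep these pairings in front of you while writing questions is a simple lookup from knowledge type to a starter stem. The stems below are illustrative starting points, not a fixed taxonomy:

    # Knowledge type -> example assessment stem; stems are illustrative.
    assessment_stems = {
        "fact": "Identify the ...",
        "concept": "Distinguish between ...",
        "rule/principle": "Calculate the ...",
        "procedure": "Demonstrate how to ...",
        "interpersonal": "Role-play a conversation where ...",
        "attitude": "Choose the response you would give when ...",
    }

    def stem_for(knowledge_type):
        return assessment_stems[knowledge_type.lower()]

    print(stem_for("Procedure"))  # Demonstrate how to ...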

Levels of Learning
The verbs you use in your objectives will start to highlight which items are at the factual or recall level and which are at a deeper level of learning—the application level. Verbs such as define, identify, list, … are aimed at recalling factual information and as such, like the history tests we took, probably don’t fully address your training needs.

Now it’s hard to talk about verbs for objectives and levels of learning without someone bringing up Bloom.  Bloom and his colleagues did some great work identifying levels of learning and establishing a hierarchy of these levels.  And if you want to use their work to help you develop your objectives and assessment needs you will probably be fine.

I will caution you, though, to think about your true needs while consulting Bloom’s work. For instance, synthesis and evaluation are at the top of Bloom’s hierarchy—it is doubtful that you will need to assess at that level with all of your content. The important thing to remember here is your goal for the training: is it to change a specific behavior or to create a subject matter expert? Answering this should help you determine what level of learning you need to focus on.

In later posts I will expand on knowledge types and examples of questions to use for each type. For now just make sure you focus your test questions around the real purpose of your training.  Is it recall, is it application, is it to change behavior, or is it to create a subject matter expert?

14 Apr 2010

Feedback you must give

Many years ago a young man named Daniel was learning Karate in order to defend himself.  His neighbor, an old Asian man, became Daniel’s teacher and employed an unorthodox approach to his training. This training consisted of having Daniel complete a series of tasks. Here the master didn’t provide much in the way of feedback—Daniel didn’t know why he needed to do the activities or how well he was doing at each task.  Needless to say, Daniel soon became frustrated with the training and was ready to quit.

If you go back farther in time, you will find Luke, another young man trying to learn the art of self defense. In this pursuit, Luke had an equally frustrating teacher—his master provided feedback but often spoke cryptically or in a way that was difficult to understand. Under this type of training, Luke soon wondered if he would succeed in his goal.

If you are a fan of movies from the ’80s you may have guessed the teachers I am talking about, and if so, you may think that they were effective teachers. After all, if you have seen the movies, then you know that their students succeeded in their endeavors. If you don’t know who I am talking about, the quotes below may help:

Master 1: Daniel-san... [taps Daniel’s head] Karate here.  [taps Daniel’s heart] Karate here. [taps Daniel’s belt] Karate not here.

Master 2: …Anger, fear, aggression; the dark side of the Force are they. Easily they flow, quick to join you in a fight.

If you still don’t know who the teachers are then you have missed a couple of classic movies.  The first teacher is Mr. Miyagi from The Karate Kid and the second one is Yoda from The Empire Strikes Back—and regarding training, you don’t want to be a Mr. Miyagi or a Yoda. That’s because each teacher failed to give their students proper feedback.

For instance, Mr. Miyagi had Daniel “paint the fence” and “wax the car” for days before giving him any feedback on what he was doing or why. And Yoda, well, he was terrible at giving clear, concise feedback: “Named must your fear be before banish it you can.”

Seriously Yoda, what the hell does that mean?

This lack of feedback was a key reason for their students’ frustration and initial failures. It is amazing that Daniel and Luke succeeded in this type of training environment at all, as most students would have quickly failed and quit. Luke was strong in the Force, though, and maybe that is what saved him here.

So assuming you don’t have skilled pupils—those strong in the Force—you will need to give them feedback that is timely and clear. Let’s take a look at some of these requirements.

Giving Detailed Feedback
I once had an Instructional Designer give me a job-aid they had developed that outlined feedback requirements. This job-aid stated that each instance of feedback should indicate:

  • If a choice was the wrong answer
  • Why it was the wrong answer
  • What the correct answer was
  • Why the correct answer was correct and
  • The area in the course where the content was covered.

Now some will agree with that designer and say that giving that level of detail is necessary. They may even proceed to pull out a study or two supporting their position.  And technically they are right—studies will point to the need for detailed feedback.

I, however, immediately took this job-aid and threw it away. I suggest you ignore it as well, because few, if any, students will spend their time reading all of that content. Online learners skim, skip, and jump around in our courses—they don’t spend their time reading dense passages.

So instead of spending all that time writing out extra detailed feedback, focus on something that will really engage your users—make your course scenario based, add a good interaction, develop additional graphics to support your content, follow up with remedial activities, or whatever. Just don’t get bogged down writing out a bunch of content that your users will not read.

What Feedback Should You Cover
Now if detailed feedback isn’t necessary, what should you cover in your feedback? My advice here is to tailor your feedback to your needs at the time. If why a choice was wrong is particularly important, then hit that and let your students move on. If the correct answer is what is important, then cover that well and let your students go.

One point I think you should include in your feedback is where the content was covered in the course. Since your content is online, you can probably link to the content directly, which will aid your students in their quest to scan through your content.

Limits You May Encounter
Besides your students’ wandering attention, other things may limit your feedback options. For instance, the test engine we use at BJC only allows for one feedback option. With this limitation you can’t have specific feedback for each choice; rather, you get one shot at feedback and it has to account for every choice in your question. This poses a challenge, and in this instance I recommend just indicating the correct answer, why it is correct, and where students can find the content in the course.
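
In practice that means composing one message that does triple duty. A tiny sketch—the function and its inputs are hypothetical:

    # Compose a single combined feedback message when the test engine only
    # allows one feedback slot per question; all names here are illustrative.
    def combined_feedback(correct_answer, why_correct, content_location):
        return (
            f"The correct answer is {correct_answer}. "
            f"{why_correct} "
            f"This content is covered in {content_location}."
        )

    print(combined_feedback(
        "B",
        "Hand hygiene is the single most effective control.",
        "Topic 2, page 5",
    ))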

This last option (where the content is located) may prove to be another limitation, as courses and tests are usually separate learning objects within an LMS. Linking between these separate objects (the course and the test) can require advanced programming, and if this is the case, I suggest referencing the topic or the specific page within the course.

By following these suggestions and giving timely and clear feedback, you should help ensure your students’ success—and maybe one day one of those students will grow up to save the universe or win some silly Karate tournament.

14 Mar 2010

Oh no you didn’t …

I once sat in an interview for a new ID position. The lady who came in did pretty well—she answered the questions nicely, was friendly and engaging, and demonstrated sufficient knowledge of how to develop instructional materials.

Part of her interview included a sample of her work; in it, the writing was interesting, the content groupings made sense, the choice of media was appropriate, and she had a nice open interface for users. Overall I thought she did a pretty good job and was quite surprised by the team debrief that followed.

She used the verb “understand” as an objective! Tsk, that isn’t a proper verb for an objective. After all, you can’t observe “understanding”; she obviously doesn’t know how to develop …

Yeah, objectives are a fundamental skill; if you don’t follow the Mager style you can’t …

At this, I slowly sat back in my chair and rolled my eyes—ugggh, the ID Nazis were on the march again. It wasn’t going to go well for the lady who had just interviewed.

You see, there is a war in our field between those who follow the strict science of Instructional Design and those who follow a more holistic approach to designing instruction. The lady who had just interviewed stepped into the middle of this war and quickly became a casualty of it. Here the Nazis won and she didn’t get the job.

So you may be asking yourself what this has to do with you. Well quite simply it is this:

Be careful about your use of objectives around an Instructional Designer, as that designer may be a Nazi.

But there is more to it than this, as the Nazis have some valid points concerning their zeal for objectives. After all, objectives are a critical part of a successful training intervention. Today I’ll give you a start on their use.

Development

As mentioned, objectives are critical to the development of training initiatives. While developing your content you need to determine what it is you want students to be able to accomplish upon completion of the training. At this stage, it helps to start with high-level goals and then break these down into specific objectives.

These course objectives define the what, when, where, and how of your course and will drive your content creation efforts. They provide the framework from which you can build the content and a checklist of the content you need to develop for your training. Having a checklist will help ensure that you cover all of the needed content and minimize any extra or unneeded content in your course. Here objectives force you to focus on “have to know” content and will narrow your development efforts to the defined goals of your training.

Once you have finished the content creation process, you can pull out your objective checklist and review your content. This type of review will allow you to determine if your content actually addresses the goals of your training intervention.

Evaluation

Objectives will also drive your assessment needs.  As mentioned above, objectives identify the “have to know” content in your training.   Since this is “have to know” content, you should evaluate these items in order to determine if the students have mastered the goals of your training.

So when developing your assessment, you should ensure that each objective has at least one question or interaction associated with it.
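
If you tag each question with the objective it assesses, checking this coverage is trivial to automate. A minimal sketch with made-up identifiers:

    # Flag objectives that have no assessment question; IDs are hypothetical.
    objectives = ["OBJ-1", "OBJ-2", "OBJ-3"]
    question_objectives = {
        "Q1": "OBJ-1",
        "Q2": "OBJ-1",
        "Q3": "OBJ-3",
    }

    covered = set(question_objectives.values())
    uncovered = [obj for obj in objectives if obj not in covered]
    print("Objectives without a question:", uncovered)  # ['OBJ-2']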

Other Tips

There are some further tips/tricks that can be used with objectives; for instance, you can use objectives to:

  • Define the sequencing of your content in the development phase,
  • Identify instructional strategies to use in your content creation efforts and
  • Identify the types of evaluation methods to use in your assessment.

However, these and specifics on how to create objectives will be covered in a later post.  For now just watch out for Nazis!