US20150024366A1 - Electronic learning system - Google Patents

Electronic learning system

Info

Publication number
US20150024366A1
Authority
US
United States
Prior art keywords
query
response
user
instruction
subject matter
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/507,471
Inventor
David Jeffrey Graham
Allan A. Sura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deck Chair Learning Systems Inc
Original Assignee
Deck Chair Learning Systems Inc
Application filed by Deck Chair Learning Systems Inc
Priority to US14/507,471
Publication of US20150024366A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/06 Electrically-operated teaching apparatus or devices working with questions and answers of the multiple-choice answer-type, i.e. where a given question is provided with a series of answers and a choice has to be made from the answers
    • G09B7/08 Electrically-operated teaching apparatus or devices working with questions and answers of the multiple-choice answer-type, i.e. where a given question is provided with a series of answers and a choice has to be made from the answers, characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying further information

Definitions

  • the present disclosure relates to electronic learning systems, such as adaptive and remediative electronic learning systems, and to associated methods, software and computing devices.
  • Electronic learning systems are systems that present a curriculum to a user (e.g. a student) for the purpose of assessing knowledge and skills and teaching the curriculum to the user.
  • Electronic learning systems are typically computer-based.
  • a typical curriculum consists of multiple question items. Each question item typically includes a query or instruction and a response mechanism for receiving a user response to the query or instruction.
  • An exemplary query may be “What is the capital of New York State?” and the corresponding exemplary response mechanism may be a set of individually selectable (multiple choice) answers or an editable text field into which a user response may be typed.
  • the instruction “Identify the capital of New York State” could be substituted for the above query.
  • the query or instruction and response mechanism are typically displayed at the same time, e.g. on a single computer screen.
  • the user's response may be recorded, possibly along with a measured totality of elapsed time between presentation of the question item and the entry of a user response.
  • a user proficiency measure may be generated based on the accuracy of the user's responses.
  • a machine-readable medium storing instructions for presenting a question item of an electronic learning curriculum, said instructions, when executed by a processor of a computing device, causing said computing device to: (a) retrieve from a data store a question item comprising: textual, visual or auditory subject matter; a query or instruction pertaining to said subject matter; and a representation of a response mechanism for receiving a user response to said query or instruction; (b) present said subject matter, said query or instruction and said response mechanism in a presentation sequence, wherein a presentation duration of each of said subject matter, said query or instruction and said response mechanism in said presentation sequence is controlled by user input; (c) based on said user input, determine: a presentation duration of said subject matter; a presentation duration of said query or instruction; and a presentation duration of said response mechanism; and (d) store indicators of each of said three presentation durations.
  • a computing device having a processor in communication with memory storing instructions for presenting a question item of an electronic learning curriculum which, when executed by said processor, cause said computing device to: (a) retrieve from a data store a question item comprising: textual, visual or auditory subject matter; a query or instruction pertaining to said subject matter; and a representation of a response mechanism for receiving a user response to said query or instruction; (b) present said subject matter, said query or instruction and said response mechanism in a presentation sequence, wherein a presentation duration of each of said subject matter, said query or instruction and said response mechanism in said presentation sequence is controlled by user input; (c) based on said user input, determine: a presentation duration of said subject matter; a presentation duration of said query or instruction; and a presentation duration of said response mechanism; and (d) store indicators of each of said three presentation durations.
  • a machine-readable medium storing instructions that, when executed by a processor of a computing device, cause said computing device to: (a) calculate a user competency measure for each question item in each of a plurality of question item sets comprising an electronic learning curriculum, said calculating being based on an average response accuracy for said question item and an average completion time for said question item over a plurality of trials; (b) for each question item set of said plurality, calculate a collective user competency measure for the set based on the user competency measures of the question items of said set; and (c) automatically select for repeated presentation the question item set whose collective user competency measure is closest to but below a threshold.
  • FIG. 1 is a schematic diagram illustrating an electronic learning system
  • FIG. 2 is a flowchart illustrating operation for authoring a single question item using the electronic learning system of FIG. 1 ;
  • FIGS. 3A-3C are schematic diagrams illustrating three exemplary display modes for a question item
  • FIG. 4 is a schematic diagram illustrating an exemplary question item record used by the system of FIG. 1 ;
  • FIG. 5 is a flowchart illustrating operation for creating a curriculum using the electronic learning system of FIG. 1 ;
  • FIG. 6 is a schematic diagram illustrating question item sets forming part of a curriculum created using the electronic learning system of FIG. 1 ;
  • FIG. 7 is a flowchart illustrating operation for presenting the question items of a question item set
  • FIGS. 8A-8C are schematic diagrams illustrating user interaction with three exemplary question items, each having a different display mode
  • FIG. 9 is a flowchart illustrating operation for computing a user competency measure for a question item
  • FIG. 10 is a table illustrating exemplary user competency measures as may be computed by the system of FIG. 1 ;
  • FIG. 11 is a flowchart illustrating an exemplary curriculum advancement algorithm used by the system of FIG. 1 ;
  • FIG. 12 is a table illustrating advancement of a user from a beginner skill level to an expert skill level according to the algorithm of FIG. 11 .
  • the system 10 includes an authoring workstation 12 for authoring an electronic learning curriculum and a user workstation 14 for using the electronic learning curriculum.
  • the workstations 12 and 14 each communicate with a curriculum server 16 over a network 18 .
  • During an authoring stage, an author interacts with authoring workstation 12 to create an electronic learning curriculum, which is uploaded to the curriculum server 16 .
  • During a use stage, a user (e.g. a student) at user workstation 14 accesses the curriculum stored at server 16 during one or more sessions for the purpose of learning the curriculum.
  • the authoring and use stages will be described in detail below.
  • Each of the authoring workstation 12 and user workstation 14 is a computing device having a display and at least one processor in communication with memory storing software in the form of processor executable instructions.
  • the workstations 12 and 14 may for example be conventional multipurpose desktop computers, laptop computers, palmtop computers, mobile wireless communication devices or portable digital assistants, modified for operation as described herein.
  • the term “workstation” as used herein should not be understood to connote a particular device architecture.
  • the authoring workstation 12 is not necessarily the same type of computing device as the user workstation 14 .
  • The term "computing device" as used herein is understood to encompass any device (electronic or otherwise) capable of performing computations including, but not limited to, the devices enumerated above, and possibly including future computing devices such as implants or computer-aided prosthetics.
  • Each of the workstations 12 , 14 has a network interface for communication over network 18 .
  • the software that is stored in memory at the workstations 12 or 14 may be loaded into memory from machine-readable medium 20 or 22 (respectively), which may for example be an optical disk, a form of magnetic storage medium such as a memory stick, hard disk drive, flash drive or another form of tangible storage medium capable of being read by a computing device, or received over a network from a remote source.
  • the software at the author's workstation 12 includes a software application (possibly a web browser) by which the electronic learning curriculum is authored while the software at the user workstation 14 includes a software application (also possibly a web browser) by which the electronic learning curriculum is remotely accessed.
  • Curriculum server 16 is a computing device having at least one processor in communication with a data store (e.g. volatile or nonvolatile memory such as RAM, ROM or secondary storage) storing at least one electronic learning curriculum for remote access by users at workstation 14 .
  • the curriculum server also stores software for presenting the electronic learning curriculum. This may comprise web server software that permits the electronic learning curriculum to be accessed by users at remote workstations executing web browser applications.
  • the curriculum server software may be loaded from a machine-readable medium 24 , which may for example be an optical disk, magnetic storage medium or another form of tangible storage medium capable of being read by a computing device, or received over a network from a remote source.
  • the curriculum server 16 also has a network interface for communication over network 18 .
  • Network 18 is a conventional data network or network of data networks.
  • the primary role of network 18 within system 10 is to permit an author to upload the electronic learning curriculum from authoring workstation 12 to curriculum server 16 upon completion of the authoring stage and to permit a user at user workstation 14 to remotely access the electronic learning curriculum from the curriculum server 16 during the use stage.
  • Network 18 may for example be the Internet, an intranet, or a proprietary data network.
  • an author creates an electronic learning curriculum made up of a plurality of question items.
  • Each question item in the curriculum has three components that shall be presented to the user in a presentation sequence during the use stage.
  • the first component is textual, visual or auditory subject matter, referred to as the presentation component (or “P component”).
  • the P component may be considered to represent the subject matter of the question item, i.e. what the question item is about.
  • the P component may comprise combinations of textual, visual and/or auditory subject matter (e.g. a video clip including text having a sound track, text with accompanying audio, etc.).
  • the second component is a query or instruction pertaining to the subject matter, referred to as the query component (or “Q component”).
  • the third component is a response mechanism for receiving a user response to the query or instruction, referred to as the response component (or “R component”).
  • the R component is a mechanism for receiving a user response.
  • the subject matter may be an X-ray image illustrating a fractured bone; the query or instruction may be “What type of fracture is illustrated?” or “Identify the type of fracture”; and the response mechanism may be a multiple choice response mechanism providing a number of individually-selectable responses, each naming a different type of fracture, and permitting the user's response to be received (e.g. as a selection of, say, a radio button graphical user interface (GUI) element).
  • the author when creating a question item for an electronic learning curriculum, the author specifies P, Q and R components for that question item.
  • the author also selects a display mode for the question item.
  • the display mode determines the presentation sequence for the P, Q and R components.
  • the order of presentation is either “P then Q then R” or “Q then P then R”.
  • the presentation sequence comprises: (i) presenting one of P and Q; then (ii) presenting the other of P and Q; then (iii) presenting R.
  • the display of successive question item components may be either cumulative (i.e. display the succeeding component without removing the preceding component, so that both are shown together when the succeeding component appears) or consecutive (i.e. display the succeeding component in place of the preceding component, so that the preceding component is no longer shown).
  • the display mode may also stipulate whether it is possible to review earlier displayed but no longer visible question item components (in the case of consecutively displayed question item components).
  • Exemplary operation 200 for authoring a single question item is shown in FIG. 2 .
  • Author input representing the P component, i.e. textual, visual or auditory subject matter (including combinations of textual, visual and/or auditory subject matter), is received.
  • the subject matter may take various forms, including but not limited to text, an image, a video clip, an audio clip, an animation, an interactive animation (such as Adobe® Macromedia® Flash® or Microsoft® PowerPoint™), content created for a third party or commercially available application (e.g. Mathematica® or Excel®), possibly executed by way of a macro or the like.
  • the subject matter input may be provided by the author at the authoring workstation 12 in various ways.
  • It may be loaded from a machine-readable medium 20 , received over a network from a remote source, or directly provided by the author through interaction with the authoring workstation 12 , possibly by way of one or more input devices (e.g. by way of an interconnected keyboard, mouse, scanner, camcorder or the like). Alternatively, it may be identified by way of one or more links (e.g. a hyperlink to web-based content).
  • the query or instruction may be a textual query or instruction and, as such, it may be typed by the author using a keyboard of authoring workstation 12 . Alternatively, it may be loaded from medium 20 or received over a network from a remote source, possibly in a similar fashion to the P component.
  • the Q component could alternatively be non-textual (e.g. it could be an audio clip, video clip, or any other format in which the P component may be specified). It will be appreciated that the Q component may in some cases be an instruction, e.g. “Select one of the following four cities as the city having the largest population”, and is thus not necessarily in the form of a question.
  • Author input representing the R component, i.e. a response mechanism for receiving a user response, is received.
  • specification of the R component may be facilitated by authoring software that provides a graphical user interface (GUI) element at the authoring workstation 12 inviting the author to select a desired response mechanism from a list of different response mechanisms, such as: multiple choice (with either one or more than one correct answer, e.g. "choose all that apply"); true or false (a specific form of multiple choice); binary (e.g. user selection/non-selection of a single choice, such as "I accept"); fill in the blank (e.g. single word or short phrase typed into a field); multiple blank (e.g. several words or short phrases typed into corresponding fields); and the like.
  • the author may be prompted to indicate how many foils (i.e. choices) there will be, to provide the substance of each choice (e.g. by typing the foil or, if the foils are non-textual such as graphical or auditory, by providing the appropriate input for each foil), and to provide the substance of any feedback, in any format, that the user could experience upon selecting the foils.
  • the author also provides the substance of the hints, if any, available for each component and/or for each foil.
  • the response mechanism may be unrelated to whatever user input mechanism may be used to receive input indicative of a user response at the user workstation 14 .
  • Various user input mechanisms may be used to interact with the response mechanism, such as pointing devices (mouse, touchscreen or trackball); keyboards (e.g. press ‘Z’ for yes or ‘M’ for no, or other keys, possibly multiple keys); audio capture of vocalized response (e.g. digital voice recording); measuring of physiological responses (e.g. heart rate, breathing rate, pupil dilation, etc., during a response interval); video capture; haptic response devices; various types of peripheral devices such as joysticks, video game controllers, inertial sensors as in the Wii™ game controller; clicker devices; combinations of these; or others.
  • operation 202 , 204 , 206 of FIG. 2 can occur in any order.
  • the author selects a display mode for the question item which determines a presentation sequence for the P, Q and R components when the question item is presented in the use stage.
  • the display mode also determines whether the subject matter, the query or instruction and the response mechanism shall be displayed to a user consecutively, cumulatively or using a combination of these approaches ( 208 , FIG. 2 ).
  • the display mode further determines whether it is possible for a user to review earlier displayed but no longer available question item components (in the case when question item components are consecutively displayed), e.g. by selecting a “review” button to repeat the presenting of an earlier displayed component after it has been replaced by a subsequent component.
  • the author selects a display mode at authoring workstation 12 by selecting from one of eighteen display mode options shown in Table 1 below.
  • the author may or may not have direct, question item by question item control as to whether review shall be available (e.g. in some embodiments, review may always be available to the user for all question items, or it may never be available to the user for any question item; the author may be able to specify review paths globally for the entire question item set).
  • Cumulative display: a "." (period or dot) symbol between letters signifies cumulative display of the two represented components, e.g. "P.Q" means "display P then (later) add Q" (i.e. without removing P).
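For illustration only, the separator notation described above ("|" for consecutive display without review, ":" for consecutive display with review, "." for cumulative display) could be interpreted programmatically along the following lines; the function and data structure names are hypothetical and not part of the disclosure.

```python
# Illustrative parser for the display-mode notation described above:
#   "|"  consecutive display, no review of the earlier component
#   ":"  consecutive display, review of the earlier component allowed
#   "."  cumulative display (later component added without removing the earlier one)
# The DisplayStep structure and parse_display_mode name are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class DisplayStep:
    component: str        # "P", "Q" or "R"
    cumulative: bool      # True if added to what is already shown
    review_allowed: bool  # True if earlier, replaced components may be reviewed

def parse_display_mode(mode: str) -> List[DisplayStep]:
    """Convert a mode string such as "P:Q.R" into an ordered presentation plan."""
    components = mode[0::2]   # e.g. "PQR"
    separators = mode[1::2]   # e.g. ":."
    steps = [DisplayStep(components[0], cumulative=False, review_allowed=False)]
    for comp, sep in zip(components[1:], separators):
        steps.append(DisplayStep(comp,
                                 cumulative=(sep == "."),
                                 review_allowed=(sep == ":")))
    return steps

# Example: display mode P:Q.R (FIG. 3C) -- Q replaces P (review of P allowed),
# then R is added alongside Q.
for step in parse_display_mode("P:Q.R"):
    print(step)
```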
  • For illustration, three different display modes, two having the capacity for review of an earlier displayed question item component and one lacking this capacity, are illustrated in FIGS. 3A-3C .
  • each box is a symbolic representation of what a user will see on a display of user workstation 14 at a particular moment in time, with arrows between boxes illustrating user advancement through the sequential presentation of question item components.
  • FIG. 3A illustrates display mode P:Q:R.
  • the P component is presented (box 304 ), e.g. on the display and/or speakers of user workstation 14 (depending whether P is visual/textual or auditory, respectively).
  • If the P component is an audio or video clip, it may be presented with a set of controls (e.g. play, pause, rewind, fast forward, or go to beginning) that may permit the user to manipulate the audio/video, including playing it more than once or stopping it before completion.
  • the Q component replaces the P component (box 306 ) such that the P component is no longer available on the screen for viewing/listening to (e.g. the play, pause, etc. controls disappear).
  • the user can return to the P component via review path 308 , if desired, e.g. by selecting a “Review subject matter” button. In the case of an audio/video clip, this may cause the above-described controls to reappear.
  • the R component replaces the Q component (box 310 ) such that neither of the P or Q components is available on the screen.
  • the user can either return to the Q component via review path 312 or return to the P component via review path 314 , if desired, e.g. by selecting a "Review query/instruction" button or a "Review subject matter" button, respectively. It is noted that, if the ":" symbols in "P:Q:R" were each replaced with the "|" symbol, review paths 308 , 312 and 314 would be omitted from FIG. 3A .
  • FIG. 3B which illustrates display mode P.Q.R
  • When the question item is initially displayed, only the P component is presented (box 320 ).
  • the Q component is added to the P component, so that both are now presented together (box 322 ).
  • the R component is added to the P and Q components (box 324 ). Because all of the P, Q and R question item components are ultimately presented together, reviewing a previously viewed question item component is meaningless for this display mode, as all of the question item components are already visible. Accordingly, no review paths are available.
  • If P or Q is a video or audio clip, it may be presented with a set of controls (e.g. play, pause, rewind, fast forward, or go to beginning) that may permit the user to manipulate (e.g. replay) the audio/video even upon presentation of the R component; thus no "review path" per se is necessary.
  • FIG. 3C which illustrates display mode P:Q.R
  • When the question item is initially displayed, only the P component is presented (box 330 ).
  • the Q component replaces the P component (box 332 ).
  • the user can return to viewing/hearing the P component via review path 334 if desired.
  • the R component is added to the Q component (box 336 ), so that Q and R are displayed together. It will be appreciated that review of Q is meaningless in this situation, since Q is already displayed with R. However, the user could return to the P component via review path 338 , if desired.
  • During the use stage, a skill score is computed for the question item, which is a user competency measure that is based upon average response accuracy and relative average completion time over a number of trials during which the user answered the same question item.
  • the author thus specifies, on a per-question item basis, completion time determination criteria dictating which of the P, Q and/or R component presentation durations shall constitute the completion time that is used in the skill score calculation. If the P display duration is specified, this means that the author considers the time spent listening to and/or viewing the subject matter to be relevant to assessing user competency. If the Q display duration is specified, this means that the author considers the time spent viewing the query or instruction to be relevant to assessing user competency.
  • If the R display duration is specified, this means that the author considers the time spent entering a response, including reading foils (for multiple choice question items), to be important.
  • the author may specify that the display duration of just one of the P, Q and R components is to form the basis of the completion time, or that a sum of the display durations of two or more components (e.g. the sum of the P, Q and R display durations) is to form the basis for the completion time.
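A minimal sketch of how such author-specified completion time determination criteria might be applied, assuming the criteria are simply the set of components whose presentation durations are summed; the function name and data layout are illustrative.

```python
# Hypothetical illustration of completion time determination criteria:
# the author selects which of the P, Q and/or R presentation durations
# contribute to the completion time used in the skill score calculation.

def completion_time(durations: dict, criteria: set) -> float:
    """Sum the presentation durations of the components named in `criteria`."""
    return sum(durations[c] for c in criteria)

durations = {"P": 110.0, "Q": 22.0, "R": 25.0}   # seconds, from the FIG. 8B example

print(completion_time(durations, {"R"}))              # 25.0  -- response time only
print(completion_time(durations, {"P", "Q", "R"}))    # 157.0 -- sum of all three
```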
  • the author further specifies the minimum completion time (CTmin) and/or maximum completion time (CTmax) for the question item ( FIG. 2 , 212 ).
  • This information is used for the purpose of computing the skill score for the question item, in which the user's completion time over a number of trials in answering the same question item is not considered on an absolute scale, but rather is considered relative to either one or both of minimum and maximum completion times for that question item.
  • the author may specify minimum and/or maximum completion times for each of the P, Q and R components or simply minimum and/or maximum completion times for whichever one(s) of the P, Q and R components shall contribute to the completion time, as specified above (in FIG. 2 , 210 ).
  • Minimum completion time may for example be specified in any of the following ways:
  • maximum completion time may for example be set in any of the following ways:
  • The author may set the maximum completion time based on empirically determined values. In one example, some number of beginners may be asked to answer the question item. Their average completion time and standard deviation (σ) are computed. Then the maximum completion time may be set at 3σ above the mean. This could vary from 0σ to 5σ (or more) above the mean in alternative embodiments. In a further example, a database of historical data for many users can be used as the basis for the maximum.
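A short sketch of the empirical approach just described (mean of beginner completion times plus some multiple of the standard deviation); the sample data and function name are hypothetical.

```python
# Sketch of empirically setting the maximum completion time (CTmax) at
# k sample standard deviations above the mean completion time of a group
# of beginners, as described above (k = 3 here; 0 to 5 or more elsewhere).

from statistics import mean, stdev

def empirical_ct_max(beginner_times: list, k: float = 3.0) -> float:
    return mean(beginner_times) + k * stdev(beginner_times)

sample_times = [12.0, 15.0, 18.0, 14.0, 16.0]   # hypothetical beginner completion times (s)
print(round(empirical_ct_max(sample_times), 2))  # mean 15.0 s plus 3 standard deviations
```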
  • the author may optionally also specify whether a hint or help can be viewed during electronic learning ( 214 ) and, if so, what the hint will be.
  • Hints or help may be made available to the user during display of any of the P, Q and R components of the question item, e.g. by way of “hint” or “help” buttons (respectively) presented on the display of user workstation 14 .
  • a hint is typically a textual description that facilitates the user's comprehension of the displayed component(s) or is suggestive of a correct response.
  • Help is typically some sort of textual description that provides basic assistance as to electronic learning curriculum functionality. Hints or help could alternatively or additionally take on forms that are not textual (e.g.
  • the author may also be able to specify the effect of viewing such hints or help upon completion time (i.e. whether time spent viewing hints or help counts towards the P, Q and/or R presentation durations). In some embodiments, this behavior may not be under the author's control (e.g. it may be assumed or hard-coded that time spent viewing hints or help always counts, or never counts, towards completion time).
  • the author also provides input regarding the correct response(s) to the question item that is being authored ( FIG. 2 , 216 ), for use in computing an accuracy component of the skill score for the question item.
  • the author input may simply be an indication of which foil (i.e. which choice) is correct.
  • the author input may be a sample response that is considered to be correct.
  • multiple correct responses may be entered by the author.
  • a degree of correctness e.g. 100%, 75%, 50% etc. may be specified for each response.
  • the author may optionally also specify whether feedback is to be provided to the user during the use stage ( 218 ) and, if so, the substance of the feedback.
  • Feedback may be made available automatically to the user after the user has provided a response to a question item, or e.g. by way of a “feedback” button that appears after the response has been provided, which displays feedback upon selection by the user.
  • the author-specified feedback is typically textual and explains why a selected answer is correct or incorrect. Feedback may for example be provided for each different foil of a multiple choice question item. Time spent viewing feedback for a question item usually is recorded but does not count towards completion time for that question item, although this too may be author-controlled.
  • record 400 includes a field for each of the aspects of the question item for which the author has provided input.
  • the record 400 includes a subject matter field 402 representing the P component (which may store text, an image, a video clip, an audio clip, or virtually any other form of subject matter), a query/instruction field 404 representing the Q component, a response mechanism field 406 representing the R component, a display mode field 408 representing one of the eighteen above-described display modes for the illustrated embodiment, a completion time determination criteria field 410 storing information as to which of the P, Q and/or R presentation durations are to be used in determining completion time, a minimum and/or maximum completion times field 412 , a hint information field 414 including hints for the P, Q and/or R components, a correct response(s) field 416 , and a feedback information field 418 .
  • a subject matter field 402 representing the P component (which may store text, an image, a video clip, an audio clip, or virtually any other form of subject matter)
  • response mechanism field 406 does not necessarily store a response mechanism per se (e.g. a multiple choice response with selectable foils or an editable text field), but rather may store a representation of the response mechanism that should be presented during the use stage (e.g. a representation of each foil of a multiple choice response, an indicator that an editable text field should be displayed, etc.). Additional fields (not expressly shown) may be provided.
  • the record 400 may for example be a conventional database record.
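For illustration, record 400 could be modelled as a simple data structure along the following lines; the field names mirror the fields described above, while the concrete types are assumptions rather than a definitive schema.

```python
# Sketch of the question item record (record 400) as a data structure.
# Field names follow the description above; the types are assumptions.

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class QuestionItemRecord:
    subject_matter: Any                   # P component: text, image, audio/video clip, etc. (field 402)
    query_or_instruction: Any             # Q component (field 404)
    response_mechanism: Dict[str, Any]    # representation of the R component, e.g. foils (field 406)
    display_mode: str                     # one of the eighteen display modes, e.g. "P:Q.R" (field 408)
    completion_time_criteria: List[str]   # which of P, Q, R durations form completion time (field 410)
    ct_min: Optional[float] = None        # minimum completion time (field 412)
    ct_max: Optional[float] = None        # maximum completion time (field 412)
    hints: Dict[str, str] = field(default_factory=dict)          # hints per component/foil (field 414)
    correct_responses: List[Any] = field(default_factory=list)   # correct response(s) (field 416)
    feedback: Dict[str, str] = field(default_factory=dict)       # feedback per foil (field 418)
```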
  • Additional question items are composed in the manner shown in FIG. 2 . These question items are then used to create a curriculum, as shown in FIGS. 5 and 6 .
  • a question item set is a set of question items ( 1 to n) that may be: (1) intended to be presented together during electronic learning in one or more sittings; (2) all related to a specific topic; (3) all of a specific difficulty level; or (4) otherwise classified within the curriculum.
  • the author repeats 502 to create one or more other question item sets ( 504 ) which collectively form the curriculum.
  • an author may generate four question item sets 602 , 604 , 606 and 608 pertaining to addition, subtraction, multiplication and division, respectively.
  • the author may stipulate whether the question items should be presented sequentially or in random order during the use stage. These elections may be stored as part of the electronic learning curriculum. The number of question items in each set may vary. In some embodiments, an electronic learning curriculum may contain only one question item set.
  • the author specifies three different skill score thresholds T1, T2 and T3 to be used during electronic learning ( FIG. 5 , 506 ). These thresholds dictate when a user of the curriculum has reached beginner, intermediate and expert levels of competency, respectively, for a particular question item set: when a collective (e.g. average) user competency measure for the question item set meets or exceeds a threshold, that level of competency is considered to be attained.
  • the thresholds T1-T3 may be specified on a per-set basis or curriculum-wide. The role of threshold T1, T2 and T3 will become apparent from the subsequent description of the curriculum advancement algorithm that is applied during the use stage.
  • All of the author-specified curriculum criteria, i.e. the groupings of question items into sets and the T1-T3 thresholds, are ultimately stored as part of the electronic learning curriculum ( 508 ).
  • the electronic learning curriculum may thereafter be transferred from the authoring workstation 12 to curriculum server 16 , e.g. as one or more electronic files, possibly compressed for efficient transmission over network 18 or encrypted for security.
  • a user interacting with a user workstation 14 accesses the curriculum server 16 over network 18 .
  • Access may for example involve invocation of a web browser application at the user workstation 14 , navigating to a predetermined Uniform Resource Locator (URL) where the electronic learning curriculum is posted or maintained, and entry of user account information.
  • the user account information may permit records of any past interaction with the electronic learning curriculum (e.g. previous completion of one or more question item sets or question item sets and past skill score values) to be accessed.
  • a desired or current question item set is identified.
  • the user workstation 14 communicates with the curriculum server 16 to begin displaying the question items of the current question item set, question item by question item.
  • the question items of each set are presented either in sequential or random order, as earlier specified by the author during the authoring stage.
  • stage (i) may later be repeated upon user request (although not necessarily for the same duration as it was originally presented); the duration of the repeated stage (i) is the duration of review.
  • stage (i) commences with user selection of a "start" button or similar GUI control which causes the first component to be presented (although, in some embodiments, the presenting of stage (i) may commence automatically, e.g. after the preceding question item is completed or after a countdown: "Next question in 3 . . . 2 . . . 1 . . . 0 seconds").
  • Stage (i) ends upon user selection of a “next” button or similar GUI control.
  • the time spent reviewing is typically added to the initial presentation duration of the relevant question item component. For example, any time spent by the user viewing/listening to the P component for the first time will certainly contribute to presentation duration of the P component, but any time spent by the user reviewing the P component after the Q and/or R component have replaced P can also, at the author's election, be deemed to contribute to the presentation duration of P.
  • the presentation duration of review may be separately stored from the presentation duration of initial display, e.g. for possible later use in the computation of metrics pertaining to user review times.
  • time spent reading hints or help for a question item component may contribute to the presentation duration of that component and may be separately stored.
  • the manner of ascertaining the presentation durations of the P, Q and R components for a question item may depend upon the operative display mode.
  • a timeline 800 showing measurement of presentation durations (here, in seconds) of P, Q and R during a user's interaction with a question item having a display mode of P|Q|R is illustrated.
  • the actions of the user in interacting with the question item's components during their presentation are indicated above the timeline 800 ; the curved arrows 802 , 804 and 806 represent stages (i), (ii) and (iii) of the presentation sequence, respectively.
  • the P component presentation duration includes both the time spent initially viewing the component and any time spent reviewing the component.
  • the P component presentation duration is determined by summing the time intervals 812 and 816 , i.e. 50 seconds plus 60 seconds, for a total of 110 seconds.
  • the Q component presentation duration is similarly determined by summing the time intervals 814 and 818 , i.e. 15 seconds plus 7 seconds, for a total of 22 seconds (the interval 818 , which represents the second time Q is presented, is not considered “review” of Q per se as it merely results from the user's review of P during interval 816 ; thus both of intervals 814 and 818 are labeled stage (ii) in FIG. 8B ).
  • the R component presentation duration simply corresponds to time interval 820 (i.e. stage (iii)), which is 25 seconds.
  • Another way is to measure the time that P is presented until the next question item component appears (i.e. the time between the onset of the earlier-displayed question item component and the onset of the subsequent question item component, which in this example is Q).
  • the presentation duration for each component reflects the time that the user's attention was likely focused upon that component.
  • the presentation durations of each of the P, Q and R components are determined as distinct values, and an indicator of each said distinct value (e.g. an elapsed time or one or more time stamps indicative of the presentation duration) is stored ( FIG. 7 , 706 ). This is so that, regardless of which of these components' presentation durations has been specified during authoring to comprise the completion time for the question item, if it is later desired to change the presentation duration(s) comprising the completion time, the necessary data for doing so is available (even after the user has completed electronic learning).
  • If the P component is, for example, an audio recording with a fixed duration (e.g. 20 seconds), the presentation duration of P may be measured in various ways. If user input for advancing to the next question item component is received prior to the complete playing of the recording (e.g. at 18 seconds), the presentation duration for P may be set to the duration that had elapsed when the user input was received (i.e. 18 seconds). If user input for advancing to the next question item component is received some time after the recording has been fully played (e.g. at 23 seconds), the presentation duration for P may be set either to the duration that had elapsed when the user input was received (i.e. 23 seconds) or to the fixed recording duration (i.e. 20 seconds). If the user has manipulated controls to cause the audio to be played twice (i.e. 40 seconds) over the course of a 45 second duration, the duration may be set to 40 or 45 seconds. Various alternatives are possible.
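A minimal sketch of one way the per-component presentation durations could be accumulated from logged display intervals (initial viewing plus review), reproducing the FIG. 8B figures quoted above; the event-log format is an assumption.

```python
# Sketch of determining per-component presentation durations by summing the
# intervals during which each component was on screen (initial viewing plus
# any review), as in the FIG. 8B example. The interval-log format is assumed.

from collections import defaultdict

def presentation_durations(intervals):
    """intervals: iterable of (component, start_s, end_s) tuples."""
    totals = defaultdict(float)
    for component, start, end in intervals:
        totals[component] += end - start
    return dict(totals)

# FIG. 8B example: P viewed 50 s, Q viewed 15 s, P reviewed 60 s,
# Q shown again for 7 s, R shown for 25 s.
log = [("P", 0, 50), ("Q", 50, 65), ("P", 65, 125), ("Q", 125, 132), ("R", 132, 157)]
print(presentation_durations(log))   # {'P': 110.0, 'Q': 22.0, 'R': 25.0}
```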
  • Operation 900 for computing a skill score for a current trial is illustrated in FIG. 9 .
  • a response accuracy for the current trial is determined based on the user response and at least one predetermined correct response, and is stored ( FIG. 9 , 902 ).
  • the response accuracy is a value in the range of zero to one (or 0% to 100%), where zero represents an incorrect user response, one represents a correct user response, and values between zero and one indicate partially correct responses.
  • a completion time for the current trial is determined ( 904 ).
  • the completion time is based on the measured presentation duration(s) of whichever one(s) of the P, Q and/or R question item components have been specified by the author (at FIG. 2 , 210 ) as comprising the completion time for this question item.
  • an average response accuracy (RAavg) for the question item is determined based on the response accuracies of the last S trials in which the same question item has been presented to the user ( FIG. 9 , 906 ), where S is a positive integer.
  • the average response accuracy thus takes into account the user's accuracy in answering the same question item over a window of S attempts.
  • the calculation of an average response accuracy entails calculating a weighted average of S stored response accuracies.
  • the user has been presented with the same question item S times, and each time the user's response and completion time was stored for later use in computing the skill score for that question item.
  • TABLE 2 below represents stored data associated with five trials of responding to an exemplary question item, i.e. question item 1.
  • each trial for question item 1 is represented by a table row.
  • the first row represents the least recent trial and the last row represents the most recent trial.
  • the values in the second column indicate the correctness of the user response for each trial.
  • the response accuracies in the third column are equal to one for a correct response and zero for an incorrect response.
  • the weights in column four represent the relative weight of each response and completion time for the trial represented by the row in relation to the trials represented by other rows.
  • the values in column five represent the completion times for each trial.
  • the average response accuracy may be a straight (i.e. non-weighted) average of actual response accuracies in relation to the accuracies that would result for S correct responses, i.e. (1+0+1+1+0)/(1+1+1+1+1) or 0.600. It is understood that the term “average” as used herein may encompass other types of calculations for assessing a representative measure of completion time based on S trials, such as calculating a median.
  • an average completion time (CTavg) for the question item is determined based on the completion times of only the ones of those S trials in which a correct user response was given ( FIG. 9 , 908 ).
  • the average completion time may be a straight average of the completion times of the last S attempts for which correct responses were given, i.e. (3.4 sec+3.0 sec+2.6 sec)/3 or 3.00 seconds. It is understood that the term “average” as used herein may encompass other types of calculations for assessing a representative measure of completion time based on S trials, such as calculating a median.
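The two averages just described can be illustrated with the worked values from the text (responses 1, 0, 1, 1, 0 and correct-trial completion times of 3.4, 3.0 and 2.6 seconds); the completion times shown for the incorrect trials are hypothetical placeholders.

```python
# Sketch of the two averages described above: a straight average response
# accuracy over the last S trials, and an average completion time over only
# the trials in which a correct response was given.

def average_response_accuracy(accuracies):
    return sum(accuracies) / len(accuracies)

def average_completion_time(accuracies, completion_times):
    correct_times = [t for a, t in zip(accuracies, completion_times) if a == 1]
    return sum(correct_times) / len(correct_times)

# Worked example from the text: responses 1, 0, 1, 1, 0 over S = 5 trials;
# completion times of the correct trials are 3.4 s, 3.0 s and 2.6 s.
accuracies = [1, 0, 1, 1, 0]
completion_times = [3.4, 9.9, 3.0, 2.6, 9.9]   # times for incorrect trials are hypothetical

print(average_response_accuracy(accuracies))                              # 0.6
print(round(average_completion_time(accuracies, completion_times), 2))    # 3.0
```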
  • a relative average completion time (CTrelavg) for the current trial is calculated ( FIG. 9 , 910 ). This essentially involves comparing the average completion time (CTavg) computed above to the minimum completion time (CTmin) and/or maximum completion time (CTmax) for the question item, as specified by the author during the authoring stage.
  • relative completion time is calculated based on both CTmin and CTmax, according to equation (I) below:
  • CTrelavg = ((CTmax - CTavg) / (CTmax - CTmin))^p   (I)
  • the difference between the maximum completion time and the average completion time, as determined in block 908 is divided by the difference between the maximum and minimum completion times, and the resulting value is exponentiated, i.e. raised to an exponent p, where p is a number greater than zero.
  • the exponent serves to differentially weight speed improvements as the user gets faster with training.
  • In some embodiments, p is greater than one.
  • the choice of a value for the exponent p allows the author to selectively weight those times that are closer to CTmin more heavily than those that are closer to CTmax.
  • relative average completion time is based on the average completion time as well as both of the author-specified minimum and maximum completion times CTmin and CTmax.
  • the relative average completion time may be based on the average completion time and only one or the other (not both) of CTmin and CTmax.
  • CTrelavg = (CTmin / CTavg)^p   (I′)
  • CTrelavg = (1 - (CTavg / CTmax))^p   (I″)
  • the equation that is used to convert completion time to a number from 0 to 1 may differ from equations (I), (I′) and (I′′) above.
  • the conversion could also be done by one or more tables.
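For illustration, equations (I), (I′) and (I″) might be implemented as follows; clamping the result to the range 0 to 1 (for averages falling outside the CTmin-CTmax range) is an assumption, not something the text specifies.

```python
# Sketch of the relative average completion time calculations (I), (I') and (I'').
# Clamping to [0, 1] for averages outside [CTmin, CTmax] is an assumption.

def ct_rel_avg(ct_avg, ct_min, ct_max, p=1.0):
    """Equation (I): ((CTmax - CTavg) / (CTmax - CTmin)) ** p."""
    ratio = (ct_max - ct_avg) / (ct_max - ct_min)
    return max(0.0, min(1.0, ratio)) ** p

def ct_rel_avg_min_only(ct_avg, ct_min, p=1.0):
    """Equation (I'): (CTmin / CTavg) ** p."""
    return min(1.0, ct_min / ct_avg) ** p

def ct_rel_avg_max_only(ct_avg, ct_max, p=1.0):
    """Equation (I''): (1 - (CTavg / CTmax)) ** p."""
    return max(0.0, 1.0 - ct_avg / ct_max) ** p

# Example: CTavg = 3.0 s, CTmin = 2.0 s, CTmax = 20.0 s, p = 2
print(round(ct_rel_avg(3.0, 2.0, 20.0, p=2), 4))   # (17/18)**2, approximately 0.892
```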
  • the average response accuracy is scaled by a constant C1 (i.e. scalar C1) and relative average completion time is scaled by a constant C2 (i.e. scalar C2).
  • the scaled results may be referred to as the “X” and “Y” components of the skill score, respectively.
  • the X and Y values are summed to yield the overall skill score, i.e. skill score = X + Y = C1 × RAavg + C2 × CTrelavg (II).
  • Scaling the accuracy and relative completion time values may have two benefits. Firstly, it may give the score greater relevance to a typical user. For example, the skill score of the present embodiment ranges between zero (for a completely incorrect response answered in the maximum amount of time) and two (for a 100% correct response answered in the minimum amount of time).
  • the skill score may be made to resonate with certain users as being comparable to a baseball batting average.
  • scaling may allow the relative importance of accuracy versus relative completion time to be set by the author. For example, by setting C1 to 900 and C2 to 100, the skill score may be weighted much more heavily towards accuracy of response than towards speed of response. Any desired weighting may thus be attained by the author.
  • FIG. 10 contains a table 1000 illustrating exemplary skill scores as may be computed by equation (II) above for various user response accuracy and completion time scenarios.
  • the table 1000 includes a body 1002 of four hundred twenty cells (twenty one rows by twenty columns). Each cell within the body 1002 contains a skill score that would result, based on equation (II), for a unique user response accuracy and completion time scenario, with exemplary C1 and C2 values of 500 being assumed.
  • the average response accuracy associated with a particular cell is indicated in column 1004 , in the row occupied by that cell.
  • average completion time associated with a particular cell is indicated in row 1006 , in the column occupied by that cell.
  • cell 1008 within body 1002 represents the skill score that results when the user responds 95% accurately ( 1010 ) with an average completion time of 1 second ( 1012 ).
  • Column 1004 contains response accuracies ranging from 100% to 0%, at 5% intervals.
  • the corresponding X values that would be generated according to equation (II), i.e. average response accuracy scaled by the exemplary C1 value of 500, are shown in column 1014 .
  • Row 1006 shows completion times ranging from 0 seconds (the minimum completion time) to 20 seconds (the maximum completion time) at one second intervals.
  • the corresponding Y values that would be generated according to equation (II), i.e. relative average completion time scaled by the exemplary C2 value of 500, are shown in row 1016 .
  • the skill score value in any cell of body 1002 represents the sum of the X value from column 1014 in the row occupied by that cell and the Y value from row 1016 in the column occupied by that cell.
  • a perfect skill score of 1000 (100% correct response in the minimum time of zero seconds) is shown in the upper left hand corner of body 1002
  • a minimum skill score of 0 (0% correct response in the maximum time of twenty seconds) is shown in the lower right hand corner of body 1002 .
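A sketch of equation (II) with the FIG. 10 parameters (C1 = C2 = 500, CTmin = 0 s, CTmax = 20 s); the exponent p = 1 is assumed here, so the value computed for the cell-1008 scenario is illustrative rather than taken from the figure.

```python
# Sketch of the overall skill score of equation (II): the scaled accuracy
# component X = C1 * RAavg plus the scaled speed component Y = C2 * CTrelavg.
# C1 = C2 = 500 as in FIG. 10; p = 1 is an assumption made for this sketch.

def skill_score(ra_avg, ct_avg, ct_min, ct_max, c1=500.0, c2=500.0, p=1.0):
    x = c1 * ra_avg
    y = c2 * ((ct_max - ct_avg) / (ct_max - ct_min)) ** p   # equation (I)
    return x + y

# Corner cases quoted above (CTmin = 0 s, CTmax = 20 s):
print(skill_score(1.0, 0.0, 0.0, 20.0))    # 1000.0 -- 100% correct in the minimum time
print(skill_score(0.0, 20.0, 0.0, 20.0))   # 0.0    -- 0% correct in the maximum time
print(skill_score(0.95, 1.0, 0.0, 20.0))   # 950.0  -- the cell-1008 scenario, under p = 1
```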
  • operation 700 of FIG. 7 represents a single pass by a user through a set of question items. However, it will be appreciated that multiple passes may be required in order for a user to be considered to have truly learned the curriculum.
  • a curriculum advancement algorithm is applied. This algorithm is exemplified by operation 1100 of FIG. 11 .
  • operation 1100 illustrates an exemplary algorithm for advancement of a user from a beginner skill level to an expert skill level for an exemplary electronic learning curriculum comprising multiple question item sets.
  • A collective user competency measure, referred to herein as a collective skill score (CSS), is computed for each question item set.
  • a CSS is a value that reflects the user's competency across multiple question items.
  • a CSS is typically computed by averaging the skill scores for all of the question items of the set.
  • calculating the CSS may involve combining the average skill score for the set (based on the user's most recent pass through the set) with one or more previously computed average skill scores for the set (based on the user's prior pass(es) through the set). Combining may entail weighting the most recent average more heavily than the previously computed average(s). Some embodiments could average the sets' speeds and accuracies separately.
  • the CSS may initially have a default value of zero.
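A minimal sketch of a collective skill score calculation under the description above; the 2:1 weighting of the most recent pass over prior passes is purely illustrative.

```python
# Sketch of a collective skill score (CSS) for a question item set: the average
# of the item skill scores from the most recent pass, optionally blended with
# previously computed averages. The 2:1 recency weighting is an assumption.

def collective_skill_score(current_scores, previous_averages=None, recent_weight=2.0):
    current_avg = sum(current_scores) / len(current_scores)
    if not previous_averages:
        return current_avg
    prior_avg = sum(previous_averages) / len(previous_averages)
    return (recent_weight * current_avg + prior_avg) / (recent_weight + 1.0)

print(collective_skill_score([700, 650, 800]))                # approximately 716.7
print(collective_skill_score([700, 650, 800], [500, 550]))    # approximately 652.8
```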
  • the collective skill scores computed for each question item set are then used to determine which set to present next. If the collective skill score for any of the question item sets is below a first threshold T1 ( 1104 ), where T1 is a threshold CSS value representing a "beginner" skill level, the set having the collective skill score closest to but below T1 is presented next ( 1106 ).
  • the rationale for this approach which may be referred to as the “race up” approach, is to prioritize question item sets whose collective skill are the highest but are not yet at threshold T1, so that each set has a baseline of exposure before more stringent thresholds must be met. In this case the user has at least a beginner level of competence in all of the sets before advancing further.
  • the race up approach emphasizes breadth over depth.
  • the CSS for the set is recomputed ( 1108 ). Assuming that the number of correct responses and/or completion times are better in this pass than they were during the user's last pass through the question item set, the CSS value will improve, possibly meeting or exceeding the threshold CSS value T1.
  • If the collective skill scores of all of the sets meet or exceed T1 but the collective skill score of any set is below a second threshold T2 ( 1110 ), where T2 is a threshold CSS value representative of an "intermediate" skill level, the set having the lowest collective skill score of all of the sets is presented next ( 1112 ).
  • the rationale for this approach which may be referred to as the “bubble up” approach (since, as the set with the lowest CSS improves and exceeds another set's CSS, its CSS will not improve further until the set with the new lowest CSS improves and exceeds another set's CSS), is to prioritize question item sets whose collective skill are the lowest despite being at least T1.
  • the rationale is to ensure that all sets are learned at a predetermined level before attention is given to the weakest items across the sets.
  • the CSS for the set is recomputed ( 1114 ). Assuming that the number of correct responses and/or completion times are better than they were during the user's last pass through the question item set, the CSS value will improve, possibly meeting or exceeding the threshold CSS value T2. Bubble up may suit learning environments in which a user may alternate learning among many sets, possibly allowing the user to perceive interconnections and relations among question items across the set topic context.
  • Operation then repeats from 1104 .
  • the reason that operation repeats from 1104 rather than from 1110 is to account for the possibility that the CSS for a question item set which previously met or exceeded T1 could drop below T1, requiring operation at 1106 - 1108 to be repeated.
  • Once the collective skill scores of all of the sets meet or exceed T2, the N individual question items (where N is a positive integer) whose skill scores are the lowest are identified ( 1118 ).
  • the N question items are selected regardless of whether they belong to a set whose CSS is less than T3 or greater than or equal to T3.
  • M maintenance question items (where M is a positive integer different from or the same as N) are also selected ( 1120 ).
  • the N question items identified in 1118 are then presented along with the M question items selected in 1120 ( 1122 ). When this has been done, the CSS for all sets from which at least one of the N or M question items was drawn is recomputed ( 1124 ).
  • Operation then repeats from 1104 .
  • the reason that operation repeats from 1104 rather than from 1116 is to account for the possibility that the CSS for a question item set which previously met or exceeded T1 could theoretically drop below T1 or T2, requiring operation at 1106 - 1108 or 1112 - 1114 to be repeated.
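The set-selection logic of operation 1100 (race up below T1, bubble up below T2, mastery otherwise) might be sketched as follows; the threshold values, the return conventions and the handling of the mastery stage are illustrative assumptions.

```python
# Sketch of the curriculum advancement selection logic described above.
# css: mapping of question item set name -> collective skill score.
# Threshold values and the "mastery" signal are illustrative conventions.

def select_next(css, t1, t2):
    below_t1 = {s: v for s, v in css.items() if v < t1}
    if below_t1:
        # "Race up": the set closest to, but still below, T1 is presented next.
        return max(below_t1, key=below_t1.get)
    below_t2 = {s: v for s, v in css.items() if v < t2}
    if below_t2:
        # "Bubble up": the set with the lowest CSS is presented next.
        return min(css, key=css.get)
    # All sets meet or exceed T2: mastery stage -- present the N weakest
    # individual question items plus M maintenance items (not modelled here).
    return "mastery"

# Hypothetical CSS values loosely echoing the FIG. 12 narrative for sets 2a-2d.
css = {"2a": 720, "2b": 705, "2c": 410, "2d": 480}
print(select_next(css, t1=400, t2=600))   # "2c" -- all sets at or above T1, 2c has the lowest CSS
```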
  • FIG. 12 contains a table 1200 illustrating advancement of a user from a beginner skill level to an expert skill level according to algorithm 1100.
  • advancement is shown for four exemplary electronic learning curricula 1-4, each comprising four question item sets (1a-1d, 2a-2d, 3a-3d, and 4a-4d, respectively).
  • Each row in the table 1200 represents a distinct user session at user workstation 14 , while each column in the table represents a question item set.
  • a session is a single work period in which the user completes one or more sets of question items. It may alternatively be thought of as the work accomplished in a single login period.
  • the user's progress from an initial session s0 to a final session s25 is shown, with the algorithm illustrated in FIG. 11 being applied once for each of the electronic curricula, in sequence.
  • Each cell within table 1200 contains a collective skill score (CSS) value for the relevant question item set indicated by the containing column at the completion of the relevant session indicated by the containing row.
  • Once the collective skill scores of all of the sets meet or exceed T1, operation proceeds to 1110 .
  • Because set 2c has the lowest CSS value of 410, it is presented ( 1112 ) and its CSS recomputed ( 1114 ) to a value of 590 in session s7.
  • Because set 2d now has the lowest CSS value of 480, it is presented ( 1112 ) next and its CSS recomputed ( 1114 ) to a value of 630 in session s8.
  • Operation 1110 - 1112 is repeated for set 2c in session s9, given that it is the only remaining set with a CSS below T2. The resulting CSS of 670 exceeds T2.
  • operation 1116 - 1124 is repeated three times in sessions s10 to s12, until all of the CSS values exceed T3. At that stage, the second curriculum is considered to have been learned.
  • the curriculum advancement algorithm is applied question item set by question item set, wherein the sets form part of an electronic learning curriculum. It will be appreciated that the curriculum advancement algorithm could be applied to any plurality of question items. For example, question item sets could be subdivided into subsets, and the algorithm could be applied subset by subset.
  • the “mastery” approach illustrated at 1118 - 1124 of FIG. 11 may be applied on a per-set basis, e.g. when a particular set's CSS has exceeded T2 but is still below T3, rather than across an electronic learning curriculum only when the CSS for all sets has exceeded T2. In that case, the pool of question items from which question items are drawn may be limited to that set.
  • the sets may be combined to create a new, larger set, and operation 1100 may be repeated for that new set, possibly with higher threshold values.
  • the CSS for a question item set may be recomputed after each question item is completed, rather than just after all question items have been completed (as at 1108 , 1114 and 1124 of FIG. 11 ), with the user being permitted to advance to the next competency level or complete the question item set immediately upon meeting or exceeding the operative CSS threshold for the current competency level. This may avoid user frustration in having to repeat all of the question items of a set every time the set is presented.
  • the software application by which the electronic learning curriculum is authored at the workstation 12 may simply be a web browser application which presents authoring software that is actually hosted by a web server comprising curriculum server 16 .
  • the actual curriculum may be created at the curriculum server 16 based on remote author instructions from authoring workstation 12 . In that case, uploading of the curriculum after the authoring stage may be unnecessary, since the curriculum may already be situated at the curriculum server 16 , within a data store.
  • a user of an electronic learning curriculum may be able to revisit a previously answered question item within a set of question items, e.g., through selection.
  • the response accuracy for computing a skill score will usually be 0 or 1 on a single trial (unless partial credit marks are specified by the author as being available), but as question item sets are repeated the accuracy component can effectively become a weighted average of the last n trials in some embodiments.
  • values between 0% and 100% represent partially learned items. For example, responding correctly on the most recent trial may be worth a weight of 4, on the second most recent trial a weight of 3, on the third most recent trial a weight of 2, and on the fourth most recent trial a weight of 1. This weighting is based on memory phenomena such as recency and forgetting (a sketch of this weighting appears after this list).
  • the X and Y components of equation (II) may be combined in ways other than summing.
  • historical data is averaged over the last n trials (where n is an integer, typically 4 to 10).
  • the number of display modes may vary in alternative embodiments. Some embodiments may not have display modes per se. For example, a fixed presentation sequence (e.g. P, Q, R) may be used for all question items.
  • the number of thresholds used in a curriculum advancement algorithm may be less than or greater than three.
  • a “Software as a Service” (SaaS) model is used, in which both workstations 12 and 14 may use a web browser or other application having a GUI for accessing and using software stored in the memory of the curriculum server 16 .
  • neither of workstations 12 or 14 has any dedicated software installed on the local hard drive per se.
  • a dedicated authoring application may be installed on workstation 12 .
  • a curriculum generated by such an application may be stored at server 16 and downloaded to the workstation 14 upon user request. There, the curriculum could be executed by a dedicated local application for presenting the curriculum.
  • question items can include instructions, facts to be memorized, and non-evaluative material such as demographic data from surveys that have no correct answer. For some question items, there may be no “correct answer”.
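  • As a minimal illustrative sketch of the recency-weighted accuracy mentioned above (weights of 4, 3, 2 and 1 for the four most recent trials), the following Python fragment shows one plausible computation; the function name, the default linear weights and the handling of fewer than four trials are assumptions made for readability, not part of the disclosed system.

    def weighted_accuracy(trial_results, weights=(4, 3, 2, 1)):
        """Recency-weighted response accuracy over the last n trials.

        trial_results: per-trial accuracies, oldest first, each 0.0-1.0
                       (0 or 1 unless partial credit is available).
        weights:       weight of the most recent trial first, then older trials.
        Returns a value between 0.0 and 1.0; intermediate values represent
        partially learned items.
        """
        recent = list(reversed(trial_results[-len(weights):]))  # most recent first
        if not recent:
            return 0.0
        used_weights = weights[:len(recent)]
        return sum(w * a for w, a in zip(used_weights, recent)) / sum(used_weights)

    # Example: correct on the last two trials, wrong on the two before that.
    print(weighted_accuracy([0, 0, 1, 1]))  # (4*1 + 3*1 + 2*0 + 1*0) / 10 = 0.7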

Abstract

A question item comprising: textual, visual or auditory subject matter; a query or instruction pertaining to the subject matter; and a representation of a response mechanism for receiving a user response to the query or instruction, may be retrieved from a data store. The subject matter, query or instruction and response mechanism may be presented in a presentation sequence, wherein a presentation duration of each of the subject matter, query or instruction and response mechanism is controlled by user input. Based on the user input, a presentation duration of each of the subject matter, query or instruction, and response mechanism may be determined. Each of the three presentation durations may be stored. A completion time may be determined based on one or more of the presentation durations and may be stored along with response accuracy, for use in calculating a user competency measure for the question item. An algorithm for advancing through an electronic learning curriculum comprising multiple question items may also be provided.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/949,734 filed Jul. 24, 2013, which claims priority from U.S. patent application Ser. No. 12/342,281 filed Dec. 23, 2008, now U.S. Pat. No. 8,506,305, the contents of both of which are incorporated herein by reference.
  • FIELD OF TECHNOLOGY
  • The present disclosure relates to electronic learning systems, such as adaptive and remediative electronic learning systems, and to associated methods, software and computing devices.
  • BACKGROUND
  • Electronic learning systems are systems that present a curriculum to a user (e.g. a student) for the purpose of assessing knowledge and skills and teaching the curriculum to the user. Electronic learning systems are typically computer-based. A typical curriculum consists of multiple question items. Each question item typically includes a query or instruction and a response mechanism for receiving a user response to the query or instruction. An exemplary query may be “What is the capital of New York State?” and the corresponding exemplary response mechanism may be a set of individually selectable (multiple choice) answers or an editable text field into which a user response may be typed. Alternatively, the instruction “Identify the capital of New York State” could be substituted for the above query.
  • When an electronic learning system presents a question item to a user, the query or instruction and response mechanism are typically displayed at the same time, e.g. on a single computer screen. The user's response may be recorded, possibly along with a measured totality of elapsed time between presentation of the question item and the entry of a user response. A user proficiency measure may be generated based on the accuracy of the user's responses.
  • An improved electronic learning system would be desirable.
  • SUMMARY
  • In one aspect of the present invention, there is provided a machine-readable medium storing instructions for presenting a question item of an electronic learning curriculum, said instructions, when executed by a processor of a computing device, causing said computing device to: (a) retrieve from a data store a question item comprising: textual, visual or auditory subject matter; a query or instruction pertaining to said subject matter; and a representation of a response mechanism for receiving a user response to said query or instruction; (b) present said subject matter, said query or instruction and said response mechanism in a presentation sequence, wherein a presentation duration of each of said subject matter, said query or instruction and said response mechanism in said presentation sequence is controlled by user input; (c) based on said user input, determine: a presentation duration of said subject matter; a presentation duration of said query or instruction; and a presentation duration of said response mechanism; and (d) store indicators of each of said three presentation durations.
  • In another aspect of the present invention, there is provided a computing device having a processor in communication with memory storing instructions for presenting a question item of an electronic learning curriculum which, when executed by said processor, cause said computing device to: (a) retrieve from a data store a question item comprising: textual, visual or auditory subject matter; a query or instruction pertaining to said subject matter; and a representation of a response mechanism for receiving a user response to said query or instruction; (b) present said subject matter, said query or instruction and said response mechanism in a presentation sequence, wherein a presentation duration of each of said subject matter, said query or instruction and said response mechanism in said presentation sequence is controlled by user input; (c) based on said user input, determine: a presentation duration of said subject matter; a presentation duration of said query or instruction; and a presentation duration of said response mechanism; and (d) store indicators of each of said three presentation durations.
  • In yet another aspect of the present invention, there is provided a machine-readable medium storing instructions that, when executed by a processor of a computing device, cause said computing device to: (a) calculate a user competency measure for each question item in each of a plurality of question item sets comprising an electronic learning curriculum, said calculating being based on an average response accuracy for said question item and an average completion time for said question item over a plurality of trials; (b) for each question item set of said plurality, calculate a collective user competency measure for the set based on the user competency measures of the question items of said set; and (c) automatically select for repeated presentation the question item set whose collective user competency measure is closest to but below a threshold.
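  • For illustration only, the set-selection rule stated in (c) of the preceding aspect (repeat the set whose collective competency measure is closest to, but below, a threshold) could be sketched in Python as below. The data shapes, function names, the use of a simple average as the collective measure and the example scores are assumptions, not the disclosed implementation.

    def collective_competency(item_scores):
        """Collective user competency measure for a set: here, the plain average
        of the per-item competency measures (other aggregations could be used)."""
        return sum(item_scores) / len(item_scores)

    def select_set_for_repetition(sets_with_scores, threshold):
        """Return the id of the question item set whose collective competency
        measure is closest to, but still below, the threshold.

        sets_with_scores: mapping of set id -> list of per-item competency measures.
        Returns None if every set already meets or exceeds the threshold.
        """
        below = {
            set_id: collective_competency(scores)
            for set_id, scores in sets_with_scores.items()
            if collective_competency(scores) < threshold
        }
        if not below:
            return None
        return max(below, key=below.get)  # closest to the threshold from below

    # Hypothetical example with three sets and a threshold of 600.
    sets = {"a": [620, 640], "b": [400, 420], "c": [470, 490]}
    print(select_set_for_repetition(sets, 600))  # -> "c" (collective measure 480)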
  • DESCRIPTION OF THE DRAWINGS
  • Aspects and features of the disclosed embodiments will become apparent to those of ordinary skill in the art upon review of the following description of specific embodiments in conjunction with the accompanying figures. In the figures which illustrate at least one example embodiment:
  • FIG. 1 is a schematic diagram illustrating an electronic learning system;
  • FIG. 2 is a flowchart illustrating operation for authoring a single question item using the electronic learning system of FIG. 1;
  • FIGS. 3A-3C are schematic diagrams illustrating three exemplary display modes for a question item;
  • FIG. 4 is a schematic diagram illustrating an exemplary question item record used by the system of FIG. 1;
  • FIG. 5 is a flowchart illustrating operation for creating a curriculum using the electronic learning system of FIG. 1;
  • FIG. 6 is a schematic diagram illustrating a question item set forming part of a curriculum created using the electronic learning system of FIG. 1;
  • FIG. 7 is a flowchart illustrating operation for presenting the question items of a question item set;
  • FIGS. 8A-8C are schematic diagrams illustrating user interaction with three exemplary question items, each having a different display mode;
  • FIG. 9 is a flowchart illustrating operation for computing a user competency measure for a question item;
  • FIG. 10 is a table illustrating exemplary user competency measures as may be computed by the system of FIG. 1;
  • FIG. 11 is a flowchart illustrating an exemplary curriculum advancement algorithm used by the system of FIG. 1; and
  • FIG. 12 is a table illustrating advancement of a user from a beginner skill level to an expert skill level according to the algorithm of FIG. 11.
  • DETAILED DESCRIPTION
  • Referring to FIG. 1, an exemplary electronic learning system 10 is illustrated. The system 10 includes an authoring workstation 12 for authoring an electronic learning curriculum and a user workstation 14 for using the electronic learning curriculum. The workstations 12 and 14 each communicate with a curriculum server 16 over a network 18. During an authoring stage, an author interacts with authoring workstation 12 to create an electronic learning curriculum, which is uploaded to the curriculum server 16. During a subsequent use stage, a user (e.g. a student) interacting with user workstation 14 accesses the curriculum stored at server 16 during one or more sessions for the purpose of learning the curriculum. The authoring and use stages will be described in detail below.
  • Each of the authoring workstation 12 and user workstation 14 is a computing device having a display and at least one processor in communication with memory storing software in the form of processor executable instructions. The workstations 12 and 14 may for example be conventional multipurpose desktop computers, laptop computers, palmtop computers, mobile wireless communication devices or portable digital assistants, modified for operation as described herein. The term "workstation" as used herein should not be understood to connote a particular device architecture. The authoring workstation 12 is not necessarily the same type of computing device as the user workstation 14. The term "computing device" as used herein is understood to encompass any device (electronic or otherwise) capable of performing computations including, but not limited to, the devices enumerated above, and possibly including future computing devices such as implants or computer-aided prosthetics. Each of the workstations 12, 14 has a network interface for communication over network 18. The software that is stored in memory at the workstations 12 or 14 may be loaded into memory from machine-readable medium 20 or 22 (respectively), which may for example be an optical disk, a form of magnetic storage medium such as a memory stick, hard disk drive, flash drive or another form of tangible storage medium capable of being read by a computing device, or received over a network from a remote source. The software at the author's workstation 12 includes a software application (possibly a web browser) by which the electronic learning curriculum is authored, while the software at the user workstation 14 includes a software application (also possibly a web browser) by which the electronic learning curriculum is remotely accessed.
  • Curriculum server 16 is a computing device having at least one processor in communication with a data store (e.g. volatile or nonvolatile memory such as RAM, ROM or secondary storage) storing at least one electronic learning curriculum for remote access by users at workstation 14. In the present embodiment, the server 16 stores only one exemplary electronic learning curriculum; however, it will be appreciated that many electronic learning curricula could be stored at curriculum server 16 in alternative embodiments. The curriculum server also stores software for presenting the electronic learning curriculum. This may comprise web server software that permits the electronic learning curriculum to be accessed by users at remote workstations executing web browser applications. The curriculum server software may be loaded from a machine-readable medium 24, which may for example be an optical disk, magnetic storage medium or another form of tangible storage medium capable of being read by a computing device, or received over a network from a remote source. The curriculum server 16 also has a network interface for communication over network 18.
  • Network 18 is a conventional data network or network of data networks. The primary role of network 18 within system 10 is to permit an author to upload the electronic learning curriculum from authoring workstation 12 to curriculum server 16 upon completion of the authoring stage and to permit a user at user workstation 14 to remotely access the electronic learning curriculum from the curriculum server 16 during the use stage. Network 18 may for example be the Internet, an intranet, or a proprietary data network.
  • Authoring Stage
  • During the authoring stage, an author creates an electronic learning curriculum made up of a plurality of question items. Each question item in the curriculum has three components that shall be presented to the user in a presentation sequence during the use stage. The first component is textual, visual or auditory subject matter, referred to as the presentation component (or "P component"). The P component may be considered to represent the subject matter of the question item, i.e. what the question item is about. The P component may comprise combinations of textual, visual and/or auditory subject matter (e.g. a video clip including text having a sound track, text with accompanying audio, etc.). The second component is a query or instruction pertaining to the subject matter, referred to as the query component (or "Q component"). The third component is a response mechanism for receiving a user response to the query or instruction, referred to as the response component (or "R component"). These three components are not necessarily presented in the order P, Q, R.
  • In an exemplary question item forming part of an electronic learning curriculum for medical students, the subject matter may be an X-ray image illustrating a fractured bone; the query or instruction may be “What type of fracture is illustrated?” or “Identify the type of fracture”; and the response mechanism may be a multiple choice response mechanism providing a number of individually-selectable responses, each naming a different type of fracture, and permitting the user's response to be received (e.g. as a selection of, say, a radio button graphical user interface (GUI) element).
  • In overview, when creating a question item for an electronic learning curriculum, the author specifies P, Q and R components for that question item. The author also selects a display mode for the question item. The display mode determines the presentation sequence for the P, Q and R components. In the present embodiment, the order of presentation is either “P then Q then R” or “Q then P then R”. In other words, the presentation sequence comprises: (i) presenting one of P and Q; then (ii) presenting the other of P and Q; then (iii) presenting R. The display of successive question item components may be either cumulative (i.e. display the succeeding component without removing the preceding component, so that both are shown together when the succeeding component appears) or consecutive (i.e. replace the preceding component with the succeeding component, so that the components are shown one at a time). Although not required in all embodiments, the display mode may also stipulate whether it is possible to review earlier displayed but no longer visible question item components (in the case of consecutively displayed question item components).
  • Exemplary operation 200 for authoring a single question item is shown in FIG. 2. Referring to FIG. 2, author input representing the P component, i.e. textual, visual or auditory subject matter (including combinations of textual, visual and/or auditory subject matter), is received (202). The subject matter may take various forms, including but not limited to text, an image, a video clip, an audio clip, an animation, an interactive animation (such as Adobe® Macromedia® Flash® or Microsoft® PowerPoint™), content created for a third party or commercially available application (e.g. Mathematica® or Excel®), possibly executed by way of a macro or the like. The subject matter input may be provided by the author at the authoring workstation 12 in various ways. It may be loaded from a machine-readable medium 20, received over a network from a remote source, or directly provided by the author through interaction with the authoring workstation 12, possibly by way of one or more input devices (e.g. by way of an interconnected keyboard, mouse, scanner, camcorder or the like). Alternatively, it may be identified by way of one or more links (e.g. a hyperlink to web-based content).
  • Next, author input representing the Q component, i.e. a query or instruction pertaining to the P component, is received (204). The query or instruction may be a textual query or instruction and, as such, it may be typed by the author using a keyboard of authoring workstation 12. Alternatively, it may be loaded from medium 20 or received over a network from a remote source, possibly in a similar fashion to the P component. The Q component could alternatively be non-textual (e.g. it could be an audio clip, video clip, or any other format in which the P component may be specified). It will be appreciated that the Q component may in some cases be an instruction, e.g. “Select one of the following four cities as the city having the largest population”, and is thus not necessarily in the form of a question.
  • Thereafter, author input representing the R component, i.e. a response mechanism for receiving a user response, is received (206). During authoring, specification of the R component may be facilitated by authoring software that provides a graphical user interface (GUI) element at the authoring workstation 12 inviting the author to select a desired response mechanism from a list of different response mechanisms, such as: multiple choice (with either one or more than one correct answer, e.g. "choose all that apply"); true or false (a specific form of multiple choice); binary (e.g. user selection/non-selection of a single choice, such as "I accept"); fill in the blank (e.g. single word or short phrase typed into a field); multiple blank (e.g. a "fill in the blanks" questionnaire); short answer (e.g. a sentence or paragraph); essay answer (similar to short answer but possibly spanning multiple paragraphs). When "multiple choice" is selected, the author may be prompted to indicate how many foils (i.e. choices) there will be, to provide the substance of each choice (e.g. by typing the foil or, if the foils are non-textual such as graphical or auditory, by providing the appropriate input for each foil), and to provide the substance of any feedback, in any format, that the user could experience upon selecting the foils. The author also provides the substance of the hints, if any, available for each component and/or for each foil.
  • It will be appreciated that the response mechanism may be unrelated to whatever user input mechanism may be used to receive input indicative of a user response at the user workstation 14. Various user input mechanisms may be used to interact with the response mechanism, such as pointing devices (mouse, touchscreen or trackball); keyboards (e.g. press ‘Z’ for yes or ‘M’ for no, or other keys, possibly multiple keys); audio capture of vocalized response (e.g. digital voice recording); measuring of physiological responses (e.g. heart rate, breathing rate, pupil dilation, etc., during a response interval); video capture; haptic response devices; various types of peripheral devices such as joysticks, video game controllers, inertial sensors as in the Wii™ game controller; clicker devices; combinations of these; or others.
  • It will be appreciated that operation 202, 204, 206 of FIG. 2 can occur in any order.
  • Next, the author selects a display mode for the question item which determines a presentation sequence for the P, Q and R components when the question item is presented in the use stage. In the present embodiment (although not necessarily in all embodiments), the display mode also determines whether the subject matter, the query or instruction and the response mechanism shall be displayed to a user consecutively, cumulatively or using a combination of these approaches (208, FIG. 2).
  • In the present embodiment (although not necessarily in all embodiments), the display mode further determines whether it is possible for a user to review earlier displayed but no longer available question item components (in the case when question item components are consecutively displayed), e.g. by selecting a “review” button to repeat the presenting of an earlier displayed component after it has been replaced by a subsequent component. In the present embodiment (although not necessarily in all embodiments), the author selects a display mode at authoring workstation 12 by selecting from one of eighteen display mode options shown in Table 1 below. Table 1 adopts a notation wherein the display mode is represented by an expression containing the letters P, Q and R (representing the presentation, query/instruction and response mechanism question item components, respectively) with a symbol between adjacent letters signifying consecutive or cumulative display of the components represented by the adjacent letters, as follows:
  • Consecutive display: a ":" (colon) symbol between letters signifies consecutive display of the two represented components, e.g. "P:Q" means "display P then (later) replace P with Q". In the present embodiment, the colon symbol also connotes that review of the earlier displayed component is available prior to entry of a user response (e.g. upon detection of a user input event indicating that review is desired, P shall be presented anew so as to replace Q). A "|" (vertical bar) symbol between letters is the same as a ":" symbol, i.e. it also signifies consecutive display of the two represented components, except that review of the earlier displayed component prior to entry of a user response is precluded. Depending upon the embodiment, the author may or may not have direct, question item by question item control as to whether review shall be available (e.g. in some embodiments, review may always be available to the user for all question items, or it may never be available to the user for any question item; the author may be able to specify review paths globally for the entire question item set).
  • Cumulative display: a "." (period or dot) symbol between letters signifies cumulative display of the two represented components, e.g. "P.Q" means "display P then (later) add Q" (i.e. without removing P).
  • Using this notation, eighteen display modes are possible in the present embodiment, as follows:
    TABLE 1
    Display Modes
    # Notation Meaning
    1. P:Q:R display P; then, replace P with Q; then, replace Q with R (review of P, Q available)
    2. Q:P:R display Q; then, replace Q with P; then, replace P with R (review of Q, P available)
    3. P:Q|R display P; then, replace P with Q (review of P available); then, replace Q with R (at which time review of P, Q is unavailable)
    4. Q:P|R display Q; then, replace Q with P (review of Q available); then, replace P with R (at which time review of P, Q is unavailable)
    5. P:Q.R display P; then, replace P with Q; then, add R (review of P available)
    6. Q:P.R display Q; then, replace Q with P; then, add R (review of Q available)
    7. P|Q:R display P; then, replace P with Q; then, replace Q with R (review of only Q available)
    8. Q|P:R display Q; then, replace Q with P; then, replace P with R (review of only P available)
    9. P|Q|R display P; then, replace P with Q (review of P unavailable); then, replace Q with R (review of P, Q unavailable)
    10. Q|P|R display Q; then, replace Q with P (review of Q unavailable); then, replace P with R (review of Q, P unavailable)
    11. P|Q.R display P; then, replace P with Q; then, add R (review of P unavailable)
    12. Q|P.R display Q; then, replace Q with P; then, add R (review of Q unavailable)
    13. P.Q:R display P; then, add Q; then, replace both with R (review of P, Q available)
    14. Q.P:R display Q; then, add P; then, replace both with R (review of Q, P available)
    15. P.Q|R display P; then, add Q; then, replace both with R (review of P, Q unavailable)
    16. Q.P|R display Q; then, add P; then, replace both with R (review of Q, P unavailable)
    17. P.Q.R display P; then, add Q; then, add R
    18. Q.P.R display Q; then, add P; then, add R
  • It will be appreciated that some display modes provide for the review of an earlier displayed but no longer visible question item component while others do not. Review of an earlier displayed question item component is only available when the display mode involves consecutive display of two (or more) of the P, Q and R components. The reason is that, when only cumulative display mode is used, all of the P, Q and R elements are, or will ultimately be, displayed together, obviating any need for review.
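  • As a rough, non-limiting sketch, the Table 1 notation could be decoded programmatically into a presentation plan as follows; the function names, the tuple representation and the wording of the descriptions are assumptions made for illustration.

    def parse_display_mode(notation):
        """Decode a Table 1 string such as 'P:Q.R' into (component, separator_before)
        steps, where separator_before is None for the first component, ':' for
        consecutive display with review available, '|' for consecutive display
        without review, and '.' for cumulative display."""
        steps, sep = [], None
        for ch in notation:
            if ch in (":", "|", "."):
                sep = ch
            else:  # a component letter: P, Q or R
                steps.append((ch, sep))
                sep = None
        return steps

    def describe(notation):
        parts = []
        for comp, sep in parse_display_mode(notation):
            if sep is None:
                parts.append(f"display {comp}")
            elif sep == ".":
                parts.append(f"add {comp}")
            else:
                review = "review available" if sep == ":" else "review unavailable"
                parts.append(f"replace with {comp} ({review})")
        return "; then, ".join(parts)

    print(describe("P:Q.R"))
    # display P; then, replace with Q (review available); then, add R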
  • For illustration, three different display modes, two having the capacity for review of an earlier displayed question item component and one lacking this capacity, are illustrated in FIGS. 3A-3C. In these figures, each box is a symbolic representation of what a user will see on a display of user workstation 14 at a particular moment in time, with arrows between boxes illustrating user advancement through the sequential presentation of question item components.
  • FIG. 3A illustrates display mode P:Q:R. Initially, only the P component is presented (box 304), e.g. on the display and/or speakers of user workstation 14 (depending on whether P is visual/textual or auditory, respectively). If the P component is an audio or video clip, it may be presented with a set of controls (e.g. play, pause, rewind, fast forward, or go to beginning) that may permit the user to manipulate the audio/video, including playing it more than once or stopping it before completion. When the user advances to the next component (typically by selecting a "next" button), the Q component replaces the P component (box 306) such that the P component is no longer available on the screen for viewing/listening to (e.g. the play, pause etc. controls disappear). At this stage, the user can return to the P component via review path 308, if desired, e.g. by selecting a "Review subject matter" button. In the case of an audio/video clip, this may cause the above-described controls to reappear. Alternatively, upon further advancement, the R component replaces the Q component (box 310) such that neither of the P or Q components is available on the screen. At this stage, the user can either return to the Q component via review path 312 or return to the P component via review path 314, if desired, e.g. by selecting a "Review query/instruction" button or a "Review subject matter" button, respectively. It is noted that, if the ":" symbols in "P:Q:R" were each replaced with the "|" symbol, review paths 308, 312 and 314 would be omitted from FIG. 3A.
  • Turning to FIG. 3B, which illustrates display mode P.Q.R, it can be seen that when the question item is initially displayed, only the P component is presented (box 320). Upon advancement, the Q component is added to the P component, so that both are now presented together (box 322). Upon further advancement, the R component is added to the P and Q components (box 324). Because all of the P, Q and R question item components are ultimately presented together, reviewing a previously viewed question item component is meaningless for this display mode, as all of the question item components are already visible. Accordingly, no review paths are available. If either of P or Q is a video or audio clip, it may be presented with a set of controls (e.g. play, pause, rewind, fast forward, or go to beginning) that may permit the user to manipulate (e.g. replay) the audio/video even upon presentation of the R component; thus no “review path” per se is necessary.
  • Finally, in FIG. 3C, which illustrates display mode P:Q.R, it can be seen that when the question item is initially displayed, only the P component is presented (box 330). Upon advancement, the Q component replaces the P component (box 332). At this stage, the user can return to viewing/hearing the P component via review path 334 if desired. Assuming this is not done, upon further advancement the R component is added to the Q component (box 336), so that Q and R are displayed together. It will be appreciated that review of Q is meaningless in this situation, since Q is already displayed with R. However, the user could return to the P component via review path 338, if desired.
  • As a further aspect of authoring a question item in the present embodiment (although not necessarily in all embodiments), the author, who is assumed to know that the presentation duration of each of the P, Q and R question item components will be timed during use of the electronic learning curriculum (as described below), also specifies which of the P, Q and/or R component presentation durations should be used for determining a completion time for the question item (FIG. 2, 210), i.e. how long the user took to respond to the question item or to complete some aspect of the question item (e.g. viewing the P or Q component). The completion time is indicative of the user's speed of response. It will be used in computing a skill score for the question item, which is a user competency measure that is based upon average response accuracy and relative average completion time over a number of trials during which the user answered the same question item. The author thus specifies, on a per-question item basis, completion time determination criteria dictating which of the P, Q and/or R component presentation durations shall constitute the completion time that is used in the skill score calculation. If the P display duration is specified, this means that the author considers the time spent listening to and/or viewing the subject matter to be relevant to assessing user competency. If the Q display duration is specified, this means that the author considers the time spent viewing the query or instruction to be relevant to assessing user competency. If the R display duration is specified, this means that the author considers the time spent entering a response, including reading foils (for multiple choice question items), is important. The author may specify that the display duration of just one of the P, Q and R components is to form the basis of the completion time, or that a sum of the display durations of two or more components (e.g. the sum of the P, Q and R display durations) is to form the basis for the completion time.
  • In the present embodiment, the author further specifies the minimum completion time (CTmin) and/or maximum completion time (CTmax) for the question item (FIG. 2, 212). This information is used for the purpose of computing the skill score for the question item, in which the user's completion time over a number of trials in answering the same question item is not considered on an absolute scale, but rather is considered relative to either one or both of minimum and maximum completion times for that question item. The author may specify minimum and/or maximum completion times for each of the P, Q and R components or simply minimum and/or maximum completion times for whichever one(s) of the P, Q and R components shall contribute to the completion time, as specified above (in FIG. 2, 210).
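  • The disclosure does not reproduce its skill score equation here, but a score of the general shape described above, i.e. an accuracy component combined with a completion-time component considered relative to CTmin and CTmax, might be sketched as follows. The linear mapping, the equal weighting, the 0-1000 scale and the clamping are all assumptions made purely for illustration.

    def relative_completion_time(ct, ct_min, ct_max):
        """Map an absolute completion time onto a 0.0-1.0 scale relative to the
        author-specified minimum and maximum completion times (1.0 = at or faster
        than CTmin, 0.0 = at or slower than CTmax). The linear mapping is an
        assumption; the disclosure only requires that completion time be
        considered relative to CTmin and/or CTmax."""
        if ct <= ct_min:
            return 1.0
        if ct >= ct_max:
            return 0.0
        return (ct_max - ct) / (ct_max - ct_min)

    def skill_score(avg_accuracy, avg_completion_time, ct_min, ct_max,
                    accuracy_weight=0.5, scale=1000):
        """Combine an accuracy component and a relative-completion-time component
        into a single score. The weighting and the 0-1000 scale are illustrative."""
        y = relative_completion_time(avg_completion_time, ct_min, ct_max)
        return scale * (accuracy_weight * avg_accuracy + (1 - accuracy_weight) * y)

    # A user averaging 80% accuracy and a 30 s completion time,
    # with CTmin = 10 s and CTmax = 60 s (hypothetical values):
    print(skill_score(0.8, 30, 10, 60))  # 0.5*0.8 + 0.5*0.6 = 0.7 -> 700.0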
  • Minimum completion time may for example be specified in any of the following ways:
    • (a) Manually—the author may simply set the minimum completion time to a desired value (e.g. the fastest response for an expert or the fastest possible time in which a user can physically specify the correct answer and click a “submit” button or the like)
    • (b) Empirically—the author may set the minimum completion time based on empirically determined values. In one example, some number of experts may be asked to answer the question item for which minimum completion time is being determined. Their average completion time and the standard deviation (μ) are computed. The minimum completion time may then be set at 4μ below the mean. This could vary from 0 to 5μ (or more) below the mean in alternative embodiments (e.g. any score that is more than three standard deviations, i.e. 3μ, from the mean may be considered an “outlier” in statistical terminology, thus using 4μ could be considered conservative for some embodiments). In another example, in the case where the sum of the P, Q and R presentation durations is to be used as the completion time, the minimum completion time may be set based on total number of words in the entire question item (including each of the P, Q and R components) and a typical user's reading speed. In a further example, a database of historical data for many users can be used as the basis for the minimum.
    • (c) Dynamically—the minimum value may be set to automatically adjust at run time based on users' performance (e.g. adjust to match a new minimum whenever a new minimum is achieved by any user), with the minimum initially being set to an arbitrary initial value. For example, durations collected from past user performances can be collected and divided into quintiles for each component of each question item. The mean and standard deviation of the best users (e.g. the top 20%) may be used to set the minimum completion time (e.g. set minimum completion time to three times that standard deviation below that mean).
  • Similarly, maximum completion time may for example be set in any of the following ways:
    • (a) Manually—set by the author to some large value that even the least experienced beginners should be able to beat (or above which there is doubt that the user was paying attention).
    • (b) Empirically—the author may set the maximum completion time based on empirically determined values. In one example, some number of beginners may be asked to answer the question item. Their average completion time and standard deviation (μ) are computed. Then the maximum completion time may be set at 3μ above the mean. This could vary from 0μ to 5μ (or more) above the mean in alternative embodiments. In a further example, a database of historical data for many users can be used as the basis for the maximum. A sketch of these empirical minimum and maximum calculations appears after this list.
    • (c) Dynamically—the maximum value may be set to automatically adjust at run time based on users' performance (e.g. adjust to match a new maximum whenever a new maximum is achieved by any user), with the maximum initially being set to an arbitrary initial value. For example, durations collected from past user performances can be collected and divided into quintiles for each component of each question item. The mean and standard deviation of the worst users (e.g. the bottom 20%) may be used to set the maximum completion time (e.g. set maximum completion time to three times that standard deviation above that mean).
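  • A minimal sketch of the empirical approaches in item (b) of the two lists above (an expert mean minus several standard deviations for the minimum, a beginner mean plus several standard deviations for the maximum); the function names, the sample data and the use of Python's statistics module are assumptions.

    from statistics import mean, stdev

    def empirical_min_completion_time(expert_times, k=4.0, floor=0.0):
        """Minimum completion time set k standard deviations below the mean of
        expert completion times (k = 4 in the example above; 0 to 5 or more in
        alternative embodiments). Clamped at a floor so it cannot go negative."""
        return max(floor, mean(expert_times) - k * stdev(expert_times))

    def empirical_max_completion_time(beginner_times, k=3.0):
        """Maximum completion time set k standard deviations above the mean of
        beginner completion times (k = 3 in the example above)."""
        return mean(beginner_times) + k * stdev(beginner_times)

    expert_times = [8.0, 9.5, 10.0, 11.0, 9.0]       # seconds, hypothetical
    beginner_times = [30.0, 45.0, 38.0, 52.0, 41.0]  # seconds, hypothetical
    print(empirical_min_completion_time(expert_times))
    print(empirical_max_completion_time(beginner_times))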
  • As yet another aspect of authoring the question item, the author may optionally also specify whether a hint or help can be viewed during electronic learning (214) and, if so, what the hint will be. Hints or help may be made available to the user during display of any of the P, Q and R components of the question item, e.g. by way of “hint” or “help” buttons (respectively) presented on the display of user workstation 14. A hint is typically a textual description that facilitates the user's comprehension of the displayed component(s) or is suggestive of a correct response. Help is typically some sort of textual description that provides basic assistance as to electronic learning curriculum functionality. Hints or help could alternatively or additionally take on forms that are not textual (e.g. graphical, cinematographic, auditory, etc.). The author may also be able to specify the effect of viewing such hints or help upon completion time (i.e. whether time spent viewing hints or help counts towards the P, Q and/or R presentation durations). In some embodiments, this behavior may not be under the author's control (e.g. it may be assumed or hard-coded that time spent viewing hints or help always counts, or never counts, towards completion time).
  • The author also provides input regarding the correct response(s) to the question item that is being authored (FIG. 2, 216), for use in computing an accuracy component of the skill score for the question item. In the case of a multiple choice R component, the author input may simply be an indication of which foil (i.e. which choice) is correct. In the case of a typed response (e.g. short answer or essay), the author input may be a sample response that is considered to be correct. For some question items, multiple correct responses may be entered by the author. A degree of correctness (e.g. 100%, 75%, 50% etc.) may be specified for each response.
  • As still another aspect of authoring the question item, the author may optionally also specify whether feedback is to be provided to the user during the use stage (218) and, if so, the substance of the feedback. Feedback may be made available automatically to the user after the user has provided a response to a question item, or e.g. by way of a “feedback” button that appears after the response has been provided, which displays feedback upon selection by the user. The author-specified feedback is typically textual and explains why a selected answer is correct or incorrect. Feedback may for example be provided for each different foil of a multiple choice question item. Time spent viewing feedback for a question item usually is recorded but does not count towards completion time for that question item, although this too may be author-controlled.
  • All of the above-noted author input pertaining to the question item, i.e. P, Q and R components, display mode, completion time determination criteria, minimum and/or maximum completion times, hint information, correct answer(s) and feedback, is stored in memory for each question item (220). In the result, a record for the question item (or “question item record”) is created and stored within memory. If the electronic learning curriculum is being created locally at authoring workstation 12, the record will be stored in memory at the authoring workstation 12. If the electronic learning curriculum is being created at curriculum server 16 under the control of the author from authoring workstation 12, then the record will be stored at curriculum server 16. An exemplary record is illustrated in FIG. 4.
  • Referring to FIG. 4, it can be seen that record 400 includes a field for each of the aspects of the question item for which the author has provided input. Thus, the record 400 includes a subject matter field 402 representing the P component (which may store text, an image, a video clip, an audio clip, or virtually any other form of subject matter), a query/instruction field 404 representing the Q component, a response mechanism field 406 representing the R component, a display mode field 408 representing one of the eighteen above-described display modes for the illustrated embodiment, a completion time determination criteria field 410 storing information as to which of the P, Q and/or R presentation durations are to be used in determining completion time, a minimum and/or maximum completion times field 412, a hint information field 414 including hints for the P, Q and/or R components, a correct response(s) field 416, and a feedback information field 418. It will be appreciated that response mechanism field 406 does not necessarily store a response mechanism per se (e.g. a multiple choice response with selectable foils or an editable text field), but rather may store a representation of the response mechanism that should be presented during the use stage (e.g. a representation of each foil of a multiple choice response, an indicator that an editable text field should be displayed, etc.). Additional fields (not expressly shown) may be provided. The record 400 may for example be a conventional database record.
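  • A question item record of the kind shown in FIG. 4 might be modeled along the following lines; the field names mirror fields 402-418 of the description, but the types, the dataclass representation and the sample values (including the made-up foils and "correct" response) are illustrative assumptions rather than the disclosed database schema.

    from dataclasses import dataclass, field
    from typing import Any, Optional

    @dataclass
    class QuestionItemRecord:
        """One record per question item, loosely mirroring fields 402-418 of FIG. 4."""
        subject_matter: Any                # P component: text, image, audio, video, ...
        query_or_instruction: Any          # Q component
        response_mechanism: dict           # representation of the R component (e.g. foils)
        display_mode: str                  # one of the Table 1 notations, e.g. "P:Q.R"
        completion_time_criteria: tuple    # which of ("P", "Q", "R") count toward completion time
        min_completion_time: Optional[float] = None
        max_completion_time: Optional[float] = None
        hints: dict = field(default_factory=dict)         # per-component hints
        correct_responses: list = field(default_factory=list)
        feedback: dict = field(default_factory=dict)      # e.g. per-foil feedback

    # Hypothetical instance (foils and answer are placeholders, not medical advice):
    item = QuestionItemRecord(
        subject_matter="X-ray image of a fractured bone (placeholder reference)",
        query_or_instruction="What type of fracture is illustrated?",
        response_mechanism={"type": "multiple_choice",
                            "foils": ["type A", "type B", "type C"]},
        display_mode="P:Q:R",
        completion_time_criteria=("R",),
        correct_responses=["type B"],
    )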
  • Additional question items are composed in the manner shown in FIG. 2. These question items are then used to create a curriculum, as shown in FIGS. 5 and 6.
  • Referring to FIG. 5, operation 500 for creating a curriculum is illustrated. Initially, the author groups multiple question items into question item sets (502). A question item set is a set of question items (1 to n) that may be: (1) intended to be presented together during electronic learning in one or more sittings; (2) all related to a specific topic; (3) all of a specific difficulty level; or (4) otherwise classified within the curriculum. The author repeats 502 to create one or more other question item sets (504), which collectively form the curriculum.
  • For example, referring to FIG. 6, to create an electronic learning curriculum 600 pertaining to simple mathematics, an author may generate four question item sets 602, 604, 606 and 608 pertaining to addition, subtraction, multiplication and division, respectively. Each set may contain one hundred question items based on the one hundred different permutations of two-operand equations that are possible for single-digit operands 0-9 for the mathematical operation in question (e.g. for addition: 0+0=?; 0+1=?; . . . 0+9=?; 1+0=?; 1+1=?; . . . 9+8=?; 9+9=?). For each question item set, the author may stipulate whether the question items should be presented sequentially or in random order during the use stage. These elections may be stored as part of the electronic learning curriculum. The number of question items in each set may vary. In some embodiments, an electronic learning curriculum may contain only one question item set.
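  • For illustration, the one hundred single-digit addition items mentioned above could be generated programmatically, for example as in the sketch below; the dictionary layout and field names are assumptions, not the disclosed data format.

    def build_addition_set():
        """Build the 100 single-digit addition question items (0+0 through 9+9),
        each as a simple fill-in-the-blank item. The dict layout is illustrative."""
        items = []
        for a in range(10):
            for b in range(10):
                items.append({
                    "subject_matter": f"{a} + {b} = ?",     # P component
                    "query": "Type the sum.",               # Q component
                    "response_mechanism": "fill_in_blank",  # R component
                    "correct_response": str(a + b),
                })
        return items

    addition_set = build_addition_set()
    print(len(addition_set))   # 100
    print(addition_set[0])     # {'subject_matter': '0 + 0 = ?', ...}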
  • As a further aspect of authoring a curriculum, the author specifies three different skill score thresholds T1, T2 and T3 to be used during electronic learning (FIG. 5, 506). These thresholds dictate when a user of the curriculum has reached beginner, intermediate and expert levels of competency, respectively, for a particular question item set: when a collective (e.g. average) user competency measure for the question item set meets or exceeds a threshold, that level of competency is considered to be attained. The thresholds T1-T3 may be specified on a per-set basis or curriculum-wide. The role of thresholds T1, T2 and T3 will become apparent from the subsequent description of the curriculum advancement algorithm that is applied during the use stage.
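  • Purely as a sketch, the "meets or exceeds" comparison described above might look as follows; the function name, the level labels as strings and the threshold values in the usage line are hypothetical.

    def competency_level(css, t1, t2, t3):
        """Map a collective skill score to the highest competency level whose
        threshold it meets or exceeds (None if even T1 is not yet met)."""
        if css >= t3:
            return "expert"
        if css >= t2:
            return "intermediate"
        if css >= t1:
            return "beginner"
        return None

    print(competency_level(670, t1=400, t2=600, t3=800))  # 'intermediate'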
  • All of the author-specified curriculum criteria, i.e. the groupings of question items into sets and the T1-T3 thresholds, are ultimately stored as part of the electronic learning curriculum (508).
  • Authoring of the curriculum is thus completed. The electronic learning curriculum may thereafter be transferred from the authoring workstation 12 to curriculum server 16, e.g. as one or more electronic files, possibly compressed for efficient transmission over network 18 or encrypted for security.
  • Use Stage
  • To use the curriculum for electronic learning, a user interacting with a user workstation 14 accesses the curriculum server 16 over network 18. Access may for example involve invocation of a web browser application at the user workstation 14, navigating to a predetermined Uniform Resource Locator (URL) where the electronic learning curriculum is posted or maintained, and entry of user account information. The user account information may permit records of any past interaction with the electronic learning curriculum (e.g. previous completion of one or more question items or question item sets and past skill score values) to be accessed. A desired or current question item set is identified, e.g. based upon the user's progress to date through the electronic learning curriculum (as may be apparent from a graphical "dashboard" summary of user performance to date through the curriculum that may exist in some embodiments), impending test dates, manual selection by a user or other criteria. When the user has indicated a readiness to proceed, e.g. by pressing a "begin session" user interface button or the like, the user workstation 14 communicates with the curriculum server 16 to begin displaying the question items of the current question item set, question item by question item. The question items of each set are presented either in sequential or random order, as earlier specified by the author during the authoring stage.
  • Operation 700 for presenting the question items of a current question item set is illustrated in FIG. 7. Based on the user's progress through the curriculum to date, a next question item is obtained (702). Typically, the next question item is sequentially or randomly drawn from the question items of a current set that are as-yet unanswered. The question item is retrieved from a data store, which may comprise volatile or non-volatile memory. The P, Q and R components of the question item (i.e. the subject matter, query/instruction and response mechanism) are then presented at user workstation 14 in a presentation sequence. As earlier described, the presentation sequence for these components in the present embodiment comprises:
  • (i) presenting one of P and Q; then
  • (ii) presenting the other of P and Q; then
  • (iii) presenting R.
  • Each of (i), (ii) and (iii) may be referred to as a "stage" in the presentation sequence. The specific sequence (i.e. which of P or Q is presented first) is determined by the display mode that the author specified during the authoring stage (704). Any requested hints or help (if available) are also displayed.
  • When the display mode specifies consecutive presentation of the question item components in stages (i) and (ii) (i.e. when the display mode incorporates “P:Q”, “P|Q”, “Q:P” or “Q|P”), then whichever one of P or Q is presented in stage (i) is replaced by the other of P and Q that is presented in stage (ii) upon user advancement from stage (i) to stage (ii).
  • For display modes permitting review of the component presented in stage (i) (i.e. incorporating “P:Q” or “Q:P”), stage (i) may later be repeated upon user request (although not necessarily for the same duration as it was originally presented). The duration of the repeated stage (i) (i.e. the duration of review) may optionally be stored separately from the initial presentation duration of stage (i), even if both durations shall contribute to (e.g. are summed for the purpose of determining) the presentation duration for the question item component that is presented in stage (i), as described below.
  • Similarly, when the display mode specifies consecutive presentation of the question item components in stages (ii) and (iii) (i.e. when the display mode incorporates “Q:R”, “Q|R”, “P:R” or “P|R”), then whichever one of P or Q is presented in stage (ii) is replaced by R upon advancement from stage (ii) to stage (iii). For display modes permitting review of the component presented in stage (ii) (i.e. incorporating “Q:R” or “P:R”), the presenting of stage (ii) may be repeated upon user request (although not necessarily for the same duration as it was originally presented). The duration of the repeated stage (ii) (i.e. the duration of review of whatever was presented in stage (ii)) may optionally be stored separately from the initial presentation duration of stage (ii), even if both durations contribute to (e.g. are summed for the purpose of determining) the presentation duration for the question item component that is presented in stage (ii), as described below.
  • The presentation duration of each of P, Q and R is determined by the user-controlled progression through stages (i), (ii) and (iii). Stage (i) commences with user selection of a "start" button or similar GUI control which causes the first component to be presented (although, in some embodiments, the presenting of stage (i) may commence automatically, e.g. after the preceding question item is completed or after a countdown: "Next question in 3 . . . 2 . . . 1 . . . 0 seconds"). Stage (i) ends upon user selection of a "next" button or similar GUI control. Likewise, stage (ii) commences upon user selection of the "next" button and ends with user selection of another "next" button or similar GUI control. Finally, stage (iii) commences upon user selection of the second "next" button and ends upon entry of a user response. In some embodiments, entry of a user response constitutes selection of a multiple choice foil or the typing of an answer into an editable text field. In other embodiments, entry of a user response may require user selection of a "submit" button or similar GUI control after specification of a response, or possibly manipulation of an external or peripheral device described above. Based on these start and end events, the durations of stages (i), (ii) and (iii) may be measured.
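  • For concreteness, the start/next/next/response events described above could be turned into per-stage durations as in the following sketch; the class name, the event representation and the use of a monotonic clock are assumptions rather than the disclosed implementation.

    import time

    class StageTimer:
        """Measures the durations of stages (i), (ii) and (iii) from the GUI events
        that bound them: 'start' -> 'next' -> 'next' -> 'response'. A monotonic
        clock is used so system clock adjustments cannot skew timings."""

        def __init__(self, clock=time.monotonic):
            self._clock = clock
            self._timestamps = []  # one timestamp per event, in order

        def record_event(self):
            """Call once for each of: start, first 'next', second 'next', response."""
            self._timestamps.append(self._clock())

        def stage_durations(self):
            """Return (stage_i, stage_ii, stage_iii) durations in seconds."""
            t = self._timestamps
            if len(t) < 4:
                raise ValueError("need start, next, next and response events")
            return (t[1] - t[0], t[2] - t[1], t[3] - t[2])

    # With a scripted clock reproducing FIG. 8A (events at t = 0, 50, 65, 77 s):
    fake_times = iter([0.0, 50.0, 65.0, 77.0])
    timer = StageTimer(clock=lambda: next(fake_times))
    for _ in range(4):
        timer.record_event()
    print(timer.stage_durations())  # (50.0, 15.0, 12.0)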
  • As previously noted, if P and/or Q components are “temporal” in nature, e.g. are audio or video clips or animations, they may be presented with a set of controls (e.g. play, pause, rewind, fast forward, or go to beginning) that may permit the user to manipulate the audio/video, including playing it more than once or stopping it before completion, if desired.
  • If the user reviews P and/or Q for display modes permitting such review, then the time spent reviewing is typically added to the initial presentation duration of the relevant question item component. For example, any time spent by the user viewing/listening to the P component for the first time will certainly contribute to presentation duration of the P component, but any time spent by the user reviewing the P component after the Q and/or R component have replaced P can also, at the author's election, be deemed to contribute to the presentation duration of P. The presentation duration of review may be separately stored from the presentation duration of initial display, e.g. for possible later use in the computation of metrics pertaining to user review times. Similarly, time spent reading hints or help for a question item component may contribute to the presentation duration of that component and may be separately stored.
  • The manner of ascertaining the presentation durations of the P, Q and R components for a question item may depend upon the operative display mode. For example, referring to FIG. 8A, a timeline 800 showing measurement of presentation durations (here, in seconds) of P, Q and R during a user's interaction with a question item having a display mode of P|Q|R is illustrated. The actions of the user in interacting with the question item's components during their presentation are indicated above the timeline 800; the curved arrows 802, 804 and 806 represent stages (i), (ii) and (iii) of the presentation sequence, respectively.
  • At t=0 (FIG. 8A), the first question item component (P) is presented, e.g. in response to user selection of a "start" button. From t=0 to t=50, the user views the presented P component (for this example, it is assumed that P is a textual or visual component and is thus "viewed"). At t=50, user input, e.g. selection of a "next" button, causes P to be replaced with Q. From t=50 to t=65, the user reads the Q component. At t=65, user input, e.g. selection of another "next" button, causes Q to be replaced with R. From t=65 to t=77, the user reads the foils of the R component, i.e. the possible multiple choice answers, and considers which response is correct (the question items of FIGS. 8A to 8C are all assumed to have multiple choice R components). Finally, at time t=77, the user completes response entry, e.g. by pressing a "submit" button after having selected a foil. Thus, the stages (i), (ii) and (iii) represented by arrows 802, 804 and 806 correspond to the presentation durations for P, Q and R, respectively, which are therefore 50 seconds, 15 seconds and 12 seconds long.
  • In FIG. 8B, which adopts similar conventions to FIG. 8A, the effect of user review of the first displayed question item component (i.e. repetition of stage (i)) upon presentation duration calculation is shown. FIG. 8B may best be understood when viewed in conjunction with FIG. 3A, which illustrates the stages of displaying a question item for display mode P:Q:R. At t=0, the first question item component (P) is presented. At t=50, the user advances from the P component to the Q component (as per FIG. 3A, 309), e.g., by selecting the "next" button. At t=65, the user reverts to the P component (as per FIG. 3A, 308) e.g. by selecting a "review" button. At t=125, the user again advances to the Q component (as per FIG. 3A, 309). At t=132, the user advances to the R component (as per FIG. 3A, 313). Finally, at t=157, the user enters a response, e.g. by selecting a "submit" button or pressing "enter" after selecting a foil.
  • It will be appreciated that, in the example of FIG. 8B, the P component presentation duration includes both the time spent initially viewing the component and any time spent reviewing the component. Thus, the P component presentation duration is determined by summing the time intervals 812 and 816, i.e. 50 seconds plus 60 seconds, for a total of 110 seconds. The Q component presentation duration is similarly determined by summing the time intervals 814 and 818, i.e. 15 seconds plus 7 seconds, for a total of 22 seconds (the interval 818, which represents the second time Q is presented, is not considered “review” of Q per se as it merely results from the user's review of P during interval 816; thus both of intervals 814 and 818 are labeled stage (ii) in FIG. 8B). There is no review of the R component, thus the R component presentation duration simply corresponds to time interval 820 (i.e. stage (iii)), which is 25 seconds.
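  • The FIG. 8B accounting, in which review time is added to the initial presentation duration of the reviewed component, could be reproduced from a timestamped event log along the following lines; the event-tuple representation and the terminating "RESPONSE" marker are assumed conventions for this sketch.

    def component_durations(events):
        """Sum, per question item component, the time during which that component
        was the one being presented.

        events: list of (timestamp_seconds, component) pairs marking the onset of
                each presentation (initial display, advancement or review),
                terminated by a (timestamp, 'RESPONSE') entry.
        Returns a dict mapping component -> total presentation duration.
        """
        totals = {}
        for (t0, comp), (t1, _next_comp) in zip(events, events[1:]):
            totals[comp] = totals.get(comp, 0.0) + (t1 - t0)
        return totals

    # Event log corresponding to FIG. 8B (times in seconds):
    events = [(0, "P"), (50, "Q"), (65, "P"),   # user reviews P at t = 65
              (125, "Q"), (132, "R"), (157, "RESPONSE")]
    print(component_durations(events))  # {'P': 110.0, 'Q': 22.0, 'R': 25.0}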
  • FIGS. 8A and 8B illustrate consecutive display of P, Q and R (the former without review and the latter with review). FIG. 8C, on the other hand, illustrates cumulative presentation of P, Q and R.
  • Turning to FIG. 8C, the measurement of presentation durations for P, Q and R in display mode P.Q.R is illustrated. In this display mode, the user workstation 14 detects when the user advances from the P component to the Q component (at time t=50) and from the Q component to the R component (at time t=132) as well as when entry of a response is complete (t=157). The presentation duration of the P question item component can be determined in one of two ways. The first way is to measure the totality of time that P is presented, regardless of when the subsequent question item component is later presented. Using this approach, the presentation duration for P would be the sum of the time intervals 842, 844 and 846, i.e. 157 seconds. The second way is to measure the time that P is presented until the next question item component appears (i.e. the time between the onset of the earlier-displayed question item component and the onset of the subsequent question item component, which in this example is Q). Using this second approach, the presentation duration for P would simply be the time until Q appears, i.e. 50 seconds. In the second approach, the presentation duration for each component reflects the time that the user's attention was likely focused upon that component.
  • The presentation durations of each of the P, Q and R components are determined as distinct values, and an indicator of each said distinct value (e.g. an elapsed time or one or more time stamps indicative of the presentation duration) is stored (FIG. 7, 706). As a result, regardless of which of these components' presentation durations has been specified during authoring as comprising the completion time for the question item, the data needed to later change the presentation duration(s) comprising the completion time remains available (even after the user has completed electronic learning). For example, even after a question item of a set has been deemed to have been learned by a student based on the student's accurate response to that question item within an “acceptable” first completion time that was based solely on the presentation duration of R (i.e. how quickly the student entered a response), the student's user competency for that question item may later be reassessed, e.g. by an administrator of the electronic learning curriculum. This may be done by using the student's presentation duration for P, as previously stored, as the “new” significant completion time for that question item. A relative completion time for that question item and student may thereafter be determined, e.g. based on different minimum and/or maximum completion times which account for P rather than R being the significant presentation duration. Based on this “new” relative completion time and the previously stored response accuracy, a second user competency measure for the same question item could be calculated and stored, possibly in addition to the first user competency measure. The second competency measure would reflect how quickly the student viewed/listened to P, rather than how long the student took to enter a response as in the first user competency measure. In 706, the presentation duration of any displayed hints and/or help may also be determined and stored.
  • For accuracy, the presentation durations are measured at the user workstation 14 (e.g. using a system clock of the computing device, which is typically driven by an oscillator crystal), rather than at the curriculum server 16. This avoids inaccuracies which may result from such factors as network propagation delays over network 18. To minimize potential inaccuracies, the user workstation 14 may avoid running other concurrent applications in the background. Further, in cases where timing accuracy is critical (e.g. real-time brain scan responses or the like), the user workstation 14 could use specialized hardware and/or execute a real-time operating system to maximize timing accuracy.
  • It is noted that, when P comprises a recording of auditory subject matter having a fixed duration of, say, 20 seconds, the presentation duration of P may be measured in various ways. If user input for advancing to the next question item component is received prior to the complete playing of the recording (e.g. at 18 seconds), the presentation duration for P may be set to the duration that had elapsed when the user input was received (i.e. 18 seconds). If user input for advancing to the next question item component is received some time after the recording has been fully played (e.g. at 23 seconds), the presentation duration for P may be set either to the duration that had elapsed when the user input was received (i.e. 23 seconds) or to the fixed recording duration (i.e. 20 seconds). If the user has manipulated controls to cause the audio to be played twice (i.e. 40 seconds) over the course of a 45 second duration, the duration may be set to 40 or 45 seconds. Various alternatives are possible.
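  • As a rough illustration only (not the disclosed implementation), the alternative measurement rules described above for a fixed-length auditory P component might be expressed as follows; the function name and the cap_to_recording flag are assumptions.

```python
# Illustrative sketch only: choosing a presentation duration for an auditory
# P component whose recording has a fixed length.

def audio_presentation_duration(elapsed, recording_length, plays=1, cap_to_recording=False):
    """elapsed: seconds until the user advanced; plays: times the recording was played."""
    if elapsed < recording_length:
        return elapsed                      # user advanced before playback finished
    if cap_to_recording:
        return recording_length * plays     # count only actual playback time
    return elapsed                          # count the full elapsed time

audio_presentation_duration(18, 20)                                   # 18
audio_presentation_duration(23, 20, cap_to_recording=True)            # 20
audio_presentation_duration(45, 20, plays=2, cap_to_recording=True)   # 40
```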
  • Whatever display mode is operative, ultimately the user provides a response to the question item using whatever response mechanism has been specified for the question item (multiple choice, fill in the blank, etc.) using some user input mechanism (e.g. mouse, keyboard, peripheral controller, etc., as described above). This response is received (FIG. 7, 708) and stored, along with indicators of the presentation durations of the P, Q and R components (710) and a computed completion time for the question item that is based on one or more of the measured presentation durations for the P, Q and R components (equal to, e.g., one of the presentation durations, a sum of two of the presentation durations, or a sum of all three of the presentation durations, as specified by the author during the authoring stage). Thereafter, a user competency measure for the question item, which is alternatively referred to herein as a skill score, is computed and stored (712). Skill score computation is based on the user's performance over a number of trials in which the same question item is presented at least once, but typically more than once, and is described in greater detail below in conjunction with FIG. 9. If any unanswered question items in the set remain (FIG. 7, 714), the process is repeated.
  • Operation 900 for computing a skill score for a current trial, as in block 712 of FIG. 7, is illustrated in FIG. 9. Initially, a response accuracy for the current trial is determined based on the user response and at least one predetermined correct response, and is stored (FIG. 9, 902). In the present embodiment, the response accuracy is a value in the range of zero to one (or 0% to 100%), where zero represents an incorrect user response, one represents a correct user response, and values between zero and one indicate partially correct responses.
  • Thereafter, a completion time for the current trial is determined (904). The completion time is based on the measured presentation duration(s) of whichever one(s) of the P, Q and/or R question item components have been specified by the author (at FIG. 2, 210) as comprising the completion time for this question item. For example, if the author had specified that the presentation durations of just the Q and R components (representing the time spent by the user in viewing the query/instruction and viewing the response mechanism/entering a response) are to be used for determining the completion time for this question item, then the completion time is determined in 904 by summing the user's measured Q and R presentation durations, with any time spent reviewing the Q component (if applicable) or reviewing hints or help for either of the Q or R components (if applicable) optionally forming part of the presentation durations for these question item components, at the author's discretion.
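  • By way of illustration only, the completion time of block 904 can be viewed as a sum over whichever component durations the author specified; the dictionary layout and default component selection below are assumptions, not the disclosed implementation.

```python
# Illustrative sketch only: completion time as the sum of the author-specified
# component presentation durations.

def completion_time(durations, components=("Q", "R")):
    """durations: e.g. {'P': 110, 'Q': 22, 'R': 25};
    components: the author-specified subset of P, Q and R."""
    return sum(durations[c] for c in components)

completion_time({"P": 110, "Q": 22, "R": 25})          # 47 (Q plus R)
completion_time({"P": 110, "Q": 22, "R": 25}, ("R",))  # 25 (response entry only)
```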
  • Next, an average response accuracy (RAavg) for the question item is determined based on the response accuracies of the S trials in which the same question item has been presented to the user (FIG. 9, 906), where S is a positive integer. The average response accuracy thus takes into account the user's accuracy in answering the same question item over a window of S attempts.
  • In the present embodiment, the calculation of an average response accuracy entails calculating a weighted average of S stored response accuracies. In other words, it is presumed that the user has been presented with the same question item S times, and each time the user's response and completion time were stored for later use in computing the skill score for that question item. For example, TABLE 2 below represents stored data associated with five trials of responding to an exemplary question item, i.e. question item 1.
  • In TABLE 2, each trial for question item 1 is represented by a table row. The first row represents the least recent trial and the last row represents the most recent trial. The values in the second column indicate the correctness of the user response for each trial. The response accuracies in the third column are equal to one for a correct response and zero for an incorrect response. The weights in column four represent the relative weight of each response and completion time for the trial represented by the row in relation to the trials represented by other rows. The values in column five represent the completion times for each trial.
  • TABLE 2
    S Trials of Completing Question Item 1 (S = 5)

    TRIAL NO.   RESPONSE CORRECT?   ACCURACY   WEIGHT   COMPLETION TIME
    1           Yes                 1          1        3.4 seconds
    2           No                  0          2        1.1 seconds
    3           Yes                 1          3        3.0 seconds
    4           Yes                 1          4        2.6 seconds
    5           No                  0          5        5.6 seconds
  • In the present embodiment, the calculation of an average response accuracy for a question item comprises calculating a weighted average. More specifically, each response accuracy (column 3) is weighted by its corresponding weight (column 4) and the resulting weighted accuracies are summed. For the data of TABLE 2 above, this would entail performing the following calculation: 1*1+0*2+1*3+1*4+0*5=8. This sum of weighted accuracies is then divided by a sum of weighted accuracies that would result for S correct responses, i.e. 1*1+1*2+1*3+1*4+1*5=15. The resulting average response accuracy is 8/15 or 0.533.
  • In some embodiments, the average response accuracy may be a straight (i.e. non-weighted) average of actual response accuracies in relation to the accuracies that would result for S correct responses, i.e. (1+0+1+1+0)/(1+1+1+1+1) or 0.600. It is understood that the term “average” as used herein may encompass other types of calculations for assessing a representative measure of completion time based on S trials, such as calculating a median.
  • Subsequently, an average completion time (CTavg) for the question item is determined based on the completion times of only the ones of those S trials in which a correct user response was given (FIG. 9, 908). The average completion time of the present embodiment is computed as a weighted average. More specifically, each completion time (column 5) for a trial in which a correct response was given (i.e. for rows 1, 3 and 4 but not rows 2 or 5) is weighted by its corresponding weight (column 4), and the resulting weighted completion times are summed. For the data of TABLE 2 above, this would entail performing the following calculation: 3.4 sec*1+3.0 sec*3+2.6 sec*4=22.8 seconds. This sum is then divided by a sum of the weights for the trials in which a correct response was given, i.e. 1+3+4=8. The resulting average completion time is 22.8 seconds/8 or 2.85 seconds.
  • In some embodiments, the average completion time may be a straight average of the completion times of the last S attempts for which correct responses were given, i.e. (3.4 sec+3.0 sec+2.6 sec)/3 or 3.00 seconds. It is understood that the term “average” as used herein may encompass other types of calculations for assessing a representative measure of completion time based on S trials, such as calculating a median.
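  • For illustration only, the weighted averages of blocks 906 and 908 can be reproduced from the TABLE 2 data as follows; the list layout is an assumption, and the results match the 0.533 and 2.85 second values derived above.

```python
# Illustrative sketch only: weighted average response accuracy and weighted
# average completion time for the TABLE 2 trials (weights favour recent trials).

trials = [  # (response_accuracy, weight, completion_time_seconds)
    (1, 1, 3.4), (0, 2, 1.1), (1, 3, 3.0), (1, 4, 2.6), (0, 5, 5.6),
]

# Weighted average response accuracy: 8 / 15 = 0.533
ra_avg = sum(a * w for a, w, _ in trials) / sum(w for _, w, _ in trials)

# Weighted average completion time over correct trials only: 22.8 / 8 = 2.85 seconds
correct = [(w, ct) for a, w, ct in trials if a == 1]
ct_avg = sum(w * ct for w, ct in correct) / sum(w for w, _ in correct)
```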
  • Thereafter, a relative average completion time (CTrelavg) for the current trial is calculated (FIG. 9, 910). This essentially involves comparing the average completion time (CTavg) computed above to the minimum completion time (CTmin) and/or maximum completion time (CTmax) for the question item, as specified by the author during the authoring stage. In the present embodiment, relative completion time is calculated based on both CTmin and CTmax, according to equation (I) below:

  • CTrelavg = ((CTmax−CTavg)/(CTmax−CTmin))^p  (I)
  • That is, the difference between the maximum completion time and the average completion time, as determined in block 908, is divided by the difference between the maximum and minimum completion times, and the resulting value is exponentiated, i.e. raised to an exponent p, where p is a number greater than zero. The exponent serves to differentially weight speed improvements as the user gets faster with training. When p is greater than one, the choice of a value for the exponent p allows the author to selectively weight those times that are closer to CTmin more heavily than those that are closer to CTmax. The larger the exponent, the heavier the weighting. For example, as between two embodiments in which p=3 and p=5 respectively, the latter would require a user to get much closer to the minimum completion time (i.e. expert speed) before an increase in speed is evidenced in user competency measure improvements. When p is between 0 and 1, on the other hand, improvements near CTmax (i.e. beginner speeds) are weighted more heavily than those near CTmin. The value of CTrelavg resulting from the exponentiation ranges from zero to one in the present embodiment, with zero representing a user completion time equal to the maximum completion time and one representing a user completion time equal to the minimum completion time. Thus, the faster the user's completion time, the greater the relative completion time value as computed by equation (I).
  • When the average completion time is smaller than the minimum completion time, it may be increased to match the minimum completion time. Similarly, when the average completion time is larger than the maximum completion time, it may be reduced to match the maximum completion time. This avoids negative values in the base of the exponentiation in the equations above, which could result in undesirable negative CTrelavg values for certain values of p.
  • In equation (I) above, relative average completion time is based on the average completion time as well as both of the author-specified minimum and maximum completion times CTmin and CTmax. In alternative embodiments, the relative average completion time may be based on the average completion time and only one or the other (not both) of CTmin and CTmax. Two alternative equations (I′) and (I″) for calculating relative completion time in such a fashion are provided below.

  • CTrelavg = (CTmin/CTavg)^p  (I′)

  • CTrelavg = (1−(CTavg/CTmax))^p  (I″)
  • In some embodiments, the equation that is used to convert completion time to a number from 0 to 1 may differ from equations (I), (I′) and (I″) above. For instance, in some embodiments, it may be acceptable for the function to be linear rather than exponential. The conversion could also be done by one or more tables.
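  • The following sketch, offered purely as an illustration, expresses equation (I) with the clamping described above, together with the single-bound alternatives (I′) and (I″); the function names, and the CTmin, CTmax and p values in the usage line, are assumptions rather than values from the disclosure.

```python
# Illustrative sketch only: converting an average completion time to a
# relative value in the range 0 to 1.

def ct_rel_avg(ct_avg, ct_min, ct_max, p=1.0):
    """Equation (I): ((CTmax - CTavg) / (CTmax - CTmin)) ** p, with CTavg
    clamped to [CTmin, CTmax] to avoid a negative base."""
    ct_avg = min(max(ct_avg, ct_min), ct_max)
    return ((ct_max - ct_avg) / (ct_max - ct_min)) ** p

def ct_rel_avg_min_only(ct_avg, ct_min, p=1.0):   # equation (I')
    return (ct_min / ct_avg) ** p

def ct_rel_avg_max_only(ct_avg, ct_max, p=1.0):   # equation (I'')
    return (1 - ct_avg / ct_max) ** p

ct_rel_avg(2.85, ct_min=1.0, ct_max=6.0, p=3)   # ~0.25: larger p demands near-expert speed
```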
  • Thereafter, an overall skill score for the current trial is computed (FIG. 9, 912). In the present embodiment, this is achieved by way of equation (II), as specified below:
  • Skill Score = RAavg*C1 + CTrelavg*C2 = X + Y  (II)
  • In equation (II), the average response accuracy is scaled by a constant C1 (i.e. scalar C1) and the relative average completion time is scaled by a constant C2 (i.e. scalar C2). The scaled results may be referred to as the “X” and “Y” components of the skill score, respectively. The X and Y values are summed to yield the overall skill score. Scaling the accuracy and relative completion time values may have two benefits. Firstly, it may give the score greater relevance to a typical user. For example, the skill score of the present embodiment ranges between zero (for a completely incorrect response answered in the maximum amount of time) and two (for a 100% correct response answered in the minimum amount of time). However, by choosing constants C1 and C2 that sum to 1000 (in which case a perfect skill score would be 1000), the skill score may be made to resonate with certain users as comparable to a baseball batting average. Secondly, scaling may allow the relative importance of accuracy versus relative completion time to be set by the author. For example, by setting C1 to 900 and C2 to 100, the skill score may be weighted much more heavily towards accuracy of response than towards speed of response. Any desired weighting may thus be attained by the author. Once computed, the skill score is stored (910).
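  • As an illustration only, equation (II) reduces to a scaled sum of the two components; the function name and the example values (the earlier TABLE 2 averages under assumed CTmin and CTmax bounds) are not from the disclosure.

```python
# Illustrative sketch only: combining accuracy and speed per equation (II),
# with C1 and C2 chosen so that a perfect skill score is 1000.

def skill_score(ra_avg, ct_rel_avg, c1=500, c2=500):
    x = ra_avg * c1       # accuracy component
    y = ct_rel_avg * c2   # speed component
    return x + y

skill_score(0.533, 0.25)             # ~391.5 with equal weighting
skill_score(0.533, 0.25, 900, 100)   # ~504.7, weighted heavily toward accuracy
```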
  • FIG. 10 contains a table 1000 illustrating exemplary skill scores as may be computed by equation (II) above for various user response accuracy and completion time scenarios. The table 1000 includes a body 1002 of four hundred twenty cells (twenty one rows by twenty columns). Each cell within the body 1002 contains a skill score that would result, based on equation (II), for a unique user response accuracy and completion time scenario, with exemplary C1 and C2 values of 500 being assumed.
  • In FIG. 10, the average response accuracy associated with a particular cell is indicated in column 1004, in the row occupied by that cell. Similarly, the average completion time associated with a particular cell is indicated in row 1006, in the column occupied by that cell. For example, cell 1008 within body 1002 represents the skill score that results when the user responds 95% accurately (1010) with an average completion time of 1 second (1012).
  • Column 1004 contains response accuracies ranging from 100% to 0%, at 5% intervals. The corresponding X values that would be generated according to equation (II) (i.e. average response accuracy scaled by the exemplary C1 value of 500) are shown in column 1014. Row 1006 shows completion times ranging from 0 seconds (the minimum completion time) to 20 seconds (the maximum completion time) at one second intervals. The corresponding Y values that would be generated according to equation (II) (i.e. relative average completion time scaled by the exemplary C2 value of 500) are shown in row 1016. It will be appreciated that the skill score value in any cell of body 1002 represents the sum of the X value from column 1014 in the row occupied by that cell and the Y value from row 1016 in the column occupied by that cell. A perfect skill score of 1000 (100% correct response in the minimum time of zero seconds) is shown in the upper left hand corner of body 1002, while a minimum skill score of 0 (0% correct response in the maximum time of twenty seconds) is shown in the lower right hand corner of body 1002.
  • As earlier described, operation 700 of FIG. 7 represents a single pass by a user through a set of question items. However, it will be appreciated that multiple passes may be required in order for a user to be considered to have truly learned the curriculum. To guide the user from being a beginner to being an expert in a particular curriculum, a curriculum advancement algorithm is applied. This algorithm is exemplified by operation 1100 of FIG. 11.
  • Referring to FIG. 11, operation 1100 illustrates an exemplary algorithm for advancement of a user from a beginner skill level to an expert skill level for an exemplary electronic learning curriculum comprising multiple question item sets. Initially, a collective user competency measure, referred to herein as a collective skill score (CSS), is computed for each question item set within the curriculum (1102). A CSS is a value that reflects the user's competency across multiple question items. A CSS is typically computed by averaging the skill scores for all of the question items of the set. In some embodiments, calculating the CSS may involve combining the average skill score for the set (based on the user's most recent pass through the set) with one or more previously computed average skill scores for the set (based on the user's prior pass(es) through the set). Combining may entail weighting the most recent average more heavily than the previously computed average(s). Some embodiments could average the sets' speeds and accuracies separately.
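  • A minimal sketch of one possible CSS calculation follows, assuming a simple average of the set's item skill scores optionally blended with the previous pass's CSS; the 0.7/0.3 weighting and the example scores are assumptions, not values from the disclosure.

```python
# Illustrative sketch only: a collective skill score (CSS) as the average of a
# set's item skill scores, optionally blended with the previous pass's CSS.

def collective_skill_score(item_scores, previous_css=None, recent_weight=0.7):
    current_avg = sum(item_scores) / len(item_scores)
    if previous_css is None:
        return current_avg
    # Weight the most recent pass more heavily than the earlier one(s).
    return recent_weight * current_avg + (1 - recent_weight) * previous_css

collective_skill_score([640, 720, 580, 810])                     # 687.5
collective_skill_score([640, 720, 580, 810], previous_css=500)   # 631.25
```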
  • If a question item set has not yet been presented to the user in accordance with operation 700 of FIG. 7, then the CSS may initially have a default value of zero.
  • The collective skill scores computed for each question item set are then used to determine which set to present next. If the collective skill score for any question item set is below a first threshold T1 (1104), where T1 is a threshold CSS value representing a “beginner” skill level, the set having the collective skill score closest to but below T1 is presented next (1106). The rationale for this approach, which may be referred to as the “race up” approach, is to prioritize question item sets whose collective skill scores are the highest but are not yet at threshold T1, so that each set has a baseline of exposure before more stringent thresholds must be met. In this way, the user has at least a beginner level of competence in all of the sets before advancing further. The race up approach emphasizes breadth over depth. Once the question item set has been presented (1106), the CSS for the set is recomputed (1108). Assuming that the number of correct responses and/or completion times are better in this pass than they were during the user's last pass through the question item set, the CSS value will improve, possibly meeting or exceeding the threshold CSS value T1.
  • Operation then repeats from 1104. It will be appreciated that, if the CSS value for the presented set has not yet met or exceeded T1, it may be presented again (assuming that its CSS remains the CSS value that is closest to but below T1) upon repetition of 1106.
  • Ultimately, when the CSS values for all question item sets meet or exceed the threshold CSS value T1, the user is considered to have attained a “beginner” skill level for the curriculum as a whole. Operation then advances to 1110.
  • At 1110, if the collective skill score for any question item set is below a second threshold CSS value T2, where T2 is a threshold CSS value representative of an “intermediate” skill level, the set having the lowest collective skill score of all sets in the curriculum is presented next (1112). The rationale for this approach, which may be referred to as the “bubble up” approach (since, once the set with the lowest CSS improves and exceeds another set's CSS, it is set aside until the set with the new lowest CSS improves and exceeds another set's CSS), is to prioritize question item sets whose collective skill scores are the lowest despite being at least T1. This ensures that all sets are learned to a predetermined level before attention is given to the weakest items across the sets. Once the question item set has been presented (1112), the CSS for the set is recomputed (1114). Assuming that the number of correct responses and/or completion times are better than they were during the user's last pass through the question item set, the CSS value will improve, possibly meeting or exceeding the threshold CSS value T2. Bubble up may suit learning environments in which a user may alternate learning among many sets, possibly allowing the user to perceive interconnections and relations among question items across set topics.
  • Operation then repeats from 1104. The reason that operation repeats from 1104 rather than from 1110 is to account for the possibility that the CSS for a question item set which previously met or exceeded T1 could drop below T1, requiring operation at 1106-1108 to be repeated.
  • Upon repetition of 1112, it will be appreciated that, if the CSS value for a previously presented set has not yet met or exceeded T2, it may be presented again (assuming that its CSS remains the lowest CSS value of all sets in the curriculum while still exceeding T1). Ultimately, when the CSS values for all question item sets meet or exceed the threshold CSS value T2, the user is considered to have attained an “intermediate” skill level for the curriculum as a whole. Operation advances to 1116.
  • At this stage, individual question items whose skill scores are the weakest (regardless of set) are presented, along with some “maintenance question items” (i.e. randomly selected question items being repeated simply to maintain an existing level of user competency) mixed in, until a third threshold T3 is reached for each set, where T3 is a CSS value representative of an “expert” skill level. The curriculum is then considered to have been learned. To promote revisiting of all question items, the maintenance question items could be chosen based on date last tested, with the least recent being chosen first. The oldest items are more likely to be forgotten and therefore should be randomly sampled more often.
  • More specifically, at 1116, if the collective skill score for any question item set is below the third threshold value T3, N individual question items (where N is a positive integer) whose skill scores are the lowest are identified (1118). The N question items are selected regardless of whether they belong to a set whose CSS is less than T3 or greater than or equal to T3. The rationale for this approach is to focus on question items for which the user has demonstrated the least user competency, with any improvements in user competency for those question items resulting in an improvement in user competency for the question item sets to which they belong. M maintenance question items (where M is a positive integer, which may be the same as or different from N) are also selected (1120). The N question items identified in 1118 are then presented along with the M question items selected in 1120 (1122). When this has been done, the CSS for each set from which at least one of the N or M question items was drawn is recomputed (1124).
  • Operation then repeats from 1104. The reason that operation repeats from 1104 rather than from 1116 is to account for the possibility that the CSS for a question item set which previously met or exceeded T1 could theoretically drop below T1 or T2, requiring operation at 1106-1108 or 1112-1114 to be repeated.
  • Ultimately, when the CSS values for all question item sets meet or exceed the threshold value T3, the user is considered to have attained an “expert” skill level for the curriculum as a whole. Operation 1100 is thus completed.
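  • For illustration only, the selection logic of operation 1100 might be sketched as follows, using the T1, T2 and T3 values assumed later for FIG. 12; the data structures, the N and M defaults and the least-recently-tested maintenance selection are assumptions layered on the description above, not the disclosed implementation.

```python
# Illustrative sketch only: deciding what to present next under the "race up",
# "bubble up" and mastery stages of the curriculum advancement algorithm.

T1, T2, T3 = 400, 600, 800   # beginner / intermediate / expert CSS thresholds

def next_presentation(css, item_scores, last_tested, n=3, m=2):
    """css: {set_name: CSS}; item_scores: {(set_name, item): skill score};
    last_tested: {(set_name, item): session index}. Returns what to present
    next, or None once every set's CSS meets or exceeds T3 (curriculum learned)."""
    # Race up: present the set whose CSS is closest to, but still below, T1.
    below_t1 = {s: v for s, v in css.items() if v < T1}
    if below_t1:
        return ("set", max(below_t1, key=below_t1.get))
    # Bubble up: present the set with the lowest CSS until all sets reach T2.
    if any(v < T2 for v in css.values()):
        return ("set", min(css, key=css.get))
    # Mastery: present the N weakest items (regardless of set) plus M maintenance
    # items, chosen least-recently-tested first, until all sets reach T3.
    if any(v < T3 for v in css.values()):
        weakest = sorted(item_scores, key=item_scores.get)[:n]
        maintenance = sorted(
            (i for i in item_scores if i not in weakest), key=last_tested.get
        )[:m]
        return ("items", weakest + maintenance)
    return None
```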
  • FIG. 12 contains a table 1200 illustrating advancement of a user from a beginner skill level to an expert skill level according to algorithm 1100. In FIG. 12, advancement is shown for four exemplary electronic learning curricula 1-4, each comprising four question item sets (1a-1d, 2a-2d, 3a-3d, and 4a-4d, respectively). Each row in the table 1200 represents a distinct user session at user workstation 14, while each column in the table represents a question item set. A session is a single work period in which the user completes one or more sets of question items. It may alternatively be thought of as the work accomplished in a single login period. The user's progress from an initial session s0 to a final session s25 is shown, with the algorithm illustrated in FIG. 11 being applied once for each of the electronic curricula, in sequence.
  • Each cell within table 1200 contains a collective skill score (CSS) value for the relevant question item set indicated by the containing column at the completion of the relevant session indicated by the containing row. In table 1200, cells containing a bolded CSS value indicate that the CSS value has changed from the previous session above. For the purposes of FIG. 12, it is assumed that T1=400, T2=600 and T3=800.
  • Beginning with session s0, which represents an initial assessment of the user's competency, collective skill score values of 0 are set for each of the sixteen question item sets, reflecting an initial assumption of a complete lack of user competency in the relevant curricula.
  • Beginning with curriculum 1, in a first session s1, the user completes each of the four contained question item sets 1a-1d once, with the resulting CSS values for each question item set being shown in bold in the relevant row of table 1200. In view of the fact that the resulting CSS values exceed threshold T2 for each set 1a-1d but are less than T3 for at least one of the sets, operation 1100 proceeds directly to 1116 (FIG. 11) in the next session s2. In this example, it is assumed that the N weakest question items and M maintenance question items are drawn from each of the four sets 1a-1d. When operation 1118 to 1124 of FIG. 11 is performed, the CSS values for three of the four question item sets increase. However, because they remain below threshold T3, operation 1116 to 1124 is repeated in session s3. In this pass, all of the CSS values exceed T3, and the first curriculum is therefore considered to have been learned.
  • Turning to curriculum 2, in session s4 the user completes each of the four contained question item sets 2a-2d once, with the resulting CSS values for each question item set being shown in bold in the relevant row. In view of the fact that the resulting CSS values for sets 2c and 2d are below threshold T1, operation 1100 proceeds to 1106 in the next session s5. Set 2c is identified as the set whose CSS is closest to but below T1, so it is presented (1106) and its CSS recomputed (1108) to a value of 410, which exceeds T1. Accordingly, in the next session s6, set 2d, the only set whose CSS remains below T1, is presented (1106) and its CSS recomputed (1108) to a value of 480.
  • Now that the CSS values for all sets 2a-2d exceed T1, operation proceeds to 1110. Because set 2c has the lowest CSS value of 410, it is presented (1112) and its CSS recomputed (1114) to a value of 590 in session s7. Because set 2d now has the lowest CSS value of 480, it is presented (1112) next and its CSS recomputed (1114) to a value of 630 in session s8. Operation 1110-1112 is repeated for set 2c in session s9, given that it is the only remaining set with a CSS below T2. The resulting CSS of 670 exceeds T2.
  • At this stage operation 1116-1124 is repeated three times in sessions s10 to s12, until all of the CSS values exceed T3. At that stage, the second curriculum is considered to have been learned.
  • From the foregoing, it will be apparent that the same approach is used for advancing curriculum 3 from beginner through to expert level in sessions s13 to s20 and for advancing curriculum 4 from beginner through to expert level in sessions s21 to s25.
  • As will be appreciated by those skilled in the art, various modifications can be made to the above-described embodiments. For example, in the above description, the curriculum advancement algorithm is applied question item set by question item set, wherein the sets form part of an electronic learning curriculum. It will be appreciated that the curriculum advancement algorithm could be applied to any plurality of question items. For example, question item sets could be subdivided into subsets, and the algorithm could be applied subset by subset.
  • It will also be appreciated that the operation illustrated in FIG. 11 at 1106-1108 (the “race up” approach) could be swapped for the operation described at 1112-1114 (the “bubble up” approach) in some embodiments. Alternatively, in some embodiments any of the “race up”, “bubble up” and “mastery” approaches could be applied entirely independently of the others. Such embodiments may have only one operative CSS threshold.
  • In some embodiments, the “mastery” approach illustrated at 1118-1124 of FIG. 11 may be applied on a per-set basis, e.g. when a particular set's CSS has exceeded T2 but is still below T3, rather than across an electronic learning curriculum only when the CSS for all sets has exceeded T2. In that case, the pool of question items from which question items are drawn may be limited to that set.
  • In some embodiments, the threshold values T1, T2 and/or T3 used during operation 1100 may be dynamically adjustable between iterations of the curriculum advancement algorithm. For example, once the user has advanced to an expert level based on initial threshold values of, say, T1=400, T2=600 and T3=800, the user may be required to repeat operation 1100 with T1=500, T2=750 and T3=950. Alternatively, the higher threshold values may be set prior to user completion of operation 1100, based on a favorable initial assessment of user competency for example.
  • In some embodiments, after completion of operation 1100 for two (or more) question item sets, the sets may be combined to create a new, larger set, and operation 1100 may be repeated for that new set, possibly with higher threshold values.
  • To expedite operation 1100, the CSS for a question item set may be recomputed after each question item is completed, rather than just after all question items have been completed (as at 1108, 1114 and 1124 of FIG. 11), with the user being permitted to advance to the next competency level or complete the question item set immediately upon meeting or exceeding the operative CSS threshold for the current competency level. This may avoid user frustration in having to repeat all of the question items of a set every time the set is presented.
  • In some embodiments, the software application by which the electronic learning curriculum is authored at the workstation 12 may simply be a web browser application which presents authoring software that is actually hosted by a web server comprising curriculum server 16. In such embodiments, although authoring of the electronic learning curriculum is still controlled by the author, the actual curriculum may be created at the curriculum server 16 based on remote author instructions from authoring workstation 12. In that case, uploading of the curriculum after the authoring stage may be unnecessary, since the curriculum may already be situated at the curriculum server 16, within a data store.
  • It will be appreciated that implementation of any aspect of the above-described system in “software” could be effected in firmware or hardware, or using a combination of software, firmware and/or hardware.
  • In some embodiments, a user of an electronic learning curriculum may be able to revisit a previously answered question item within a set of question items, e.g., through selection.
  • The response accuracy for computing a skill score will usually be 0 or 1 on a single trial (unless partial credit marks are specified by the author as being available), but as question item sets are repeated the accuracy component can effectively become a weighted average of the last n trials in some embodiments. In such embodiments, values between 0% and 100% represent partially learned items. For example, responding correctly on the most recent trial may be worth a weight of 4, on the second to last trial a weight of 3, on the third to last trial a weight of 2, and on the fourth to last trial a weight of 1. This is based on memory phenomena such as recency and forgetting.
  • In some embodiments, the X and Y components of equation (II) may be combined in ways other than summing. The skill score could also be a family of functions or tables, weighting the accuracy and speed components, e.g. user competency measure=accuracy*speed*1000. In one embodiment, historical data is averaged over the last n trials (where n is an integer, typically 4 to 10).
  • The number of display modes may vary in alternative embodiments. Some embodiments may not have display modes per se. For example, a fixed presentation sequence (e.g. P, Q, R) may be used for all question items.
  • In some embodiments, the number of thresholds used in a curriculum advancement algorithm may be less than or greater than three.
  • It should be appreciated that the software for authoring or using an electronic learning curriculum can be deployed in numerous ways. In one embodiment, a “Software as a Service” (SaaS) model is used, in which both workstations 12 and 14 may use a web browser or other application having a GUI for accessing and using software stored in the memory of the curriculum server 16. In this approach, neither of workstations 12 or 14 has any dedicated software installed on the local hard drive per se. Alternatively, a dedicated authoring application may be installed on workstation 12. A curriculum generated by such an application may be stored at server 16 and downloaded to the workstation 14 upon user request. There, the curriculum could be executed by a dedicated local application for presenting the curriculum.
  • In some embodiments, question items can include instructions, facts to be memorized, and non-evaluative material such as demographic data from surveys that have no correct answer. For some question items, there may be no “correct answer”.
  • Other modifications will be apparent to those skilled in the art and, therefore, the invention is defined in the claims.

Claims (17)

What is claimed is:
1. A machine-readable medium storing instructions for presenting a question item of an electronic learning curriculum, said instructions, when executed by a processor of a computing device, causing said computing device to:
(a) retrieve from a data store a question item comprising:
textual, visual or auditory subject matter;
a query or instruction pertaining to said subject matter; and
a representation of a response mechanism for receiving a user response to said query or instruction;
(b) present said subject matter, said query or instruction and said response mechanism in a presentation sequence, wherein a presentation duration of each of said subject matter, said query or instruction and said response mechanism in said presentation sequence is controlled by user input;
(c) based on said user input, determine:
a presentation duration of said subject matter;
a presentation duration of said query or instruction; and
a presentation duration of said response mechanism; and
(d) store indicators of each of said three presentation durations.
2. The machine-readable medium of claim 1 wherein said presentation sequence comprises:
(i) presenting one of said subject matter and said query or instruction; then
(ii) presenting the other of said subject matter and said query or instruction; then
(iii) presenting said response mechanism.
3. The machine-readable medium of claim 2 wherein said instructions further cause said computing device to retrieve from said data store a display mode specific to said question item which determines said presentation sequence for said question item.
4. The machine-readable medium of claim 3 wherein said display mode requires the other of said subject matter and said query or instruction that is presented in (ii) to replace the one of said subject matter and said query or instruction that is presented in (i).
5. The machine-readable medium of claim 4 wherein said display mode precludes repeating (i) after (ii) prior to entry of a user response.
6. The machine-readable medium of claim 4 wherein said instructions further cause said computing device to repeat (i) after (ii) upon user request and to store a measured duration of the repeated (i).
7. The machine-readable medium of claim 3 wherein said display mode requires the response mechanism presented in (iii) to replace the other of said subject matter and said query or instruction presented in (ii).
8. The machine-readable medium of claim 7 wherein said display mode precludes repeating (i) or (ii) after (iii) prior to entry of a user response.
9. The machine-readable medium of claim 7 wherein said instructions further cause said computing device to repeat (i) or (ii) after (iii) upon user request and to store a measured duration of the repeated (i) or (ii).
10. The machine-readable medium of claim 3 wherein said display mode requires the other of said subject matter and said query or instruction that is presented in (ii) to be presented cumulatively with the one of said subject matter and said query or instruction that is presented in (i).
11. The machine-readable medium of claim 10 wherein said determining of the presentation duration of the one of said subject matter and said query or instruction that is presented in (i) comprises either:
measuring a duration of (i) but not (ii); or
measuring a duration of both (i) and (ii).
12. The machine-readable medium of claim 3 wherein said display mode requires the response mechanism presented in (iii) to be presented cumulatively with the other of said subject matter and said query or instruction that is presented in (ii).
13. The machine-readable medium of claim 12 wherein said determining of the presentation duration of the other of said subject matter and said query or instruction that is presented in (ii) comprises either:
measuring a duration of (ii) but not (iii); or
measuring a duration of both (ii) and (iii).
14. The machine-readable medium of claim 1 wherein said instructions further cause said computing device to:
(e) receive a user response to said query or instruction via said response mechanism;
(f) ascertain a response accuracy based on said response and at least one predetermined correct response;
(g) determine a completion time for said question item based on one or more of said presentation duration of said subject matter, said presentation duration of said query or instruction and said presentation duration of said response mechanism; and
(h) store said response accuracy and said completion time for said current trial.
15. A computing device having a processor in communication with memory storing instructions for presenting a question item of an electronic learning curriculum which, when executed by said processor, cause said computing device to:
(a) retrieve from a data store a question item comprising:
textual, visual or auditory subject matter;
a query or instruction pertaining to said subject matter; and
a representation of a response mechanism for receiving a user response to said query or instruction;
(b) present said subject matter, said query or instruction and said response mechanism in a presentation sequence, wherein a presentation duration of each of said subject matter, said query or instruction and said response mechanism in said presentation sequence is controlled by user input;
(c) based on said user input, determine:
a presentation duration of said subject matter;
a presentation duration of said query or instruction; and
a presentation duration of said response mechanism; and
(d) store indicators of each of said three presentation durations.
16. The computing device of claim 15 wherein said presentation sequence comprises:
(i) presenting one of said subject matter and said query or instruction; then
(ii) presenting the other of said subject matter and said query or instruction; then
(iii) presenting said response mechanism.
17. The computing device of claim 15 wherein said instructions further cause said computing device to:
(e) receive a user response to said query or instruction via said response mechanism;
(f) ascertain a response accuracy based on said response and at least one predetermined correct response;
(g) determine a completion time for said question item based on one or more of said presentation duration of said subject matter, said presentation duration of said query or instruction and said presentation duration of said response mechanism; and
(h) store said response accuracy and said completion time for said current trial.
US14/507,471 2008-12-23 2014-10-06 Electronic learning system Abandoned US20150024366A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/507,471 US20150024366A1 (en) 2008-12-23 2014-10-06 Electronic learning system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/342,281 US8506305B2 (en) 2008-12-23 2008-12-23 Electronic learning system
US13/949,734 US8851900B2 (en) 2008-12-23 2013-07-24 Electronic learning system
US14/507,471 US20150024366A1 (en) 2008-12-23 2014-10-06 Electronic learning system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/949,734 Continuation US8851900B2 (en) 2008-12-23 2013-07-24 Electronic learning system

Publications (1)

Publication Number Publication Date
US20150024366A1 true US20150024366A1 (en) 2015-01-22

Family

ID=42266657

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/342,281 Active 2031-07-01 US8506305B2 (en) 2008-12-23 2008-12-23 Electronic learning system
US13/949,734 Active US8851900B2 (en) 2008-12-23 2013-07-24 Electronic learning system
US14/507,471 Abandoned US20150024366A1 (en) 2008-12-23 2014-10-06 Electronic learning system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US12/342,281 Active 2031-07-01 US8506305B2 (en) 2008-12-23 2008-12-23 Electronic learning system
US13/949,734 Active US8851900B2 (en) 2008-12-23 2013-07-24 Electronic learning system

Country Status (3)

Country Link
US (3) US8506305B2 (en)
CA (1) CA2745993C (en)
WO (1) WO2010071979A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019995B1 (en) 2011-03-01 2018-07-10 Alice J. Stiebel Methods and systems for language learning based on a series of pitch patterns
US11062615B1 (en) 2011-03-01 2021-07-13 Intelligibility Training LLC Methods and systems for remote language learning in a pandemic-aware world

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100323336A1 (en) * 2009-06-19 2010-12-23 Alert Life Sciences Computing, S.A. Electronic system for assisting the study and practice of medicine
US20120040325A1 (en) * 2010-08-10 2012-02-16 Perfect800, Inc. Method and system for online learning with customized answer explanations based on student responses
US20120052472A1 (en) * 2010-08-20 2012-03-01 Lafrance Benjamin Lesson Plan Generator
US20120075122A1 (en) * 2010-09-24 2012-03-29 Honeywell International Inc. Alert generation and related aircraft operating methods
US8699941B1 (en) * 2010-10-08 2014-04-15 Amplify Education, Inc. Interactive learning map
JP5987315B2 (en) * 2011-03-18 2016-09-07 株式会社リコー Display control apparatus and program
US10049594B2 (en) * 2011-04-27 2018-08-14 Atlas, Inc. Systems and methods of competency assessment, professional development, and performance optimization
US20120322043A1 (en) * 2011-06-17 2012-12-20 Microsoft Corporation Adaptively-spaced repetition learning system and method
JP5884312B2 (en) * 2011-06-27 2016-03-15 株式会社リコー Information display device and information display system
US20130019157A1 (en) * 2011-07-13 2013-01-17 International Business Machines Corporation Defect form quality indication
US20130036360A1 (en) * 2011-08-01 2013-02-07 Turning Technologies, Llc Wireless audience response device
US20130224718A1 (en) * 2012-02-27 2013-08-29 Psygon, Inc. Methods and systems for providing information content to users
WO2014181190A2 (en) * 2013-03-13 2014-11-13 Mindmarker BV Method and system for creating interactive training and reinforcement programs
US20140335485A1 (en) * 2013-05-09 2014-11-13 Xerox Corporation Methods and systems for training a crowdworker
WO2015106103A1 (en) * 2014-01-11 2015-07-16 Iams Kenneth W Apparatus and method for grading unstructured documents using automated field recognition
US10679512B1 (en) 2015-06-30 2020-06-09 Terry Yang Online test taking and study guide system and method
US10964224B1 (en) * 2016-03-15 2021-03-30 Educational Testing Service Generating scores and feedback for writing assessment and instruction using electronic process logs
US20170345326A1 (en) * 2016-05-27 2017-11-30 Regents Of The University Of Minnesota Process for using test item response times to improve measurement of cognitive ability
US11881123B2 (en) * 2018-06-27 2024-01-23 Gened Corp. System and method for generative assessment item development, encoding and analysis
US11238751B1 (en) * 2019-03-25 2022-02-01 Bubble-In, LLC Systems and methods of testing administration by mobile device application
CN110852925B (en) * 2019-11-20 2023-01-24 中国联合网络通信集团有限公司 Abnormal learning detection method, device, equipment and storage medium
US20210192973A1 (en) * 2019-12-19 2021-06-24 Talaera LLC Systems and methods for generating personalized assignment assets for foreign languages
US20210201701A1 (en) * 2019-12-25 2021-07-01 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for medical diagnosis training
US20220068152A1 (en) * 2020-08-25 2022-03-03 Clarence R. Johnson System, apparatus, method for assessment of an athlete's performance in an athletic sport
US20220375600A1 (en) * 2021-05-19 2022-11-24 Roobrik, Inc. Dynamically updating platform for age-related lifestyle and care decisions with predictive analytics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5616033A (en) * 1994-08-03 1997-04-01 Kerwin; Patrick A. Speed learning system computer based training
US20020115048A1 (en) * 2000-08-04 2002-08-22 Meimer Erwin Karl System and method for teaching
US20030044760A1 (en) * 2001-08-28 2003-03-06 Ibm Corporation Method for improved administering of tests using customized user alerts
US20030049593A1 (en) * 1996-09-25 2003-03-13 Anna Parmer Language-based computer generated instructional material
US6738787B2 (en) * 1999-07-22 2004-05-18 At&T Corp. Method and apparatus for displaying and header scrolling a hierarchical data structure
US20050181348A1 (en) * 2004-02-17 2005-08-18 Carey Tadhg M. E-learning system and method

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US39435A (en) * 1863-08-04 Improvement in rail-couplings for railroads
US3671668A (en) * 1968-11-18 1972-06-20 Leonard Reiffel Teaching system employing a television receiver
EP0760138A4 (en) * 1994-04-26 1998-04-01 Raya Systems Inc Modular microprocessor-based diagnostic measurement system for psychological conditions
US5827071A (en) * 1996-08-26 1998-10-27 Sorensen; Steven Michael Method, computer program product, and system for teaching or reinforcing information without requiring user initiation of a learning sequence
US6986664B1 (en) * 1997-03-03 2006-01-17 Robolaw Corporation Method and apparatus for improving performance on multiple-choice exams
US6022221A (en) * 1997-03-21 2000-02-08 Boon; John F. Method and system for short- to long-term memory bridge
US6112049A (en) * 1997-10-21 2000-08-29 The Riverside Publishing Company Computer network based testing system
US6077085A (en) * 1998-05-19 2000-06-20 Intellectual Reserve, Inc. Technology assisted learning
US6282404B1 (en) * 1999-09-22 2001-08-28 Chet D. Linton Method and system for accessing multimedia data in an interactive format having reporting capabilities
US7063536B1 (en) * 2000-03-14 2006-06-20 Fustolo Steven C System for delivering an educational program to a remote site and for monitoring participation therein
US6551109B1 (en) * 2000-09-13 2003-04-22 Tom R. Rudmik Computerized method of and system for learning
WO2002027693A2 (en) * 2000-09-28 2002-04-04 Scientific Learning Corporation Method and apparatus for automated training of language learning skills
US6461166B1 (en) 2000-10-17 2002-10-08 Dennis Ray Berman Learning system with learner-constructed response based testing methodology
EP1362337A1 (en) * 2001-01-09 2003-11-19 Prep4 Ltd Training system and method for improving user knowledge and skills
US6978115B2 (en) * 2001-03-29 2005-12-20 Pointecast Corporation Method and system for training in an adaptive manner
SE0101683D0 (en) * 2001-05-14 2001-05-14 Torkel Klingberg New method
US7052277B2 (en) * 2001-12-14 2006-05-30 Kellman A.C.T. Services, Inc. System and method for adaptive learning
US7056124B1 (en) * 2002-08-20 2006-06-06 Ctb/Mcgraw-Hill Method and system for creating, administering and automating scoring of dimensional modeling constructed response items
US7059860B2 (en) * 2003-05-29 2006-06-13 Book And Brain Consulting, Inc. Method and tools for teaching reading for test-taking
US20050221265A1 (en) 2004-04-01 2005-10-06 Tashiro Jay S Method to assess a person's knowledge of a subject area
US20060003296A1 (en) * 2004-06-21 2006-01-05 David Dockterman System and method for assessing mathematical fluency
US7318052B2 (en) * 2004-10-15 2008-01-08 Sap Ag Knowledge transfer evaluation
US20060110718A1 (en) * 2004-11-23 2006-05-25 Lee Yong T System and method for automatically administering a test, analysing test results and formulating study strategies in response thereto
US7335028B2 (en) * 2004-12-08 2008-02-26 Charles Sun System and method for creating an individualized exam practice question set
JP4563247B2 (en) * 2005-04-28 2010-10-13 富士通株式会社 Learning support system and learning support program
US20090263776A1 (en) 2005-11-01 2009-10-22 Scate Protocol Llc Course Development Program
US20070248938A1 (en) * 2006-01-27 2007-10-25 Rocketreader Pty Ltd Method for teaching reading using systematic and adaptive word recognition training and system for realizing this method.
US20080003558A1 (en) * 2006-06-09 2008-01-03 Posit Science Corporation Cognitive Training Using Multiple Stimulus Streams With Response Inhibition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5616033A (en) * 1994-08-03 1997-04-01 Kerwin; Patrick A. Speed learning system computer based training
US20030049593A1 (en) * 1996-09-25 2003-03-13 Anna Parmer Language-based computer generated instructional material
US6738787B2 (en) * 1999-07-22 2004-05-18 At&T Corp. Method and apparatus for displaying and header scrolling a hierarchical data structure
US20020115048A1 (en) * 2000-08-04 2002-08-22 Meimer Erwin Karl System and method for teaching
US20030044760A1 (en) * 2001-08-28 2003-03-06 Ibm Corporation Method for improved administering of tests using customized user alerts
US20050181348A1 (en) * 2004-02-17 2005-08-18 Carey Tadhg M. E-learning system and method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019995B1 (en) 2011-03-01 2018-07-10 Alice J. Stiebel Methods and systems for language learning based on a series of pitch patterns
US10565997B1 (en) 2011-03-01 2020-02-18 Alice J. Stiebel Methods and systems for teaching a hebrew bible trope lesson
US11062615B1 (en) 2011-03-01 2021-07-13 Intelligibility Training LLC Methods and systems for remote language learning in a pandemic-aware world
US11380334B1 (en) 2011-03-01 2022-07-05 Intelligible English LLC Methods and systems for interactive online language learning in a pandemic-aware world

Also Published As

Publication number Publication date
US20100159433A1 (en) 2010-06-24
CA2745993A1 (en) 2010-07-01
US20130302772A1 (en) 2013-11-14
US8851900B2 (en) 2014-10-07
US8506305B2 (en) 2013-08-13
WO2010071979A1 (en) 2010-07-01
CA2745993C (en) 2021-11-23

Similar Documents

Publication Publication Date Title
US8851900B2 (en) Electronic learning system
Chew et al. The cognitive challenges of effective teaching
US10373279B2 (en) Dynamic knowledge level adaptation of e-learning datagraph structures
Cayton-Hodges et al. Tablet-based math assessment: What can we learn from math apps?
Walker et al. Adaptive intelligent support to improve peer tutoring in algebra
US20060246411A1 (en) Learning apparatus and method
US20100005413A1 (en) User Interface for Individualized Education
US20090186328A1 (en) Method for teaching rapid recall of facts
US20090325140A1 (en) Method and system to adapt computer-based instruction based on heuristics
US20080286737A1 (en) Adaptive Engine Logic Used in Training Academic Proficiency
US20130224697A1 (en) Systems and methods for generating diagnostic assessments
US10403163B2 (en) Method and system for providing collaborative learning
US20140193795A1 (en) Dynamic generation of electronic educational courses
WO2019163907A1 (en) Learning schedule generation device, method and program
US20080248453A1 (en) Method of Delivering a Test to a Candidate
US20170243502A1 (en) Adaptive learning system using automatically-rated problems and pupils
de Kock et al. Can teachers in primary education implement a metacognitive computer programme for word problem solving in their mathematics classes?
US20140272906A1 (en) Mastery-based online learning system
Kealy et al. Assessment certitude as a feedback strategy for learners' constructed responses
US11887506B2 (en) Using a glicko-based algorithm to measure in-course learning
US20160111013A1 (en) Learning content management methods for generating optimal test content
Isibika et al. Microlearning training intervention for librarians: design and development
WO2010002395A1 (en) Method and system to adapt computer-based instruction based on heuristics
Samson et al. Relationships Between Digital Measures of Student Engagement and Exam Scores: Is the LMS Enough
US20130017529A1 (en) Method and apparatus for generating educational content

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION