Summary

The Assessment Engine allows LearningBuilder to administer online, low-stakes assessments, typically used for Practice Exams and other supporting evaluations.

Page Properties

  • Target release: 
  • Epic: 
  • Document status: DRAFT
  • Document owner: 
  • Designer: 
  • Developers: 
  • QA: 

Goals

  • Promote learning by delivering formative assessments

  • Increase engagement by adding interaction to learning activities

  • Benchmark performance before and after learning activities

Capability Summary

  • Support multi-section Assessments with multiple navigation modes ranging from "free range" navigation to tightly controlled, one-question-at-a-time, forward-only progression.

  • "Reflective Feedback" allows answer-level feedback to be presented to the user, which can be used as an educational aid.
    Question Item weighting allows different questions, and different answers within each question, to deliver variable points towards the overall score.

  • User can save progress and return later, picking up where they left off.

  • Supports an optional, per-question time limit.

  • Responsive UI.
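
The weighting and cut-score behavior can be made concrete with a short sketch. This is a minimal illustration assuming a simple additive model; the interfaces and function names (AnswerOption, scoreAssessment, etc.) are hypothetical, not LearningBuilder's actual API.

```typescript
// Illustrative only: names and shapes are assumptions, not LearningBuilder's API.
interface AnswerOption {
  id: string;
  points: number; // answer-level weight; a simple "correct" answer is just points > 0
}

interface Question {
  id: string;
  weight: number; // question-level weight applied to the earned answer points
  options: AnswerOption[];
}

interface Response {
  questionId: string;
  selectedOptionId: string;
}

// Sum weighted points for the selected answers and compare against the cut score.
function scoreAssessment(
  questions: Question[],
  responses: Response[],
  cutScore: number // minimum weighted score required for a passing result
): { score: number; maxScore: number; passed: boolean } {
  let score = 0;
  let maxScore = 0;

  for (const question of questions) {
    const bestPoints = Math.max(...question.options.map(o => o.points));
    maxScore += bestPoints * question.weight;

    const response = responses.find(r => r.questionId === question.id);
    const selected = question.options.find(o => o.id === response?.selectedOptionId);
    score += (selected?.points ?? 0) * question.weight;
  }

  return { score, maxScore, passed: score >= cutScore };
}
```

Whether the cut score is expressed as raw points or as a percentage of the maximum score is a configuration detail; the sketch assumes raw points.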

Background and strategic fit

LearningBuilder is an assessment platform. One of its modes of assessment is administering online, low-stakes assessments. These assessments are typically used for Practice Exams and to validate learning in situations where organizations curate learning activities but wish to promote and/or measure engagement with the learning materials. Adoption has been slow, but a handful of customers use the assessment engine (ABO and ABOM). Other customers have inquired about it, and it comes up frequently with prospects (e.g., NBCRNA). (May 2018)

Assumptions

...

 

Limitations

  • Although measures have been taken to protect data integrity and to prevent tampering, Assessments are not designed to administer highly secure, high-stakes tests.

    We want to make it annoying to hack the assessments, but not to dedicate significant resources to preventing cheating (one illustrative approach is sketched below).
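
The specific integrity measures are not described in this document. As one hypothetical illustration of a lightweight tamper deterrent, in the spirit of "annoying, not bulletproof," saved progress could be signed server-side so that client-side edits are detectable. This is an assumption for illustration, not a description of LearningBuilder's implementation; all names are hypothetical.

```typescript
// Hypothetical tamper deterrent; not LearningBuilder's actual mechanism.
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.ASSESSMENT_SIGNING_KEY ?? "dev-only-secret"; // assumed server-side key

// Sign a serialized progress payload before handing it to the client.
function signProgress(payload: string): string {
  const signature = createHmac("sha256", SECRET).update(payload).digest("hex");
  return `${payload}.${signature}`;
}

// Verify the signature when the payload comes back; reject anything that was edited.
function verifyProgress(signed: string): string | null {
  const index = signed.lastIndexOf(".");
  if (index < 0) return null;
  const payload = signed.slice(0, index);
  const signature = Buffer.from(signed.slice(index + 1), "hex");
  const expected = createHmac("sha256", SECRET).update(payload).digest();
  if (signature.length !== expected.length || !timingSafeEqual(signature, expected)) {
    return null;
  }
  return payload;
}
```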

Requirements

...


Sample Use Cases

  • Practice Exams

  • Journal Article Review, as part of ongoing Continuing Education

  • Surveys

User Stories

Title

User Story

Importance

Implemented?

Notes

1

Create an Assessment

As an Administrator, I create an Assessment so that I can evaluate a person's knowledge.

Existing 

(tick) Yes

2

Create a Section

As an Administrator, I create a Section that groups a set of questions around common content.

Existing 

(tick) Yes

3

Create a Question

As an Administrator, I create a Question so that I can measure a specific component of knowledge.

Existing 

(tick) Yes

4

Re-Order Questions

As an Administrator, I re-order questions so that they are presented in a defined sequence.

Existing 

(tick) Yes

5

Create a Response

As an Administrator, I create a Response so that a person can answer a question.

Existing. Probably move to Question page.

(tick) Yes

6

Specify a Cut score

As an Administrator, I create a cut score to establish the score necessary for a passing result.

Existing 

(tick) Yes

7

Specify a Competency Model

As an Administrator, I assign a Competency Model to the Assessment to govern the scope of Competencies available to assign to Questions.

Existing 

(tick) Yes

8

Define Result

As an Administrator, I manage a Results page so that the results can be tailored to the kind of assessment being taken.

Existing 

 (tick) Yes

9

Publish an Assessment

As an Administrator, I publish an Assessment so that it is available to be completed.

Existing 

(tick) Yes

10

Un-publish an Assessment

As an Administrator, I un-publish an Assessment so that it is not available to be completed while changes are made.

Existing 

(tick) Yes

11

Automated Scoring

As an Administrator, by indicating the "correct" answer for each question (or by assigning each answer a value and specifying a Cut Score for the Assessment as a whole), I can create an Assessment that is immediately and automatically scored so that the user receives immediate realtime feedback about their performance.

Existing 

(tick) Yes

12

Manual Scoring

As an Administrator, I indicate that certain questions (such as essays or other complex interaction types) require human evaluation before a score can be assigned. 

Not implemented yet. This is a requirement for the NBCRNA.

(minus) Not yet

Designed, but not implemented

13

Manage Forms

As an Administrator, I create multiple Forms within a single Assessment so that I can administer the Assessment multiple times without duplicating the Assessment Experience.

Must Have

(tick) Yes

14

Specify that the Assessment has a Time Limit

As an Administrator, I specify the amount of time in minutes a person is allowed to complete the Assessment so that the Assessment is not open indefinitely.

Must Have. This is a requirement for both NBCRNA and ABO.

(warning) Partially

Currently, only a per-question time limit is supported.

15

Track assessment-level timing metrics

The System records the amount of time it takes a person to complete the Assessment and to answer each Question.

Must Have. NBCRNA has listed this requirement as Optional.

(tick) Yes

Tracked, but not displayed anywhere in the UI (see the timing sketch after this table).

16

Specify that the Assessment will use Confidence Ratings

As an Administrator, I specify that questions will include a Confidence Rating to understand how confident the Test Taker is in the selected answer during an Assessment.

High

(minus) No

Can be modeled using custom Questions, but no way to present those questions only a percentage of the time

17

Specify the Confidence Rating Scale

As an Administrator, I specify the Rating Scale used for the Assessment so that my ratings can use language and specificity I am comfortable with.

High. This could probably be a System Administrator setting, but I expect it to work well as a Custom List; it is probably common to the system as a whole rather than specific to an Assessment. NBCRNA has listed this requirement as Optional.

(warning) Partially

Scales are managed per-question. There is no facility for centrally defining a scale and then reusing it.

18

Specify that the Assessment will use Relevance Ratings

As an Administrator, I specify that questions will include a Relevance Rating to understand how relevant the Test Taker believes the content is to his or her practice.

High

(minus) No

Can be modeled using custom Questions, but no way to present those questions only a percentage of the time

19

Specify the Relevance Rating Scale

As an Administrator, I specify the Rating Scale used for the Assessment so that my ratings can use language and specificity I am comfortable with.

High

(warning) Partially

Same as Confidence Rating.

20

Answer Confidence Rating

As a Test Taker, when I answer a Question and Confidence Ratings are turned on, I will answer my Confidence Rating so that I can share my level of confidence in my answer with the Board.

High. NBCRNA has listed this requirement as Optional. It is also possible that we could satisfy this requirement by adding a question to the Assessment specifically as a confidence rating for the previous question, but I fear that would get complicated, especially with respect to using Forms.

(minus) No

21

Answer Relevance Rating

As a Test Taker, when I answer a Question and Relevance Ratings are turned on, I will answer my Relevance Rating so that I can share my belief in the relevance of the Question to my practice.

High. NBCRNA lists this as a Requirement.

(minus) No

Same as Confidence.

22

View Time to Complete

As a Test Taker or an Administrator, I can see how long it took to complete a sitting of an Assessment.

Must Have

(warning) Partially

Timing data are tracked, but not displayed. (Can be exposed using custom reports)

23

View Time on each Question

As an Administrator, I can see how long each test taker took to complete each Question.

Must Have. NBCRNA lists this as a requirement. I'm not sure where this would go.

(warning) Partially

Timing data are tracked, but not displayed. (Can be exposed using custom reports)

24

Record Item Response Strings

As the System, I record the responses to each Question in the QTI standard for the Item type so that the results can be shared with any system that uses the QTI standard.

Must Have. NBCRNA does not reference the QTI standard, but it does reference "item response strings," for which there is a standard, even if we don't follow it.

(minus) Not yet

A QTI export could be constructed using custom reports or the Integration Hub, but no native QTI representation exists (a simplified response-string sketch follows this table).

25

Record Post-Test Surveys

As a Test Taker, I can complete a Post-Test Survey so that I can provide feedback to the Board regarding the Assessment Experience.

Must Have. We may be able to do this using Workflow.

(warning) Partially

This can be done using unscored questions or Workflow Questions instead of needing a first-class feature.
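
Stories 15, 22, and 23 refer to timing data that is tracked but not displayed. Below is a minimal sketch of how per-question and per-sitting timing might be captured and aggregated; the types and functions are hypothetical assumptions, not LearningBuilder's actual data model.

```typescript
// Illustrative timing capture; names and shapes are assumptions, not LearningBuilder's schema.
interface QuestionTiming {
  questionId: string;
  startedAt: number;   // epoch milliseconds when the question was shown
  answeredAt: number;  // epoch milliseconds when the answer was submitted
}

// Seconds spent on a single question.
function secondsOnQuestion(t: QuestionTiming): number {
  return Math.max(0, (t.answeredAt - t.startedAt) / 1000);
}

// Total seconds for a sitting, summed from the per-question records
// (a real implementation might instead use the sitting's own start/end timestamps).
function secondsToComplete(timings: QuestionTiming[]): number {
  return timings.reduce((total, t) => total + secondsOnQuestion(t), 0);
}
```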
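
Story 24 asks for item response strings recorded per the QTI standard. The sketch below uses a deliberately simplified, ad hoc delimited format just to illustrate serializing responses per item; a real export would need to follow the IMS QTI results specification, which this sketch does not attempt to reproduce.

```typescript
// Rough illustration of an "item response string"; this is NOT the QTI format.
// A real export would follow the IMS QTI results specification.
interface CapturedResponse {
  questionId: string;
  selectedOptionIds: string[]; // supports single- and multi-select items
}

// e.g. "Q1=A;Q2=B,D;Q3=" (unanswered items serialize with an empty value)
function toResponseString(responses: CapturedResponse[]): string {
  return responses
    .map(r => `${r.questionId}=${r.selectedOptionIds.join(",")}`)
    .join(";");
}
```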

User interaction and design

Questions

Below is a list of questions to be addressed as a result of this requirements document:

...
