
Definition

For OGHR, the platform needs to enable users to undertake exams. It should be capable of processing user responses and evaluating them against the applicable scores. The outcome of this design is to let platform users take examinations with server-side evaluation of results.

Background

In the present system, a Question Set is processed as follows:

Entities

Assessment User : An active seeker of assessment on the platform. Any collection in the Sunbird ecosystem needs to be trackable in order to track the user's progress. The user's state with the content is updated using the Content State Update API, and all of the information in the Content State Read API is retrieved against a collection and a user.

QuML Player : The QuML Player has the capability to play questions in QuML format. It can also locally validate user input; response validation and score computation are handled entirely in the player. Once the user submits the overall response, the client-validated scores and responses are updated using the Content State Update API.
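As described above, today the player validates responses locally against the question's response declaration. A minimal sketch of that client-side check for a single-cardinality MCQ (the function name is illustrative, not the actual QuML Player code; the field names follow the responseDeclaration structure shown later on this page):

```python
# Sketch of client-side response validation as described for the
# QuML Player. validate_response is a hypothetical helper; the
# dictionary layout mirrors the QuML responseDeclaration format.

def validate_response(response_declaration: dict, user_value: str) -> int:
    """Return the SCORE outcome if the user's value matches the
    declared correctResponse, otherwise 0."""
    decl = response_declaration["response1"]
    correct = decl["correctResponse"]
    if user_value == correct["value"]:
        return correct["outcomes"]["SCORE"]
    return 0

decl = {
    "response1": {
        "maxScore": 1,
        "cardinality": "single",
        "type": "integer",
        "correctResponse": {"value": "0", "outcomes": {"SCORE": 1}},
    }
}

print(validate_response(decl, "0"))  # 1 (correct answer)
print(validate_response(decl, "2"))  # 0 (wrong answer)
```

Because the correct response ships to the client in this model, anyone inspecting the payload can read the answer, which is the motivation for the server-side design below.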

Flink Jobs : Flink jobs aggregate the content state using the Collection Activity Aggregator, Collection Assessment Aggregator, Cert Pre-processor, and Cert Generator jobs.

Question Types:

a) MCQ (Multiple Choice Questions)

b) MSQ (Multiple Select Questions)

c) MTF (Match The Following)

d) FTB (Fill in The Blanks)

Excluded Types

a) VSA - (Very Short Answer)

b) SA - (Short Answer)

c) LA - (Long Answer)

Building Blocks :

| Building Block  | API                                                           | Flink Jobs                                                                                          |
|-----------------|---------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| Sunbird InQuiry | Question Set Hierarchy API, Question List Read API            |                                                                                                     |
| Sunbird Lern    | Content State Read, Content State Update, Enrollment List API | Collection Activity Aggregate, Collection Assessment Aggregate, Cert Pre-processor, Cert Generator  |
| Sunbird RC      | Cert Registry Download API                                    |                                                                                                     |

Design Prerequisites :

  • Allow response processing of questions to be done on the server rather than on the client, as happens today.

  • Provide a scalable response-processing solution for Question Sets.

  • Calculate scores based on Content State Update.

  • The solution (correct answer) to a question needs to be masked or excluded from the Question Read API / Question List API.

  • Response processing can happen in two ways:

    • Entire Question Set response processing

    • Question-by-question response processing
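The two processing modes listed above differ only in batching: per-question processing scores one answer per call, while entire-set processing scores all answers in one call. A hedged sketch of the distinction, with hypothetical function names (the page does not define the server's actual scoring interface):

```python
# Sketch of the two server-side response-processing modes.
# evaluate_question stands in for whatever per-question scorer the
# server would implement; it is an assumption, not an existing API.

def evaluate_question(declaration: dict, user_value: str) -> int:
    """Question-by-question processing: score a single response."""
    correct = declaration["correctResponse"]
    return correct["outcomes"]["SCORE"] if user_value == correct["value"] else 0

def evaluate_question_set(declarations: dict, responses: dict) -> int:
    """Entire Question Set processing: score all responses in one call."""
    return sum(
        evaluate_question(declarations[qid], responses.get(qid, ""))
        for qid in declarations
    )

declarations = {
    "q1": {"correctResponse": {"value": "0", "outcomes": {"SCORE": 1}}},
    "q2": {"correctResponse": {"value": "3", "outcomes": {"SCORE": 1}}},
}
responses = {"q1": "0", "q2": "2"}

print(evaluate_question(declarations["q1"], responses["q1"]))  # 1
print(evaluate_question_set(declarations, responses))          # 1 (q1 right, q2 wrong)
```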

Technical Design Details:

QuestionSet Hierarchy API : A new attribute ("evaluable") is introduced in the QuestionSet to declare that responses are evaluated on the server.

"questionSet": {
  "timeLimits": "{\"maxTime\":\"3600\"}",
  "evaluable": true //#true for Server Side Valuation Default:#false for client side validation
}
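A consumer of the hierarchy (such as the player) would branch on this flag to decide where evaluation happens. A minimal sketch, assuming only the attribute semantics stated above (the helper name is hypothetical):

```python
# Sketch of branching on the new "evaluable" attribute.
# evaluation_mode is an illustrative helper, not an existing API.

def evaluation_mode(question_set: dict) -> str:
    """Per the attribute above: evaluable=true means server-side
    evaluation; a missing or false flag (the default) means the
    existing client-side validation."""
    return "server" if question_set.get("evaluable", False) else "client"

print(evaluation_mode({"evaluable": True}))  # server
print(evaluation_mode({}))                   # client (default)
```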

Question Read API : For any question associated with an evaluable Question Set, the correct response is trimmed off from the responseDeclaration. Instead, a responseKey is shared within the responseDeclaration.

"evaluable": true,
"responseDeclaration": {
          "response1": {
            "maxScore": 1,
            "cardinality": "single",
            "type": "integer",
            -- To be Trimmed off ----
            "correctResponse": {
              "value": "0",
              "outcomes": {
                "SCORE": 1
              }
            },
            -- To be Trimmed off --
            #Newly Introduced Attribute
            "responseKey": "#Computed Hash Value of the result"
          }
        },
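The trimming described above can be sketched as a small transform: drop correctResponse from each response entry and attach a responseKey. The page only says the key is a "Computed Hash Value of the result", so the choice of SHA-256 over the canonical JSON of the correct response is an assumption for illustration:

```python
# Sketch of trimming correctResponse from a responseDeclaration for
# evaluable questions. The hash algorithm (SHA-256 over sorted-key
# JSON) is an assumption; the design only specifies "a computed hash".
import copy
import hashlib
import json

def trim_response_declaration(declaration: dict) -> dict:
    """Return a copy with correctResponse removed and a responseKey
    added, as the Question Read API is described to do."""
    trimmed = copy.deepcopy(declaration)
    for entry in trimmed.values():
        correct = entry.pop("correctResponse", None)
        if correct is not None:
            canonical = json.dumps(correct, sort_keys=True)
            entry["responseKey"] = hashlib.sha256(canonical.encode()).hexdigest()
    return trimmed

decl = {
    "response1": {
        "maxScore": 1,
        "cardinality": "single",
        "type": "integer",
        "correctResponse": {"value": "0", "outcomes": {"SCORE": 1}},
    }
}
out = trim_response_declaration(decl)
print("correctResponse" in out["response1"])  # False
print(len(out["response1"]["responseKey"]))   # 64 (hex SHA-256)
```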

Question Creation Impact:
It is possible to permanently mark a question as "evaluable" in nature. Any evaluable question automatically qualifies for an evaluable Question Set, and such questions can be part of an evaluable Question Set only. Question Read of evaluable questions should exhibit the behaviour described above.

Response Evaluation:

Response Processing Steps:
