Definition

For OGHR, the Platform needs to enable users to undertake exams. Question set evaluation currently happens on the client side, which implies that the correct answers are available to the client. We need to introduce the capability to process user responses and to evaluate and compute scores on the server side, so that the correct answers do not have to be sent to the client.

...

Background

In the present system, a Question Set consists of the following:

...

QuML Player : The QuML Player has the capability to play questions in QuML format. It also has the capability to evaluate responses and compute scores. Response validation and score computation are currently handled entirely in the player. Once the user submits the overall response, the client-validated scores and responses are sent to the backend as ASSESS events using the Content State Update API.

...

Building Blocks

API

Flink Jobs

Sunbird InQuiry

Question Set Hierarchy API

Question List Read API

QuestionSet Create API

Question Create API

Sunbird Lern

Content State Read

Collection Activity Aggregate

Content State Update

Collection Assessment Aggregate

Enrollment List API

Cert Pre Processor

Cert Generator

Sunbird RC

Cert Registry Download API

...

...

Solution of the Problem:

  • Allow the assessment evaluation to be done on the server as well, for question sets that are chosen to be server evaluable.

  • Answers to the questions need to be excluded from the Question Read API / Question List API responses for questions marked for server-side assessment evaluation.
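As an illustration, the exclusion could be a simple field-strip on the question body before the read/list APIs return it. This is only a sketch: the field names ("answer", "responseDeclaration", "solutions") are assumptions, not the actual QuML schema.

```python
# Sketch: strip answer-bearing fields from a question body before sending it
# to the client. Field names are illustrative; the real QuML schema may differ.

ANSWER_FIELDS = {"answer", "responseDeclaration", "solutions"}

def strip_answers(question: dict) -> dict:
    """Return a copy of the question with answer-bearing fields removed."""
    return {k: v for k, v in question.items() if k not in ANSWER_FIELDS}

question = {
    "identifier": "do_123",
    "body": "<p>What is 2 + 2?</p>",
    "answer": "4",
    "responseDeclaration": {"response1": {"correctResponse": {"value": 0}}},
}
safe = strip_answers(question)
```

Stripping at the API boundary (rather than at storage) keeps the full question intact for the server-side evaluator while never exposing the answer over the wire.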

...

Current Workflow

...

Solution Proposed

...

A new attribute in QuestionSet is to be introduced to mark it for server-side evaluation. The editor needs to allow setting this flag on a questionSet/question, and the creation APIs would be updated to support this attribute. Any question marked as "eval": { "mode": "server" } can only be part of a questionSet that is also marked "eval": { "mode": "server" }. Conversely, a question set marked "eval": { "mode": "server" } should only contain questions with "eval": { "mode": "server" } ("evaluate at server").

Code Block
languagejson
// Question Set Object
"questionSet": {
  "eval": { "mode": "server" } // "server" = server-side evaluation; default is client-side
}
// Question Object
"question": {
  "eval": { "mode": "server" }
}
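The containment rule above (a server-evaluable question set may only contain server-evaluable questions, and vice versa) could be enforced at create/update time with a check along these lines. This is a sketch under assumed object shapes, not the actual InQuiry creation-API code:

```python
def eval_mode(node: dict) -> str:
    # Default to client-side evaluation when the eval attribute is absent.
    return node.get("eval", {}).get("mode", "client")

def validate_eval_consistency(question_set: dict, questions: list) -> None:
    """Reject any question whose eval mode differs from its question set's."""
    qs_mode = eval_mode(question_set)
    for q in questions:
        if eval_mode(q) != qs_mode:
            raise ValueError(
                f"question {q.get('identifier')} has eval mode "
                f"{eval_mode(q)!r}, expected {qs_mode!r}"
            )

# A consistent server-evaluable set passes silently:
validate_eval_consistency(
    {"eval": {"mode": "server"}},
    [{"identifier": "q1", "eval": {"mode": "server"}}],
)
```

Running the check on every create and update keeps mixed-mode hierarchies from ever being published.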

...

There are two modes of accessing a questionSet after this proposed change:
a) eval mode: client (default; applies when the eval attribute does not exist or mode = client)

In this mode, the player uses the new POST questionSetHierarchy API to fetch the hierarchy, the existing question list API to fetch question bodies, and the existing content state update API to submit the ASSESS events. A JWT questionSetToken needs to be passed to content state update, which helps content state update determine whether the request is in client or server mode. Calculation of the assessment score remains on the client side.

b) eval mode: server

Info

Content Compatibility needs to be set to a higher value so that this questionSet is not discoverable on older clients.

We are proposing that, in this mode, the player uses the new POST questionSetHierarchy API to fetch the hierarchy, the existing question list API, and the existing Content State Update API without passing "score" & "pass". Content State Update will fetch "score" & "pass" using the new Inquiry Assessment API introduced as part of this feature.

...

The current questionSetHierarchy API is a GET call and does not take arguments. We introduce a new POST method for the QuestionSet Hierarchy API that accepts a request body. This API will have the following payload:

Code Block
languagejson
{
    "request": {
        "questionset": {
            "contentID": "",
            "collectionID": "",
            "userID": "",
            "attemptID": "",
            "evalMode": "server/client"
        }
    }
}

This API would handle shuffling of options, selection of a subset of questions, and randomisation (currently done by the player) as indicated by the metadata in the questionSet. The API will also return a "QuestionSetToken": a signed token containing the user-id, content-id, collection-id, attempt-id + selected question-id list received as part of the hierarchy payload, and the eval mode. This token will then be passed to Content State Update and on to the new submitAssessment API from Content State Update. The "QuestionSetToken" will be validated by the submitAssessment API call.
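The subset selection and shuffling moved server-side could look roughly like the sketch below. One assumption worth flagging: seeding the RNG with the attempt-id makes the selection reproducible for a given attempt, so a hierarchy re-read returns the same questions; the real API might instead persist the selection alongside the token.

```python
import random

def select_questions(question_ids, max_questions, shuffle, attempt_id):
    """Pick a subset of questions for one attempt.

    Seeded by attempt_id so the same attempt always gets the same selection
    (an assumption about the design, not confirmed by the spec).
    """
    rng = random.Random(attempt_id)
    selected = rng.sample(question_ids, min(max_questions, len(question_ids)))
    if not shuffle:
        # Preserve the authored order when shuffling is disabled.
        selected.sort(key=question_ids.index)
    return selected

ids = ["q1", "q2", "q3", "q4", "q5"]
picked = select_questions(ids, 3, shuffle=True, attempt_id="attempt-001")
```

Because the server now decides which questions were served, the selected question-id list can be embedded in the QuestionSetToken and later checked by submitAssessment.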

Code Block
languagejson
"questionSet": {
  "timeLimits": "{\"maxTime\":\"3600\"}",
  "questionSetToken": "", // generated at hierarchy read API from "Question Set ID + userID"
  "eval": { "mode": "server" } // "server" = server-side evaluation; default is client-side
}
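A minimal sketch of issuing and validating such a signed token, using only an HMAC over the claims (the production service would presumably use a proper JWT library; the secret handling and claim names here are assumptions):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"replace-with-server-secret"  # assumption: server-held HMAC key

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(claims: dict) -> str:
    """Sign the claims so the client cannot alter them."""
    payload = _b64(json.dumps(claims, sort_keys=True).encode())
    sig = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify_token(token: str) -> dict:
    """Recompute the signature; reject the token if it does not match."""
    payload, sig = token.rsplit(".", 1)
    expected = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid QuestionSetToken signature")
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))

claims = {
    "userId": "u1", "contentId": "do_qs1", "collectionId": "do_c1",
    "attemptId": "a1", "questionIds": ["q1", "q2"], "evalMode": "server",
}
token = issue_token(claims)
```

Since the token is opaque to the client, Content State Update and submitAssessment can trust the eval mode and the served question-id list without re-reading the hierarchy.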

...