...
For OGHR, the platform needs to enable users to undertake exams. Question set evaluation is currently done on the client side, which implies that the correct answer is available to the client. We need to introduce the capability to process user responses on the server and evaluate them with the applicable scores. The outcome of this design should enable platform users to take up an examination with server-side evaluation: responses are evaluated and scores are computed on the server side so that the correct answer does not have to be sent to the client.
...
Background
In the present system, a Question Set consists of the following:
...
QuML Player : The QuML Player has the capability to play questions in QuML format. It also has the capability to locally validate the user input and evaluate and compute the score. Response validation and score computation are completely handled in the player as of now. Once the user submits the overall response, the client-validated scores and responses are sent to the backend as ASSESS events using the Content State Update API.
Flink Jobs : Flink jobs aggregate the content state using the Collection Activity Aggregator, Collection Assessment Aggregator, Cert Pre-processor, and Cert Generator jobs.
Question Types
Objective Types:
a) MCQ (Multiple Choice Questions)
b) MSQ (Multiple Select Questions)
c) MTF (Match The Following)
d) FTB (Fill in The Blanks)
Subjective Types:
a) VSA - (Very Short Answer)
b) SA - (Short Answer)
c) LA - (Long Answer)
Building Blocks :
Building Blocks | API | Flink Jobs |
---|---|---|
Sunbird InQuiry | Question Set Hierarchy API, Question List Read API, QuestionSet Create API, Question Create API | |
Sunbird Lern | Content State Read, Content State Update, Enrollment List API | Collection Activity Aggregate, Collection Assessment Aggregate, Cert Pre-Processor, Cert Generator |
Sunbird RC | Cert Registry Download API | |
...
Design Problems :
Allow question assessment evaluation to be done on the server rather than on the client, as happens today.
Provide a scalable server-side response processing solution for Question Sets.
Shift score calculation based on Content State Update to server-side evaluation.
Mask the solution (correct answer) of questions belonging to question sets that are marked for server-side evaluation.
Exclude the solution from the Question Read API / Question List API for server-side assessment evaluation.
Trigger Content State Update from the server-side response processing API.
Response Processing can happen in two ways:
Entire Question Set Response Processing (Current solution scope)
Question by Question Response Processing
...
Current Workflow
...
Solution Proposed
...
Technical Design Details
Step 1: Creation-side Enhancements:
Introduce a “serverEvaluable“ attribute on the Question and QuestionSet objects.
A new attribute (“serverEvaluable“) is to be introduced in QuestionSet to have responses evaluated on the server.
...
mark it for server-side evaluation. The editor needs to allow setting this flag on a questionSet/question. The creation APIs would be updated to support this attribute. Any question marked serverEvaluable:true can only be part of a questionSet that is also marked serverEvaluable:true; conversely, a questionSet marked serverEvaluable:true should contain only questions with serverEvaluable:true.
Code Block |
---|
//Question Set Object
"questionSet": {
  "serverEvaluable": true // true for server-side evaluation; default false for client-side evaluation
}
//Question Object
"question": {
  "serverEvaluable": true
} |
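The consistency rule above (serverEvaluable questions may only live inside serverEvaluable questionSets, and vice versa) could be enforced at creation time roughly as follows. This is a minimal Python sketch; the function name and validation flow are assumptions, not the actual creation-API implementation.

```python
def validate_server_evaluable(question_set: dict, questions: list) -> bool:
    """A serverEvaluable questionSet may contain only serverEvaluable
    questions, and a serverEvaluable question may only belong to a
    serverEvaluable questionSet."""
    set_flag = question_set.get("serverEvaluable", False)  # default: client-side
    for q in questions:
        q_flag = q.get("serverEvaluable", False)
        if q_flag and not set_flag:
            return False  # serverEvaluable question inside a non-evaluable set
        if set_flag and not q_flag:
            return False  # evaluable set must not contain client-evaluated questions
    return True

# Example usage
qs = {"serverEvaluable": True}
print(validate_server_evaluable(qs, [{"serverEvaluable": True}]))   # True
print(validate_server_evaluable(qs, [{"serverEvaluable": False}]))  # False
```

The creation APIs could run this check both when a question is added to a questionSet and when the serverEvaluable flag on either object is updated.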
Question Create API:
It is possible to permanently mark a question as “serverEvaluable“. Any serverEvaluable question automatically qualifies for a serverEvaluable questionSet, and such questions can be part of an evaluable QuestionSet only.
Info |
---|
KeySet (Futuristic): Any Responses on Question Set can be persisted separately to ensure the keyset can be used for administrative purposes. |
There are two modes of accessing questionSet as per design:
a) non-evaluable mode : Player invokes the questionSetHierarchy GET API.
b) evaluable mode : Player would invoke the questionSetHierarchy POST API (new API). This API would also be used to manage requirements like question randomization, question token generation, etc.
Step 2: Consumption-side Enhancements
Introduce a new POST method for the QuestionSet Hierarchy API and enable QuestionSetToken in it.
QuestionSet Hierarchy API (POST) : A new API method to be introduced for the QuestionSet Hierarchy of exam question sets.
There are two modes of accessing questionSet post this proposed change:
a) client-evaluable mode : default (serverEvaluable attribute does not exist or is false)
In this mode, the player uses the questionSetHierarchy to fetch the hierarchy, question list API to fetch question body and the content state update API to submit the ASSESS events. There are no changes to this processing as part of this change.
b) server-evaluable mode : serverEvaluable:true
We are proposing that, in this mode, the player uses a new API for questionSetHierarchy, the existing question list API and a new submitAssessment API.
QuestionSet Hierarchy API (new POST API)
The current questionSetHierarchy API is a GET call and does not take arguments. Introduce a new POST method for the QuestionSet Hierarchy API that can take a request body. This API will have the following payload:
Code Block |
---|
"request": {
  "contentID": "",
  "collectionID": "",
  "userID": "",
  "attemptID": ""
} |
This API would handle selection of a subset of questions and randomization (currently done by the player) as indicated by the metadata in the questionSet. The API will also return a “QuestionSetToken“, a signed token containing the user-id, content-id, collection-id, attempt-id and the selected question-id list received as part of the hierarchy payload. This token will further be used to validate the submitAssessment API call on the server.
Code Block |
---|
"questionSet": {
  "timeLimits": "{\"maxTime\":\"3600\"}",
  "questionSetToken": "", // Question Set token to be generated at the hierarchy read API from "Question Set ID + userID"
  "serverEvaluable": true // true for server-side evaluation; default false for client-side evaluation
} |
Info |
---|
QuestionSetToken : This key is almost equivalent to a JWT token, created as follows: “questionSetToken“ => { “timestamp”: epoch |
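One way to realize such a token is an HMAC-signed payload carrying the claims described above. This is a minimal sketch under stated assumptions: the secret handling, claim names, and encoding are illustrative, and a real deployment would likely use a standard JWT library with proper key management and expiry.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # illustrative only; real keys come from key management

def issue_question_set_token(user_id, content_id, collection_id, attempt_id, question_ids):
    """Sign the hierarchy-API claims so the server can later verify a submission."""
    payload = {
        "timestamp": int(time.time()),   # epoch, as in the Info box above
        "userID": user_id,
        "contentID": content_id,
        "collectionID": collection_id,
        "attemptID": attempt_id,
        "questionIDs": question_ids,     # the selected/randomized question subset
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_question_set_token(token):
    """Check the signature and return the claims; raise on tampering."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("QuestionSetToken signature mismatch")
    return json.loads(base64.urlsafe_b64decode(body))
```

Because the token embeds the selected question IDs, the submitAssessment API can reject responses for questions that were never served to this user for this attempt.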
Step 3: Trim off correctResponse in the Question List API responses
Question Read API : For any question with serverEvaluable behaviour, the response declaration is to be trimmed off from the question.
...
There are multiple attributes which persist the correct answer in QuML:
a) responseDeclaration: (Shown above)
b) answer
c) editorState
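Trimming these attributes for server-evaluable questions could look roughly like this. The helper name and question-dict shape are assumptions for illustration; only the three attribute names come from the list above.

```python
# QuML attributes that carry the correct answer, per the list above.
ANSWER_FIELDS = ("responseDeclaration", "answer", "editorState")

def mask_question(question: dict) -> dict:
    """Return a copy of the question with correct-answer attributes removed
    when the question is marked for server-side evaluation."""
    if not question.get("serverEvaluable", False):
        return question  # client-evaluated questions are returned untouched
    return {k: v for k, v in question.items() if k not in ANSWER_FIELDS}
```

The Question Read API and Question List API would apply this masking just before serializing the response, so the answer key never leaves the server for serverEvaluable content.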
Step 4: Introduce a SubmitAssessment API to evaluate user responses to a QuestionSet and compute the score.
SubmitAssessment API (
...
Sync API Behaviour):
The QuestionSetToken generated in the Hierarchy API is sent as part of this request. This token will help validate that the responses are submitted for the questions that were given out to this user for this attempt against the questionSetID on a collection, and also verify the time of submission.
The API accepts a request payload similar to the Content State Update API.
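The server-side evaluation step inside this API could be sketched as follows for objective question types. This is illustrative Python; the function signature and scoring scheme are assumptions, shown here as simple exact-match scoring against the stored answer key (e.g. from responseDeclaration).

```python
def evaluate_responses(responses: dict, answer_key: dict, max_score_per_question: int = 1) -> dict:
    """Evaluate submitted responses against the server-held answer key.

    responses  : questionID -> submitted value
    answer_key : questionID -> correct value (never sent to the client)
    """
    total = 0
    details = {}
    for qid, correct in answer_key.items():
        got = responses.get(qid)              # missing response scores 0
        is_correct = got == correct
        score = max_score_per_question if is_correct else 0
        details[qid] = {"response": got, "correct": is_correct, "score": score}
        total += score
    return {
        "totalScore": total,
        "maxScore": max_score_per_question * len(answer_key),
        "details": details,
    }
```

After evaluation, the API would trigger the Content State Update with the computed scores, so the downstream Flink aggregators (Collection Assessment Aggregate, etc.) continue to work unchanged.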
...