Definition
For OGHR, the platform needs to enable users to undertake exams. The platform should be capable of processing user responses and evaluating them against the applicable scoring rules. The outcome of this design should enable platform users to take examinations with server-side evaluation of results.
Background
In the present system, a Question Set is processed as follows:
Entities
Assessment User : An active seeker of assessments on the platform. Any collection in the Sunbird ecosystem needs to be trackable in order to track the user's progress. The user's state with the content is updated using the Content State Update API. All of the information in the Content State Read API is retrieved against a collection and a user.
QuML Player : The QuML Player can play questions in QuML format. It can also locally validate user input; response validation and score computation are handled entirely in the player. Once the user submits the overall response, the client-validated scores and responses are updated using the Content State Update API.
Flink Jobs : Flink jobs aggregate the content state using the Collection Activity Aggregator, Collection Assessment Aggregator, Cert Pre-processor, and Cert Generator jobs.
Question Types:
a) MCQ (Multiple Choice Questions)
b) MSQ (Multiple Select Questions)
c) MTF (Match The Following)
d) FTB (Fill in The Blanks)
Excluded Types
a) VSA - (Very Short Answer)
b) SA - (Short Answer)
c) LA - (Long Answer)
Building Blocks :

| Building Blocks | API | Flink Jobs |
|---|---|---|
| Sunbird InQuiry | Question Set Hierarchy API | |
| | Question List Read API | |
| | QuestionSet Create API | |
| | Question Create API | |
| Sunbird Lern | Content State Read | Collection Activity Aggregate |
| | Content State Update | Collection Assessment Aggregate |
| | Enrollment List API | Cert Pre-Processor |
| | | Cert Generator |
| Sunbird RC | Cert Registry Download API | |
Design Prerequisites :
Allow the response processing of a question to be done on the server rather than on the client, as happens today.
A scalable response-processing solution for Question Sets.
Score calculation based on Content State Update.
The solution (answer) to a question needs to be masked or excluded from the Question Read API / Question List API.
Response Processing can happen in two ways:
Entire Question Set Response Processing
Question by Question Response Processing
Technical Design Details:
QuestionSet Hierarchy API : A new attribute (“evaluable“) is introduced in the QuestionSet to declare that responses are evaluated on the server.
```json
"questionSet": {
  "timeLimits": "{\"maxTime\":\"3600\"}",
  "evaluable": true  // true: server-side evaluation; default false: client-side validation
}
```
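A client such as the QuML Player could branch on this flag to decide where evaluation happens. A minimal sketch, assuming the hierarchy response has been parsed into a plain object (the function name is illustrative, not part of the source design):

```javascript
// Decide the evaluation mode from the hypothetical "evaluable" flag
// on the questionSet node of the hierarchy response.
function getEvaluationMode(questionSet) {
  // Default is client-side validation when the flag is absent or false
  return questionSet.evaluable === true ? 'server' : 'client';
}

console.log(getEvaluationMode({ evaluable: true })); // "server"
console.log(getEvaluationMode({}));                  // "client"
```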
Question Read API : Any question associated with an evaluable Question Set should have the correct response trimmed off from its response declaration. Instead, a responseKey is shared within the responseDeclaration.
```json
"evaluable": true,
"responseDeclaration": {
  "response1": {
    "maxScore": 1,
    "cardinality": "single",
    "type": "integer",
    // ---- To be trimmed off ----
    "correctResponse": {
      "value": "0",
      "outcomes": { "SCORE": 1 }
    },
    // ---- To be trimmed off ----

    // Newly introduced attribute
    "responseKey": "#Computed Hash Value of the result"
  }
}
```
Question Creation Impact:
It is possible to permanently mark a question as “evaluable“ in nature. Any evaluable question automatically qualifies for an evaluable Question Set, and such questions can be part of an evaluable Question Set only. Question Read of evaluable questions should exhibit the behaviour mentioned above.
Response Evaluation:
QuestionResponseValidate API:
The API accepts a request payload similar to the Content State Update API, along with responseKeys in the assessments.
```json
{
  "request": {
    "userId": "843a9940-720f-43ed-a415-26bbfd3da9ef",
    "assessments": [
      {
        "assessmentTs": 1681284869464,
        "batchId": "0132677340746629120",
        "collectionId": "do_213267731619962880127",
        "userId": "843a9940-720f-43ed-a415-26bbfd3da9ef",
        "attemptId": "5486724f41afb4997118e6d97695684f",
        "contentId": "do_2129959063404544001107",
        "responses": [
          {
            "identifier": "<question-id>",
            "questionType": "",
            "userResponse": [""],
            "responseDeclaration": {},
            "responseKey": ["<response-key>"]
          }
        ]
      }
    ],
    "contents": [
      {
        "contentId": "do_2132671468826214401203",
        "batchId": "0132677340746629120",
        "status": 2,
        "courseId": "do_213267731619962880127",
        "lastAccessTime": "2023-04-12 12:56:45:687+0530"
      }
    ]
  }
}
```
Question Set Response Processing flow
```javascript
// Benchmark: time N PBKDF2 hash computations plus a string comparison
// for each, to estimate the cost of key-based response validation.
console.time("dbsave");
const crypto = require('crypto');
const hashList = [];
const target = "sdgxsoksgdaodjqwdhuwdh";
const salt = "salty";
let count = 0;

for (let i = 0; i < 1000000; i++) {
  // Synchronous variant so the derived key is available inside the loop
  // (the async callback form of crypto.pbkdf2 returns undefined here)
  const hash = crypto.pbkdf2Sync('secret' + i, salt, 10, 64, 'sha512').toString('hex');
  hashList.push(hash);
  if (hash === target) {
    count++;
  }
}

console.log(count);
console.log(hashList.length);
console.timeEnd("dbsave");
```
Response Times for Benchmarking
| Hash generations | Time (dbsave) |
|---|---|
| 1,000,000 | 4011.281 ms |
| 100,000 | 377.208 ms |
| 10,000 | 57.326 ms |