
Introduction

This wiki explains how to enable usage details of the assets/contents contributed and published via the Sourcing platform.

Background & Problem statement

Currently, the Sourcing platform does not know the usage details of the assets/contents published for consumption. As a result, contributors have no visibility into how many of their contributed assets are being used, or how heavily they are used.

Key Design Problems:

Enable usage details of the assets/contents published from sourcing to consumption.

Note: Currently, Sourcing (VidyaDaan) and Consumption (DIKSHA) have two different do_ids for the same content.
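Because the two platforms assign different do_ids, any usage roll-up needs a way to map the consumption do_id back to the sourcing one. A minimal sketch, assuming the consumption copy carries a back-reference field (named `origin` here purely for illustration; the real field name is not specified in this design):

```python
# Hypothetical content records: the same asset carries a different do_id
# on each platform; the consumption copy points back to the sourcing
# copy via an assumed "origin" field.
sourcing_content = {"identifier": "do_111", "name": "Fractions Lesson"}
consumption_content = {
    "identifier": "do_222",   # consumption (DIKSHA) do_id
    "origin": "do_111",       # assumed back-reference to the sourcing do_id
    "name": "Fractions Lesson",
}

def resolve_sourcing_id(content):
    """Return the sourcing do_id for a consumption record, if linked."""
    return content.get("origin")

print(resolve_sourcing_id(consumption_content))  # do_111
```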

Proposed Design:

Approach 1: Enhance the existing Update Content Rating data product

...

Update Content Rating data product:

The existing Update Content Rating data product gets the content consumption metrics and writes them to Elasticsearch (ES). It can be enhanced to filter out the VidyaDaan-specific (Sourcing) contents and make an additional call to write the VidyaDaan contents to the Dock ES.
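The filtering step of this enhancement can be sketched as a pure function. The discriminator used to recognise Sourcing content (here an assumed `sourcingOrigin` field on each metrics record) is an illustration only; the design does not name the real field:

```python
# Sketch of the proposed enhancement, assuming each consumption-metrics
# record carries a flag (here "sourcingOrigin") identifying VidyaDaan
# content; the real discriminator field is not specified in the design.
def partition_metrics(records):
    """Return (all records, VidyaDaan-only records)."""
    vdn = [r for r in records if r.get("sourcingOrigin")]
    return records, vdn

records = [
    {"do_id": "do_1", "plays": 120, "sourcingOrigin": "do_9"},
    {"do_id": "do_2", "plays": 40},
]
all_records, vdn_records = partition_metrics(records)
# all_records would be written to the consumption ES as today;
# vdn_records would be written to the Dock (sourcing) ES in a new call.
```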


Approach 2: Using the transaction events & processing that transaction event


  • Currently, a transaction event is generated as part of each Neo4j transaction and is pushed into the learning.graph.event Kafka topic by Logstash.

  • We can write a Flink job at the consumption layer which will:

    • Read the learning.graph.event Kafka topic of the consumption layer.

    • Look up the object in the consumption layer and find its origin id.

    • If the origin id exists in the sourcing database, update the metric-specific data on the corresponding sourcing object.
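The steps above can be sketched as a plain per-event function. The event shape, the store interfaces, and field names such as `objectId`, `origin`, and `me_totalPlays` are all assumptions for illustration; a real implementation would be a Flink streaming job reading from Kafka:

```python
# Sketch of the proposed Flink job's per-event logic, using plain dicts
# in place of the Kafka source, the consumption object store, and the
# sourcing database. All field names are assumed, not confirmed.
def process_transaction_event(event, consumption_store, sourcing_store):
    """Propagate consumption metrics for one learning.graph.event record."""
    # 1. Read the event (in Flink this would arrive from the Kafka topic).
    content_id = event.get("objectId")
    # 2. Look up the object in the consumption layer and find its origin id.
    obj = consumption_store.get(content_id)
    if obj is None:
        return False
    origin_id = obj.get("origin")
    # 3. If the origin id exists in the sourcing database, update metrics.
    if origin_id in sourcing_store:
        sourcing_store[origin_id].setdefault("me_totalPlays", 0)
        sourcing_store[origin_id]["me_totalPlays"] += event.get("plays", 0)
        return True
    return False

consumption = {"do_222": {"identifier": "do_222", "origin": "do_111"}}
sourcing = {"do_111": {"identifier": "do_111"}}
event = {"objectId": "do_222", "plays": 5}
process_transaction_event(event, consumption, sourcing)
print(sourcing["do_111"]["me_totalPlays"])  # 5
```

Keeping the lookup-and-update logic in one function like this would also make the eventual Flink operator easy to unit-test independently of Kafka.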

Conclusion:

<TODO>