
Introduction

This wiki details how to surface usage details for the assets/contents contributed and published via the Sourcing platform.

Background & Problem statement

Currently, the Sourcing platform has no visibility into how the assets/contents it publishes for consumption are used. As a result, contributors cannot see how many of their contributed assets are used, or how often they are used.

Key Design Problems:

Surface usage details of the assets/contents published from the Sourcing platform to the consumption layer.

Proposed Design:

Approach 1: Enhance the existing Update Content Rating data product:

The existing Update Content Rating data product fetches the content consumption metrics and writes them to Elasticsearch (ES). It can be enhanced to filter out the VidyaDaan-specific (Sourcing) contents and make an additional call to write their metrics to the dock Elasticsearch.
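A minimal sketch of this additional write step, assuming an Elasticsearch 7.x high-level REST client. The dock ES host name, the origin field on the metric records, and the sourcing-content-usage index name are illustrative assumptions, and the actual data product may be implemented differently (for example as a Spark job); this only shows the filter-and-index shape of the enhancement.

```java
import java.io.IOException;
import java.util.List;
import java.util.Map;

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class DockUsageMetricsWriter {

    // Assumed dock Elasticsearch host; the real endpoint would come from configuration.
    private final RestHighLevelClient dockEs = new RestHighLevelClient(
            RestClient.builder(new HttpHost("dock-es", 9200, "http")));

    /**
     * Takes the per-content consumption metrics already computed by the
     * Update Content Rating data product and writes the Sourcing (VidyaDaan)
     * subset to the dock Elasticsearch as an extra call.
     */
    public void writeSourcingMetrics(List<Map<String, Object>> contentMetrics) throws IOException {
        for (Map<String, Object> metric : contentMetrics) {
            // Filter: keep only contents that originated from the Sourcing platform.
            // "origin" is an assumed field carrying the sourcing identifier.
            Object originId = metric.get("origin");
            if (originId == null) {
                continue;
            }
            // Additional call: index the usage metrics into the dock ES,
            // alongside the existing write to the consumption ES.
            IndexRequest request = new IndexRequest("sourcing-content-usage")
                    .id(originId.toString())
                    .source(metric);
            dockEs.index(request, RequestOptions.DEFAULT);
        }
    }
}
```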

Approach 2: Use the transaction events and process them:

  • Currently, the transaction event generated as part of each Neo4j transaction is pushed into the learning.graph.event Kafka topic by Logstash.

  • We can write a Flink job at the consumption layer (see the sketch after this list) which will:

    • Read the learning.graph.event Kafka topic of the consumption layer.

    • Look up the object in the consumption layer and find its origin id.

    • If the origin id exists in the sourcing database, update the metric-specific data on the corresponding sourcing database object.
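A minimal sketch of such a Flink job, assuming Flink's Kafka connector (KafkaSource API). The broker address, consumer group, and the helper calls for parsing the event, resolving the origin id, and updating the sourcing database are placeholders for illustration, not confirmed APIs of the platform.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class SourcingUsageMetricsJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // 1. Read the consumption layer's learning.graph.event Kafka topic.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka.consumption:9092")        // assumed broker address
                .setTopics("learning.graph.event")
                .setGroupId("sourcing-usage-metrics")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "graph-events")
           // 2. For each transaction event, look up the object in the consumption
           //    layer, resolve its origin id, and update the sourcing object if it exists.
           .process(new ProcessFunction<String, String>() {
               @Override
               public void processElement(String event, Context ctx, Collector<String> out) {
                   String contentId = extractIdentifier(event);          // parse the node id from the event
                   String originId  = lookupOriginId(contentId);         // consumption-layer read call
                   if (originId != null && existsInSourcing(originId)) { // sourcing database lookup
                       updateSourcingMetrics(originId, event);           // write usage metrics to the sourcing object
                       out.collect(originId);
                   }
               }
           })
           .name("update-sourcing-usage-metrics");

        env.execute("SourcingUsageMetricsJob");
    }

    // Placeholder helpers: the real implementations would call the consumption
    // read API and the sourcing database client.
    private static String extractIdentifier(String event) { return null; }
    private static String lookupOriginId(String contentId) { return null; }
    private static boolean existsInSourcing(String originId) { return false; }
    private static void updateSourcingMetrics(String originId, String event) { }
}
```

The sketch only shows the shape of the pipeline (Kafka read, origin lookup, conditional sourcing update); the event parsing and the sourcing write would depend on the actual event schema and sourcing storage.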
