This script is a custom ModelOp Center Monitor designed to run within a ModelOp runtime environment. It automates the calculation of governance scores for your entire Use Case inventory and generates visualization payloads for the ModelOp Dashboard.
When attached to a Monitoring Job, this script performs the following:
- Inventory Scan: Fetches all unique Use Cases in the environment.
- Score Calculation: Computes the Governance Score (Passing Controls / Total Controls) for each Use Case.
- Visualization Output:
- Donut Charts: Generates a pass/fail breakdown chart for every individual Use Case.
- Timeline Graph: Plots the aggregate Governance Score over time for all Use Cases on a single timeline.
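The Score Calculation step above reduces to a simple ratio. A minimal sketch (the helper name and percentage rounding are illustrative, not taken from the script):

```python
def governance_score(passing_controls: int, total_controls: int) -> float:
    """Governance Score = Passing Controls / Total Controls, as a percentage."""
    if total_controls == 0:
        return 0.0  # avoid division by zero for a Use Case with no controls
    return round(100.0 * passing_controls / total_controls, 2)

# Example: a Use Case with 8 of 10 controls passing scores 80%.
print(governance_score(8, 10))  # -> 80.0
```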
This monitor requires specific Job Parameters to authenticate and fetch data. You must provide these when creating the Job in ModelOp Center.
| Parameter Name | Description | Example |
|---|---|---|
| MOC_INSTANCE_URL | The base URL of your ModelOp Center environment. | https://your-company.modelop.center |
| MOC_USERNAME | Service account or username for API access. | [email protected] |
| MOC_PASSWORD | Password or Secret for the account. | ******** |
Security Note: In a production environment, manage credentials via ModelOp's Secrets Management (if available) rather than passing them as plain-text parameters.
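The three Job Parameters from the table above would typically be used to open an authenticated API session. A sketch of that flow, assuming a `requests`-based login; the endpoint path and payload shape are assumptions, not ModelOp Center's documented API:

```python
import requests

# Job Parameters as they might arrive from the Monitoring Job
# (names from the table above; values are placeholders).
job_params = {
    "MOC_INSTANCE_URL": "https://your-company.modelop.center",
    "MOC_USERNAME": "[email protected]",
    "MOC_PASSWORD": "********",
}

def authenticate(params: dict) -> requests.Session:
    """Open an authenticated session against ModelOp Center.

    The login route and JSON body below are illustrative assumptions;
    consult your ModelOp Center API documentation for the actual endpoint.
    """
    session = requests.Session()
    resp = session.post(
        f"{params['MOC_INSTANCE_URL']}/login",  # hypothetical endpoint
        json={
            "username": params["MOC_USERNAME"],
            "password": params["MOC_PASSWORD"],
        },
        timeout=30,
    )
    resp.raise_for_status()  # fail fast on bad credentials
    return session
```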
The runtime environment (or local testing environment) requires the following Python packages:
- Public Packages: `pip install pandas requests`
- Private ModelOp SDK:
  - This script utilizes `modelop-sdk` and `modelop.utils`.
  - Action Required: Ensure the environment is configured to pull from ModelOp's private PyPI repository, or that the `.whl` file is installed.
The script outputs a JSON dictionary containing visualization configurations compatible with the ModelOp Dashboard.
- Purpose: Visualizes the Pass/Fail ratio for a specific Use Case.
- Key Format: `[UseCaseName]_donut` (sanitized)
- Visual: Green segment for "Passing Controls", red segment for "Failing Controls".
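The "sanitized" key format implies stripping characters that are awkward in dictionary keys. A plausible sketch (the exact sanitization rules used by the script are an assumption):

```python
import re

def donut_key(use_case_name: str) -> str:
    """Build a Dashboard-safe donut-chart key from a Use Case name.

    Collapses any run of non-alphanumeric characters into an underscore;
    the actual sanitization in the script may differ.
    """
    sanitized = re.sub(r"[^A-Za-z0-9]+", "_", use_case_name).strip("_")
    return f"{sanitized}_donut"

print(donut_key("Fraud Detection (EU)"))  # -> Fraud_Detection_EU_donut
```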
- Purpose: Tracks the Governance Score (%) of all Use Cases over time.
- Key: `governance_score_timeline`
- Visual: A multi-series line graph where the X-axis is the timestamp of the job run and the Y-axis is the compliance percentage.
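Putting the two chart types together, the monitor's output dictionary might look like the following. The key names follow the conventions above, but the inner chart schema (segment and series field names) is an illustrative assumption, not the Dashboard's documented format:

```python
import json

# Hypothetical output for a single Use Case named "Fraud Detection".
output = {
    "Fraud_Detection_donut": {
        "title": "Fraud Detection - Governance Controls",
        "segments": [
            {"label": "Passing Controls", "value": 8, "color": "green"},
            {"label": "Failing Controls", "value": 2, "color": "red"},
        ],
    },
    "governance_score_timeline": {
        "x_axis": "Job Run Timestamp",
        "y_axis": "Compliance (%)",
        "series": {
            "Fraud Detection": [
                {"timestamp": "2024-01-15T00:00:00Z", "score": 80.0},
            ],
        },
    },
}

print(json.dumps(output, indent=2))
```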
You can test this script locally before deploying it to ModelOp Center.
1. Open `gov_score_monitor.py`.
2. Scroll to the `main()` function at the bottom.
3. Update the `test_params` dictionary with your actual environment URL and credentials.
4. Run the script: `python mtr_gov_scores.py`
5. Review the console output to verify that:
   - Authentication is successful.
   - Inventory is fetched.
   - JSON output contains keys like `*_donut` and `governance_score_timeline`.
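The `test_params` dictionary mentioned in the steps above might be wired up like this. The surrounding `main()` body is an assumed sketch, not the script's actual code:

```python
def main():
    # Placeholder credentials -- replace with your environment's values
    # before running locally. Parameter names match the Job Parameters table.
    test_params = {
        "MOC_INSTANCE_URL": "https://your-company.modelop.center",
        "MOC_USERNAME": "[email protected]",
        "MOC_PASSWORD": "********",
    }
    # In the real script these parameters drive authentication, the
    # inventory scan, and score calculation (the steps listed earlier).
    print(f"Connecting to {test_params['MOC_INSTANCE_URL']} ...")

if __name__ == "__main__":
    main()
```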