What is Trust Score?
Trust Score is IndyKite's data quality assessment system. It evaluates how trustworthy your data is based on configurable dimensions like freshness, origin, and verification status.
Key benefits:
- Data quality visibility: Know which data is fresh, verified, and complete.
- Risk-based decisions: Use trust scores in authorization policies (KBAC) and queries (ContX IQ).
- Automated recalculation: Scores update on a schedule as data ages or changes.
- Flexible weighting: Prioritize the dimensions that matter for your use case.
How does it work?
Trust Score involves three components:
- Trust Score Profile: Configuration defining which dimensions to evaluate and their weights.
- Node metadata: Properties on nodes that provide input for scoring (e.g., `source`, `verification_time`).
- _TrustScore node: Automatically created in the IKG, linked to the scored node via a `_HAS` relationship.
Scoring flow
- Create a Trust Score Profile specifying dimensions and weights
- Ingest nodes with metadata (source, verification_time, etc.)
- IndyKite calculates scores based on the configured schedule
- A `_TrustScore` node is created/updated for each scored node
- Query trust scores via ContX IQ or use in KBAC policies
What credentials do I need?
- Creating profiles: Service Account credentials (Config API)
- Ingesting nodes with metadata: AppAgent credentials (Capture API)
- Querying trust scores: AppAgent credentials + optional user access token (ContX IQ)
Configuration methods:
- Terraform: indykite_trust_score_profile resource
- REST API: Config API documentation
Trust Score Dimensions
Five dimensions can be used to evaluate data quality:
| Dimension | Description | Input metadata |
|---|---|---|
| FRESHNESS | How recent is the data? Older data scores lower. | Property update timestamps |
| ORIGIN | Where did the data come from? Trusted sources score higher. | `source` metadata on properties |
| VALIDITY | Does the data comply with expected formats and rules? | Format validation results |
| COMPLETENESS | Are all critical fields present? | Presence of required properties |
| VERIFICATION | Has the data been verified/confirmed? | `verification_time` metadata |
Each dimension has a weight (0-1). The weighted combination produces the overall trust score.
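IndyKite's exact aggregation formula is internal; as a rough mental model, a weight-normalized average captures the idea. The sketch below is illustrative only, with hypothetical dimension scores:

```python
# Illustrative only: the actual combination formula used by IndyKite is internal.
# This sketch assumes a weight-normalized average of per-dimension scores (each 0-1).

def overall_trust_score(dimension_scores: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Combine per-dimension scores using the configured weights."""
    total_weight = sum(weights[d] for d in dimension_scores)
    if total_weight == 0:
        return 0.0
    weighted = sum(dimension_scores[d] * weights[d] for d in dimension_scores)
    return weighted / total_weight

# Weights as in a profile with FRESHNESS 0.3, ORIGIN 0.4, VERIFICATION 0.3;
# the per-dimension scores here are made up for illustration.
weights = {"FRESHNESS": 0.3, "ORIGIN": 0.4, "VERIFICATION": 0.3}
scores = {"FRESHNESS": 0.9, "ORIGIN": 1.0, "VERIFICATION": 0.5}
print(round(overall_trust_score(scores, weights), 2))  # 0.82
```

Because the combination is weighted, raising a dimension's weight amplifies its effect on the overall score without changing the other dimensions' inputs.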
Schedule Options
Trust scores are recalculated periodically based on the schedule:
| Schedule Value | Description |
|---|---|
| HOURLY | Recalculate every hour |
| THREE_HOURS | Recalculate every 3 hours |
| SIX_HOURS | Recalculate every 6 hours |
| TWELVE_HOURS | Recalculate every 12 hours |
| DAILY | Recalculate once per day |
Trust Score Profile Configuration
REST API Endpoints
| Operation | Method | Endpoint |
|---|---|---|
| Create | POST | /configs/v1/trust-score-profiles |
| Read by ID | GET | /configs/v1/trust-score-profiles/{id} |
| Read by name | GET | /configs/v1/trust-score-profiles/{name}?location={project_id} |
| List all | GET | /configs/v1/trust-score-profiles?project_id={id} |
| Update | PUT | /configs/v1/trust-score-profiles/{id} |
| Delete | DELETE | /configs/v1/trust-score-profiles/{id} |
Create Request Syntax
{
"project_id": "<string>",
"name": "<string>",
"display_name": "<string>",
"description": "<string>",
"node_classification": "<string>",
"schedule": "<string>",
"dimensions": [
{
"name": "<string>",
"weight": <number>
}
]
}
What does each field mean?
Required fields
- `project_id`: The GID of the project where the profile will be created.
- `name`: Unique, immutable identifier. Must start with a lowercase letter and contain only lowercase letters, numbers, and hyphens.
- `node_classification`: The node type (label) to score. Must be PascalCase (e.g., `Person`, `Organization`, `Asset`).
- `schedule`: How often to recalculate scores. See the schedule options above.
- `dimensions`: Array of at least one dimension with a name and weight.
Optional fields
- `display_name`: Human-readable name (can be updated).
- `description`: Description of the profile (max 65,000 UTF-8 bytes).
Dimension object
- `name`: One of `FRESHNESS`, `ORIGIN`, `VALIDITY`, `COMPLETENESS`, `VERIFICATION`.
- `weight`: A number between 0 and 1 indicating the importance of this dimension.
Example: Create Trust Score Profile
POST /configs/v1/trust-score-profiles
{
"project_id": "gid:AAAABbbbCCCC...",
"name": "person-trust-profile",
"display_name": "Person Trust Score",
"description": "Evaluates trustworthiness of Person nodes",
"node_classification": "Person",
"schedule": "TWELVE_HOURS",
"dimensions": [
{
"name": "FRESHNESS",
"weight": 0.3
},
{
"name": "ORIGIN",
"weight": 0.4
},
{
"name": "VERIFICATION",
"weight": 0.3
}
]
}
Example: Update Trust Score Profile
PUT /configs/v1/trust-score-profiles/{id}
{
"display_name": "Updated Person Trust Score",
"schedule": "THREE_HOURS",
"dimensions": [
{
"name": "FRESHNESS",
"weight": 1
},
{
"name": "ORIGIN",
"weight": 1
}
]
}
Read Response
When reading a profile with `full_fetch=true`, additional execution information is returned:
{
"id": "gid:AAAABbbbCCCC...",
"name": "person-trust-profile",
"display_name": "Person Trust Score",
"description": "Evaluates trustworthiness of Person nodes",
"node_classification": "Person",
"schedule": "TWELVE_HOURS",
"dimensions": [
{
"name": "FRESHNESS",
"weight": 0.3
}
],
"organization_id": "gid:...",
"project_id": "gid:...",
"create_time": "2024-01-15T10:30:00Z",
"update_time": "2024-01-15T10:30:00Z",
"created_by": "gid:...",
"updated_by": "gid:...",
"last_run_id": "gid:...",
"last_run_start_time": "2024-01-15T12:00:00Z",
"last_run_end_time": "2024-01-15T12:00:05Z",
"dimensions_execution_times": {...}
}
Ingesting Nodes with Metadata
For trust scoring to work, nodes must have metadata on their properties. Use the Capture API to ingest nodes with metadata.
Metadata fields for Trust Score
| Metadata field | Used by dimension | Description |
|---|---|---|
| source | ORIGIN | Where the data came from (e.g., "passport", "HR_System") |
| verification_time | VERIFICATION, FRESHNESS | When the data was last verified (ISO 8601 timestamp) |
| assurance_level | VERIFICATION | Confidence level (numeric) |
Capture API Request with Metadata
POST /capture/v1/nodes
{
"nodes": [
{
"external_id": "jane-doe",
"type": "Person",
"is_identity": true,
"properties": [
{
"type": "name",
"value": "Jane Doe",
"metadata": {
"source": "passport",
"verification_time": "2024-01-10T14:30:00Z"
}
},
{
"type": "email",
"value": "jane@example.com",
"metadata": {
"source": "self_reported",
"verification_time": "2024-01-08T09:00:00Z"
}
},
{
"type": "passport_id",
"value": "A67897XYZ",
"metadata": {
"source": "passport",
"assurance_level": 3,
"verification_time": "2024-01-10T14:30:00Z"
}
}
]
}
]
}
Properties with `source: "passport"` and a recent `verification_time` will score higher than self-reported data with older timestamps.
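The exact freshness curve IndyKite applies is internal, but the intuition is that a score decays as `verification_time` ages. A minimal sketch, assuming a hypothetical linear decay over a 30-day window:

```python
from datetime import datetime, timezone

# Illustrative only: IndyKite's actual FRESHNESS/VERIFICATION scoring is internal.
# This sketch models freshness as linear decay to zero over a hypothetical
# 30-day window, using the verification_time metadata shown above.

def freshness_score(verification_time: str,
                    now: datetime,
                    window_days: float = 30.0) -> float:
    """Map an ISO 8601 timestamp to a 0-1 freshness score (0 = fully stale)."""
    verified = datetime.fromisoformat(verification_time.replace("Z", "+00:00"))
    age_days = (now - verified).total_seconds() / 86400
    return max(0.0, 1.0 - age_days / window_days)

now = datetime(2024, 1, 25, tzinfo=timezone.utc)
print(round(freshness_score("2024-01-10T14:30:00Z", now), 2))  # passport-verified, recent
print(round(freshness_score("2024-01-08T09:00:00Z", now), 2))  # self-reported, older, scores lower
```

Whatever the real curve looks like, the ordering holds: the more recently verified property scores higher.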
Trust Score in the IKG
After the trust score profile runs, a _TrustScore node is created and linked to each scored node:
(Person:jane-doe)-[:_HAS]->(_TrustScore)
The _TrustScore node contains:
- Overall trust score value
- Individual dimension scores
- Calculation timestamp
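The property names on `_TrustScore` are not specified here; purely as an illustration (hypothetical field names, consistent with the `ts.score` comparison used in the KBAC example later on this page), its properties might look like:

```json
{
  "score": 0.82,
  "freshness": 0.9,
  "origin": 1.0,
  "verification": 0.5,
  "calculated_at": "2024-01-15T12:00:03Z"
}
```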
Querying Trust Scores with ContX IQ
Use ContX IQ to query trust scores and use them in authorization decisions.
CIQ Policy including Trust Score
{
"meta": {
"policy_version": "1.0-ciq"
},
"subject": {
"type": "Person"
},
"condition": {
"cypher": "MATCH (subject:Person)-[:_HAS]->(ts:_TrustScore)"
},
"allowed_reads": {
"nodes": [
"subject",
"subject.*",
"ts",
"ts.*"
]
}
}
Knowledge Query for Trust Score
{
"nodes": [
"subject.property.name",
"subject.property.email",
"ts"
]
}
Using Trust Score in KBAC
Trust scores can be used in KBAC policies to make authorization decisions based on data quality:
{
"meta": {
"policyVersion": "1.0-indykite"
},
"subject": {
"type": "Person"
},
"actions": ["access_sensitive_data"],
"resource": {
"type": "Document"
},
"condition": {
"cypher": "MATCH (subject:Person)-[:_HAS]->(ts:_TrustScore) WHERE ts.score > 0.8"
}
}
This policy only allows access if the person's trust score exceeds 0.8.
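The same threshold logic can be applied client-side when post-processing query results. A minimal sketch, using a hypothetical response shape in which each node carries its linked trust score:

```python
# Hypothetical response shape: a list of node dicts, each with a nested
# "trust_score" object mirroring the _TrustScore node. Field names are
# illustrative, not the documented API response.

def filter_trusted(nodes: list[dict], threshold: float = 0.8) -> list[dict]:
    """Keep only nodes whose overall trust score exceeds the threshold."""
    return [n for n in nodes
            if n.get("trust_score", {}).get("score", 0.0) > threshold]

nodes = [
    {"external_id": "jane-doe", "trust_score": {"score": 0.82}},
    {"external_id": "john-roe", "trust_score": {"score": 0.61}},
]
print([n["external_id"] for n in filter_trusted(nodes)])  # ['jane-doe']
```

Enforcing the threshold in the KBAC policy itself is preferable for authorization; a client-side filter like this is only useful for display or analytics purposes.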
Terraform Configuration
Use the indykite_trust_score_profile resource:
resource "indykite_trust_score_profile" "person_trust" {
  location            = indykite_application_space.my_app_space.id
  name                = "person-trust-profile"
  display_name        = "Person Trust Score"
  description         = "Evaluates trustworthiness of Person nodes"
  node_classification = "Person"
  schedule            = "UPDATE_FREQUENCY_TWELVE_HOURS"

  dimension {
    name   = "NAME_FRESHNESS"
    weight = 0.3
  }

  dimension {
    name   = "NAME_ORIGIN"
    weight = 0.4
  }

  dimension {
    name   = "NAME_VERIFICATION"
    weight = 0.3
  }
}
Terraform Arguments Reference
| Argument | Required | Description |
|---|---|---|
| location | Yes | Application Space ID where the profile is created |
| name | Yes | Unique, immutable identifier |
| node_classification | Yes | Node type to score (PascalCase) |
| schedule | Yes | Recalculation frequency |
| dimension | Yes | At least one dimension block |
| display_name | No | Human-readable name |
| description | No | Profile description |
Terraform Schedule Values
- UPDATE_FREQUENCY_HOURLY
- UPDATE_FREQUENCY_THREE_HOURS
- UPDATE_FREQUENCY_SIX_HOURS
- UPDATE_FREQUENCY_TWELVE_HOURS
- UPDATE_FREQUENCY_DAILY
Terraform Dimension Names
- NAME_FRESHNESS
- NAME_ORIGIN
- NAME_VALIDITY
- NAME_COMPLETENESS
- NAME_VERIFICATION
Error Handling
| HTTP Code | Meaning | Common Cause |
|---|---|---|
| 400 | Bad Request | Invalid JSON, missing required fields |
| 401 | Unauthorized | Invalid or missing Bearer token |
| 403 | Forbidden | Insufficient permissions for the project |
| 404 | Not Found | Profile ID/name doesn't exist |
| 412 | Precondition Failed | ETag mismatch (concurrent modification) |
| 422 | Unprocessable Entity | Validation error (invalid name, missing dimensions, etc.) |
Common validation errors
- `invalid field name: is not valid name`: the name must start with a lowercase letter and contain only lowercase letters, numbers, and hyphens.
- `invalid field project_id: identifier is not of PROJECT`: use a valid project GID.
- `missing field node_classification`: the node type is required.
- `missing field schedule`: a schedule is required.
- `missing field dimensions`: at least one dimension is required.
Best Practices
Dimension weighting
- Weight dimensions based on your use case requirements.
- For identity verification: prioritize VERIFICATION and ORIGIN.
- For real-time data: prioritize FRESHNESS.
- For compliance: prioritize COMPLETENESS and VALIDITY.
Schedule selection
- Use HOURLY for data that changes frequently and where freshness is critical.
- Use DAILY for stable data where frequent recalculation adds overhead.
- TWELVE_HOURS is a good default for most use cases.
Metadata ingestion
- Always include `source` metadata to enable ORIGIN scoring.
- Update `verification_time` when data is re-verified.
- Use consistent source names across your data pipeline.
Next Steps
- ContX IQ guide: ContX IQ Guide
- KBAC guide: Dynamic Authorization Guide
- Terraform provider: Trust Score Profile Resource
- Credentials guide: Credentials Guide