Outbound Events


This guide describes what is needed to configure outbound events for the IndyKite platform.

Outbound events are designed to notify external systems about important changes within the IndyKite Knowledge Graph (IKG). These external systems may require real-time synchronization or need to react to changes occurring in the platform.

There can be only one configuration per AppSpace (Project).

How It Works

An outbound events (event sink) configuration defines providers and routes.

Providers are the destinations the events are delivered to.

Routes define which event types are sent to which provider.

  1. Configuration - Administrators define the event types they want to subscribe to, set up event routing, and specify the providers (Kafka, Azure Event Grid, or Azure Service Bus) where the events should be published. Multiple destinations are supported.
  2. Event Generation - When a CRUD operation is executed, it generates an IndyKite Signal message that conforms to the CloudEvents specification.
  3. Event Publishing - The service orchestrates delivery of the event to the configured providers.

After the events are published to the configured destination(s), customers are responsible for implementing any additional business logic for further filtering and processing.
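As a sketch of what that consumer-side logic might look like, the snippet below parses a CloudEvents-style JSON envelope and filters on the `type` attribute. The envelope field names follow the CloudEvents 1.0 specification and the `indykite.audit.capture.*` type comes from this guide; the `source` and `data` values are invented for illustration.

```python
import json

# Minimal CloudEvents-style envelope; values are illustrative only.
raw = json.dumps({
    "specversion": "1.0",
    "id": "event-0001",
    "source": "indykite",  # assumption: the real source URI will differ
    "type": "indykite.audit.capture.upsert.node",
    "datacontenttype": "application/json",
    "data": {"label": "Person"},  # invented payload for illustration
})

def handle(message: str) -> str:
    """Apply customer-side filtering on the CloudEvents `type` attribute."""
    event = json.loads(message)
    if event["type"].startswith("indykite.audit.capture."):
        return f"capture event from {event['source']}"
    return "ignored"

print(handle(raw))  # capture event from indykite
```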

The order of routes in the OutboundEventsConfig is important because event routing follows a sequential evaluation model. This means that an event is checked against each route in order, and it can match multiple routes before being sent to its final destination(s).

  • If an event type matches multiple routes, it will be sent to multiple destinations, unless explicitly stopped.
  • If a general wildcard (*) route exists, it will catch all events that were not handled by earlier routes.
  • More specific event types should be defined before wildcard routes to ensure they are processed correctly.

The stop_processing flag controls whether event propagation should stop after a match is found.

  • If stop_processing = true, once the event matches this route, it will not be evaluated against any further routes.
  • If stop_processing = false (default behavior), the event will continue matching additional routes and may be sent to multiple destinations.
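The sequential evaluation model above can be sketched in Python. The route shape and the `stop_processing` semantics follow this guide; the helper name is ours, and using shell-style glob matching (`fnmatch`) for the wildcard is an assumption made for illustration.

```python
import fnmatch

def matching_providers(event_type: str, routes: list[dict]) -> list[str]:
    """Evaluate routes in order; an event may match several routes
    and be sent to several destinations, unless a matching route
    sets stop_processing."""
    destinations = []
    for route in routes:
        if fnmatch.fnmatch(event_type, route["event_type"]):
            destinations.append(route["provider_id"])
            if route.get("stop_processing"):
                break  # no further routes are evaluated
    return destinations

routes = [
    {"provider_id": "kafka", "event_type": "indykite.audit.capture.*", "stop_processing": True},
    {"provider_id": "bus", "event_type": "*"},  # catch-all wildcard, defined last
]
print(matching_providers("indykite.audit.capture.upsert.node", routes))  # ['kafka']
print(matching_providers("indykite.audit.ciq.execute", routes))          # ['bus']
```

Note how the specific capture route is defined before the catch-all, so capture events stop there and never reach the wildcard route.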

Kafka brokers:

A Kafka broker is a Kafka server responsible for receiving, durably storing, and distributing data streams (called "events" or "messages") between producers (data writers) and consumers (data readers).

A Kafka cluster is made up of one or more brokers working together. The main responsibilities of a single broker are:

  • Data Storage: It stores the messages on disk in structures called Partitions.
  • Message Handling: It manages all client requests:
    • Producers send messages to the broker.
    • Consumers request and fetch messages from the broker.
  • Data Replication: To ensure high availability and fault tolerance, the broker replicates partition data to other brokers in the cluster.
  • Cluster Coordination: Brokers work together to maintain the state of the cluster.

The primary reason for using an array with multiple brokers is resilience. If the first broker in the list is unavailable, the client will simply try the next one in the array until it finds an active broker to get the cluster metadata. It's a best practice to include at least two brokers to ensure that your client can always connect to the cluster, even if one broker is down for maintenance or has failed.
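The client-side failover described above can be sketched as follows. The `probe` callable stands in for the client's real bootstrap connection attempt, and the broker addresses are placeholders:

```python
from typing import Callable

def first_reachable(brokers: list[str], probe: Callable[[str], bool]) -> str:
    """Try each broker in the bootstrap list in order and return the
    first one that answers; raise if the whole list is unreachable."""
    for broker in brokers:
        if probe(broker):
            return broker
    raise ConnectionError("no broker in the bootstrap list is reachable")

# Simulated outage: the first broker is down, the second answers.
up = {"broker-1:9092": False, "broker-2:9092": True}
print(first_reachable(["broker-1:9092", "broker-2:9092"], up.get))  # broker-2:9092
```

Real Kafka clients do this internally: any one reachable broker from the list is enough to fetch the full cluster metadata.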

Configure Outbound Events

Example 1:

I want to send an event to my Kafka topic "topic_signal" each time a CRUD operation is executed via the Capture API (upsert and delete of nodes and relationships).

Terraform:

Configure events using the indykite_event_sink resource.

https://registry.terraform.io/providers/indykite/indykite/latest/docs/resources/event_sink

resource "indykite_event_sink" "outbound_events" {
  name         = "outbound-events"
  display_name = "Outbound Events"
  location     = "gid:AAAAABBBBBBBBBBBBBB"

  providers {
    provider_name = "confluent-provider"
    kafka {
      brokers  = ["pkc-xxxxxx.xxxxxxxx.gcp.confluent.cloud:9092"]
      topic    = "topic_signal"
      username = "FOUFOUFOUFOUFOUFOUFOUFOU"
      password = "123654789qwertypoiuytre"
    }
  }

  routes {
    provider_id     = "confluent-provider" # must match a provider_name above
    stop_processing = true
    keys_values_filter {
      event_type = "indykite.audit.capture.*" # designates which operations trigger the signal
    }
    route_display_name = "Capture Audit Events"
    route_id           = "config-audit-log"
  }
}

REST:

Configure events using the REST endpoint.

https://openapi.indykite.com/api-documentation-config#tag/default/post/event-sinks

{
	"project_id": "your_project_gid",
	"description": "description of eventsink",
	"display_name": "eventsink name",
	"name": "eventsink-name",
	"providers": {
		"provider-with-kafka": {
			"kafka": {
				"brokers": ["your-destination:9092", "another-destination:9092"],
				"disable_tls": false,
				"tls_skip_verify": false,
				"topic": "topic_signal",
				"username": "api_key",
				"password": "api_key_secret"
			}
		}
	},
	"routes": [
	{
		"provider_id": "provider-with-kafka",
		"event_type_key_values_filter": {
			"event_type": "indykite.audit.capture.*"
		},
		"stop_processing": true,
		"display_name": "Configuration Audit Events",
		"route_id": "config-audit-log"
	}
	]
}

Example 2:

I want to send a signal to my Kafka topic "topic_signal" each time a capture.upsert.node operation is executed on an email property node of a Person node.

If no match is found, I want to send a signal to my Azure Event Grid topic each time a CIQ query is executed.

If no match is found, I want to send a signal to my Azure Service Bus queue each time a CRUD operation is executed on a configuration (Config API).

Terraform:

resource "indykite_event_sink" "outbound_event" {
  name         = "outbound-event"
  display_name = "Outbound Event"
  location     = "gid:AAAAAlBBBBBBBBBBBBBBB"

  providers {
    provider_name = "kafka-provider"
    kafka {
      brokers  = ["pkc-xxxxxx.xxxxxxxx.gcp.confluent.cloud:9092", "pkc-xxxxxx.yyyyyyyy.gcp.confluent.cloud:9092"]
      topic    = "topic_signal"
      username = "FOUFOUFOUFOUFOUFOUFOUFOU"
      password = "123654789qwertypoiuytre"
    }
  }

  providers {
    provider_name = "azuregrid-provider"
    azure_event_grid {
      topic_endpoint = "https://xxxxx.eventgrid.azure.net/api/events"
      access_key     = "secret-access-key"
    }
  }

  providers {
    provider_name = "azurebus-provider"
    azure_service_bus {
      connection_string   = "Endpoint=sb://xxxx.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxxxxxxxxxxxxxxxxxxxx"
      queue_or_topic_name = "your-queue"
    }
  }

  routes {
    provider_id     = "kafka-provider"
    stop_processing = true
    keys_values_filter {
      key_value_pairs {
        key   = "email" # filter on the email property node, any value
        value = "*"
      }
      key_value_pairs {
        key   = "captureLabel" # filter on the node type Person
        value = "Person"
      }
      event_type = "indykite.audit.capture.upsert.node"
    }
    route_display_name = "Capture Audit Events"
    route_id           = "kafka-audit-log"
  }

  routes {
    provider_id     = "azuregrid-provider" # must match a provider_name above
    stop_processing = true
    keys_values_filter {
      event_type = "indykite.audit.ciq.execute" # designates which operations trigger the signal
    }
    route_display_name = "CIQ Audit Events"
    route_id           = "azuregrid-audit-log"
  }

  routes {
    provider_id     = "azurebus-provider" # must match a provider_name above
    stop_processing = true
    keys_values_filter {
      event_type = "indykite.audit.config.*" # designates which operations trigger the signal
    }
    route_display_name = "Configuration Audit Events"
    route_id           = "azurebus-audit-log"
  }
}

REST:

{
    "name": "{{eventSinkName}}",
    "display_name": "{{eventSinkDisplayName}}",
    "description": "{{eventSinkDescription}}",
    "project_id": "{{appSpaceId}}",
    "providers": {
        "provider-with-kafka": {
            "kafka": {
                "brokers": ["your-destination:9092", "another-destination:9092"],
                "topic": "my-topic",
                "disable_tls": false,
                "tls_skip_verify": true,
                "username": "user1",
                "password": "password123"
            }
        },
        "provider-with-azure-event-grid": {
            "azure_event_grid": {
                "topic_endpoint": "https://xxxxx.eventgrid.azure.net/api/events",
                "access_key": "key123"
            }
        },
        "provider-with-azure-service-bus": {
            "azure_service_bus": {
                "connection_string": "Endpoint=sb://xxxx.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxxxxxxxxxxxxxxxxxxxx",
                "queue_or_topic_name": "Queue"
            }
        }
    },
    "routes": [
        {
            "provider_id": "provider-with-kafka",
            "event_type_key_values_filter": {
                "context_key_value": [
                    {
                        "key": "email",
                        "value": "*"
                    },
                    {
                        "key": "captureLabel",
                        "value": "Person"
                    }
                ],
                "event_type": "indykite.audit.capture.upsert.node"
            },
            "stop_processing": true,
            "display_name": "Configuration Audit Events Kafka"
        },
        {
            "provider_id": "provider-with-azure-event-grid",
            "event_type_key_values_filter": {
                "event_type": "indykite.audit.ciq.execute"
            },
            "stop_processing": true,
            "display_name": "Configuration Audit Events Grid"
        },
        {
            "provider_id": "provider-with-azure-service-bus",
            "event_type_key_values_filter": {
                "event_type": "indykite.audit.config.*"
            },
            "stop_processing": true,
            "display_name": "Configuration Audit Events Bus"
        }
    ]
}

Multiple captureLabel entries:

{
  "key": "captureLabel",
  "value": "Person"
},
{
  "key": "captureLabel",
  "value": "Human"
}

If you add several captureLabel entries to the configuration, messages are only sent when the nodes ingested through the Capture API carry all of those labels. For the example above, the node needs the following attributes:

"type": "Person",
"tags": ["Human"]
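A sketch of that matching rule, assuming (as the example above suggests) that every configured captureLabel must appear on the node as either its type or one of its tags; the function name and node shape are ours:

```python
def node_matches_labels(node: dict, labels: list[str]) -> bool:
    """Check a captured node against several captureLabel values,
    assuming the semantics suggested by the example above: every
    label must appear as the node's type or one of its tags."""
    attributes = {node.get("type")} | set(node.get("tags", []))
    return all(label in attributes for label in labels)

node = {"type": "Person", "tags": ["Human"]}
print(node_matches_labels(node, ["Person", "Human"]))  # True
print(node_matches_labels(node, ["Person", "Robot"]))  # False
```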


Results (Confluent example):

Supported filters

| Method | Event Type | Key, Value (Node) | Key, Value (Property) |
| --- | --- | --- | --- |
| **Ingest Events** | | | |
| BatchUpsertNodes | indykite.audit.capture.batch.upsert.node | captureLabel, Car | Color, Green |
| BatchUpsertRelationships | indykite.audit.capture.batch.upsert.relationship | captureLabel, RENT | Status, Active |
| BatchDeleteNodes | indykite.audit.capture.batch.delete.node | captureLabel, Car | Color, Green |
| BatchDeleteRelationships | indykite.audit.capture.batch.delete.relationship | captureLabel, RENT | |
| BatchDeleteNodeProperties | indykite.audit.capture.batch.delete.node.property | | |
| BatchDeleteRelationshipProperties | indykite.audit.capture.delete.relationship.property | | |
| BatchDeleteNodeTags | indykite.audit.capture.batch.delete.node.tag | captureLabel, Car | |
| **Configuration Events** | | | |
| Atlas, Hermes | indykite.audit.config.create | | |
| | indykite.audit.config.read | | |
| | indykite.audit.config.update | | |
| | indykite.audit.config.delete | | |
| | indykite.audit.config.permission.assign | | |
| | indykite.audit.config.permission.revoke | | |
| **Token Events** | | | |
| TokenIntrospect | indykite.audit.credentials.token.introspected | | |
| **Authorization Events** | | | |
| Authorization | indykite.audit.authorization.isauthorized | | |
| | indykite.audit.authorization.whatauthorized | | |
| | indykite.audit.authorization.whoauthorized | | |
| **Ciq Events** | | | |
| Ciq | indykite.audit.ciq.execute | | |

Disabling TLS

In the context of a Kafka provider, "disable TLS" means configuring the client to connect to the Kafka brokers over an unencrypted connection (plain TCP) instead of Transport Layer Security (TLS, often used interchangeably with its predecessor SSL).

Implications of Disabling TLS

| Feature | When TLS is Enabled | When TLS is Disabled (Plaintext) |
| --- | --- | --- |
| Data Encryption | Encrypted. Data transmitted between the client and the Kafka broker is scrambled, protecting it from eavesdropping in transit. | Unencrypted. Data is sent in the clear, vulnerable to interception and inspection by anyone on the network. |
| Authentication | Can be configured for mutual authentication (two-way TLS) or server-only authentication (one-way). Ensures the client is talking to the real broker and/or the broker is talking to an authorized client. | None by default (unless another mechanism such as SASL is used). The identity of the broker and client is not verified with certificates. |
| Performance | Slightly lower due to the encryption/decryption overhead on both client and broker. | Slightly higher, as there is no encryption/decryption overhead. |
| Security | High (essential for production environments). | Low (suitable only for local development, testing, or tightly controlled private networks). |

TLS certificate check

"Skip TLS certificate check" in a Kafka provider configuration means the client (the Kafka producer, consumer, or tool) will not perform validation of the Kafka broker's TLS/SSL certificate during the connection handshake.

In a standard secure connection using Transport Layer Security (TLS/SSL):

  1. The Kafka client connects to the broker.
  2. The broker sends its digital certificate to the client.
  3. The client performs a certificate check (validation) to ensure two things:
    • The certificate was issued by a trusted Certificate Authority (CA) whose root certificate the client holds in its truststore.
    • The certificate's hostname matches the hostname the client is connecting to.
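As an illustration of what skipping the check means at the client level, here is how the two modes look with Python's standard ssl module (shown purely for illustration; in the IndyKite configuration this is just the `tls_skip_verify` flag):

```python
import ssl

# Default, secure mode: the certificate chain and the hostname are both verified.
secure = ssl.create_default_context()

# "Skip TLS certificate check": encryption stays on, but the client accepts
# any certificate, even expired or self-signed ones.
insecure = ssl.create_default_context()
insecure.check_hostname = False          # must be disabled before dropping verification
insecure.verify_mode = ssl.CERT_NONE

print(secure.verify_mode == ssl.CERT_REQUIRED, secure.check_hostname)  # True True
print(insecure.verify_mode == ssl.CERT_NONE, insecure.check_hostname)  # True False
```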

Implications of Skipping the Check

| Aspect | Behavior When Check is Enabled (Default, Secure) | Behavior When Check is Skipped |
| --- | --- | --- |
| Authentication | The client authenticates the server (broker), ensuring it is connecting to the intended, legitimate Kafka instance. | The client does not authenticate the server, accepting the certificate even if it is expired, self-signed, or issued by an untrusted source. |
| Encryption | Data is still encrypted in transit between the client and the broker. | Data is still encrypted in transit. |
| Security Risk | Low. Protected against Man-in-the-Middle (MITM) attacks. | High. The connection is vulnerable to MITM attacks: an attacker can intercept the traffic by presenting a fraudulent certificate, which the client will accept. |

You should therefore disable TLS or skip the TLS certificate check only in development environments or on trusted internal networks, never in production.