Note
This connector is subject to the Snowflake Connector Terms.
Snowflake Openflow Connector for Kafka¶
This topic describes the basic concepts of the Openflow Connector for Kafka and its limitations.
The Openflow Connector for Kafka reads data from Kafka topics and writes it into Snowflake tables using the Snowpipe Streaming High Performance architecture.
Use this connector if you’re looking to do the following:
Ingest real-time events from Apache Kafka into Snowflake for near real-time analytics
Ingest real-time events from Apache Kafka into Snowflake-managed Iceberg™ tables
Accelerate your ingestion even more by combining Openflow speed with the Interactive Tables feature
Perform Single Message Transforms (SMTs) to enrich or filter data before it lands in Snowflake
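As a sketch of the Single Message Transform idea above, the following hypothetical Python function filters and enriches a JSON record before it would land in Snowflake. The field names (`level`, `message`) and the filtering rule are assumptions for illustration only, not part of the connector.

```python
import json
from typing import Optional

def transform(record_json: str) -> Optional[str]:
    """Hypothetical Single Message Transform: drop debug events and
    add a derived field before the record lands in Snowflake."""
    record = json.loads(record_json)
    # Filtering: return None to discard records downstream tables don't need.
    if record.get("level") == "DEBUG":
        return None
    # Enrichment: derive a new field from existing values.
    record["message_length"] = len(record.get("message", ""))
    return json.dumps(record)
```

A transform like this runs per message, so filtered records never reach Snowflake and enriched fields arrive ready to query.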
Limitations¶
The connector doesn’t support schema evolution for Apache Iceberg™ tables.
Autoscaling isn’t supported. Set the minimum and maximum number of nodes to the same constant value for the Openflow runtime where the Openflow Connector for Kafka is deployed.
The Kafka cluster must be running version 0.10.0.0 or later. Prior versions of Kafka aren’t supported.
Using different authentication options, data types, or data manipulation¶
By default, the connector is configured to work with the JSON data type and the SASL_SSL authentication method. The connector can be modified and extended in many ways; see the dedicated sub-pages in the setup section for guidance on making the necessary changes.
Supported data types¶
The Openflow Connector for Kafka supports the following data types:
JSON (available by default in the connector)
Avro (extra configuration required)
Protobuf (extra configuration required)
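JSON works out of the box partly because it is self-describing and needs no schema registry, unlike Avro and Protobuf. The following minimal sketch shows a hypothetical event serialized the way a Kafka producer would send it as a JSON record value; the event fields are illustrative assumptions.

```python
import json

# Hypothetical event; the field names are assumptions for illustration.
event = {"order_id": 42, "status": "shipped", "ts": "2024-01-01T00:00:00Z"}

# A Kafka producer sends the record value as UTF-8 encoded JSON bytes.
value = json.dumps(event).encode("utf-8")

# On ingest, those bytes parse back into structured data without any schema.
parsed = json.loads(value)
assert parsed == event
```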
Supported authentication methods¶
The Openflow Connector for Kafka supports the following authentication mechanisms:
SASL with the following mechanisms:
PLAIN
SCRAM-SHA-256
SCRAM-SHA-512 (available by default in the connector)
OAUTHBEARER
SASL with AWS MSK IAM (extra configuration required via controller services)
mTLS (extra configuration required via controller services)
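For orientation, the default SASL_SSL with SCRAM-SHA-512 setup corresponds to standard Kafka client properties along these lines; this is a minimal sketch, and the username and password values are placeholders, not connector-specific settings.

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<kafka-user>" \
  password="<kafka-password>";
```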