
Iceberg

Module iceberg

Support Status: Testing

Important Capabilities

| Capability | Status | Notes |
|------------|--------|-------|
| Data Profiling | ✅ | Optionally enabled via configuration. |
| Descriptions | ✅ | Enabled by default. |
| Detect Deleted Entities | ✅ | Enabled via stateful ingestion. |
| Domains | ❌ | Currently not supported. |
| Extract Ownership | ✅ | Optionally enabled via configuration by specifying which Iceberg table property holds user or group ownership. |
| Partition Support | ❌ | Currently not supported. |
| Platform Instance | ✅ | Optionally enabled via configuration; an Iceberg instance represents the data lake name where the table is stored. |

Integration Details

The DataHub Iceberg source plugin extracts metadata from Iceberg tables stored in a distributed or local file system. Typically, Iceberg tables are stored in a distributed file system like S3 or Azure Data Lake Storage (ADLS) and registered in a catalog. There are various catalog implementations: filesystem-based, RDBMS-based, and even REST-based catalogs. This source plugin relies on the Iceberg python_legacy library, whose catalog support is currently limited. A new version of the Iceberg Python library is in development and should fix this. Because of this limitation, the plugin will only ingest HadoopCatalog-based tables that have a version-hint.text metadata file.

Ingestion of tables happens in two steps:

  1. Discover Iceberg tables stored in the file system.
  2. Load each discovered table using the Iceberg python_legacy library.

The current implementation of the Iceberg source plugin will only discover tables stored in a local file system or in ADLS. Support for S3 could be added fairly easily.
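
To make the discovery step concrete, here is a minimal sketch of what it does on a local file system. It assumes the HadoopCatalog layout described above (a table folder containing a metadata/version-hint.text file); the function name and depth handling are illustrative, not the plugin's actual code:

```python
import os

def discover_iceberg_tables(root: str, max_path_depth: int = 2) -> list[str]:
    """Walk `root` and return folders that look like HadoopCatalog Iceberg tables."""
    tables: list[str] = []
    for dirpath, dirnames, _ in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        if depth > max_path_depth:
            dirnames.clear()  # folders deeper than max_path_depth are silently ignored
            continue
        # A folder is treated as an Iceberg table if it contains a
        # metadata/version-hint.text file (HadoopCatalog convention).
        if os.path.isfile(os.path.join(dirpath, "metadata", "version-hint.text")):
            tables.append(dirpath)
            dirnames.clear()  # no need to descend into a table's own folders
    return tables

# Example: print every table found under a warehouse root.
for table in discover_iceberg_tables("/data/warehouse"):
    print(table)
```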

CLI-based Ingestion

Install the Plugin

pip install 'acryl-datahub[iceberg]'

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: "iceberg"
  config:
    env: PROD
    adls:
      # Will be translated to https://{account_name}.dfs.core.windows.net
      account_name: my_adls_account
      # Can use sas_token or account_key
      sas_token: "${SAS_TOKEN}"
      # account_key: "${ACCOUNT_KEY}"
      container_name: warehouse
      base_path: iceberg
    platform_instance: my_iceberg_catalog
    table_pattern:
      allow:
        - marketing.*
    profiling:
      enabled: true

sink:
  # sink configs
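
Once the recipe is saved (e.g., as iceberg_recipe.yaml), it can be run with datahub ingest -c iceberg_recipe.yaml. The same ingestion can also be triggered programmatically; the following is a minimal sketch using DataHub's Python Pipeline API, where the localfs path and the DataHub server URL are placeholders:

```python
from datahub.ingestion.run.pipeline import Pipeline

# Programmatic equivalent of `datahub ingest -c recipe.yaml`,
# built from an inline config dict.
pipeline = Pipeline.create(
    {
        "source": {
            "type": "iceberg",
            "config": {
                "env": "PROD",
                # Assumption: crawl a local warehouse instead of ADLS.
                "localfs": "/data/warehouse",
                "user_ownership_property": "owner",
                "profiling": {"enabled": True},
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)
pipeline.run()
pipeline.raise_from_status()
```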


Config Details

Note that a . is used to denote nested fields in the YAML recipe.

| Field [Required] | Type | Description | Default |
|---|---|---|---|
| group_ownership_property [✅] | string | Iceberg table property to look for a CorpGroup owner. Can only hold a single group value. If the property has no value, no owner information will be emitted. | None |
| localfs [✅] | string | Local path to crawl for Iceberg tables. This is one of the filesystem types supported by this source; only one can be configured. | None |
| max_path_depth [✅] | integer | Maximum folder depth to crawl for Iceberg tables. Folders deeper than this value will be silently ignored. | 2 |
| platform_instance [✅] | string | The instance of the platform that all assets produced by this recipe belong to. | None |
| user_ownership_property [✅] | string | Iceberg table property to look for a CorpUser owner. Can only hold a single user value. If the property has no value, no owner information will be emitted. | owner |
| env [✅] | string | The environment that all assets produced by this connector belong to. | PROD |
| adls [✅] | AdlsSourceConfig | Azure Data Lake Storage to crawl for Iceberg tables. This is one of the filesystem types supported by this source; only one can be configured. | None |
| adls.account_key [❓ (required if adls is set)] | string | Azure storage account access key that can be used as a credential. An account key, a SAS token, or a client secret is required for authentication. | None |
| adls.account_name [❓ (required if adls is set)] | string | Name of the Azure storage account. See Microsoft's official documentation on how to create a storage account. | None |
| adls.base_path [❓ (required if adls is set)] | string | Base folder in hierarchical namespaces to start from. | / |
| adls.client_id [❓ (required if adls is set)] | string | Azure client (Application) ID, required when a client_secret is used as a credential. | None |
| adls.client_secret [❓ (required if adls is set)] | string | Azure client secret that can be used as a credential. An account key, a SAS token, or a client secret is required for authentication. | None |
| adls.container_name [❓ (required if adls is set)] | string | Azure storage account container name. | None |
| adls.sas_token [❓ (required if adls is set)] | string | Azure storage account Shared Access Signature (SAS) token that can be used as a credential. An account key, a SAS token, or a client secret is required for authentication. | None |
| adls.tenant_id [❓ (required if adls is set)] | string | Azure tenant (Directory) ID, required when a client_secret is used as a credential. | None |
| table_pattern [✅] | AllowDenyPattern | Regex patterns for tables to filter in ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
| table_pattern.allow [❓ (required if table_pattern is set)] | array(string) | | None |
| table_pattern.deny [❓ (required if table_pattern is set)] | array(string) | | None |
| table_pattern.ignoreCase [❓ (required if table_pattern is set)] | boolean | Whether to ignore case sensitivity during pattern matching. | True |
| profiling [✅] | IcebergProfilingConfig | | {'enabled': False, 'include_field_null_count': True, 'include_field_min_value': True, 'include_field_max_value': True} |
| profiling.enabled [❓ (required if profiling is set)] | boolean | Whether profiling should be done. | None |
| profiling.include_field_max_value [❓ (required if profiling is set)] | boolean | Whether to profile for the max value of numeric columns. | True |
| profiling.include_field_min_value [❓ (required if profiling is set)] | boolean | Whether to profile for the min value of numeric columns. | True |
| profiling.include_field_null_count [❓ (required if profiling is set)] | boolean | Whether to profile for the number of nulls for each column. | True |
| stateful_ingestion [✅] | StatefulStaleMetadataRemovalConfig | Iceberg Stateful Ingestion Config. | None |
| stateful_ingestion.enabled [❓ (required if stateful_ingestion is set)] | boolean | Whether stateful ingestion is enabled. | None |
| stateful_ingestion.ignore_new_state [❓ (required if stateful_ingestion is set)] | boolean | If set to True, ignores the current checkpoint state. | None |
| stateful_ingestion.ignore_old_state [❓ (required if stateful_ingestion is set)] | boolean | If set to True, ignores the previous checkpoint state. | None |
| stateful_ingestion.remove_stale_metadata [❓ (required if stateful_ingestion is set)] | boolean | Soft-deletes the entities present in the last successful run but missing in the current run, when stateful_ingestion is enabled. | True |
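
Both ownership options follow the same pattern: the plugin reads the configured table property and, if it holds a value, emits that value as the owner. Here is a hypothetical sketch of that lookup for the user case (the helper name is illustrative; urn:li:corpuser: is DataHub's standard CorpUser URN prefix):

```python
def extract_owner_urn(table_properties: dict[str, str],
                      user_ownership_property: str = "owner") -> str | None:
    """Return a DataHub CorpUser URN if the configured property is set, else None."""
    value = table_properties.get(user_ownership_property)
    # No value -> no ownership information is emitted.
    return f"urn:li:corpuser:{value}" if value else None

print(extract_owner_urn({"owner": "jdoe"}))  # urn:li:corpuser:jdoe
print(extract_owner_urn({}))                 # None
```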

Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

| Source Concept | DataHub Concept | Notes |
|---|---|---|
| iceberg | Data Platform | |
| Table | Dataset | Each Iceberg table maps to a Dataset named using the parent folders. If a table is stored under my/namespace/table, the dataset name will be my.namespace.table. If a Platform Instance is configured, it will be used as a prefix: <platform_instance>.my.namespace.table. See the sketch after this table. |
| Table property | User (a.k.a. CorpUser) | The value of a table property can be used as the name of a CorpUser owner. This table property name can be configured with the source option user_ownership_property. |
| Table property | CorpGroup | The value of a table property can be used as the name of a CorpGroup owner. This table property name can be configured with the source option group_ownership_property. |
| Table parent folders (excluding warehouse catalog location) | Container | Available in a future release. |
| Table schema | SchemaField | Maps to the fields defined within the Iceberg table schema definition. |
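
As a quick illustration of the naming rule above, here is a hypothetical helper (not the plugin's actual code) that derives the dataset name from a table path:

```python
def dataset_name(table_path: str, platform_instance: str | None = None) -> str:
    # Parent folders become dot-separated name parts; an optional
    # platform instance is prepended as a prefix.
    name = table_path.strip("/").replace("/", ".")
    return f"{platform_instance}.{name}" if platform_instance else name

assert dataset_name("my/namespace/table") == "my.namespace.table"
assert dataset_name("my/namespace/table", "my_iceberg_catalog") == (
    "my_iceberg_catalog.my.namespace.table"
)
```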


Code Coordinates

  • Class Name: datahub.ingestion.source.iceberg.iceberg.IcebergSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Iceberg, feel free to ping us on our Slack!