lakehouse.databricks.common.write_file_to_datalake

lakehouse.databricks.common.write_file_to_datalake(spark, data, mounting_point, file_path, file_name, file_type)

Writes a file to the data lake at the given mount point and path.

Parameters

spark : spark context

Spark context object passed from the calling Spark instance.

data : string

content of the file to be written -> 'Write this sentence in myfile.csv'

mounting_point : string

datalake mount point where the content will be written to file(s) -> '/mnt/datalake'

file_path : string

file path from the root of the container -> '01-bronze/ga4/263588751/example_ga4_metrics_report'

file_name : string

name of the file, without extension -> 'myfile'

file_type : string

type/extension of the output file -> 'csv'
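
The parameters above can be illustrated with a minimal local sketch. This is not the library's implementation: it assumes the function simply joins the mount point, file path, file name, and extension into a target path and writes the string content there, and it omits the spark argument since no Spark cluster is needed to show the path logic. The function name with the `_sketch` suffix is hypothetical.

```python
import os


def write_file_to_datalake_sketch(data, mounting_point, file_path, file_name, file_type):
    # Hypothetical sketch: the real lakehouse.databricks.common.write_file_to_datalake
    # also takes a Spark context; this local stand-in only demonstrates how the
    # mount point, path, name, and type combine into the output location.
    target_dir = os.path.join(mounting_point, file_path)
    os.makedirs(target_dir, exist_ok=True)
    target = os.path.join(target_dir, f"{file_name}.{file_type}")
    with open(target, "w") as fh:
        fh.write(data)
    return target
```

Called with the documented example values, the real function would be invoked as `write_file_to_datalake(spark, 'Write this sentence in myfile.csv', '/mnt/datalake', '01-bronze/ga4/263588751/example_ga4_metrics_report', 'myfile', 'csv')`, producing `myfile.csv` under that path.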