Q: How can I split a variable-depth, slash-delimited path_column field into separate path_lvl_1, path_lvl_2, … columns in Kusto (Fabric) so that Power BI (in DirectQuery mode) can consume a fixed schema without requiring manual updates when the number of segments grows?
Details:
My raw table is named data_table and has this schema:
path_column can have an arbitrary number of “/” segments—today might be 3 levels, tomorrow 10 or 50.
We need Power BI (in DirectQuery) to see columns like:
where N grows automatically whenever any row’s path_column has more segments than before.
Approaches tried and limitations:
1. Split into a dynamic array (extend segments_array = split(path_column, "/")), then client-side "List → Record → Expand." This works in Import mode, but breaks in DirectQuery because the M steps (using List.Count, Record.FromList, Expand) cannot fold.
2. Split into a fixed number of columns (e.g., assume a max of 10 levels via path_lvl_1 = iff(array_length(parts) >= 1, parts[0], ""), …, path_lvl_10 = iff(array_length(parts) >= 10, parts[9], "")). This works in DirectQuery, but if a row ever has 11 segments, you lose data beyond level 10, and you must manually alter the function/table to add path_lvl_11.
3. mv-expand + evaluate pivot at query time for a truly dynamic set of columns. The KQL function returns a result with as many columns as needed (0..max index), but the schema is not stable, so Power BI (DirectQuery) cannot bind to it.
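For reference, the fixed-column variant (approach 2) is usually wrapped in a stored function so Power BI binds to a stable schema. A minimal sketch, assuming the source table is named data_table and capping at 5 levels (the function name and the cap are illustrative):

```kusto
.create-or-alter function SplitPathFixed() {
    data_table
    | extend parts = split(path_column, "/")
    // Indexing past the end of a dynamic array returns null,
    // and tostring(null) is "", so no iff()/array_length() guard is needed
    | extend path_lvl_1 = tostring(parts[0]),
             path_lvl_2 = tostring(parts[1]),
             path_lvl_3 = tostring(parts[2]),
             path_lvl_4 = tostring(parts[3]),
             path_lvl_5 = tostring(parts[4])
    | project-away parts
}
```

This keeps DirectQuery happy but, as noted above, still has the hard-cap problem: a sixth segment survives only inside path_column until the function is widened.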
What I need:
A way in Fabric/Kusto to produce a “wide” result that always has a fixed schema of path_lvl_1…path_lvl_N, automatically increasing N whenever deeper paths appear—without having to manually ALTER the table or function each time the maximum depth increases.
Ideally, a pure KQL solution (since DirectQuery won’t let me do client-side M transformations), or some Fabric feature I’m missing (update policies, etc.) that can “auto-add” new columns.
Why this matters:
We want the Fabric → Power BI pipeline to be maintenance-free. If tomorrow a device reports a deeper path, the next Power BI refresh should automatically see path_lvl_7 and path_lvl_8 without a manual “.alter table” or re-publishing the function.
In fact, I need these separate columns to build a hierarchy in one of my Power BI reports, which is why the dynamic schema has to be handled properly.
Alternatively, a solution at the ETL level would also work: for example, a function or script that runs automatically whenever the number of values in path_column grows, recreating the table with the new schema and an updated policy.
Has anyone in the Fabric community solved this? Is there a recommended pattern to handle truly variable-depth slash-delimited hierarchies in KQL so that downstream Power BI (DirectQuery) always sees a stable, “growing” set of path_lvl_X columns?
Got it 🙂
This KQL query does what you want. Notice the line with the pivot plugin: there you have the option to declare the schema you need, and those columns will be added even when no row has a value for them.
The pivot also forces you to do an aggregation, so it becomes a bit messy.
I hope this helps. The code below can be turned into a function if you need it:
let T = datatable(row:string, values:string)
[
    "row01", "col1/col2/col3",
    "row02", "col1/col2/col3/col4",
    "row03", "col1/col2",
];
T
// Expand each path into one row per segment, keeping the 0-based position
| mv-expand with_itemindex = LevelIndex Segment = split(values, "/") to typeof(string)
// Build the target column name from the position (levels are 1-based)
| extend LevelName = strcat("col", LevelIndex + 1)
| project-away LevelIndex
// Declaring the output schema keeps it stable: col5 and col6 are emitted
// (empty) even though no row is that deep yet
| evaluate pivot(LevelName, max(Segment)) : (row:string, values:string, col1:string, col2:string, col3:string, col4:string, col5:string, col6:string)
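If you want this callable from Power BI, the same pattern can be saved as a stored function. A sketch, assuming a real source table named data_table with the same row/values columns (the function name is illustrative; the declared schema on the pivot is what keeps the output stable for DirectQuery):

```kusto
.create-or-alter function SplitPathWide() {
    data_table
    | mv-expand with_itemindex = LevelIndex Segment = split(values, "/") to typeof(string)
    | extend LevelName = strcat("col", LevelIndex + 1)
    | project-away LevelIndex
    // The declared schema fixes the output columns for DirectQuery
    | evaluate pivot(LevelName, max(Segment)) : (row:string, values:string, col1:string, col2:string, col3:string, col4:string, col5:string, col6:string)
}
```

Power BI can then bind to SplitPathWide() as a source; when deeper paths appear, the declared schema (and therefore the function) still has to be widened by hand.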
Hi
I need some help to understand what you are trying to accomplish.
You are asking for a dynamic result set (a dynamic number of columns) based on the hierarchy column, while at the same time you can't have a variable number of columns because of the fixed metadata in Power BI.
Perhaps I'm misunderstanding your question - please help 🙂
With that said, I can help you create 100 columns in the hierarchy, so you have enough columns almost no matter how deep the hierarchy becomes in the future.
Then you will have a fixed number of columns for Power Query and can go from there.
Let me know what you think of the above approach, and whether you can help me understand what you are asking 🙂
Cheers
Brian
OK, let me clarify the ETL part first; then we'll get to the BI part.
We're ingesting live data from Event Hub into an Eventhouse KQL table. One of the columns contains values separated by '/', like:
047/Pass/fail/assure
The number of segments (separated by '/') is not fixed — it changes over time. I'd like to split this column dynamically into multiple columns (Col_1, Col_2, etc.) based on the number of values, and have the schema auto-adjust if more segments appear in future records. For example:
Can Eventhouse/KQL support dynamic schema creation or updates like this during ingestion?
Or is there a best practice for handling this kind of variable structure using something else?
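As far as I know, Eventhouse cannot grow a table's schema automatically during ingestion, so the usual pattern is an update policy that materializes the split into a generously wide, fixed-schema target table. A sketch with hypothetical table and function names, capped at 4 levels; each management command is run separately:

```kusto
// Raw table fed by the Event Hub connection (hypothetical name/schema)
.create table RawEvents (path_column: string)

// Wide target table with a fixed cap of 4 levels (widen as needed)
.create table WideEvents (path_column: string, Col_1: string, Col_2: string, Col_3: string, Col_4: string)

// Transformation applied at ingestion; out-of-range indexes yield ""
.create-or-alter function ExpandPath() {
    RawEvents
    | extend parts = split(path_column, "/")
    | project path_column,
              Col_1 = tostring(parts[0]),
              Col_2 = tostring(parts[1]),
              Col_3 = tostring(parts[2]),
              Col_4 = tostring(parts[3])
}

// Run ExpandPath() on every batch ingested into RawEvents
.alter table WideEvents policy update
@'[{"IsEnabled": true, "Source": "RawEvents", "Query": "ExpandPath()", "IsTransactional": false}]'
```

If a record ever arrives with a fifth segment, it is kept only inside path_column of WideEvents; widening still means altering the target table and re-creating the function by hand (or via a scripted management command).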