KTL Blog

Two Days at FabCon Changed How I Think About Medallion Architecture 

Written By Barry Crowell

I spent two days at FabCon 2026 in Atlanta sitting in back-to-back medallion architecture workshops. Cold Atlanta weather, but solid workshops. 

By day two, something clicked. I wasn’t learning about Bronze, Silver, and Gold as some immutable law. I was learning they’re a pattern you adapt to your actual data. 

Here’s the thing: most of us come to medallion architecture with this mental image. Bronze gets the raw data. Silver cleans it. Gold serves it polished and ready. Rigid. Three layers. Done. 

Except that’s not how real data works. 

The Flexibility Nobody Talks About 

The first workshop made this clear: Bronze can be Parquet files or Delta tables. Silver and Gold can both be Lakehouse, or you can mix: Lakehouse for Bronze and Silver, Data Warehouse for Gold. There's no single way.

What this actually means is you're not building a fixed pipeline; you're building a framework and adapting it to what your data needs.

I come from a SQL Server background, and T-SQL is my specialty. So when the second workshop focused on designing the Gold layer for performance and security, it hit differently. I could lean on everything I already knew about building views, managing schemas, and designing query patterns. I didn't have to learn a whole new medallion-specific syntax; I could just apply the skills I already had.
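The workshop itself isn't quoted here, but the idea transfers directly. Here's a minimal sketch using Python's built-in sqlite3 as a stand-in for the warehouse: a Gold-layer object built as a plain SQL view over a Silver table. The table and column names are hypothetical, and sqlite's dialect differs from T-SQL, but the pattern (aggregate in a view, copy no data) is the same one a SQL Server background prepares you for.

```python
import sqlite3

# In-memory database stands in for the warehouse; all names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE silver_sales (
        order_id INTEGER,
        region   TEXT,
        amount   REAL
    )
""")
conn.executemany(
    "INSERT INTO silver_sales VALUES (?, ?, ?)",
    [(1, "East", 100.0), (2, "East", 50.0), (3, "West", 75.0)],
)

# Gold layer as a view: aggregated and query-ready, with no data copied.
conn.execute("""
    CREATE VIEW gold_sales_by_region AS
    SELECT region, SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM silver_sales
    GROUP BY region
""")

for row in conn.execute("SELECT * FROM gold_sales_by_region ORDER BY region"):
    print(row)
# ('East', 150.0, 2)
# ('West', 75.0, 1)
```

In a real Lakehouse or Warehouse the view would sit over Delta tables, but the design decision (what to pre-aggregate, what to expose, what to secure) is ordinary SQL thinking.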

That’s what matters here, not medallion as a rigid template, but as a framework that lets you use what you’re already good at. 

Where the Real Performance Gain Lives 

Here’s what actually changed how I think about table architecture: partitioning. 

Partitioned Delta tables in a Lakehouse can cut your incremental refresh time by 70 to 90 percent. 

Think about it.
45 minutes without partitioning.
Five minutes or less with the right approach.

Not theoretical. Just the result of partitioning your tables based on real query patterns.

The problem is most people talk about medallion architecture and forget to mention partitioning. They focus on the layers (Bronze, Silver, Gold) and miss the thing that actually multiplies your performance: how you structure the data inside those layers.

Partitioning isn’t magical. You partition on the columns that actually matter for your access patterns. Usually it’s time-based: year, month, day. Sometimes it’s a business dimension like region or customer segment. When you partition correctly, incremental refreshes only touch the data that’s actually changed, not the whole table.
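To make that mechanic concrete, here's a small Python sketch of the pruning logic (not Fabric or Delta code, just the idea): rows are stored per partition key, and an incremental refresh rewrites only the partitions that appear in the new batch.

```python
from collections import defaultdict

def partition_key(row):
    # Time-based partitioning: (year, month) pulled from an ISO date string.
    return (row["date"][:4], row["date"][5:7])

def write_partitioned(store, rows):
    """Group rows by partition key and rewrite only those partitions."""
    batches = defaultdict(list)
    for row in rows:
        batches[partition_key(row)].append(row)
    for key, batch in batches.items():
        store[key] = store.get(key, []) + batch  # touch only changed partitions
    return sorted(batches)                       # partitions actually written

store = {}
history = [{"date": f"2025-{m:02d}-01", "amount": 10} for m in range(1, 13)]
write_partitioned(store, history)                # initial load: 12 partitions

new_rows = [{"date": "2025-12-15", "amount": 25}]
touched = write_partitioned(store, new_rows)     # incremental refresh
print(f"refresh touched {len(touched)} of {len(store)} partitions")
# refresh touched 1 of 12 partitions
```

That one-of-twelve ratio is the whole story behind a 45-minute refresh dropping to a few minutes: the engine skips everything outside the changed partitions.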

And yes, you can over-partition. Too many small files, and the metadata overhead kills the benefit. But the real issue for most people isn’t over-partitioning; it’s not partitioning at all.
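The tradeoff is easy to see with made-up numbers. This sketch takes one year of daily rows (a deliberately small stand-in dataset) and compares the partition count at monthly versus daily granularity:

```python
from datetime import date, timedelta

# One row per day for a year (stand-in dataset; real tables are far larger).
days = [date(2025, 1, 1) + timedelta(n) for n in range(365)]

by_month = {(d.year, d.month) for d in days}
by_day = {(d.year, d.month, d.day) for d in days}

print(len(by_month), "partitions by month")  # 12
print(len(by_day), "partitions by day")      # 365

# At this volume, daily partitions mean 365 files with one row each:
# all metadata overhead, no pruning benefit. Match granularity to volume.
```

The rule of thumb from the workshops: pick the coarsest granularity that still lets your incremental refresh skip the unchanged data.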

Medallion Isn’t for Every Situation 

Here’s what I learned that matters most: medallion isn’t always the answer. 

If your data is small and you’re only refreshing once a day, you may not need all three layers. If the transformation from raw to analytics is simple, you might skip Silver altogether or push those transformations upstream before the data even hits Bronze.

What changed for me was realizing the architecture serves your data’s needs, not the other way around. Your most expensive refresh? Partition it. Your Gold layer queries? Build them using whatever your team is strongest in, whether that’s T-SQL, Spark SQL, or Python notebooks.

You’re not following a blueprint; you’re focused on making refreshes faster, data cleaner, and insights easier to access.

That’s what medallion actually means. 

Start with your most expensive refresh. Partition it. Measure the impact.
Then decide if a full medallion architecture is worth the added complexity for what you’re actually trying to accomplish.

The framework is there. Now it’s up to you to adapt it to your data.
