Data infrastructure engineered for real-world conditions—not just the happy path. I specialize in building systems that handle failures gracefully, recover automatically, and provide the observability needed to debug issues when they inevitably occur.
I build data systems for environments where failure has real consequences:
Systems that recover automatically, not manually
Data quality tied to actual business impact
Cost right-sizing and performance optimization from day one
The difference: I've lost real money to bad pipelines. Now I build systems so you don't have to.
Ready to build data systems that work when business decisions depend on them?
Data Engineer with a non-traditional path that makes me better at the job.
I spent four years in construction project management learning how systems fail under pressure. Four years as a quantitative trader where bad data meant real money lost. Now three years building production data infrastructure where those lessons matter every day.
The pattern is clear: I've always worked where reliability isn't optional and data drives decisions. Construction taught me to design for failure modes. Trading taught me that data quality is non-negotiable. Data engineering is where both disciplines converge.
I specialize in high-availability systems, real-time pipelines, and cost-conscious architecture—because I've seen what happens when any of those fail.
Currently seeking full-time Data Engineering roles where complex data challenges need someone who thinks like an engineer, plans like a project manager, and measures impact like a trader.
I also take on select consulting engagements helping startups build data foundations that won't collapse at scale.

My data engineering expertise comes from 10+ years across high-stakes environments. Each role taught me critical skills I now apply to building production data systems.
Each career phase developed specific skills that make me a better data engineer today
Managing construction projects taught me to design for failure modes and scale. I now apply this to data architecture: planning for 3x growth, calculating resource constraints, and building systems that don't collapse under load.
Built algorithmic trading systems where bad data meant real money lost. This taught me to build data pipelines with obsessive data quality checks, sub-second latency requirements, and automatic failover.
Currently building production data systems: SEC financial parser (16.5 MB/s throughput), ETL pipelines, data quality frameworks. Combining construction discipline with trading urgency to deliver reliable data infrastructure.
Bottom line: 10+ years of experience building systems where failure isn't an option. Now applying that to data engineering.
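To make "obsessive data quality checks" concrete, here is a minimal illustrative sketch of the kind of batch-level quality gate I build into pipelines. All names and fields are hypothetical, not code from any specific project: the point is failing fast with a clear report instead of silently loading bad data.

```python
# Illustrative sketch: validate a batch of records before loading,
# and reject the whole batch with an error report if anything fails.
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    total: int = 0
    errors: list = field(default_factory=list)

    @property
    def passed(self) -> bool:
        return not self.errors

def check_batch(records, required_fields=("symbol", "price", "ts")):
    """Flag records with missing fields or non-positive prices."""
    report = QualityReport(total=len(records))
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            report.errors.append(f"record {i}: missing {missing}")
        elif rec["price"] <= 0:
            report.errors.append(f"record {i}: non-positive price {rec['price']}")
    return report

good = [{"symbol": "AAPL", "price": 187.2, "ts": 1700000000}]
bad = good + [{"symbol": "AAPL", "price": -1, "ts": 1700000060}]

assert check_batch(good).passed
assert not check_batch(bad).passed
```

Real pipelines layer many more checks (schema drift, freshness, distribution shifts), but the principle is the same: every batch either passes explicit gates or fails loudly with enough context to debug.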
Get battle-tested tools and templates that have saved companies $100K+ in development costs
Complete 47-point checklist to ensure your data pipelines are production-ready
Estimate the ROI of your data engineering investments in minutes
5 proven architecture templates for different use cases and scales
See how much you could save with optimized data infrastructure
* Results are estimates based on typical improvements seen in similar projects. Actual results may vary depending on your specific infrastructure and requirements.
Let's discuss your specific needs and create a custom solution
Most projects completed in 6-10 weeks
High-availability, reliable systems
Post-launch support and documentation
Let's discuss how I can help you build scalable data systems that drive real business value.
Want to learn more about my services?