I'm a Data Engineer specializing in Azure Data Factory, Databricks (PySpark + Delta Lake), Microsoft Fabric, and cloud-based data integration.
I focus on building scalable ETL/ELT pipelines, metadata-driven ingestion frameworks, and hybrid on-prem → cloud migrations.
My experience includes real-world enterprise data integration projects:
designing ingestion frameworks, orchestrating pipelines, automating DDL, implementing CDC, and optimizing Delta Lake transformations.
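The CDC and incremental-load work mentioned above boils down to watermark tracking: only rows changed since the last successful load are ingested. A minimal sketch of that idea in plain Python (the `Watermark` and `load_incremental` names are illustrative, not taken from any actual project code):

```python
# Hypothetical sketch of watermark-based incremental loading, the core idea
# behind CDC-style pipelines. Rows are plain dicts standing in for source records.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Watermark:
    """Highest modified timestamp already ingested for a source table."""
    last_loaded: datetime

def load_incremental(rows, watermark):
    """Return only rows changed since the last load, plus the advanced watermark."""
    new_rows = [r for r in rows if r["modified_at"] > watermark.last_loaded]
    if new_rows:
        watermark = Watermark(max(r["modified_at"] for r in new_rows))
    return new_rows, watermark

# Usage: only the second row is newer than the stored watermark.
rows = [
    {"id": 1, "modified_at": datetime(2025, 1, 1)},
    {"id": 2, "modified_at": datetime(2025, 2, 1)},
]
wm = Watermark(datetime(2025, 1, 15))
delta, wm = load_incremental(rows, wm)
```

In a real pipeline the watermark lives in a control table and the filter is pushed down to the source query; the loop body here just shows the contract.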
- Cloud Data Engineering (Azure / Microsoft Fabric / ADLS Gen2)
- ETL/ELT Architecture & Data Integration Pipelines
- Apache Spark (PySpark) & Databricks Delta Lake
- Metadata-driven automation & orchestration
- Hybrid data flows: on-premises → cloud migrations
- SQL engineering: performance tuning, modeling, DWH concepts
- DevOps for Data: CI/CD, automation, scripting (Python & Bash)
Role: Data Integration Engineer
A real-world enterprise solution developed in a cross-functional team.
Highlights:
- Dynamic, metadata-driven ingestion for multiple systems
- Automated DDL execution for external Delta tables
- Delta Lake CDC, schema validation, incremental loads
- Multi-layer architecture: Raw → Bronze → Silver
- Control tables, audit logs, structured notifications
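The metadata-driven pattern from the highlights above can be sketched in a few lines: a control table lists the sources, and one generic loop turns each entry into a load task. The schema and helper here are a hypothetical simplification of the pattern, not the project's actual framework:

```python
# Illustrative control table: each row describes one source to ingest.
control_table = [
    {"source": "erp.orders",   "target": "bronze/orders",   "mode": "incremental"},
    {"source": "crm.accounts", "target": "bronze/accounts", "mode": "full"},
]

def build_load_plan(entries):
    """Expand control-table entries into concrete, uniform load tasks."""
    plan = []
    for e in entries:
        plan.append({
            "task": f"copy {e['source']} -> {e['target']}",
            "incremental": e["mode"] == "incremental",
        })
    return plan

plan = build_load_plan(control_table)
```

Adding a new source system then means adding a row to the control table, not writing a new pipeline.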
➡️ Repository: https://github.com/Kavoondev/Data-Engineering-Portfolio
➡️ Documentation & Architecture: https://databylex.github.io/
- DP-600: Microsoft Fabric Analytics Engineer β November 21, 2025
- Databricks Certified Data Engineer Associate β October 13, 2025
- Microsoft Azure Data Fundamentals (DP-900) β September 20, 2025
- Microsoft Azure Fundamentals (AZ-900) β September 28, 2025
- Apache Airflow 3 Fundamentals β July 5, 2025
- dbt Fundamentals β July 14, 2025
- Google Data Analytics Professional Certificate β November 13, 2024
- IBM Data Analyst Professional Certificate β November 9, 2024
- Advancing streaming data pipelines with Databricks Unity Catalog and Microsoft Fabric for AI-integrated analytics.
- Enhancing multi-cloud ELT solutions using OCI alongside Azure.
- Deepening expertise in on-premises to cloud database migrations and Data Integration Engineering for hybrid cloud environments, including tools like Azure Database Migration Service and integration patterns for legacy systems.
Keywords: Data Engineer, ETL/ELT Developer, Azure Data Factory, Databricks, Apache Spark, SQL Optimization, dbt Transformations, Apache Airflow Orchestration, Data Pipelines, Cloud Data Engineering, Python PySpark, Data Integration, On-Premises to Cloud Migration, Remote Data Engineer Ukraine.
- LinkedIn: linkedin.com/in/yourprofile
- Medium: medium.com/@yourprofile
- Email: [email protected]
Open to remote opportunities in Data Engineering, ETL Development, Cloud Migration, and Data Integration. Based in Ukraine. Let's build scalable data solutions together!
I enjoy transforming messy operational data into clean, scalable pipelines, and optimizing them until they run like a perfectly tuned engine.