
Hello,
This is Emre Uludag. I'm an experienced Data Architect and solo entrepreneur building internet SaaS products.

Resume

uludag.emre.tr@gmail.com

{
name:'Emre Uludag',
skills:['AWS', 'Azure', 'Confluent', 'Python', 'SQL', 'Scala', 'Java', 'Dbt', 'Spark', 'Flink', 'Kafka', 'Airflow', 'Terraform', 'Docker', 'Kubernetes'],
location:'Munich',
data_engineer_since:2020,
number_of_years_it_experience:6,
current_occupation:'DataReplyDE',
title:'DataEngineeringConsultant',
hard_worker:True,
motivated:True,
problem_solver:True
}

Who am I?

As a Data Architect and Entrepreneur, I focus on building scalable and high-impact software solutions. Beyond my technical expertise, I am passionate about sharing insights through my tech blog, where I explore topics at the intersection of data, cloud, and business. I am actively growing my audience and engaging with a broader community to exchange ideas and drive innovation. My vision is to build multiple SaaS products that heavily leverage AI and data.

Experiences

(Jan 2024 - Present)


IT Consultant

Data Reply DE


(May 2023 - November 2023)


Senior Data Engineer

Scalable Capital


(Oct 2021 - April 2023)


Data Engineer

Adastra GmbH


(Mar 2021 - Sep 2021)


Data/Software Engineer

VNGRS


(Jan 2020 - Feb 2021)


Big Data Software Engineer

Insider


(Sep 2018 - Apr 2019)


Software Engineer

Goksal Aeronautics

Skills
References
Projects

Cloud Cost Efficiency Analytics

{
name:'Cloud Cost Efficiency Analytics',
tools: ['DBT', 'AWS S3', 'AWS Athena', 'AWS Lambda', 'AWS SQS', 'AWS Quicksight', 'SQL', 'Python', 'Terraform', 'Grafana'],
my_role:'DataEngineeringTeamLead',
description:'My team of five developers and I built the backend and dashboard for cloud cost efficiency data from both Azure and AWS: the application creates alerts for high-cost usage, detects anomalies, and surfaces potential cost savings in a dashboard',
}
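The anomaly-detection side of this project can be illustrated with a minimal sketch. This is not the production Lambda code; it assumes a simple trailing-window z-score check over daily cost figures, with all names illustrative:

```python
from statistics import mean, stdev

def detect_cost_spikes(daily_costs, window=7, z_threshold=3.0):
    """Flag days whose cost deviates strongly from the trailing window.

    daily_costs: list of (date, cost) tuples, ordered by date.
    Returns the (date, cost) entries considered anomalous.
    """
    alerts = []
    for i in range(window, len(daily_costs)):
        history = [c for _, c in daily_costs[i - window:i]]
        mu, sigma = mean(history), stdev(history)
        date, cost = daily_costs[i]
        # Guard against a flat history (stdev == 0): any change counts.
        if sigma == 0:
            if cost != mu:
                alerts.append((date, cost))
        elif (cost - mu) / sigma > z_threshold:
            alerts.append((date, cost))
    return alerts
```

A week of roughly flat spend followed by a 5x jump would flag only the final day; in the real pipeline such a flag would be pushed to SQS and surfaced in the dashboard.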

Formula1 Game Real Time Analytics

{
name:'Formula1 Game Real Time Analytics',
tools: ['Kubernetes', 'Helm Charts', 'AWS EKS', 'Grafana', 'AWS Lambda', 'AWS Redshift', 'Confluent Kafka', 'Confluent Flink', 'InfluxDB', 'SQL', 'Python', 'Terraform'],
my_role:'DataPlatformEngineer',
description:'A real-time data application that monitors car-related telemetry data using IoT, MQTT, and WebSocket technologies. The application leverages AWS for cloud infrastructure, Kubernetes for container orchestration, and Confluent-hosted Kafka for data streaming. It also features a historical data dashboard to track metrics such as lap times and best sector times',
}
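The "best sector times" metric the dashboard tracks boils down to a keyed minimum over the telemetry stream. A minimal pure-Python sketch of that reduction (the real job runs on Confluent Flink; record shape and field names here are assumptions):

```python
def best_sector_times(telemetry):
    """Reduce a stream of sector records to the best time per sector.

    telemetry: iterable of dicts like {'lap': 2, 'sector': 1, 'time_s': 30.8}.
    Returns {sector: best_time_s}.
    """
    best = {}
    for rec in telemetry:
        sector, t = rec["sector"], rec["time_s"]
        # Keep only the fastest time seen so far for each sector.
        if sector not in best or t < best[sector]:
            best[sector] = t
    return best
```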

Data Fabric Project

{
name:'Data Fabric Project',
tools: ['Python', 'Airflow', 'Neo4j', 'AWS', 'Terraform', 'Kubernetes', 'Starburst'],
my_role:'SeniorDataEngineer',
description:'I worked on an exciting data fabric project for a major German manufacturing giant. The goal of the project was to consolidate all cybersecurity departmental data into a single, accessible platform for users. We utilized a range of technologies including Python, Airflow, Neo4j, AWS, Terraform, and Kubernetes. For the virtualization layer, we implemented a powerful data integration tool to ensure seamless data access. This project was a significant step towards enhancing data management and accessibility within the client\'s domain.',
}

Regulatory Data Stack

{
name:'Regulatory Data Stack',
tools: ['Python', 'SQL', 'AWS Step Functions', 'Dbt', 'Terraform', 'DynamoDB', 'AWS DMS', 'Amazon RDS', 'PostgreSQL', 'MySQL', 'AWS S3', 'AWS Athena', 'Metabase'],
my_role:'SeniorDataEngineer',
description:'At Scalable Capital, I worked on a project called Regulatory Data Stack. The goal was to build robust data pipelines. We used AWS Step Functions for orchestration and dbt for data transformation. Our main development stack included Python and SQL. We implemented numerous Terraform modules and utilized DynamoDB to copy data from various tables belonging to different departments to the raw layer in S3. Additionally, we used AWS DMS to copy data from Amazon RDS, PostgreSQL, and MySQL to our data lake. For querying and analyzing data, we used AWS Athena. For the frontend, we used Metabase.',
}
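A raw layer that Athena queries efficiently usually relies on a Hive-style partitioned key layout in S3. A small sketch of such a key builder, purely illustrative: the actual stack's naming conventions are not shown here, and `raw/`, `dt=`, and the filename are assumptions:

```python
from datetime import date

def raw_layer_key(source, table, ingest_date, filename="part-0000.parquet"):
    """Build a Hive-style partitioned S3 key for a raw data-lake layer.

    The dt= partition lets Athena prune by date instead of scanning
    every object under the table prefix.
    """
    return f"raw/{source}/{table}/dt={ingest_date.isoformat()}/{filename}"
```

For example, a DynamoDB export for an `orders` table ingested on 2023-05-01 would land under `raw/dynamodb/orders/dt=2023-05-01/`.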

Automotive Data Analytics

{
name:'Automotive Data Analytics',
tools: ['AWS Glue', 'S3', 'Redshift', 'Tableau', 'AWS Lambda', 'API Gateway', 'AWS Step Functions', 'Terraform'],
my_role:'DataEngineer',
description:'At Adastra, I worked on a project for major automotive clients, Volkswagen and Audi. The goal was to process data from the S3 raw layer using AWS Glue, transforming it into various layers and ultimately loading it into Redshift. I then created dashboards using Tableau. Additionally, I developed a file uploader backend using AWS Lambda and API Gateway to bring data into the raw layer. I used AWS Step Functions for orchestration and implemented numerous Terraform modules, which I also contributed to on GitHub. This project significantly improved data processing and visualization capabilities for the clients.',
}
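The raw-to-curated step in a pipeline like this is essentially a cleansing transform: drop incomplete rows, normalize types, and standardize values before loading downstream. A minimal sketch of that shape; the field names and rules are invented for illustration, not the actual Glue job:

```python
def curate_vehicle_records(raw_rows):
    """Normalize raw rows into a curated-layer schema.

    Rows missing required fields are skipped (they remain only in the
    raw layer); surviving rows get typed and standardized values.
    """
    curated = []
    for row in raw_rows:
        if not row.get("vin") or row.get("mileage_km") is None:
            continue  # incomplete row: not promoted to the curated layer
        curated.append({
            "vin": row["vin"].upper(),
            "mileage_km": float(row["mileage_km"]),
            "model": row.get("model", "unknown"),
        })
    return curated
```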

Streaming Data Processing - Betting

{
name:'Streaming Data Processing - Betting',
tools: ['Scala', 'AWS Kinesis Data Analytics', 'AWS Kinesis Data Firehose', 'Apache Flink', 'Kafka', 'AWS Athena', 'AWS DMS', 'DynamoDB/DynamoDB Streams', 'Terraform', 'GitLab CI/CD'],
my_role:'SoftwareDataEngineer',
description:'At VNGRS, I worked on a streaming data transformation and anomaly detection project for an online betting client. The project utilized AWS Kinesis Data Analytics and Apache Flink for real-time processing, with DynamoDB managing intermediate states and DynamoDB Streams capturing real-time updates. Data sources included an on-premise Kafka cluster and AWS DMS with change data capture, and the pipeline\'s sink was Kinesis Data Firehose. Terraform handled infrastructure provisioning, GitLab CI/CD ensured continuous delivery, and the primary programming language was Scala.',
}
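The core pattern behind stream anomaly detection here is a keyed tumbling-window aggregation, which Flink provides natively. A pure-Python sketch of what such a window computes (the production job was Scala/Flink; event shape, key, and window size are assumptions):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    """Count events per (user, window) as a keyed tumbling window would.

    events: iterable of (user_id, epoch_seconds) tuples.
    Returns {(user_id, window_start): count}. An unusually high count
    within one window is the kind of signal an anomaly job alerts on.
    """
    counts = defaultdict(int)
    for user, ts in events:
        # Align each timestamp to the start of its window.
        window_start = ts - (ts % window_s)
        counts[(user, window_start)] += 1
    return dict(counts)
```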

Insider Product Feed ETL Pipeline

{
name:'Insider Product Feed ETL Pipeline',
tools: ['Scala', 'Akka', 'AWS Lambda', 'AWS Kinesis', 'Elasticsearch', 'JavaScript', 'Python'],
my_role:'BigDataSoftwareEngineer',
description:'At Insider, I worked on a Product Feed ETL pipeline where source data was transformed into semantic layers and fed into Elasticsearch for both the recommendation system and Product Feed API. The pipeline utilized AWS Lambda and AWS Kinesis as the data source. Additionally, I developed an API using Scala and the Akka framework to serve the source data.',
}
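The transform-to-semantic-layer step in a feed pipeline like this maps heterogeneous product records onto one consistent document shape before indexing. A minimal sketch; the field names and coercions are illustrative, not Insider's actual schema:

```python
def to_feed_document(product):
    """Map a raw product record to a semantic-layer document suitable
    for indexing into a search engine such as Elasticsearch."""
    return {
        "id": str(product["id"]),                       # IDs normalized to strings
        "title": product["name"].strip(),               # trim stray whitespace
        "price": round(float(product["price"]), 2),     # coerce and round currency
        "in_stock": int(product.get("stock", 0)) > 0,   # boolean availability flag
    }
```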
Education

2021 - 2022


Master of Science in Computer Science

Technical University of Munich


2014 - 2019


Bachelor of Computer Science

Koc University


2009 - 2014


High School Diploma/Abitur

Istanbul Erkek Lisesi (Gymnasium)

Schwabing West, Munich, Germany