coder={name:'Emre Uludag',skills:['AWS', 'Azure', 'Confluent', 'Python', 'SQL', 'Scala', 'Java', 'Dbt', 'Spark', 'Flink', 'Kafka', 'Airflow', 'Terraform', 'Docker', 'Kubernetes'],location:'Munich',data_engineer_since:2020,number_of_years_it_experience:6,current_occupation:'Data Reply DE',title:'Data Engineering Consultant',hard_worker:True,motivated:True,problem_solver:True}
Who am I?
As a Data Architect and Entrepreneur, I focus on building scalable and high-impact software solutions. Beyond my technical expertise, I am passionate about sharing insights through my tech blog, where I explore topics at the intersection of data, cloud, and business. I am actively growing my audience and engaging with a broader community to exchange ideas and drive innovation. My vision is to build multiple SaaS products that heavily leverage AI and data.
(Jan 2024 - Present)
IT Consultant
Data Reply DE
(May 2023 - Nov 2023)
Senior Data Engineer
Scalable Capital
(Oct 2021 - Apr 2023)
Data Engineer
Adastra GmbH
(Mar 2021 - Sep 2021)
Data/Software Engineer
VNGRS
(Jan 2020 - Feb 2021)
Big Data Software Engineer
Insider
(Sep 2018 - Apr 2019)
Software Engineer
Goksal Aeronautics
Cloud Cost Efficiency Analytics
project={name:'Cloud Cost Efficiency Analytics',tools: ['DBT', 'AWS S3', 'AWS Athena', 'AWS Lambda', 'AWS SQS', 'AWS Quicksight', 'SQL', 'Python', 'Terraform', 'Grafana'],my_role:DataEngineeringTeamLead,description:'I led a team of 5 developers building the backend and dashboard for cloud cost efficiency data from both Azure and AWS. The application creates alerts for high-cost usage, detects anomalies, and displays potential cost savings in a dashboard',}
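The anomaly-detection step in a cost pipeline like this can be sketched as a simple trailing-window z-score check. The window size and threshold below are illustrative assumptions, not the project's actual values, and the real system ran on Athena query results rather than in-memory lists.

```python
from statistics import mean, stdev

def detect_cost_anomalies(daily_costs, window=7, z_threshold=3.0):
    """Flag days whose spend deviates strongly from the trailing window.

    daily_costs: list of (date_str, cost) tuples, oldest first.
    Returns the (date_str, cost) entries flagged as anomalous.
    """
    anomalies = []
    for i in range(window, len(daily_costs)):
        trailing = [c for _, c in daily_costs[i - window:i]]
        mu, sigma = mean(trailing), stdev(trailing)
        date, cost = daily_costs[i]
        # Guard against a flat window (stdev == 0) to avoid division by zero.
        if sigma > 0 and abs(cost - mu) / sigma > z_threshold:
            anomalies.append((date, cost))
    return anomalies
```

In a setup like the one described, a Lambda would run such a check over per-day cost aggregates and push flagged days to SQS for alerting.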
Formula1 Game Real Time Analytics
project={name:'Formula1 Game Real Time Analytics',tools: ['Kubernetes', 'Helm Charts', 'AWS EKS', 'Grafana', 'AWS Lambda', 'AWS Redshift', 'Confluent Kafka', 'Confluent Flink', 'InfluxDB', 'SQL', 'Python', 'Terraform'],my_role:DataPlatformEngineer,description:'A real-time data application that monitors car-related telemetry data using IoT, MQTT, and WebSocket technologies. The application leverages AWS for cloud infrastructure, Kubernetes for container orchestration, and Confluent-hosted Kafka for data streaming. It also features a historical data dashboard to track metrics such as lap times and best sector times',}
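The historical-dashboard metrics mentioned above (lap times, best sector times) reduce to a small aggregation over telemetry records. The record shape here is a hypothetical simplification of the game's telemetry payload:

```python
def best_sector_times(laps):
    """Compute the best (minimum) time per sector across all laps.

    laps: list of dicts like {"lap": 1, "sectors": [s1, s2, s3]}
    with sector times in seconds. Returns per-sector minimums.
    """
    if not laps:
        return []
    best = [float("inf")] * len(laps[0]["sectors"])
    for lap in laps:
        for i, t in enumerate(lap["sectors"]):
            best[i] = min(best[i], t)
    return best

def best_lap(laps):
    """Return the lap dict with the lowest total sector time."""
    return min(laps, key=lambda lap: sum(lap["sectors"]))
```

In the real application these aggregates were computed on the stream and persisted to InfluxDB for the Grafana dashboard.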
Data Fabric Project
project={name:'Data Fabric Project',tools: ['Python', 'Airflow', 'Neo4j', 'AWS', 'Terraform', 'Kubernetes', 'Starburst'],my_role:SeniorDataEngineer,description:'I worked on a data fabric project for a major German manufacturing giant. The goal was to consolidate all cybersecurity departmental data into a single, accessible platform for users. We utilized a range of technologies including Python, Airflow, Neo4j, AWS, Terraform, and Kubernetes. For the virtualization layer, we implemented Starburst as the data integration tool to ensure seamless data access. This project was a significant step towards enhancing data management and accessibility within the client's domain.',}
Regulatory Data Stack
project={name:'Regulatory Data Stack',tools: ['Python', 'SQL', 'AWS Step Functions', 'Dbt', 'Terraform', 'DynamoDB', 'AWS DMS', 'Amazon RDS', 'PostgreSQL', 'MySQL', 'AWS S3', 'AWS Athena', 'Metabase'],my_role:SeniorDataEngineer,description:'At Scalable Capital, I worked on a project called Regulatory Data Stack. The goal was to build robust data pipelines. We used AWS Step Functions for orchestration and dbt for data transformation. Our main development stack included Python and SQL. We implemented numerous Terraform modules and copied data from various departments\' DynamoDB tables to the raw layer in S3. Additionally, we used AWS DMS to copy data from Amazon RDS, PostgreSQL, and MySQL to our data lake. For querying and analyzing data, we used AWS Athena. For the frontend, we used Metabase.',}
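Copying tables into a raw layer as described usually follows a deterministic, partitioned S3 key scheme so that Athena can prune partitions instead of scanning the whole bucket. This helper is a hypothetical sketch of such a convention, not the project's actual layout:

```python
from datetime import date

def raw_layer_key(department, table, export_date, fmt="parquet"):
    """Build a partitioned S3 key for a raw-layer table export.

    Hive-style partitions (department=.../table=.../dt=...) let
    Athena restrict scans to the partitions a query actually needs.
    """
    return (
        f"raw/department={department}/table={table}/"
        f"dt={export_date.isoformat()}/data.{fmt}"
    )
```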
Automotive Data Analytics
project={name:'Automotive Data Analytics',tools: ['AWS Glue', 'S3', 'Redshift', 'Tableau', 'AWS Lambda', 'API Gateway', 'AWS Step Functions', 'Terraform'],my_role:DataEngineer,description:'At Adastra, I worked on a project for major automotive clients, Volkswagen and Audi. The goal was to process data from the S3 raw layer using AWS Glue, transforming it into various layers and ultimately loading it into Redshift. I then created dashboards using Tableau. Additionally, I developed a file uploader backend using AWS Lambda and API Gateway to bring data into the raw layer. I used AWS Step Functions for orchestration and implemented numerous Terraform modules, which I also contributed to on GitHub. This project significantly improved data processing and visualization capabilities for the clients.',}
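A file-uploader backend like the one mentioned above typically validates incoming files before landing them in the raw layer. This handler shape is a hypothetical sketch (extension and size limits are assumptions), not Adastra's actual code:

```python
ALLOWED_EXTENSIONS = {"csv", "parquet", "json"}
MAX_SIZE_BYTES = 100 * 1024 * 1024  # illustrative 100 MB limit

def validate_upload(filename, size_bytes):
    """Return (ok, reason) for a candidate raw-layer upload.

    In a Lambda behind API Gateway, a check like this would run
    before the file is written to the raw layer in S3.
    """
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False, f"unsupported file type: {ext or 'none'}"
    if size_bytes > MAX_SIZE_BYTES:
        return False, "file exceeds size limit"
    return True, "ok"
```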
Streaming Data Processing - Betting
project={name:'Streaming Data Processing - Betting',tools: ['Scala', 'AWS Kinesis Data Analytics', 'AWS Kinesis Data Firehose', 'Apache Flink', 'Kafka', 'AWS Athena', 'AWS DMS', 'DynamoDB/DynamoDB Streams', 'Terraform', 'GitLab CI/CD'],my_role:SoftwareDataEngineer,description:'At VNGRS, I worked on a streaming data transformation and anomaly detection project for an online betting client. The project utilized AWS Kinesis Data Analytics and Apache Flink for real-time processing, with DynamoDB managing intermediate states and DynamoDB Streams capturing real-time updates. Data sources included an on-premise Kafka cluster and AWS DMS with change data capture; the pipeline\'s sink was Kinesis Data Firehose. Terraform handled infrastructure provisioning, GitLab CI/CD ensured continuous delivery, and the primary programming language was Scala.',}
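The stateful anomaly detection described above (Flink keyed state in the real project, with DynamoDB holding intermediate state) can be illustrated with a plain-Python stand-in; the per-user threshold logic is a hypothetical simplification:

```python
from collections import defaultdict, deque

class BetAnomalyDetector:
    """Per-user sliding-window detector, a toy stand-in for Flink keyed state.

    Flags a bet when it exceeds `factor` times the user's recent average,
    keeping only the last `window` bets per user (like keyed, bounded state).
    """

    def __init__(self, window=5, factor=10.0):
        self.factor = factor
        self.history = defaultdict(lambda: deque(maxlen=window))

    def process(self, user_id, amount):
        """Return True if the bet is anomalous, then record it as state."""
        past = self.history[user_id]
        anomalous = bool(past) and amount > self.factor * (sum(past) / len(past))
        past.append(amount)
        return anomalous
```

In the Flink job, the same idea maps to a keyed process function whose state backend stores each user's recent history.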
Insider Product Feed ETL Pipeline
project={name:'Insider Product Feed ETL Pipeline',tools: ['Scala', 'Akka', 'AWS Lambda', 'AWS Kinesis', 'Elasticsearch', 'JavaScript', 'Python'],my_role:BigDataSoftwareEngineer,description:'At Insider, I worked on a Product Feed ETL pipeline where source data was transformed into semantic layers and fed into Elasticsearch for both the recommendation system and Product Feed API. The pipeline utilized AWS Lambda and AWS Kinesis as the data source. Additionally, I developed an API using Scala and the Akka framework to serve the source data.',}
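Transforming a raw product feed into a semantic layer, as in this pipeline, amounts to normalizing heterogeneous source fields into one fixed schema before indexing into Elasticsearch. The field names here are hypothetical, not Insider's actual schema:

```python
def to_semantic_product(raw):
    """Map a raw feed record onto a fixed semantic schema.

    Prices are normalized to float, required fields are enforced,
    and invalid records are rejected (None) instead of indexed.
    """
    required = ("id", "name", "price")
    if not all(raw.get(k) not in (None, "") for k in required):
        return None
    return {
        "product_id": str(raw["id"]),
        "title": raw["name"].strip(),
        "price": float(raw["price"]),
        "currency": raw.get("currency", "USD"),
        "in_stock": bool(raw.get("stock", 0)),
    }
```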
2021 - 2022
Master of Science in Computer Science
Technical University of Munich
2014 - 2019
Bachelor of Science in Computer Science
Koç University
2009 - 2014
High School Diploma/Abitur
Istanbul Erkek Lisesi (Gymnasium)