SYSTEM_CONSOLE v2.4.0

Senior Cloud Architect & Technical Platform Leader

Cloud-native order management, data platforms, and resilience engineering for high-volume omnichannel retail.

Status: Open for critical architecture engagements

"Architecture, delivery, and operational leadership across modern and legacy environments. Focused on systems that operate under real business pressure."

Senior technical leader and hands-on cloud architect specializing in cloud-native backend platforms and data services. Strong focus on omnichannel retail, order management, integration, and operational resilience. Experienced in leading architecture across multiple teams, bridging legacy SAP systems with modern GCP-based microservices. Adept at vendor evaluation, stakeholder communication, and making high-stakes technical decisions that align with business goals.

Group Technical Leader – Shop / Order Management Services and Data

2023 — Present
Bauhaus AG

Leading technical strategy and architecture for the core retail engine, focusing on global scalability and reliability.

  • 0x Directed architecture for multi-market migration, ensuring seamless transition of order flows between legacy systems and cloud-native services.
  • 0x Established platform reliability standards and business continuity protocols for high-throughput retail operations.
  • 0x Led vendor evaluation and integration strategy for OMS/WMS ecosystems, balancing build vs. buy trade-offs.
  • 0x Defined engineering standards and CI/CD governance across multiple product teams to ensure consistent delivery quality.
  • 0x Mentored senior engineers and architects, fostering a culture of operational excellence and systems thinking.

Technical Leader – Order Management / Data Platform

2021 — 2023
Bauhaus AG

Architected and delivered the foundational cloud-native data and order processing infrastructure.

  • 0x Designed and implemented a low-latency CDC (Change Data Capture) pipeline for streaming data from on-prem SAP to GCP.
  • 0x Architected a Go-based microservices ecosystem for order management, achieving significant reductions in processing latency.
  • 0x Implemented observability-as-code using Datadog, providing real-time insights into complex distributed transactions.
  • 0x Managed privacy-aware data handling patterns to ensure GDPR compliance across all cloud data services.
  • 0x Supported the transformation from monolithic legacy architecture to a modern, event-driven platform.

Cloud Architecture

GCP (Cloud Run, GKE, Pub/Sub, BigQuery, Cloud SQL) · Terraform / Infrastructure as Code · Multi-region resilience · Serverless & Containerization

Backend & Platform Engineering

Go · Python · API Design (gRPC, REST) · CI/CD Governance (GitHub Actions) · Distributed Systems

Data & Streaming

Event-driven architecture · Dataflow / Apache Beam · Datastream · Real-time analytics · CDC patterns

Reliability & Observability

Datadog · SRE principles · Incident Diagnosis Tooling · Performance Budgeting · SLIs/SLOs

Leadership & Strategy

Technical Roadmapping · Vendor Due Diligence · Stakeholder Alignment · Team Building & Mentoring · Cross-team Governance

Two-speed Migration Architecture

Context

Legacy retail systems hindering market expansion and agility.

Action

Designed a strangler-fig-inspired architecture allowing legacy SAP and modern GCP services to coexist during a multi-year migration.

Result

Enabled zero-downtime market rollouts while incrementally decommissioning legacy components.

Stack
GCP · Go · Pub/Sub · SAP Integration

Low-latency Order Management

Context

High-volume order processing suffering from latency spikes during peak retail events.

Action

Rebuilt core order validation and routing logic in Go, utilizing concurrent processing and optimized data access patterns.

Result

Achieved 70% reduction in P99 latency, significantly improving customer experience during high-load periods.

Stack
Go · Cloud Run · Redis · Cloud SQL

On-prem to Cloud CDC Pipeline

Context

Fragmented data silos preventing real-time business intelligence and inventory visibility.

Action

Architected a streaming data pipeline using Google Cloud Datastream and Dataflow to sync on-prem databases with BigQuery.

Result

Reduced data lag from 24 hours to sub-minute, enabling real-time stock and order analytics.

Stack
Datastream · Dataflow · BigQuery · Oracle/SAP

Observability Standardization

Context

Lack of visibility across distributed teams leading to slow incident resolution and 'blame games'.

Action

Defined and rolled out a global observability framework using Datadog and Terraform, including standardized dashboards and alerting.

Result

Improved Mean Time to Recovery (MTTR) by 40% and established a common language for reliability across the organization.

Stack
Datadog · Terraform · GitHub Actions · SRE

$ cat deaas.md

DEaaS — Data Foundation for RAG, LLM and Agentic AI

DEaaS is the data platform I built to turn operational data into a governed, reusable, and cloud-native foundation. The goal was not just analytics. It was to create a platform that could support reliable downstream consumption across reporting, operational tooling, and newer LLM- and agent-driven workflows.

Platform Overview

The platform was designed to move data from operational and legacy systems into reusable cloud-native data services through structured ingestion, transformation, and standardised domain signals. It prioritised freshness, observability, controlled access, and downstream usability over brittle point-to-point integration patterns.

AI & Agentic Enablement

LLM and agentic systems only become useful when they can work against timely, trustworthy, and well-structured data. DEaaS provides that foundation by improving data availability, standardisation, and governance. This enables use cases such as retrieval-backed assistance, summarisation, operational decision support, and workflow-oriented agents without relying on fragile ad hoc integrations.

System Architecture

01
Sources

Operational systems, legacy platforms, business events

02
Ingestion & Processing

Streaming, CDC, transformation, validation

03
Platform Layer

Standardised domain signals, storage, governance, observability

04
Consumers

Analytics, operational tooling, LLM workflows, agentic processes

Technical Leadership & Execution

I designed and built the platform with a focus on data movement, platform structure, reliability, and downstream usability. My work covered architecture, implementation direction, integration patterns, operational guardrails, and the practical decisions needed to make the platform useful for both engineering teams and AI-enabled consumers.

Key Capabilities

  • + Governed operational data
  • + Reusable domain signals
  • + Cloud-native data services
  • + Observable pipelines
  • + Structured downstream access
  • + Foundation for LLM workflows
  • + Foundation for agentic processes
  • + Analytics and operational intelligence

  • Design for failure: Systems must survive real operational pressure.
  • Observability is a core feature, not an afterthought.
  • Incremental modernization beats reckless rewrites.
  • Standards and guardrails enable scale, not bureaucracy.
  • Architecture must map to business reality and operational constraints.

  • Translating complex technical ambiguity into actionable roadmaps.
  • Balancing delivery velocity with long-term reliability and technical debt.
  • Fostering autonomous teams through clear standards and shared goals.
  • Evidence-based decision making for vendor and platform selections.
  • Clear, executive-ready communication across all levels of the organization.
Location: Germany
Languages: Farsi (Native) · English (Fluent) · German (B1)
Uptime: 99.999%
PGP: 0x2A4F...B89E