Group Technical Leader – Shop / Order Management Services and Data
2023 — Present
Leading technical strategy and architecture for the core retail engine, focusing on global scalability and reliability.
- Directed architecture for multi-market migration, ensuring seamless transition of order flows between legacy systems and cloud-native services.
- Established platform reliability standards and business continuity protocols for high-throughput retail operations.
- Led vendor evaluation and integration strategy for OMS/WMS ecosystems, balancing build vs. buy trade-offs.
- Defined engineering standards and CI/CD governance across multiple product teams to ensure consistent delivery quality.
- Mentored senior engineers and architects, fostering a culture of operational excellence and systems thinking.
Technical Leader – Order Management / Data Platform
2021 — 2023
Architected and delivered the foundational cloud-native data and order processing infrastructure.
- Designed and implemented a low-latency CDC (Change Data Capture) pipeline for streaming data from on-prem SAP to GCP.
- Architected a Go-based microservices ecosystem for order management, achieving significant reductions in processing latency.
- Implemented observability-as-code using Datadog, providing real-time insights into complex distributed transactions.
- Managed privacy-aware data handling patterns to ensure GDPR compliance across all cloud data services.
- Supported the transformation from a monolithic legacy architecture to a modern, event-driven platform.
Cloud Architecture
Backend & Platform Engineering
Data & Streaming
Reliability & Observability
Leadership & Strategy
Two-speed Migration Architecture
Legacy retail systems hindering market expansion and agility.
Designed a 'strangler-fig' inspired architecture allowing legacy SAP and modern GCP services to coexist during a multi-year migration.
Enabled zero-downtime market rollouts while incrementally decommissioning legacy components.
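The coexistence pattern above can be sketched as a routing facade that sends traffic for migrated markets to the new service while everything else falls through to the legacy path. This is an illustrative sketch, not the production code; the market codes, function names, and hard-coded map are hypothetical (in practice the cut-over state would live in a feature-flag or configuration service).

```go
package main

import "fmt"

// migratedMarkets tracks which markets have been cut over to the
// cloud-native order service. Hard-coded here for illustration only.
var migratedMarkets = map[string]bool{
	"SE": true,
	"NO": true,
}

// routeOrder returns the backend that should handle an order for the
// given market: the new cloud service if the market has migrated,
// otherwise the legacy SAP integration.
func routeOrder(market string) string {
	if migratedMarkets[market] {
		return "cloud"
	}
	return "legacy"
}

func main() {
	for _, m := range []string{"SE", "DE"} {
		fmt.Printf("market %s -> %s backend\n", m, routeOrder(m))
	}
}
```

As markets migrate, entries flip in configuration and legacy components serve progressively less traffic until they can be decommissioned, which is the essence of the strangler-fig approach.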
Low-latency Order Management
High-volume order processing suffering from latency spikes during peak retail events.
Rebuilt core order validation and routing logic in Go, utilizing concurrent processing and optimized data access patterns.
Achieved 70% reduction in P99 latency, significantly improving customer experience during high-load periods.
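The concurrency pattern behind this rebuild can be illustrated with a bounded worker pool: validation work fans out across a fixed number of goroutines so per-order latency does not stack up under peak load. A minimal sketch, with a hypothetical Order shape and a deliberately trivial validate step standing in for the real checks:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

type Order struct {
	ID     string
	Market string
}

// validate performs a cheap structural check; the real system would also
// call inventory, payment, and fraud services.
func validate(o Order) error {
	if o.ID == "" || len(o.Market) != 2 {
		return fmt.Errorf("order %q: invalid", o.ID)
	}
	return nil
}

// validateAll fans validation out over a bounded pool of workers and
// returns the IDs of orders that passed.
func validateAll(orders []Order, workers int) []string {
	jobs := make(chan Order)
	var (
		mu sync.Mutex
		ok []string
		wg sync.WaitGroup
	)
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for o := range jobs {
				if validate(o) == nil {
					mu.Lock()
					ok = append(ok, o.ID)
					mu.Unlock()
				}
			}
		}()
	}
	for _, o := range orders {
		jobs <- o
	}
	close(jobs)
	wg.Wait()
	return ok
}

func main() {
	orders := []Order{{"A1", "SE"}, {"", "DE"}, {"B2", "NO"}}
	fmt.Println(strings.Join(validateAll(orders, 4), ","))
}
```

Bounding the pool size keeps tail latency predictable: under load, excess orders queue briefly on the channel instead of spawning unbounded goroutines that contend for downstream resources.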
On-prem to Cloud CDC Pipeline
Fragmented data silos preventing real-time business intelligence and inventory visibility.
Architected a streaming data pipeline using Google Cloud Datastream and Dataflow to sync on-prem databases with BigQuery.
Reduced data lag from 24 hours to sub-minute, enabling real-time stock and order analytics.
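At its core, the CDC pattern folds a stream of row-level change events into an up-to-date replica of the source. The sketch below shows that fold with a hypothetical event shape and an in-memory map standing in for the BigQuery sink; Datastream delivers comparable insert/update/delete records in practice.

```go
package main

import "fmt"

// ChangeEvent is a simplified row-level change record, loosely modelled
// on what a CDC stream emits for each source-table mutation.
type ChangeEvent struct {
	Op    string // "insert", "update", or "delete"
	Key   string
	Value string
}

// apply folds change events into a key/value snapshot. Applying the same
// upsert twice is a no-op, which is what makes replays safe.
func apply(snapshot map[string]string, events []ChangeEvent) {
	for _, e := range events {
		switch e.Op {
		case "insert", "update":
			snapshot[e.Key] = e.Value
		case "delete":
			delete(snapshot, e.Key)
		}
	}
}

func main() {
	snap := map[string]string{}
	apply(snap, []ChangeEvent{
		{Op: "insert", Key: "order:1", Value: "pending"},
		{Op: "update", Key: "order:1", Value: "shipped"},
	})
	fmt.Println(snap["order:1"]) // shipped
}
```

Because each event carries only the delta, the downstream store stays within seconds of the source instead of waiting for a nightly batch export, which is where the 24-hour-to-sub-minute improvement comes from.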
Observability Standardization
Lack of visibility across distributed teams leading to slow incident resolution and 'blame games'.
Defined and rolled out a global observability framework using Datadog and Terraform, including standardized dashboards and alerting.
Improved Mean Time to Recovery (MTTR) by 40% and established a common language for reliability across the organization.
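One building block of this kind of standardization is making every service emit telemetry with the same structured fields, so dashboards and alerts can be defined once and reused. A minimal sketch using Go's standard log/slog package; the field names (service, env, trace_id) are illustrative, not the actual convention:

```go
package main

import (
	"log/slog"
	"os"
)

// newServiceLogger returns a JSON logger pre-tagged with the fields every
// service is expected to carry, so downstream tooling can filter and
// correlate logs uniformly.
func newServiceLogger(service, env string) *slog.Logger {
	return slog.New(slog.NewJSONHandler(os.Stdout, nil)).With(
		slog.String("service", service),
		slog.String("env", env),
	)
}

func main() {
	log := newServiceLogger("order-router", "prod")
	log.Info("order accepted",
		slog.String("trace_id", "abc123"),
		slog.Int("items", 3),
	)
}
```

The same idea extends to metrics and traces: when field names are consistent across teams, a single Terraform-managed dashboard or monitor definition works for every service, which is what shortens incident triage.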
$ cat deaas.md
DEaaS — Data Foundation for RAG, LLM and Agentic AI
DEaaS is the data platform I built to turn operational data into a governed, reusable, and cloud-native foundation. The goal was not just analytics. It was to create a platform that could support reliable downstream consumption across reporting, operational tooling, and newer LLM- and agent-driven workflows.
Platform Overview
The platform was designed to move data from operational and legacy systems into reusable cloud-native data services through structured ingestion, transformation, and standardised domain signals. It prioritised freshness, observability, controlled access, and downstream usability over brittle point-to-point integration patterns.
AI & Agentic Enablement
LLM and agentic systems only become useful when they can work against timely, trustworthy, and well-structured data. DEaaS provides that foundation by improving data availability, standardisation, and governance. This enables use cases such as retrieval-backed assistance, summarisation, operational decision support, and workflow-oriented agents without relying on fragile ad hoc integrations.
System Architecture
Sources
Operational systems, legacy platforms, business events
Ingestion & Processing
Streaming, CDC, transformation, validation
Platform Layer
Standardised domain signals, storage, governance, observability
Consumers
Analytics, operational tooling, LLM workflows, agentic processes
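The validation step in the ingestion layer can be sketched as parsing raw records into a standardised domain signal and rejecting anything incomplete before it reaches the platform layer. The StockSignal shape and field rules below are hypothetical, chosen only to illustrate the pattern:

```go
package main

import (
	"errors"
	"fmt"
)

// StockSignal is an example of a standardised domain signal: a typed,
// validated record that downstream consumers can rely on.
type StockSignal struct {
	SKU      string
	Market   string
	Quantity int
}

// validateSignal enforces the contract at the ingestion boundary, so
// malformed records never propagate into the platform layer.
func validateSignal(s StockSignal) error {
	if s.SKU == "" || s.Market == "" {
		return errors.New("missing required field")
	}
	if s.Quantity < 0 {
		return errors.New("negative quantity")
	}
	return nil
}

func main() {
	good := StockSignal{SKU: "SKU-1", Market: "SE", Quantity: 12}
	bad := StockSignal{Market: "SE", Quantity: 5}
	fmt.Println(validateSignal(good) == nil, validateSignal(bad) == nil)
}
```

Enforcing the contract once at the boundary is what lets analytics, operational tooling, and LLM or agentic consumers all trust the same signals instead of each re-validating raw source data.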
Technical Leadership & Execution
I designed and built the platform with a focus on data movement, platform structure, reliability, and downstream usability. My work covered architecture, implementation direction, integration patterns, operational guardrails, and the practical decisions needed to make the platform useful for both engineering teams and AI-enabled consumers.
Key Capabilities
- Governed operational data
- Reusable domain signals
- Cloud-native data services
- Observable pipelines
- Structured downstream access
- Foundation for LLM workflows
- Foundation for agentic processes
- Analytics and operational intelligence
- Design for failure: Systems must survive real operational pressure.
- Observability is a core feature, not an afterthought.
- Incremental modernization beats reckless rewrites.
- Standards and guardrails enable scale, not bureaucracy.
- Architecture must map to business reality and operational constraints.
- Translating complex technical ambiguity into actionable roadmaps.
- Balancing delivery velocity with long-term reliability and technical debt.
- Fostering autonomous teams through clear standards and shared goals.
- Evidence-based decision making for vendor and platform selections.
- Clear, executive-ready communication across all levels of the organization.