

Driving real-time intelligence across all business ecosystems through rapid data integration for Majid Al Futtaim

With a vision to transform its overall business and data practice, Majid Al Futtaim (MAF), in collaboration with Publicis Sapient, sought to envisage and build a core platform driving real-time intelligence across all of its business ecosystems through rapid data integration. This involved collecting data from various sources, then managing and exposing it to business units so they could leverage it for hyper-personalization and customer-centric promotions.

To achieve this transformation, a centralized data lake platform was built on AWS as a future-ready, scalable, decoupled, modular, and agile solution, with the objective of driving great consumer experiences.

The Solution

The solution leveraged AWS cloud services and DevOps automation tools to create new infrastructure for test purposes and tear it down on demand. An EKS cluster hosts the platform's microservices, which can scale on demand. Elasticsearch and a graph database store the metadata for tables, schemas, APIs, and more; this metadata can be configured through the Meta Manager to discover and orchestrate how data is ingested, stored, and consumed. Kafka streams the incoming data into AWS S3 and EMR, across hundreds of topics and around 60 TB of data. Once consumed, the data is stored in RDS or Vertica data stores via APIs, OLTP systems, or other mechanisms, where data analysts and data scientists use it for analysis, reporting dashboards, and machine-learning-based prediction and forecasting.
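The streaming-ingestion path can be pictured with a short sketch. This is a minimal illustration rather than MAF's production code: the topic, broker, and bucket names are hypothetical, and it assumes the kafka-python and boto3 client libraries.

```python
# Ingestion sketch: consume records from a Kafka topic and land them
# in S3 in batches for downstream processing on EMR.
import json
from datetime import datetime, timezone

import boto3
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "customer-events",                    # hypothetical topic name
    bootstrap_servers=["broker-1:9092"],  # hypothetical broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 1000:  # flush in fixed-size batches
        key = f"raw/customer-events/{datetime.now(timezone.utc).isoformat()}.json"
        s3.put_object(
            Bucket="example-data-lake",  # hypothetical bucket name
            Key=key,
            Body=json.dumps(batch).encode("utf-8"),
        )
        batch = []
```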

DevOps practices and methodologies were used to automate infrastructure provisioning, microservice builds and deployments, and the overall security, logging, and monitoring of the product.

DevOps played an important role in the overall solution and helped attain the outcomes through the automations described below.

Engineering with GitOps

Git is the source of truth for all Kubernetes deployments. The core idea of GitOps is to keep a Git repository that always contains a declarative description of the infrastructure desired in the production environment, together with an automated process that makes the production environment match the state described in the repository. To deploy a new application or update an existing one, you only need to update the repository; the automated process handles everything else.
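As a minimal sketch of that idea, the loop below clones a hypothetical manifest repository and repeatedly converges a cluster toward whatever is committed there. In this solution the role is played by ArgoCD (see below), not hand-rolled code; the repository URL and paths are illustrative.

```python
# Minimal GitOps reconciliation sketch: Git holds the desired state,
# and this loop converges the live cluster toward it.
import subprocess
import time

REPO = "https://github.com/example-org/platform-deployments.git"  # hypothetical
CLONE_DIR = "/tmp/platform-deployments"

subprocess.run(["git", "clone", REPO, CLONE_DIR], check=True)

while True:
    # Fetch the latest declarative manifests committed to Git.
    subprocess.run(["git", "-C", CLONE_DIR, "pull"], check=True)
    # `kubectl apply` is idempotent: an unchanged repo leaves the cluster
    # untouched, and a new commit rolls the cluster forward to match it.
    subprocess.run(["kubectl", "apply", "-f", f"{CLONE_DIR}/manifests/"], check=True)
    time.sleep(60)  # reconcile once a minute
```

The stack used for GitOps was as follows: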

Infrastructure as Code automation, single-click environment setup: Terraform modules and Ansible playbooks were used to automate infrastructure provisioning and environment setup (application components deployed as Helm charts on EKS) via Jenkins IaC pipelines; a sketch of this flow follows the list.

Logging and monitoring: Prometheus and Grafana Helm charts were used, and dashboards were saved as JSON files in a GitHub repo to achieve a monitoring-as-code practice, also sketched below. The ELK stack was used for centralized logging and visualization.
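As a rough sketch of the single-click setup described in the first item, assuming a hypothetical environments/<name> Terraform layout and a platform Helm chart (the real flow ran inside Jenkins IaC pipelines):

```python
# Single-click environment setup sketch: provision infrastructure with
# Terraform, then install the application components as Helm releases.
import subprocess
import sys

def provision(environment: str) -> None:
    tf_dir = f"environments/{environment}"  # hypothetical module layout
    subprocess.run(["terraform", "init"], cwd=tf_dir, check=True)
    subprocess.run(["terraform", "plan", "-out=tfplan"], cwd=tf_dir, check=True)
    subprocess.run(["terraform", "apply", "tfplan"], cwd=tf_dir, check=True)
    # Application components land on the newly provisioned EKS cluster.
    subprocess.run(
        ["helm", "upgrade", "--install", "platform", "charts/platform"],
        check=True,
    )

if __name__ == "__main__":
    provision(sys.argv[1] if len(sys.argv) > 1 else "dev")
```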
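And a minimal sketch of the monitoring-as-code flow from the second item: dashboard JSON files tracked in Git are pushed to Grafana through its HTTP dashboard API. The URL, token, and directory are placeholders.

```python
# Monitoring-as-code sketch: sync Git-tracked dashboard JSON to Grafana.
import json
import pathlib

import requests

GRAFANA_URL = "https://grafana.example.com"  # hypothetical
API_TOKEN = "REDACTED"                       # hypothetical API token

for dashboard_file in pathlib.Path("dashboards").glob("*.json"):
    payload = {
        "dashboard": json.loads(dashboard_file.read_text()),
        "overwrite": True,  # Git, not the Grafana UI, is the source of truth
    }
    response = requests.post(
        f"{GRAFANA_URL}/api/dashboards/db",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
```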

Here is a list of the improvements achieved with the overall DevOps automation:

1) Separating 50+ microservices, each with its own automated CI/CD, reduced the Jenkins load by 10% per job per service.

2) Migrating from EC2 VMs to EKS resulted in 40% cost savings through the use of dynamic slave pods.

3) Multi-branch pipelines and automated PR checks increased developer productivity and release frequency by 20%, enabling weekly releases where releases had previously been monthly.

4) Manual end-to-end environment setup used to take more than a day; with the automated IaC platform, it now takes about 1.5 hours.

5) Applying the DRY principle, with global Terraform modules and Groovy shared libraries for Jenkins pipelines, reduced the amount and complexity of code while improving reusability, and cut duplicate code reviews by 50%.

6) ArgoCD, as a Kubernetes-native deployment tool, enabled declarative, version-controlled application definitions, configurations, and environments. Its automated reconciliation ensures an application always remains in the desired state defined in its Helm charts, which can be verified from the ArgoCD UI.

7) A centralized Terraform platform was integrated with Jenkins pipelines, with security analysis (tfsec), linting, plan review on Slack, and automated apply after plan approval (see the sketch after this list).
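A condensed sketch of the pipeline step in item 7: run tfsec over the Terraform code, then post the plan to Slack for review before apply. The webhook URL is a placeholder, and the real implementation ran as Jenkins pipeline stages rather than a standalone script.

```python
# Terraform pipeline sketch: static security scan, plan, Slack review.
import json
import subprocess

import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/EXAMPLE"  # hypothetical

# Static security analysis of the Terraform code (tfsec exits non-zero
# when it finds issues, so the return code is not checked here).
scan = subprocess.run(
    ["tfsec", ".", "--format", "json"], capture_output=True, text=True
)
findings = json.loads(scan.stdout).get("results") or []

# Generate the plan and capture its human-readable output.
subprocess.run(["terraform", "init"], check=True)
plan = subprocess.run(
    ["terraform", "plan", "-no-color"], capture_output=True, text=True, check=True
)

# Post both to Slack; a human approves before `terraform apply` runs.
# The plan output is truncated to stay within Slack's message limits.
requests.post(
    SLACK_WEBHOOK,
    json={"text": f"tfsec findings: {len(findings)}\n{plan.stdout[-2000:]}"},
    timeout=30,
)
```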

 

Outcomes

  • Built an MVP of the Meta Manager to create a highly scalable data ingestion framework
  • Integrated all known sources of customer data in MAF with the GCR process, adding 1,000+ data sources and approximately 3 TB of data
  • Data processing: capability to process 100 MB/s of data in real time (3.5 PB annually)
  • Cost avoidance of AED 5 million, with estimated future cost savings of AED 5+ million over the next five years
  • 80% improvement in go-to-market speed
Mohammad Wasim
Vice President – Global AWS Alliance and Cloud & DevOps Lead