End-to-End-Data-Pipeline

End-to-End Data Pipeline with Batch & Streaming Processing

This repository contains a fully integrated, production-ready data pipeline that supports both batch and streaming data processing using open-source technologies. It is designed to be easily configured and deployed by any business or individual with minimal modifications.

The pipeline incorporates:

Python, SQL, Bash, Docker, Kubernetes, Apache Airflow, Apache Spark, Apache Flink, Kafka, Apache Hadoop, PostgreSQL, MySQL, MongoDB, InfluxDB, MinIO, AWS S3, Prometheus, Grafana, Elasticsearch, MLflow, Feast, Great Expectations, Apache Atlas, Tableau, Power BI, Looker, Redis, and Terraform.

Read this README and follow the step-by-step guide to set up the pipeline on your local machine or cloud environment. Customize the pipeline components, configurations, and example applications to suit your data processing needs.

Table of Contents

  1. Architecture Overview
  2. Directory Structure
  3. Components & Technologies
  4. Setup Instructions
  5. Configuration & Customization
  6. Example Applications
  7. Troubleshooting & Further Considerations
  8. Contributing
  9. License
  10. Final Notes

Architecture Overview

The architecture of the end-to-end data pipeline is designed to handle both batch and streaming data processing. Below is a high-level overview of the components and their interactions:

High-Level Architecture

graph TB
    subgraph "Data Sources"
        BS[Batch Sources<br/>MySQL, Files, CSV/JSON/XML]
        SS[Streaming Sources<br/>Kafka Events, IoT, Social Media]
    end

    subgraph "Ingestion & Orchestration"
        AIR[Apache Airflow<br/>DAG Orchestration]
        KAF[Apache Kafka<br/>Event Streaming]
    end

    subgraph "Processing Layer"
        SPB[Spark Batch<br/>Large-scale ETL]
        SPS[Spark Streaming<br/>Real-time Processing]
        GE[Great Expectations<br/>Data Quality]
    end

    subgraph "Storage Layer"
        MIN[MinIO<br/>S3-Compatible Storage]
        PG[PostgreSQL<br/>Analytics Database]
        S3[AWS S3<br/>Cloud Storage]
        MDB[MongoDB<br/>NoSQL Store]
        IDB[InfluxDB<br/>Time-series DB]
    end

    subgraph "Monitoring & Governance"
        PROM[Prometheus<br/>Metrics Collection]
        GRAF[Grafana<br/>Dashboards]
        ATL[Apache Atlas<br/>Data Lineage]
    end

    subgraph "ML & Serving"
        MLF[MLflow<br/>Model Tracking]
        FST[Feast<br/>Feature Store]
        BI[BI Tools<br/>Tableau/PowerBI/Looker]
    end

    BS --> AIR
    SS --> KAF
    AIR --> SPB
    KAF --> SPS
    SPB --> GE
    SPS --> GE
    GE --> MIN
    GE --> PG
    MIN --> S3
    PG --> MDB
    PG --> IDB
    SPB --> PROM
    SPS --> PROM
    PROM --> GRAF
    SPB --> ATL
    SPS --> ATL
    PG --> MLF
    PG --> FST
    PG --> BI
    MIN --> MLF

Flow Diagram

Architecture Diagram

In short, data is streamed through Kafka, processed with Spark, and stored in a PostgreSQL data warehouse. MinIO serves as the object store, and Airflow orchestrates the end-to-end flow. Great Expectations enforces data quality checks, Prometheus and Grafana provide monitoring and alerting, and MLflow and Feast handle machine learning model tracking and feature store integration.

[!CAUTION] Note: The diagrams may not reflect ALL components in the repository, but they provide a good overview of the main components and their interactions. For instance, I added BI tools like Tableau, Power BI, and Looker to the repo for data visualization and reporting.

Batch Pipeline Flow

sequenceDiagram
    participant BS as Batch Source<br/>(MySQL/Files)
    participant AF as Airflow DAG
    participant GE as Great Expectations
    participant MN as MinIO
    participant SP as Spark Batch
    participant PG as PostgreSQL
    participant MG as MongoDB
    participant PR as Prometheus

    BS->>AF: Trigger Batch Job
    AF->>BS: Extract Data
    AF->>GE: Validate Data Quality
    GE-->>AF: Validation Results
    AF->>MN: Upload Raw Data
    AF->>SP: Submit Spark Job
    SP->>MN: Read Raw Data
    SP->>SP: Transform & Enrich
    SP->>PG: Write Processed Data
    SP->>MG: Write NoSQL Data
    SP->>PR: Send Metrics
    AF->>PR: Job Status Metrics

Streaming Pipeline Flow

sequenceDiagram
    participant KP as Kafka Producer
    participant KT as Kafka Topic
    participant SS as Spark Streaming
    participant AD as Anomaly Detection
    participant PG as PostgreSQL
    participant MN as MinIO
    participant GF as Grafana

    KP->>KT: Publish Events
    KT->>SS: Consume Stream
    SS->>AD: Process Events
    AD->>AD: Detect Anomalies
    AD->>PG: Store Results
    AD->>MN: Archive Data
    SS->>GF: Real-time Metrics
    GF->>GF: Update Dashboard
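
The consume-and-detect loop above is what spark/spark_streaming_job.py does. As a hedged sketch, Spark Structured Streaming with the Kafka source looks roughly like this; the topic name, event schema, and anomaly rule are illustrative assumptions, and the real job persists results to PostgreSQL and MinIO rather than the console.

    # Sketch of the streaming consume-and-detect loop (illustrative values).
    # Requires the spark-sql-kafka-0-10 package on the Spark classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("streaming-anomaly-detection").getOrCreate()

    event_schema = StructType([
        StructField("sensor_id", StringType()),
        StructField("value", DoubleType()),
        StructField("ts", StringType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "kafka:9092")
        .option("subscribe", "events")  # topic name is an assumption
        .load()
        .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    # Flag readings above a simple static threshold as anomalies.
    anomalies = events.filter(F.col("value") > 100.0)

    query = (
        anomalies.writeStream
        .outputMode("append")
        .format("console")  # the real job writes to PostgreSQL and MinIO instead
        .start()
    )
    query.awaitTermination()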

Data Quality & Governance Flow

graph LR
    subgraph "Data Quality Pipeline"
        DI[Data Ingestion] --> GE[Great Expectations]
        GE --> VR{Validation<br/>Result}
        VR -->|Pass| DP[Data Processing]
        VR -->|Fail| AL[Alert & Log]
        AL --> DR[Data Rejection]
        DP --> DQ[Quality Metrics]
    end

    subgraph "Data Governance"
        DP --> ATL[Apache Atlas]
        ATL --> LIN[Lineage Tracking]
        ATL --> CAT[Data Catalog]
        ATL --> POL[Policies & Compliance]
    end

    DQ --> PROM[Prometheus]
    PROM --> GRAF[Grafana Dashboard]
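
The pass/fail gate in this diagram corresponds to running an expectation suite before any processing. Below is a minimal sketch using the classic pandas-backed Great Expectations API; the column names, thresholds, and file name are assumptions, and newer Great Expectations releases use a different suite/checkpoint-based API.

    # Minimal data-quality gate sketch (classic pandas-backed GE API;
    # column names and file name are assumptions).
    import great_expectations as ge
    import pandas as pd

    raw = pd.read_csv("raw_batch.csv")  # placeholder input file
    batch = ge.from_pandas(raw)

    batch.expect_column_values_to_not_be_null("order_id")
    batch.expect_column_values_to_be_between("amount", min_value=0)

    results = batch.validate()
    if not results["success"]:
        # Fail path from the diagram: alert, log, and reject the batch.
        raise ValueError("Data quality validation failed; rejecting batch")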

CI/CD & Deployment Pipeline

graph LR
    subgraph "Development"
        DEV[Developer] --> GIT[Git Push]
    end

    subgraph "CI/CD Pipeline"
        GIT --> GHA[GitHub Actions]
        GHA --> TEST[Run Tests]
        TEST --> BUILD[Build Docker Images]
        BUILD --> SCAN[Security Scan]
        SCAN --> PUSH[Push to Registry]
    end

    subgraph "Deployment"
        PUSH --> ARGO[Argo CD]
        ARGO --> K8S[Kubernetes Cluster]
        K8S --> HELM[Helm Charts]
        HELM --> PODS[Deploy Pods]
    end

    subgraph "Infrastructure"
        TERRA[Terraform] --> CLOUD[Cloud Resources]
        CLOUD --> K8S
    end

    PODS --> MON[Monitoring]

Text-Based Pipeline Diagram

                            β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                            β”‚         Batch Source           β”‚
                            β”‚(MySQL, Files, User Interaction)β”‚
                            β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚
                                             β”‚  (Extract/Validate)
                                             β–Ό
                           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                           β”‚      Airflow Batch DAG              β”‚
                           β”‚ - Extracts data from MySQL          β”‚
                           β”‚ - Validates with Great Expectations β”‚
                           β”‚ - Uploads raw data to MinIO         β”‚
                           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚ (spark-submit)
                                             β–Ό
                             β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                             β”‚         Spark Batch Job        β”‚
                             β”‚ - Reads raw CSV from MinIO     β”‚
                             β”‚ - Transforms, cleans, enriches β”‚
                             β”‚ - Writes transformed data to   β”‚
                             β”‚   PostgreSQL & MinIO           β”‚
                             β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                            β”‚ (Load/Analyze)
                                            β–Ό
                             β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                             β”‚       Processed Data Store     β”‚
                             β”‚ (PostgreSQL, MongoDB, AWS S3)  β”‚
                             β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚ (Query/Analyze)
                                             β–Ό
                             β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                             β”‚         Cache & Indexing       β”‚
                             β”‚     (Elasticsearch, Redis)     β”‚
                             β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Streaming Side:
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚       Streaming Source      β”‚
                              β”‚         (Kafka)             β”‚
                              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                           β”‚
                                           β–Ό
                           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                           β”‚    Spark Streaming Job            β”‚
                           β”‚ - Consumes Kafka messages         β”‚
                           β”‚ - Filters and detects anomalies   β”‚
                           β”‚ - Persists anomalies to           β”‚
                           β”‚   PostgreSQL & MinIO              β”‚
                           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Monitoring & Governance:
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚       Monitoring &             β”‚
                              β”‚  Data Governance Layer         β”‚
                              β”‚ - Prometheus & Grafana         β”‚
                              β”‚ - Apache Atlas / OpenMetadata  β”‚
                              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

ML & Serving:
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚        AI/ML Serving         β”‚
                              β”‚ - Feature Store (Feast)      β”‚
                              β”‚ - MLflow Model Tracking      β”‚
                              β”‚ - Model training & serving   β”‚
                              β”‚ - BI Dashboards              β”‚
                              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

CI/CD & Terraform:
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚        CI/CD Pipelines       β”‚
                              β”‚ - GitHub Actions / Jenkins   β”‚
                              β”‚ - Terraform for Cloud Deploy β”‚
                              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Container Orchestration:
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚       Kubernetes Cluster     β”‚
                              β”‚ - Argo CD for GitOps         β”‚
                              β”‚ - Helm Charts for Deployment β”‚
                              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Full Flow Diagram with Backend & Frontend Integration (Optional)

A more detailed flow diagram that includes backend and frontend integration is available in the assets/ directory. This diagram illustrates how the data pipeline components interact with each other and with external systems, including data sources, storage, processing, visualization, and monitoring.

Although the frontend and backend integration itself is not included in this repository (it contains only the pipeline), you can easily connect the pipeline to an existing frontend application or build a new one using popular frameworks like React, Angular, or Vue.js.

Full Flow Diagram

Docker Services Architecture

graph TB
    subgraph "Docker Compose Stack"
        subgraph "Data Sources"
            MYSQL[MySQL<br/>Port: 3306]
            KAFKA[Kafka<br/>Port: 9092]
            ZK[Zookeeper<br/>Port: 2181]
        end

        subgraph "Processing"
            AIR[Airflow<br/>Webserver:8080<br/>Scheduler]
            SPARK[Spark<br/>Master/Worker]
        end

        subgraph "Storage"
            MINIO[MinIO<br/>API: 9000<br/>Console: 9001]
            PG[PostgreSQL<br/>Port: 5432]
        end

        subgraph "Monitoring"
            PROM[Prometheus<br/>Port: 9090]
            GRAF[Grafana<br/>Port: 3000]
        end

        KAFKA --> ZK
        AIR --> MYSQL
        AIR --> PG
        AIR --> SPARK
        SPARK --> MINIO
        SPARK --> PG
        SPARK --> KAFKA
        PROM --> AIR
        PROM --> SPARK
        GRAF --> PROM
    end
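
Once the stack defined here is running (see Setup Instructions), a quick way to confirm the storage services on the ports above are reachable is a short connectivity check from the host. The sketch below assumes default-style credentials; they are placeholders, so use the values from your docker-compose.yaml.

    # Quick connectivity check against the local stack (credentials are
    # placeholders; use the values from your docker-compose.yaml).
    import boto3
    import psycopg2

    # MinIO exposes an S3-compatible API on port 9000.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",
        aws_access_key_id="minioadmin",
        aws_secret_access_key="minioadmin",
    )
    print([b["Name"] for b in s3.list_buckets().get("Buckets", [])])

    # PostgreSQL analytics database on port 5432.
    conn = psycopg2.connect(
        host="localhost", port=5432,
        dbname="analytics", user="postgres", password="postgres",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
    conn.close()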

ML Pipeline Flow

flowchart LR
    subgraph "Feature Engineering"
        RAW[Raw Data] --> FE[Feature<br/>Extraction]
        FE --> FS[Feature Store<br/>Feast]
    end

    subgraph "Model Training"
        FS --> TRAIN[Training<br/>Pipeline]
        TRAIN --> VAL[Validation]
        VAL --> MLF[MLflow<br/>Registry]
    end

    subgraph "Model Serving"
        MLF --> DEPLOY[Model<br/>Deployment]
        DEPLOY --> API[Prediction<br/>API]
        API --> APP[Applications]
    end

    subgraph "Monitoring"
        API --> METRICS[Performance<br/>Metrics]
        METRICS --> DRIFT[Drift<br/>Detection]
        DRIFT --> RETRAIN[Retrigger<br/>Training]
    end

    RETRAIN --> TRAIN
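
The training-and-registry loop in this flowchart maps onto MLflow's tracking API. A minimal sketch of logging one run is shown below; the tracking URI, experiment name, and toy model are assumptions, and ml/mlflow_tracking.py holds the repository's version.

    # Minimal MLflow tracking sketch (tracking URI and experiment name
    # are assumptions; adapt to your deployment).
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    mlflow.set_tracking_uri("http://localhost:5000")
    mlflow.set_experiment("pipeline-demo")

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=50, random_state=42)
        model.fit(X_train, y_train)
        accuracy = model.score(X_test, y_test)

        mlflow.log_param("n_estimators", 50)
        mlflow.log_metric("accuracy", accuracy)
        mlflow.sklearn.log_model(model, artifact_path="model")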

Directory Structure

end-to-end-pipeline/
  β”œβ”€β”€ .devcontainer/                 # VS Code Dev Container settings
  β”œβ”€β”€ docker-compose.yaml            # Docker orchestration for all services
  β”œβ”€β”€ docker-compose.ci.yaml         # Docker Compose for CI/CD pipelines
  β”œβ”€β”€ End_to_End_Data_Pipeline.ipynb # Jupyter notebook for pipeline overview
  β”œβ”€β”€ requirements.txt               # Python dependencies for scripts
  β”œβ”€β”€ .gitignore                     # Standard Git ignore file
  β”œβ”€β”€ README.md                      # Comprehensive documentation (this file)
  β”œβ”€β”€ airflow/
  β”‚   β”œβ”€β”€ Dockerfile                 # Custom Airflow image with dependencies
  β”‚   β”œβ”€β”€ requirements.txt           # Python dependencies for Airflow
  β”‚   └── dags/
  β”‚       β”œβ”€β”€ batch_ingestion_dag.py # Batch pipeline DAG
  β”‚       └── streaming_monitoring_dag.py  # Streaming monitoring DAG
  β”œβ”€β”€ spark/
  β”‚   β”œβ”€β”€ Dockerfile                 # Custom Spark image with Kafka and S3 support
  β”‚   β”œβ”€β”€ spark_batch_job.py         # Spark batch ETL job
  β”‚   └── spark_streaming_job.py     # Spark streaming job
  β”œβ”€β”€ kafka/
  β”‚   └── producer.py                # Kafka producer for simulating event streams
  β”œβ”€β”€ storage/
  β”‚   β”œβ”€β”€ aws_s3_influxdb.py         # S3-InfluxDB integration stub
  β”‚   β”œβ”€β”€ hadoop_batch_processing.py  # Hadoop batch processing stub
  β”‚   └── mongodb_streaming.py       # MongoDB streaming integration stub
  β”œβ”€β”€ great_expectations/
  β”‚   β”œβ”€β”€ great_expectations.yaml    # GE configuration
  β”‚   └── expectations/
  β”‚       └── raw_data_validation.py # GE suite for data quality
  β”œβ”€β”€ governance/
  β”‚   └── atlas_stub.py              # Dataset lineage registration with Atlas/OpenMetadata
  β”œβ”€β”€ monitoring/
  β”‚   β”œβ”€β”€ monitoring.py              # Python script to set up Prometheus & Grafana
  β”‚   └── prometheus.yml             # Prometheus configuration file
  β”œβ”€β”€ ml/
  β”‚   β”œβ”€β”€ feature_store_stub.py      # Feature Store integration stub
  β”‚   └── mlflow_tracking.py         # MLflow model tracking
  β”œβ”€β”€ kubernetes/
  β”‚   β”œβ”€β”€ argo-app.yaml              # Argo CD application manifest
  β”‚   └── deployment.yaml            # Kubernetes deployment manifest
  β”œβ”€β”€ terraform/                     # Terraform scripts for cloud deployment
  └── scripts/
      └── init_db.sql                # SQL script to initialize MySQL and demo data

Components & Technologies

Setup Instructions

Prerequisites

Step-by-Step Guide

  1. Clone the Repository

    git clone https://github.com/hoangsonww/End-to-End-Data-Pipeline.git
    cd End-to-End-Data-Pipeline
    
  2. Start the Pipeline Stack

    Use Docker Compose to launch all components:

    docker-compose up --build
    

    This command will:

    • Build custom Docker images for Airflow and Spark.
    • Start MySQL, PostgreSQL, Kafka (with Zookeeper), MinIO, Prometheus, Grafana, and the Airflow webserver and scheduler.
    • Initialize the MySQL database with demo data (via scripts/init_db.sql).
  3. Access the Services
  4. Run Batch Pipeline
    • In the Airflow UI, enable the batch_ingestion_dag to run the end-to-end batch pipeline.
    • This DAG extracts data from MySQL, validates it, uploads raw data to MinIO, triggers a Spark job for transformation, and loads data into PostgreSQL.
  5. Run Streaming Pipeline
    • Open a terminal and start the Kafka producer (a minimal sketch of what the producer does appears after this list):
      docker-compose exec kafka python /opt/spark_jobs/../kafka/producer.py
      
    • In another terminal, run the Spark streaming job:
      docker-compose exec spark spark-submit --master local[2] /opt/spark_jobs/spark_streaming_job.py
      
    • The streaming job consumes events from Kafka, performs real-time anomaly detection, and writes results to PostgreSQL and MinIO.
  6. Monitoring & Governance
    • Prometheus & Grafana:
      Use the monitoring.py script (or access Grafana) to view real-time metrics and dashboards.
    • Data Lineage:
      The governance/atlas_stub.py script registers lineage between datasets (can be extended for full Apache Atlas integration).
  7. ML & Feature Store
    • Use ml/mlflow_tracking.py to simulate model training and tracking.
    • Use ml/feature_store_stub.py to integrate with a feature store like Feast.
  8. CI/CD & Deployment
    • Use the docker-compose.ci.yaml file to set up CI/CD pipelines.
    • Use the kubernetes/ directory for Kubernetes deployment manifests.
    • Use the terraform/ directory for cloud deployment scripts.
    • Use the .github/workflows/ directory for GitHub Actions CI/CD workflows.
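
For reference, the Kafka producer started in step 5 boils down to a loop that publishes JSON events to a topic. The sketch below uses kafka-python with an illustrative topic name, event fields, and broker address; kafka/producer.py is the authoritative version.

    # Minimal Kafka producer sketch (topic, fields, and broker address are
    # assumptions; the repository's kafka/producer.py is the reference).
    import json
    import random
    import time

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="kafka:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    while True:
        event = {
            "sensor_id": f"sensor-{random.randint(1, 5)}",
            "value": round(random.uniform(0, 150), 2),
            "ts": time.time(),
        }
        producer.send("events", value=event)
        time.sleep(1)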

Next Steps

Congratulations! You have successfully set up the end-to-end data pipeline with batch and streaming processing. Keep in mind that this is a general-purpose pipeline: you will need to customize it for your specific use case.

[!IMPORTANT] Note: Be sure to visit the files and scripts in the repository and change the credentials, configurations, and logic to match your environment and use case. Feel free to extend the pipeline with additional components, services, or integrations as needed.

Configuration & Customization

Example Applications

mindmap
  root((Data Pipeline<br/>Use Cases))
    E-Commerce
      Real-Time Recommendations
        Clickstream Processing
        User Behavior Analysis
        Personalized Content
      Fraud Detection
        Transaction Monitoring
        Pattern Recognition
        Risk Scoring
    Finance
      Risk Analysis
        Credit Assessment
        Portfolio Analytics
        Market Risk
      Trade Surveillance
        Market Data Processing
        Compliance Monitoring
        Anomaly Detection
    Healthcare
      Patient Monitoring
        IoT Sensor Data
        Real-time Alerts
        Predictive Analytics
      Clinical Trials
        Data Integration
        Outcome Prediction
        Drug Efficacy Analysis
    IoT/Manufacturing
      Predictive Maintenance
        Sensor Analytics
        Failure Prediction
        Maintenance Scheduling
      Supply Chain
        Inventory Optimization
        Logistics Tracking
        Demand Forecasting
    Media
      Sentiment Analysis
        Social Media Streams
        Brand Monitoring
        Trend Detection
      Ad Fraud Detection
        Click Pattern Analysis
        Bot Detection
        Campaign Analytics

E-Commerce & Retail

Financial Services & Banking

Healthcare & Life Sciences

IoT & Manufacturing

Media & Social Networks

Feel free to use this pipeline as a starting point for your data processing needs. Extend it with additional components, services, or integrations to build a robust, end-to-end data platform.

Troubleshooting & Further Considerations

Contributing

Contributions, issues, and feature requests are welcome!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request
  6. We will review your changes and merge them into the main branch upon approval.

License

This project is licensed under the MIT License.

Final Notes

[!NOTE] This end-to-end data pipeline is designed for rapid deployment and customization. With minor configuration changes, it can be adapted to many business casesβ€”from real-time analytics and fraud detection to predictive maintenance and advanced ML model training. Enjoy building a data-driven future with this pipeline!


Thanks for reading! If you found this repository helpful, please star it and share it with others. For questions, feedback, or suggestions, feel free to reach out to me on GitHub.

⬆️ Back to top