Upgrading our hybrid micro-services architecture

The progressive migration of a hybrid micro-services architecture

Four years ago, we embarked on a journey to create a digital platform aimed at helping people plan their weddings and other special events. The platform relies on a hybrid micro-services architecture with two gateway APIs built on top of several domain services.
The APIs aggregate data and persist user actions. Over time, our stack hit limitations that kept it from evolving to match our users' growing needs.

The limits we were facing …

We had to upgrade our technologies to the latest LTS versions of our frameworks (Angular, NodeJS, Symfony, …). Most of our frameworks were running unsupported versions and had become hard to maintain.

We lacked confidence when upgrading, and we had little protection against regressions when developing new features, due to insufficient test coverage.

Our stack had become hard to monitor: we generated millions of log entries at every level of our architecture, but we were unable to process them properly in order to build the operational and security dashboards that would help us monitor our growth.

We needed to extend our capabilities and provide mobile apps that would make our users' lives easier, taking advantage of push notifications to market our features and to implement an instant messaging system.

A new version to shape the future …

We then started migrating and upgrading our services and gateways. The main idea was to break through all the limits we were facing. We took a bottom-up approach in which we brainstormed the requirements of every service and gateway and designed their new capabilities. The media service had to strictly validate each media file and compress it when necessary. The search service became able to automatically reindex updated content and, most importantly, to keep working when indexation broke, etc.

New mobile apps …

After progressively migrating our stack from the bottom up, starting with the media and search services and finishing with the client and admin web apps through the gateway APIs, we were able to easily add mobile endpoints to our gateways and quickly offer our users a mobile experience with strong added value (notifications and reminders, instant messaging, etc.).

This upgrade also helped us evolve our infrastructure and put relevant automated processes in place. Let's look at it in detail …

Data Backups: A Seamless and Reliable Approach

Before our platform underwent its transformative migration, our approach to data backups presented a few challenges that demanded innovation and refinement.

The Pre-Migration Challenges

Primarily, backups were conducted manually, introducing a degree of human involvement that could lead to oversights and potential errors. Additionally, the schedule for these backups was inconsistent, as they were not automated. This irregularity meant that certain critical data might not be backed up in time, leaving it vulnerable to potential loss or corruption.

A Transformation in Strategy

To address these vulnerabilities and fortify our data protection strategy, we embarked on a journey of transformation, implementing several crucial changes that reshaped our approach to backups.

1. Automation: A Game-Changer

First and foremost, we automated the backup process, eliminating the manual element. This monumental shift ensured that backups were executed reliably, significantly reducing the margin for human error. The automation also facilitated a regular backup schedule, providing a continuous layer of data protection. This change alone revolutionized the reliability and effectiveness of our backup strategy.

2. Expanding the Horizon of Protection

In addition to automating and regularizing our backup process, we took a more comprehensive approach to safeguarding our data. Whereas previously the focus was primarily on the database, we broadened our scope to encompass other critical sources of information. This included application-level components such as app logs, as well as system-level information. This holistic approach ensures that a wider array of essential data is safeguarded, contributing to a more robust data protection strategy.

The Power Duo: Cronjobs and Ansible

To achieve this seamless transition and expansion of our backup strategy, we opted for a solution that balanced simplicity with effectiveness. Enter the dynamic duo of Cronjobs and Ansible.

Cronjobs: Precision in Scheduling

Cron, the time-based job scheduler built into Unix-like operating systems, emerged as a linchpin in our strategy. This versatile tool allowed us to schedule automated tasks with precision: by defining specific time intervals, we ensured that backups occurred consistently and reliably, leaving no room for oversight.
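
For illustration, a single crontab entry is enough to drive a nightly run. The script path and schedule below are hypothetical, not our actual configuration:

    # Run the backup script every night at 02:00 and keep a log of each run
    0 2 * * * /opt/backups/run-db-backup.sh >> /var/log/backups/db-backup.log 2>&1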

Ansible: Orchestrating Excellence

Complementing Cronjobs, we integrated Ansible, a versatile automation platform that elevated our backup process to new heights. Ansible’s user-friendly playbooks provided a clear and structured way to define backup procedures. Through Ansible, we could orchestrate the entire process, from initiating the backup to ensuring the secure storage of the backed-up data. This synergy between Cronjobs and Ansible streamlined our backup process, ensuring efficiency and reliability.
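
As a minimal sketch, and assuming a hypothetical db_servers inventory group, a MySQL database named app_db, and application logs under /var/log/app, a backup playbook can be as simple as this:

    # backup.yml (sketch): dump the database, archive the logs, pull the dump off the host
    - name: Back up the application database and logs
      hosts: db_servers
      become: true
      vars:
        backup_dir: /var/backups/app
        stamp: "{{ ansible_date_time.date }}"
      tasks:
        - name: Ensure the backup directory exists
          ansible.builtin.file:
            path: "{{ backup_dir }}"
            state: directory
            mode: "0750"

        - name: Dump the database
          ansible.builtin.shell: >
            mysqldump --single-transaction app_db
            | gzip > {{ backup_dir }}/app_db-{{ stamp }}.sql.gz

        - name: Archive the application logs
          community.general.archive:
            path: /var/log/app/*.log
            dest: "{{ backup_dir }}/app-logs-{{ stamp }}.tar.gz"

        - name: Pull the database dump back for off-site storage
          ansible.builtin.fetch:
            src: "{{ backup_dir }}/app_db-{{ stamp }}.sql.gz"
            dest: backups/

A cron entry on the control machine then runs the playbook on schedule, which is all the glue the two tools need.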

Embracing Simplicity for Unwavering Security

This combination of Cronjobs and Ansible proved to be a highly effective approach. It allowed us to achieve our backup objectives without the need for intricate or resource-intensive systems. Instead, we harnessed the power of simplicity, ensuring that our data was safeguarded efficiently and consistently.

  • The ELK stack (Elasticsearch, Logstash, Kibana, Beats)

Maximizing Website Performance: The Power of Strategic Dashboards

In the realm of multi-user platforms, 3ersi.com stands tall as one of YuccaLabs’ flagship products, drawing growing traffic that averages 60K visits per month. This substantial volume speaks to the platform’s popularity and the level of user engagement it commands. With such a dynamic user base, it becomes imperative for stakeholders to possess a crystal-clear understanding of the platform’s performance metrics and the key performance indicators (KPIs) that underlie critical strategic decisions.

The Role of Operational Dashboards

To address this pressing need, we have devised a comprehensive solution centered around strategic and operational dashboards. The operational dashboard serves as the pulse of 3ersi.com, providing real-time insights into its day-to-day performance. This invaluable tool not only informs employees and stakeholders but also forms the basis of our operational strategy, facilitating the ongoing monitoring of operational objectives.

Gaining a Strategic Perspective

In contrast, the strategic dashboard offers a bird’s-eye view of the organization’s essential KPIs. This panoramic viewpoint allows us to assess medium-term performance trends, providing invaluable insights that enable us to refine and articulate our strategic objectives. Through this, we are better poised to align our efforts and hit our KPI targets, propelling our commercial strategy forward.

Embracing Data-Driven Decision-Making

In the rapidly evolving landscape of online platforms, staying ahead requires more than just intuition. It demands precision, insight, and the right tools. These dashboards represent a pivotal step in our unwavering commitment to data-driven decision-making. They ensure that our short-term operational goals and long-term strategic objectives remain closely aligned with our overall mission and vision. It’s not just about numbers; it’s about deriving actionable insights that propel us forward.

The ELK Stack: A Powerhouse of Efficiency

To bring these dashboards to life, we harnessed the formidable power of the ELK stack: a seamless integration of Elasticsearch, Logstash, Kibana, and Beats.

1. Elasticsearch: This is where the magic happens. Elasticsearch serves as our search and analytics engine, capable of handling voluminous data. It enables us to swiftly find and analyze the information we need.

2. Logstash: Think of Logstash as our data processor. It assimilates diverse data types from various sources, processes them, and prepares them for indexing in Elasticsearch. This step is vital in ensuring our data is organized and ready for analysis.

3. Kibana: Kibana emerges as our visual storyteller. It’s the tool we wield to craft the dashboards. With Kibana, we seamlessly transform raw data into easily understandable visualizations.

4. Beats: Acting as nimble data shippers, Beats are lightweight agents we deploy on different servers to collect specific types of data. This data is then dispatched to Logstash for processing. Beats ensure that we capture all the pertinent information required.

Through the ELK stack, we’ve erected a robust data pipeline. It empowers us to collect, parse, index, and visualize data with utmost efficiency and seamlessness. We’re not merely accumulating information; we’re making it accessible and comprehensible for everyone involved.
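
As a small illustration of the first hop in that pipeline, a Filebeat configuration along these lines ships application logs to Logstash; the log path and hostname are hypothetical:

    # filebeat.yml (sketch): collect application logs and forward them to Logstash
    filebeat.inputs:
      - type: filestream
        id: app-logs
        paths:
          - /var/log/app/*.log

    output.logstash:
      hosts: ["logstash.internal:5044"]

From there, Logstash parses and enriches each event before indexing it into Elasticsearch, where Kibana turns it into the dashboards described above.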

  • The staging environment

Optimizing Infrastructure for Seamless Operations

Before embarking on the progressive migration, our platform faced a series of significant challenges that demanded our attention and strategic intervention.

The Initial Challenges

At the outset, our operations were primarily confined to a production environment. This limited our capacity to conduct comprehensive tests, which were carried out manually. This manual testing process, while indispensable, was labor-intensive and inherently susceptible to human error. Automation was, regrettably, at a minimum, with only basic deployment scripts in place.

A Shift: Comprehensive Infrastructure Overhaul

Recognizing the critical need for a transformation, we embarked on a comprehensive overhaul of our infrastructure. The changes introduced were both substantial and transformative in nature.

Introducing Automated Testing Environments

One of the most important improvements was the introduction of dedicated automated testing environments. This marked a significant change in our testing approach: every aspect of our system now underwent thorough examination before deployment. This change not only helped identify potential weaknesses early but also strengthened the overall reliability and stability of our platform.

The Staging Environment: Ensuring Clean Deployment

In parallel, we introduced a staging environment, where every component of our platform was carefully examined before being deployed into the live environment. This additional layer of examination was instrumental in guaranteeing the seamless operation of our platform, free from potential hiccups or glitches.

Automation with CI/CD Pipeline

Automation, undoubtedly, emerged as a key element of our infrastructure enhancement strategy. We instituted a robust Continuous Integration/Continuous Deployment (CI/CD) pipeline, a dynamic framework that revolutionized our development process.

Prioritizing Quality at Every Stage

This pipeline is characterized by multiple stages, each designed to prioritize quality at every step. From code integration to deployment, quality assurance is embedded into the very fabric of our development process. This dynamic approach not only accelerates our workflows but also equips us to respond more effectively to evolving requirements.
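
To give a feel for its shape, here is a simplified sketch in the style of a GitLab CI configuration; the job names, commands, and deploy script are illustrative assumptions rather than our exact setup:

    # .gitlab-ci.yml (sketch): quality gates from integration to production
    stages:
      - test
      - build
      - staging
      - production

    unit_tests:
      stage: test
      script:
        - npm ci
        - npm test

    build_app:
      stage: build
      script:
        - npm run build
      artifacts:
        paths:
          - dist/

    deploy_staging:
      stage: staging
      script:
        - ./scripts/deploy.sh staging
      environment: staging

    deploy_production:
      stage: production
      script:
        - ./scripts/deploy.sh production
      environment: production
      when: manual

Every change has to clear the test and build stages before it reaches staging, and production deployments stay behind a manual gate.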

Conclusion

After 8 months of rethinking our architecture and rewriting each brick, we ended up with an IT stack that helps us move faster in the market, makes it easier for us to evolve the platform, and, most importantly, is changing our users' perception of the products we provide.

We thank everyone in our engineering team who took part in making the dream a reality: Amine Benbakhta, Mohammed Bounoua, Tarek Mohamed Hacene, Yasser Belatreche, Mohammed Tirichine, Akram Mohammed Bennacer and Ahmed Siouani.

Author: Ahmed Siouani

CEO & CTO @ Yucca Labs
