Case study

Creation of Heart: a tool for automating web quality metrics


February-July 2019

Business impact, time to market and scalability are all top priorities for the communication, marketing and HR departments our technical teams work with today. One of the biggest questions they ask is how to make development quality their utmost priority while meeting deadlines and sticking to budgets.

They are also keen to ensure that they can autonomously use the products developed with our technical teams. It is essential that, once our collaboration ends, company teams are fully comfortable using the product (MVP) and adapting it to new developments (or maintaining it as it is).

Technical issues

The entire point of setting up a solid technical platform (network architecture, framework, version-control tool, third-party service integration, etc.) is to help the client make the technical tools their own, as they will be the product's sole contributor once the MVP phase is over.

As a result, the technical platform is not designed to be a (disposable) prototype, but rather a product built to last and evolve beyond the MVP phase. This means that technical debt (the accumulation of risks taken throughout the various technical phases of a project's life cycle, to use Bastien Jaillot's definition) emerges right from the start of the project and therefore needs to be carefully managed. One solution is to assess product quality metrics regularly, track how they evolve, and take corrective measures where appropriate.

Different ways of automating quality metrics currently exist, including:

  • Static code analysis, which allows quality to be checked without running the product. The tools used to conduct this type of analysis work on the software's source code. One of the best-known tools in this category is undoubtedly SonarQube.
  • Software testing, which can be used to test how the product behaves. These tools work on the executed code. Selenium is the leading software in this field.
  • Web analytics tools, which check website quality from an external perspective, as a user or a bot would see it. Examples include Dareboost and Mozilla Observatory.

While the first two tool types can be automated very smoothly, the last category is not as flexible, in particular because most of these tools can only be run remotely, to better simulate the user conditions they aim to replicate.

Even when technical teams already assess quality metrics with these tools, using them manually quickly becomes tedious: many MVPs are rolled out simultaneously within Fabernovel, the assessments are frequent (daily), and the number of tools in use keeps growing (Qualys SSL Labs Server Test, for example). Furthermore, how these tools are applied varies greatly from one MVP to another. Finally, teams do not always need or want to run every analysis with every tool, which makes day-to-day use even more complex.
There is therefore a need to consolidate and automate the use of these tools in order to obtain a clear picture of MVP quality as it may be perceived by online users.

Heart: introducing a new tool

And this is where Heart comes in: a modular, open-source tool for automating web quality analytics. Heart is a single, simple name for a whole collection of modules; thanks to this modular structure, a Heart set-up can, in practice, combine modules in many different ways.

It is modular so that it can meet the needs of each user (or client, in our case). This means that most features are optional. This type of architecture has many benefits:

  • Reduced fragility: potential bugs are confined to the features actually installed
  • Reduced footprint: unused features are simply not installed
  • Scalability: new modules can be added if and when new needs emerge

It is automated to help save precious time and maximize users' added value, just as we did with the roll-out stage a few years back.

And finally, it is open source because we believe that the issues we have tackled, the very challenges that led us to develop Heart in the first place, are not specific to Fabernovel: other industry professionals are very likely to have come across them, too. The source code is available on GitLab, and the modules can be installed with the npm package manager (e.g. Heart Dareboost).

Example of Slack notifications sent by the Heart Slack module

Heart: how it works

Module families

Heart is a collection of modules broken down into three overarching families:

  • Runner modules, which launch analyses
  • Analysis modules, which analyze URLs via third-party services
  • Listener modules, which respond once the results of the analysis are known

Heart's modular design means a minimal set-up requires just two modules: a runner module (Heart CLI, see below) and an analysis module.

Existing modules

Two runner modules are currently available:

  • Heart CLI (mandatory), which triggers an analysis via the command line,
  • Heart API (optional), which exposes an HTTP API that triggers an analysis,

Three analysis modules, at least one of which must be present:

  • Heart Dareboost, which analyzes URLs via the Dareboost service,
  • Heart Observatory, which uses Mozilla Observatory to analyze a given URL,
  • Heart SSL Labs Server, which uses Qualys SSL Labs Server to analyze a website,

and two optional listener modules:

  • Heart BigQuery, which saves analysis results in a Google BigQuery table,
  • Heart Slack, which sends analysis results to a dedicated Slack channel.

Example of a dashboard using data saved via the Heart BigQuery module

Inter-module communication

The diagram shown below illustrates how data is transferred within a set-up that includes a runner module, two analysis modules and two listener modules.

Based on this example, the general communication flow between the various module families operates as follows:

  1. A runner module receives a run command to analyze a URL via a third-party service
  2. The runner module calls the relevant analysis module
  3. The analysis module then communicates with the third-party service to perform the analysis
  4. Once the analysis is complete, the listener modules are notified and the results of the analysis are sent to them
  5. The listener modules react, for example by sending a notification or storing the analysis results in a database
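The five steps above can be sketched in TypeScript. The interfaces and module names below are hypothetical, written only to illustrate the communication flow; Heart's real module APIs may differ.

```typescript
// Minimal sketch of Heart's runner → analysis → listener flow.
// All interfaces here are illustrative, not Heart's actual API.

interface AnalysisResult {
  url: string;
  service: string;
  score: number;
}

// Listener modules react once results are known (step 5).
interface ListenerModule {
  notifyAnalysisDone(result: AnalysisResult): void;
}

// Analysis modules talk to a third-party service (steps 3-4).
interface AnalysisModule {
  name: string;
  analyze(url: string): Promise<AnalysisResult>;
}

// The runner receives the run command and dispatches it (steps 1-2).
class Runner {
  constructor(
    private analyzers: Map<string, AnalysisModule>,
    private listeners: ListenerModule[]
  ) {}

  async run(serviceName: string, url: string): Promise<void> {
    const analyzer = this.analyzers.get(serviceName);
    if (!analyzer) throw new Error(`No analysis module for ${serviceName}`);
    const result = await analyzer.analyze(url);
    // Notify every listener with the same result (steps 4-5).
    this.listeners.forEach((l) => l.notifyAnalysisDone(result));
  }
}

// Fake modules standing in for e.g. Heart Observatory and Heart Slack:
const fakeObservatory: AnalysisModule = {
  name: "observatory",
  analyze: async (url) => ({ url, service: "observatory", score: 85 }),
};
const consoleListener: ListenerModule = {
  notifyAnalysisDone: (r) =>
    console.log(`[${r.service}] ${r.url} scored ${r.score}`),
};

const runner = new Runner(
  new Map([[fakeObservatory.name, fakeObservatory]]),
  [consoleListener]
);
runner.run("observatory", "https://example.com");
```

Because the runner only knows the two interfaces, adding a new analysis or listener module never requires changing existing code, which is the scalability benefit described earlier.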

Technical requirements and installation

Heart has been designed to minimize the number of dependencies required on the system where it is installed. The only system prerequisites are:

  • It must be possible to install and run the Node.js JavaScript runtime
  • The system must support the use of environment variables
  • An Internet connection must be available for communication with third-party services
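Environment variables are typically how such a tool receives its third-party service configuration (endpoints, API keys). The sketch below shows one way a module could read them; the variable name and default URL are illustrative only, not Heart's actual configuration keys.

```typescript
// Hypothetical sketch: resolving third-party service configuration
// from environment variables, with an optional fallback value.
// "ANALYSIS_API_URL" is an invented name for illustration.

function requireEnv(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. an analysis module resolving its API endpoint:
const apiUrl = requireEnv("ANALYSIS_API_URL", "https://example.com/api/v1");
console.log(`Third-party service endpoint: ${apiUrl}`);
```

Failing fast on a missing variable keeps misconfiguration visible at start-up rather than mid-analysis.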

If these prerequisites are in place, Heart can be installed in a few simple steps, explained in detail in the technical documentation. A set-up example is also provided.

Usage examples

Because there are so few prerequisites, Heart can be used in a wide range of contexts: on a personal machine, in a Docker container, or in a continuous integration environment.
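As an illustration, a continuous integration set-up could look like the following GitLab CI sketch. The package names, command syntax, and job layout are assumptions made for the example, not Heart's documented interface; refer to the technical documentation for the exact commands.

```yaml
# Hypothetical GitLab CI job running a Heart analysis after each deployment.
quality_check:
  image: node:lts
  script:
    # Package names assume the @fabernovel npm scope; check the docs.
    - npm install @fabernovel/heart-cli @fabernovel/heart-observatory
    # Illustrative invocation; the actual CLI syntax may differ.
    - npx heart observatory --url "https://example.com"
```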


Why open source?

As explained earlier, other industry professionals are very likely to face the same issues that led us to develop Heart. Since the solution we have developed may be useful to them, we have chosen to make the tool open source.

Open-source publishing has other advantages, too:

  • It makes it easier to identify any potential security breaches, as the code is accessible
  • It encourages innovation by authorizing derivative projects
  • It aligns the tool with developers' practices by absorbing external ideas and suggestions
  • It contributes to a software ecosystem that benefits us all
  • It showcases our expertise

The question of community management should also be considered. How do we handle external contributions? How do we draw up a roadmap for the tool? How do we manage change and developments to the tool?

All these aspects are currently under review.


In the meantime, the list of planned developments is publicly accessible. Several features are planned, including new analysis modules: Google Lighthouse, as well as an SEO-oriented module. A module for assessing a page's environmental footprint may also be worth adding (Green IT recently released a tool of this kind as a Chrome browser extension).