Connected Operational Intelligence
The project is a cloud platform that gives the user a complete desktop for controlling and managing a large construction site.
  • Kotlin, Spring Boot,
    Gradle, PostgreSQL,
    Redis, AWS DynamoDB
  • AWS, Terraform
  • React.js, Redux.js
  • Mapbox.js, D3.js
  • Storybook
  • TestNG, WebdriverIO
Creating a microservice backend architecture

Creating a modular frontend architecture

Visualizing real-time and historical data with Mapbox and D3.js

Storing big data in NoSQL databases

Handling real-time data from IoT devices

Building a visual timeline that provides historical data

Creating project infrastructure using AWS services

Writing unit and integration tests
The project
Visualization of construction data
The app is built to simplify construction management: view historical data, the current status, and predictions, and plan the building process.

A typical use case is managing the construction of a tunnel: using the app, the operator can view historical data, the current status, and predictions based on known data. The individual services provide different kinds of information and can be used as stand-alone sources as well as in combination.
Usually, all data is displayed on the map, with detailed information in draggable windows opened over the map.
Services and navigation
The project is conceptualized as a set of individual components. Each component provides abstract functionality, and configuration turns it into one or more instances specific to the site.

The project can be built with a different set of services to meet the needs and requirements of different clients, without loading the browser with unused code in the project build.
All available services are displayed in the navigation, which is also customizable to meet the customer's identity.
There are two possible views:
  • Sidebar navigation: the same navigation structure, displayed as a conventional sidebar.
  • Control Knob: a circular navigation tool with three levels. This tool is draggable and can be minimized.
Tunnel boring module
The Tunnel Boring module is responsible for displaying the work of the tunnel boring machines (TBMs).

On the map, the user can see the tunnel line, the planned and actual positions of the boring machines for the selected date, and the predicted position if a future date is selected.
The displayed data is based on shift reports, where the operator inputs all relevant data:
  1. the number of bored meters,
  2. the number of installed rings and information about them,
  3. detailed data about the work done during the selected shift.
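As an illustration of how a predicted position could be derived from known data, here is a naive sketch that extrapolates recent daily advance rates. The product's actual prediction model is not described in this document; the method and numbers below are assumptions.

```java
import java.util.List;

// Naive progress extrapolation: average the recent daily advance (meters
// bored per day, taken from shift reports) and project it forward.
// Illustrative only -- the real prediction model is not documented here.
public class BoringForecast {

    // dailyAdvance: meters bored on each of the last N days, oldest first.
    public static double averageDailyAdvance(List<Double> dailyAdvance) {
        return dailyAdvance.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
    }

    // Predicted total bored meters after `daysAhead` more days of work.
    public static double predictBoredMeters(double boredSoFar,
                                            List<Double> dailyAdvance,
                                            int daysAhead) {
        return boredSoFar + averageDailyAdvance(dailyAdvance) * daysAhead;
    }
}
```

A real model would also account for geology, planned maintenance, and shift schedules; a moving average is only the simplest possible baseline.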
Factories module
There are factories that produce rings for the tunnel, and the operator can view their status and KPIs as well.
SCADA module
During the construction process, workers need to monitor indicators to plan the work and prevent incidents.

The SCADA module provides information about all sensors installed inside and outside the tunnel: their position on the map and their real-time and historical data. In real-time mode, alerts pop up on the screen when a sensor reaches its threshold.

Real-time data is received over WebSocket and reflected in the UI immediately.
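The threshold check behind those pop-up alerts can be sketched as a simple comparison of each incoming reading against its sensor's configured limit. The sensor ids and threshold values below are made up for illustration; the real configuration format is not described in this document.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of a server-side check behind SCADA pop-up alerts: compare each
// incoming real-time reading against its sensor's configured threshold.
// Sensor ids and thresholds are hypothetical.
public class ThresholdAlerts {

    public record Reading(String sensorId, double value) {}

    // Returns an alert message for every reading at or above its threshold.
    public static List<String> check(List<Reading> readings,
                                     Map<String, Double> thresholds) {
        List<String> alerts = new ArrayList<>();
        for (Reading r : readings) {
            Double limit = thresholds.get(r.sensorId());
            if (limit != null && r.value() >= limit) {
                alerts.add(r.sensorId() + " reached threshold: "
                        + r.value() + " >= " + limit);
            }
        }
        return alerts;
    }
}
```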
CCTV module
In the CCTV module, the operator can view video from cameras installed in the tunnel and in the surrounding area. It's possible to filter cameras, attach/detach the camera popup to its location on the map, view full-screen video, and take a screenshot.
Productivity module
In the Productivity module, the operator can plan works and assign resources (groups of workers) to them.
All planned and completed works are displayed as lines on the map and as charts in the window.
As a part of this module, the operator can also control trucks and their trips inside tunnels (based on the information received from beacons). This data is also displayed on the map (the current truck position) and in the summary popup.
Logistics module
This module is used to manage people and vehicles inside tunnels. There are several submodules that also can be used separately or together.

Beacons are installed in the tunnels and surrounding areas; they detect people and vehicles (all of which carry unique tags, so detections are personalized).
This module shows a detection heatmap based on the data received from the beacons.

When the map is zoomed out, a conventional heatmap is displayed; when the map is zoomed in, the beacon coverage areas are colored instead.
This module also receives and displays alerts in real time; clicking an alert zooms the map to the emergency location.
Alert types:
  • the number of people in the tunnel exceeds the threshold
  • a worker is in the cut zone
  • an incident has occurred
  • a person is outside their work zone
  • a vehicle is speeding
Static planner
The module for planning static restrictions in the tunnel: works that will take place in the tunnel, or a large vehicle that will operate inside it.
Dynamic planner
The module for checking the availability of the tunnel during a specific time interval.

The tunnel is displayed on the map, and with the "Vehicle movement planner" the user can check whether the tunnel is free by choosing the vehicle, the start position, and the end position.
The module for managing incidents in the tunnels: an employee can report an incident immediately.
The module for viewing active emergency events on the map. An emergency report can also be downloaded.
Users and permissions
Access to all modules and their functionality is managed by permissions. There are three levels: owner, editor, and reader. If a user has no permission set for a service, that service does not appear in their navigation.

There are also user groups and it's possible to set permissions for the group.
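The three-level model with group grants can be sketched as follows. How the real system combines direct and group permissions is not specified here; this sketch assumes the effective level is the highest level granted by any source.

```java
import java.util.Collection;

// Sketch of the three-level permission model (reader < editor < owner).
// Assumption: a user's effective level for a service is the highest level
// granted either directly or via any of their groups; with no grant at all,
// the service is hidden from navigation.
public class Permissions {

    public enum Level { READER, EDITOR, OWNER }

    // Highest of the user's direct grant and group grants (null = no grant).
    public static Level effective(Level direct, Collection<Level> groupGrants) {
        Level best = direct;
        for (Level g : groupGrants) {
            if (best == null || (g != null && g.compareTo(best) > 0)) {
                best = g;
            }
        }
        return best;
    }

    public static boolean visibleInNavigation(Level effective) {
        return effective != null;
    }
}
```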
The frontend is made with React. It's modular and consists of components.
There are 3 general component types:
  • Services — a general set of components, all services are fully independent
  • Complex components — components with data handling logic
  • Simple components — components without data handling logic

Although it is built into a single monolithic set of HTML, CSS, and JS files, it is easy to add a new module and build a UI that includes only the modules needed for a specific deployment.

For cartographic displays, we chose Mapbox. The frontend displays both real-time and historical data with good performance. For real-time data and notifications, we use the WebSocket protocol.
The frontend interacts with the services using JSON-RPC over HTTP or WebSocket (with rare exceptions). The same protocol can be used for services to interact with each other.
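For reference, a JSON-RPC 2.0 request envelope looks like this. The method name and parameters below are hypothetical; the project's real method names are not listed in this document, and a real client would use a JSON library rather than string concatenation.

```java
// Minimal sketch of the JSON-RPC 2.0 envelope the frontend sends to services.
// Method names and params are assumptions for illustration only.
public class JsonRpc {

    // Builds a JSON-RPC 2.0 request; paramsJson must already be valid JSON.
    public static String request(long id, String method, String paramsJson) {
        return "{\"jsonrpc\":\"2.0\",\"id\":" + id
                + ",\"method\":\"" + method + "\",\"params\":" + paramsJson + "}";
    }
}
```

The same envelope travels unchanged over HTTP POST or a WebSocket message, which is what makes sharing one protocol across both transports convenient.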

The authentication is performed using JWT tokens, and any service can verify a token with the known public RSA key. The token is used both for frontend-to-services authorization and for authorization between services. The services use a shared Redis database to register themselves and discover each other.
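The point of the RSA scheme is that verification is stateless: any service holding the public key can check a token without calling the issuer. Below is a minimal RS256 sketch using only the JDK; a real service would typically use a JWT library and would also validate claims such as expiry, which this sketch omits.

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

// Sketch of stateless RS256 JWT signing and verification with the JDK only.
// Claim validation (exp, aud, ...) is deliberately omitted.
public class JwtRs256 {

    private static final Base64.Encoder ENC = Base64.getUrlEncoder().withoutPadding();
    private static final Base64.Decoder DEC = Base64.getUrlDecoder();

    // Issuer side: sign header.payload with the private RSA key.
    public static String sign(String payloadJson, PrivateKey key) throws Exception {
        String header = ENC.encodeToString(
                "{\"alg\":\"RS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = ENC.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        String signingInput = header + "." + payload;
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(key);
        sig.update(signingInput.getBytes(StandardCharsets.UTF_8));
        return signingInput + "." + ENC.encodeToString(sig.sign());
    }

    // Any service: verify the signature with the shared public key.
    public static boolean verify(String token, PublicKey key) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 3) return false;
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update((parts[0] + "." + parts[1]).getBytes(StandardCharsets.UTF_8));
        return sig.verify(DEC.decode(parts[2]));
    }
}
```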
New functionality is added by introducing new services. They are registered in the "Discovery" service and become available for calls. The API is extended with a new endpoint (a new URL path under the same API domain) exposing new methods.

Typically, each new component consists of several services, each covering some aspect of the new functionality.
The services make wide use of shared libraries (separate Gradle projects living in the same codebase).

The services can call each other, so specific data or functionality can be provided by a single (micro)service.
We use:

  1. PostgreSQL to store business entities. We use JSONB columns, so we can store any set of fields for any object. Access to the entities is generalized with the "Entity Manager" library, which keeps this approach reusable and schema-independent.
  2. Redis as an in-memory database for service discovery and for caching specific data. The service discovery data is internal to each deployment.
  3. AWS DynamoDB to store historical data.
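The "any set of fields for any object" idea can be sketched as entities that are just a typed name plus an arbitrary field map. This in-memory version stands in for the PostgreSQL/JSONB-backed implementation; the class and method names are illustrative, not the real Entity Manager API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of a generalized entity store: each entity is an id, a type, and an
// arbitrary field map, so access code never assumes a fixed schema. Stands in
// for the real JSONB-backed "Entity Manager"; names are assumptions.
public class EntityManager {

    public record Entity(String id, String type, Map<String, Object> fields) {}

    private final Map<String, Entity> store = new HashMap<>();

    public void save(Entity e) {
        store.put(e.id(), e);
    }

    public Optional<Entity> find(String id) {
        return Optional.ofNullable(store.get(id));
    }

    // Read one field without knowing the entity's full shape.
    public Optional<Object> field(String id, String name) {
        return find(id).map(e -> e.fields().get(name));
    }
}
```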
Cloud platform
The services are deployed in AWS Elastic Container Service with Fargate. When the system is idle, most of the services are shut down to save costs; they wake up when the first user connects.
Quality Assurance
At the beginning, we had raw requirements and technical documentation from the customer. QA researched all the documents and defined criteria for development.
Not all layouts were ready, so QA worked with the developers on prototypes and approved them with the customer.

The backend and frontend parts of the services are covered by autotests.
For all the tests, we chose Java as the main language, with Maven as the build automation tool. To automate API testing, we used the REST-assured Java library and the JUnit test framework.

To simplify working with JSON, we moved the request body examples into the "resources" folder and used them as templates, replacing only the parameters needed for a given test. Both positive and negative tests were written; for this we used parameterized JUnit annotations.
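The template substitution can be sketched as a simple placeholder replacement. The `${name}` placeholder syntax and the parameter names below are assumptions; the real test suite's template format is not shown in this document.

```java
import java.util.Map;

// Sketch of the template trick used for parameterized API tests: keep a full
// request body as a template and substitute only the parameters a given test
// cares about. The ${name} placeholder syntax is an assumption.
public class JsonTemplates {

    public static String fill(String template, Map<String, String> params) {
        String result = template;
        for (Map.Entry<String, String> e : params.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }
}
```

In the real suite the template text would be loaded from the "resources" folder rather than declared inline.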

For UI tests, the PageObject pattern is used to describe the element base of each page and its interactions. The page classes expose methods for comprehensive interaction with the element base. For more complex interactions with elements, we use Selenium Actions and JavascriptExecutor.

The Maven Surefire Report Plugin, in conjunction with the Allure Framework, is used to generate reports.
