The aim of this document is to provide all the necessary information to developers who would like to start working on OperatorFabric. It will walk you through setting up the necessary tooling to be able to launch OperatorFabric in development mode, describe the structure of the project and point out useful tools (Gradle tasks, scripts, etc.) for development purposes.

1. Requirements

This section describes the project's requirements regardless of installation options. Please see Setting up your development environment below for details on:
  • setting up a development environment with these prerequisites

  • building and running OperatorFabric

1.1. Tools and libraries

  • Gradle 6

  • Java 8.0

  • Maven 3.5.3

  • Docker

  • Docker Compose with 2.1+ file format support

  • Chrome (needed for UI tests in build)

The current JDK used for the project is Java 8.0.242-zulu.
It is highly recommended to use sdkman and nvm to manage tool versions.
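
If you do not have them yet, sdkman can be installed with the one-liner below (taken from its official site, sdkman.io); for nvm, use the install script published on its project page (https://github.com/nvm-sh/nvm):

curl -s "https://get.sdkman.io" | bash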

Once you have installed sdkman and nvm, you can source the following script to set up your development environment (it sets appropriate versions of Gradle, Java and Maven, as well as project variables):

Set up development environment (using sdkman and nvm)
source bin/load_environment_light.sh

1.2. Software

  • RabbitMQ 3.7.6+: AMQP messaging layer that allows inter-service communication

  • MongoDB 4.0+: persistent card storage

RabbitMQ is required for:

  • Time change push

  • Card AMQP push

  • Multiple service sync

MongoDB is required for:

  • Current Card storage

  • Archived Card storage

  • User Storage

Installing MongoDB and RabbitMQ manually is not necessary, as preconfigured instances are available in the form of docker-compose configuration files under src/main/docker.
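
For example, both can be started in detached mode using the preconfigured compose files shipped with the project:

cd src/main/docker/mongodb
docker-compose up -d
cd ../rabbitmq
docker-compose up -d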

1.3. Browser support

We currently use Firefox (63.0.3). Automated UI tests in the build rely on Chrome (73.0.3683.86).

2. Setting up your development environment

The steps below assume that you have installed and are using sdkman and nvm to manage tool versions (for Java, Gradle, Node and npm).

There are several ways to get started with OperatorFabric. Please look into the section that best fits your needs.

If you encounter any issue, see Troubleshooting below. In particular, a command that hangs then fails is often a proxy issue.

The following steps describe how to launch MongoDB, RabbitMQ and SonarQube using Docker, build OperatorFabric using gradle and run it using the run_all.sh script.

2.1. Clone repository

git clone https://github.com/opfab/operatorfabric-core.git
cd operatorfabric-core

2.2. Set up your environment (environment variables & appropriate versions of Gradle, Maven, etc.)

source bin/load_environment_light.sh
From now on, you can use the environment variable $OF_HOME to go back to the root directory of OperatorFabric.

2.3. Deploy dockerized MongoDB, RabbitMQ and SonarQube

MongoDB, RabbitMQ and SonarQube are needed to run the tests that are part of the build.

A docker-compose file with properly configured containers is available in src/main/docker/test-quality-environment/.

The docker-compose can be run in detached mode:

cd src/main/docker/test-quality-environment/
docker-compose up -d
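
You can check that the MongoDB, RabbitMQ and SonarQube containers are up before launching the build:

docker-compose ps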

2.4. Build OperatorFabric with Gradle

Use the Gradle wrapper to ensure the project is built the same way from one machine to another.

To only compile and package the jars:

cd $OF_HOME
./gradlew assemble

To run the unit tests and compile and package the jars:

cd $OF_HOME
./gradlew build

2.5. Run OperatorFabric Services using the run_all.sh script

bin/run_all.sh start
See bin/run_all.sh -h for details.

2.6. Check services status

bin/run_all.sh status

2.7. Log into the UI

URL: localhost:2002/ui/

login: admin

password: test

It might take a little while for the UI to load even after all services are running.
Don’t forget the final slash in the URL or you will get an error.

2.8. Push cards to the feed

You can check that cards appear in the feed by running the push_cards_loop.sh script.

services/core/cards-publication/src/main/bin/push_cards_loop.sh
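
Alternatively, you can push a single card by hand: the cards-publication service listens on port 2102 (see the ports table in section 6.1.2). The sketch below assumes the cards endpoint exposed in the service's swagger.yaml (shown here as /cards) and a card.json file matching its card schema:

curl -X POST http://localhost:2102/cards -H "Content-Type: application/json" -d @card.json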

3. User Interface

This project was partially generated with Angular CLI version 6.0.8.

In the following document the variable declared as OF_HOME is the root folder of the operatorfabric-core project.

CLI

stands for Command Line Interface

SPA

stands for Single Page Application

OS

stands for Operating System

3.1. Run

3.1.1. Linux

After launching the docker containers, run the application with the command $OF_HOME/bin/run_all.sh start. Once the whole application is ready, you should see the following output in your terminal:

##########################################################
Starting client-gateway-cloud-service, debug port: 5008

##########################################################
pid file: $OF_HOME/services/infra/client-gateway/build/PIDFILE
Started with pid: 7479

##########################################################
Starting users-business-service, debug port: 5009

##########################################################
pid file: $OF_HOME/services/core/users/build/PIDFILE
Started with pid: 7483

##########################################################
Starting time-business-service, debug port: 5010

##########################################################
pid file: $OF_HOME/services/core/time/build/PIDFILE
Started with pid: 7488

##########################################################
Starting cards-consultation-business-service, debug port: 5011

##########################################################
pid file: $OF_HOME/services/core/cards-consultation/build/PIDFILE
Started with pid: 7493

##########################################################
Starting cards-publication-business-service, debug port: 5012

##########################################################
pid file: $OF_HOME/services/core/cards-publication/build/PIDFILE

Wait a moment before trying to connect to the SPA, leaving time for the client-gateway to boot up completely.

The SPA, on a local machine, is available at the following URL: localhost:2002/ui/.

To log in you need to use a valid user. Currently you need to use a login/password pair defined in $OF_HOME/services/infra/auth/src/main/java/org/lfenergy/operatorfabric/auth/config/WebSecurityConfiguration.java, for example admin with password test.

To test the reception of cards, you can use the following script to create dummy cards:

$OF_HOME/services/core/cards-publication/src/main/bin/push_cards_loop.sh

Once logged in, with that script running in the background, you should be able to see some cards displayed in localhost:2002/ui/feed.

3.2. Build

Run ng build to build the project. The build artifacts will be stored in:

$OF_HOME/services/web/web-ui/build/src/generated/resources/static

3.3. Test

3.3.1. Standalone tests

In the $OF_HOME/ui/main directory, run ng test --watch=false to execute the unit tests, which are based on Jasmine and use Karma to drive the browser.

3.3.2. Test during UI development

  1. if the RabbitMQ and MongoDB docker containers are not running, launch them;

  2. set your environment variables with . $OF_HOME/bin/load_environment_light.sh;

  3. run the micro services using the same command as earlier: $OF_HOME/bin/run_all.sh start;

  4. if needed, enable a card-operation test flow using the script $OF_HOME/services/core/cards-publication/src/main/bin/push_cards_loop.sh;

  5. launch an angular server with the command: ng serve;

  6. test your changes in your browser using this URL: localhost:4200, which leads to localhost:4200/#/feed.
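
Put together, a typical UI development session looks like this (a sketch assuming the defaults described above):

cd $OF_HOME
. bin/load_environment_light.sh
bin/run_all.sh start
services/core/cards-publication/src/main/bin/push_cards_loop.sh &
cd ui/main
ng serve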

4. Environment variables

These variables are loaded by bin/load_environment_light.sh and bin/load_environment_ramdisk.sh:

  • OF_HOME: OperatorFabric root dir

  • OF_CORE: OperatorFabric business services subroot dir

  • OF_INFRA: OperatorFabric infrastructure services subroot dir

  • OF_CLIENT: OperatorFabric client data definition subroot dir

  • OF_TOOLS: OperatorFabric tooling libraries subroot dir

Additionally, you may want to configure the following variables:

  • Docker build proxy configuration (used to configure alpine apk proxy settings)

    • APK_PROXY_URI

    • APK_PROXY_HTTPS_URI

    • APK_PROXY_USER

    • APK_PROXY_PASSWORD

5. Project Structure

5.1. Tree View

project
├──bin
│   └─ travis
├──client
│   ├──cards (cards-client-data)
│   ├──src
│   ├──time (time-client-data)
│   └──users (users-client-data)
├──services
│   ├──core
│   │   ├──cards-consultation (cards-consultation-business-service)
│   │   ├──cards-publication (cards-publication-business-service)
│   │   ├──src
│   │   ├──thirds (third-party-business-service)
│   │   ├──time (time-business-service)
│   │   └──users (users-business-service)
│   ├──infra
│   │   ├──client-gateway (client-gateway-cloud-service)
│   │   ├──config (configuration-cloud-service)
│   │   └──registry (registry-cloud-service)
│   └──web
│       └──web-ui
├──src
│   ├──docs
│   │   ├──asciidoc
│   │   └──modelio
│   └──main
│       ├──docker
│       └──headers
├──tools
│   ├──generic
│   │   ├──test-utilities
│   │   └──utilities
│   ├── spring
│   │   ├──spring-amqp-time-utilities
│   │   ├──spring-mongo-utilities
│   │   ├──spring-oauth2-utilities
│   │   ├──spring-test-utilities
│   │   └──spring-utilities
│   └──swagger-spring-generators
└─ui

5.2. Content Details

5.3. Conventions regarding project structure and configuration

Sub-projects must conform to a few rules in order for the configured Gradle tasks to work:

5.3.1. Java

[sub-project]/src/main/java

contains java source code

[sub-project]/src/test/java

contains java tests source code

[sub-project]/src/main/resources

contains resource files

[sub-project]/src/test/resources

contains test resource files

5.3.2. Modeling

Core services projects declaring REST APIs that use Swagger for their definition must declare two files:

[sub-project]/src/main/modeling/swagger.yaml

Swagger API definition

[sub-project]/src/main/modeling/config.json

Swagger generator configuration

5.3.3. Docker

Service projects all have a docker image generated in their build cycle (see Gradle Tasks).

Per-project configuration:

  • docker file: [sub-project]/src/main/docker/Dockerfile

  • docker-compose file: [sub-project]/src/main/docker/docker-compose.yml

  • runtime data: [sub-project]/src/main/docker/volume is copied to [sub-project]/build/docker-volume/ by the copyWorkingDir task. The latter can then be mounted as a volume in docker containers.

6. Development tools

6.1. Scripts (bin and CICD)

bin/build_all.sh

builds all artifacts, as Gradle alone cannot manage the inter-project dependencies

bin/clean_all.sh

removes IDE data (project configuration, build output directories) for IntelliJ IDEA and VS Code

bin/load_environment_light.sh

sets up the environment when sourced (Java, Gradle, Maven and Node versions)

bin/load_environment_ramdisk.sh

sets up the environment and links build subdirectories to a ramdisk mounted at ~/tmp when sourced

bin/run_all.sh

runs all services (see below)

bin/setup_dockerized_environment.sh

generates docker images for all services

6.1.1. load_environment_ramdisk.sh

There are prerequisites before sourcing load_environment_ramdisk.sh:

  • The logged-in user needs sudo rights for mount

  • The system needs to have enough free RAM

Never run gradle clean or ./gradlew clean, as it would delete those links.

6.1.2. run_all.sh

Please see run_all.sh -h usage before running.

Prerequisites

  • mongo running on port 27017 with user "root" and password "password" (see src/main/docker/mongodb/docker-compose.yml for a preconfigured instance).

  • rabbitmq running on port 5672 with user "guest" and password "guest" (see src/main/docker/rabbitmq/docker-compose.yml for a preconfigured instance).
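
A quick way to check that both prerequisites are reachable before starting the services (a sketch assuming the default ports above and the nc utility):

nc -z localhost 27017 && echo "MongoDB reachable"
nc -z localhost 5672 && echo "RabbitMQ reachable"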

Ports configuration

Port   Service              Description
2000   config               Configuration service http (REST)
2001   registry             Registry service http (REST)
2002   gateway              Gateway service http (REST+html)
2100   thirds               Third party management service http (REST)
2101   time                 Time management service http (REST)
2102   cards-publication    Card publication service http (REST)
2103   users                Users management service http (REST)
2104   cards-consultation   Card consultation service http (REST)
4000   config               java debug port
4001   registry             java debug port
4002   gateway              java debug port
4100   thirds               java debug port
4101   time                 java debug port
4102   cards-publication    java debug port
4103   users                java debug port
4104   cards-consultation   java debug port
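
Assuming the services are started with a JDWP socket listener on these debug ports (which is what the run_all.sh startup output suggests), you can attach a debugger to a running service, for example with jdb from the JDK:

jdb -attach localhost:4103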

6.1.3. setup_dockerized_environment.sh

Please see setup_dockerized_environment.sh -h usage before running.

Builds all sub-projects and generates docker images and volumes for docker-compose.

6.2. Gradle Tasks

In this section only custom tasks are described. For more information on tasks, refer to the output of the tasks Gradle task and to the official Gradle and plugin documentation.

6.2.1. Services

Common tasks for all sub-projects
  • Test tasks

    • unitTest: runs unit tests

  • Other:

    • copyWorkingDir: copies [sub-project]/src/main/docker/volume to [sub-project]/build/docker-volume/

    • copyDependencies: copies dependencies to build/libs
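
For example, to run one of these tasks on a single sub-project (the project path below is illustrative, following the directory layout of section 5.1):

./gradlew :services:core:users:unitTest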

Core
  • Swagger Generator tasks

    • debugSwaggerOperations: generates swagger code from /src/main/modeling/config.json to build/swagger-analyse

    • swaggerHelp: displays help regarding the swagger configuration options for java

Thirds Service
  • Test tasks

    • prepareTestDataDir: prepares the build/test-data directory for test data

    • compressBundle1Data, compressBundle2Data: generate tar.gz third-party configuration data for tests in build/test-data

    • prepareDevDataDir: prepares the build/dev-data directory for the bootRun task

    • createDevData: prepares data in build/test-data for running the bootRun task during development

  • Other tasks

    • copyCompileClasspathDependencies: copies compile classpath dependencies, catching lombok, which must be sent to sonarqube

infra/config
  • Test tasks

    • createDevData: prepares data in build/test-data for running the bootRun task during development

tools/generic
  • Test tasks

    • prepareTestData: copies test data from src/test/data/simple to build/test-data/

    • compressTestArchive: compresses the contents of /src/test/data/archive to /build/test-data/archive.tar.gz

6.2.2. Gradle Plugins

In addition to these custom tasks and standard Gradle tasks, OperatorFabric uses several Gradle plugins.

7. Useful recipes

7.1. Running a sub-project from its jar file

  • build the executable jar: gradle :[sub-projectPath]:bootJar

  • then run it: java -jar [sub-projectPath]/build/libs/[sub-project].jar

7.2. Overriding properties when running from jar file

  • java -jar [sub-projectPath]/build/libs/[sub-project].jar --spring.config.additional-location=file:[filepath]
NB: properties may be set using a ".properties" file or a ".yml" file. See the Spring Boot configuration documentation for more info.

  • Generic property list extract:

    • server.port (defaults to 8080): embedded server port

  • :services:core:third-party-service property list extract:

    • operatorfabric.thirds.storage.path (defaults to ""): where to save/load OperatorFabric Third Party data
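
For instance, keeping the placeholder notation used above, running a service on another port with an extra configuration file (file name illustrative) looks like this:

java -jar [sub-projectPath]/build/libs/[sub-project].jar --server.port=9090 --spring.config.additional-location=file:./custom-config.yml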

7.3. Generating docker images

To generate all docker images, run bin/setup_dockerized_environment.sh.

INFORMATION: if you work behind a proxy you need to specify the following properties to configure the alpine apk package manager:

  • apk.proxy.uri: proxy http uri, e.g. http://somewhere:3128 (defaults to blank)

  • apk.proxy.httpsuri: proxy https uri, e.g. http://somewhere:3128 (defaults to the apk.proxy.uri value)

  • apk.proxy.user: proxy user

  • apk.proxy.password: proxy unescaped password

Alternatively, you may configure the following environment variables:

  • APK_PROXY_URI

  • APK_PROXY_HTTPS_URI

  • APK_PROXY_USER

  • APK_PROXY_PASSWORD
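
For example (values are placeholders for your own proxy settings):

export APK_PROXY_URI="http://proxy.example.com:3128"
export APK_PROXY_HTTPS_URI="$APK_PROXY_URI"
export APK_PROXY_USER="user"
export APK_PROXY_PASSWORD="secret"
bin/setup_dockerized_environment.sh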

8. Troubleshooting

Proxy error when running third-party docker-compose

Error message
Pulling rabbitmq (rabbitmq:3-management)...
ERROR: Get https://registry-1.docker.io/v2/: Proxy Authentication Required
Possible causes & resolution

When running docker-compose files using third-party images (such as rabbitmq, mongodb, etc.) for the first time, docker needs to pull these images from their repositories. If the docker proxy isn't set properly, you will see the above message.

To set the proxy, follow these steps from the docker documentation.

If your proxy needs authentication, add your user and password as follows:

HTTP_PROXY=http://user:password@proxy.example.com:80/
The password should be URL-encoded.
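
On a systemd-based host, the approach described in the Docker documentation boils down to something like the following sketch (adapt paths and values to your proxy):

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://user:password@proxy.example.com:80/"
Environment="HTTPS_PROXY=http://user:password@proxy.example.com:80/"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker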

Gradle Metaspace error

Gradle task (for example gradle build) fails with the following error:

Error message
* What went wrong:
Metaspace
Possible causes & resolution

This is an issue with the Gradle daemon. Stopping the daemon with gradle --stop and re-launching the build should solve the issue.

Java version not available when setting up environment
When sourcing the load_environment_light script to set up your environment, you might get the following error message:

Error message
Stop! java 8.0.192-zulu is not available. Possible causes:
 * 8.0.192-zulu is an invalid version
 * java binaries are incompatible with Linux64
 * java has not been released yet

Select the next available version and update load_environment_light accordingly before sourcing it again.

Possible causes & resolution

The java version currently listed in the script might have been deprecated (for security reasons) or might not be available for your operating system (for example, 8.0.192-zulu wasn’t available for Ubuntu).

Run sdk list java to find out which versions are available. You will get this kind of output:

================================================================================
Available Java Versions
================================================================================
     13.ea.16-open       9.0.4-open          1.0.0-rc-11-grl
     12.0.0-zulu         8.0.202-zulu        1.0.0-rc-10-grl
     12.0.0-open         8.0.202-amzn        1.0.0-rc-9-grl
     12.0.0-librca       8.0.202.j9-adpt     1.0.0-rc-8-grl
     11.0.2-zulu         8.0.202.hs-adpt
     11.0.2-open         8.0.202-zulufx
     11.0.2-amzn         8.0.202-librca
     11.0.2.j9-adpt      8.0.201-oracle
     11.0.2.hs-adpt  > + 8.0.192-zulu
     11.0.2-zulufx       7.0.211-zulu
     11.0.2-librca       6.0.119-zulu
     11.0.2-sapmchn      1.0.0-rc-15-grl
     10.0.2-zulu         1.0.0-rc-14-grl
     10.0.2-open         1.0.0-rc-13-grl
     9.0.7-zulu          1.0.0-rc-12-grl

================================================================================
+ - local version
* - installed
> - currently in use
================================================================================
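
Once you have picked an available version, install it with sdkman and update load_environment_light accordingly (the version below is illustrative):

sdk install java 8.0.202-zulu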

BUILD FAILED with message Execution failed for task ':ui:main-user-interface:npmInstall'.

Error message
FAILURE: Build failed with an exception.

    What went wrong:
    Execution failed for task ':ui:main-user-interface:npmInstall'.
Possible causes & resolution

sudo was used before running ./gradlew assemble.

Don't use sudo to build OperatorFabric, otherwise unexpected problems could arise.

9. Keycloak Configuration

The configuration needed for development purposes is automatically loaded from the dev-realms.json file. However, the steps below describe how it can be reproduced from scratch on a blank Keycloak instance, in case you want to add to it.

The Keycloak Management interface is available at [host]:89/auth/admin. Default credentials are admin/admin.

9.1. Add Realm

  • Click the top-left down arrow next to Master

  • Add Realm

  • Name it dev (or whatever)

9.2. Set up at least one client (or, better, one per service)

9.2.1. Create client

  • Click Clients in left menu

  • Click Create Button

  • Set client ID to "opfab-client" (or whatever)

  • Select the Openid-Connect protocol

  • Enable Authorization

  • Set Access Type to confidential

  • Save

9.2.2. Add a Role to Client

  • In client view, click Roles tab

  • Click Add button

  • create a USER role (or whatever)

  • save

9.2.3. Create a Mapper

This mapper is used to map the user name to a field that suits the services

  • name it sub

  • set mapper type to User Property

  • set Property to username

  • set Token claim name to sub

  • enable add to access token

  • save

9.3. Create Users

  • Click Users in left menu

  • Click Add User button

  • Set username to admin

  • Save

  • Select Role Mappings tab

  • Select "opfab-client" in client roles combo (or whatever id you formerly chose)

  • Select USER as assigned role (or whatever role you formerly created)

  • Select Credentials tab

  • set password and confirmation to "test"

Repeat the process for the other users: rte-operator, tso1-operator, tso2-operator.
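
To check the resulting setup, you can request a token from Keycloak's standard openid-connect token endpoint (a sketch; since the client was created with Access Type confidential, its secret from the Credentials tab must be passed as well):

curl -X POST "http://[host]:89/auth/realms/dev/protocol/openid-connect/token" -d "grant_type=password" -d "client_id=opfab-client" -d "client_secret=[client secret]" -d "username=admin" -d "password=test"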

9.3.1. Development-specific configuration

To facilitate development, in the configuration file provided in the git repository (dev-realms.json), sessions are set to last 10 hours (36000 seconds) and SSL is not required. These parameters should not be used in production.

The following parameters are set:

  • accessTokenLifespan: 36000

  • ssoSessionMaxLifespan: 36000

  • accessCodeLifespan: 36000

  • accessCodeLifespanUserAction: 36000

  • sslRequired: none