This repository contains integration tests for the fabric8-analytics services.
The tests can be run against an existing deployment, or locally using `docker-compose`.
For a local system only, set the values of the following environment variables in `env.sh` to test specific deployments (a short sketch of how the test code can pick these up follows the list):

- `F8A_API_URL` - API server URL
- `F8A_JOB_API_URL` - Jobs service URL
- `AWS_ACCESS_KEY_ID` - Access key for Staging/Prod AWS (optional)
- `AWS_SECRET_ACCESS_KEY` - Secret key for Staging/Prod AWS (optional)
- `S3_REGION_NAME` - S3 region name (optional)
- `THREE_SCALE_PREVIEW_USER_KEY` - Staging/Prod 3scale user key
- `OSIO_AUTH_SERVICE` - Staging/Prod auth URL
- `F8A_THREE_SCALE_PREVIEW_URL` - Staging/Prod 3scale authentication URL
- `F8A_SERVICE_ID` - Service ID
- `F8A_GREMLIN_URL` - Staging/Prod Gremlin URL for DB queries
- `F8A_GEMINI_API_URL` - Staging/Prod Gemini API URL
- `RECOMMENDER_REFRESH_TOKEN` - Refresh token; see the section on setting up security tokens below
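How these variables are consumed is up to the test suite's own configuration code; purely as an illustration, a setup helper could read them via `os.environ` roughly like this (the helper name and fallback values are hypothetical):

    # illustrative sketch only - not the suite's actual configuration code
    import os

    def read_deployment_config():
        """Collect deployment endpoints and credentials from the environment."""
        return {
            # fallbacks are hypothetical; the real suite may use different defaults
            "api_url": os.environ.get("F8A_API_URL", "http://localhost:32000"),
            "jobs_api_url": os.environ.get("F8A_JOB_API_URL", ""),
            "gremlin_url": os.environ.get("F8A_GREMLIN_URL", ""),
            "three_scale_user_key": os.environ.get("THREE_SCALE_PREVIEW_USER_KEY", ""),
            "refresh_token": os.environ.get("RECOMMENDER_REFRESH_TOKEN", ""),
        }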
By default, the system running on localhost is tested. The integration tests are currently not containerized and they start and stop Fabric8 Analytics multiple times, so the following prerequisites need to be met:
- Configure a Python environment on the host system.
- Ensure that you are a member of the `docker` group so that you can execute `docker-compose` without `sudo`. For information about setting up and using Docker as a non-root user, see the Docker documentation.
Feature tests are written using behave.
To add new feature tests:
- Edit an existing `name.feature` file in the `features/` folder (or create a new one) and save it as `new-name.feature`.
- Specify the missing steps in the `common.py` file in the `features/steps/` folder (or create a new step file, where appropriate); a minimal step-definition sketch follows this list.
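For illustration, a new step definition might look like the following (a minimal sketch; the step texts and the file name are made up, not existing steps of this suite):

    # features/steps/example_steps.py - hypothetical new step file
    #
    # Backs scenario lines such as:
    #   Given the example service is expected at http://localhost:32000
    #   Then the example service should answer with HTTP status 200
    import requests
    from behave import given, then

    @given('the example service is expected at {url}')
    def remember_service_url(context, url):
        # store data on the behave context so later steps can use it
        context.example_service_url = url

    @then('the example service should answer with HTTP status {status:d}')
    def check_service_status(context, status):
        response = requests.get(context.example_service_url)
        assert response.status_code == status, \
            "Expected HTTP {}, got {}".format(status, response.status_code)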
The test suite currently covers the following areas:

- Selfcheck: Some checks to ensure that the steps work correctly (list of scenarios).
- Smoke tests: Smoke tests for checking if the main API endpoints are available and work as expected (list of scenarios).
- Server API: API tests for the server module (list of scenarios).
- Gemini: Tests for the Gemini API (list of scenarios).
- API Backbone: Tests for the API backbone (list of scenarios).
- Jobs API: API tests for the jobs module (list of scenarios).
- Jobs API: Debug API tests for the jobs module (list of scenarios).
- License analysis: Tests for the license analysis API (list of scenarios).
- Stack analysis v2: API tests for the stack analysis endpoint `/api/v2/stack-analyses/` (list of scenarios).
- Stack analysis v2: API smoke tests for the stack analysis endpoint `/api/v2/stack-analyses/` (list of scenarios).
- Stack analysis basic tests: Basic tests for the stack analysis API (list of scenarios).
- Stack analysis for the Maven ecosystem: Stack analysis tests for the Maven ecosystem (list of scenarios).
- Stack analysis for the NPM ecosystem: Smoke tests for the NPM ecosystem (list of scenarios).
- Stack analysis for the NPM ecosystem: Stack analysis tests for the NPM ecosystem (list of scenarios).
- Stack analysis for the PyPi ecosystem: Smoke tests for the PyPi ecosystem (list of scenarios).
- Stack analysis for the PyPi ecosystem: Stack analysis tests for the PyPi ecosystem (list of scenarios).
- Stack analysis for the PyPi ecosystem: Stack analysis tests for the PyPi ecosystem, with the input stored in pylist.json (list of scenarios).
- Stack analysis for unknown dependencies: Stack analysis tests for unknown dependencies (list of scenarios).
- Stack analysis for Maven direct and transitive dependencies (list of scenarios).
- Stack analysis for NPM direct and transitive dependencies (list of scenarios).
- Stack analysis for PyPi direct and transitive dependencies (list of scenarios).
- Component analysis: API tests for the component analysis endpoints under `/api/v1/component-analyses/` (list of scenarios).
- Checks for components stored in S3: Check how/if components are stored in the S3 database (list of scenarios).
- Checks for components stored in S3: Specific tests for vertx components (list of scenarios).
- Checks for packages stored in S3: Check how/if packages are stored in the S3 database (list of scenarios).
- Checks for packages stored in S3: Specific tests for vertx packages (list of scenarios).
- Component analysis smoke tests: Smoke tests for the component analysis REST API (list of scenarios).
- Component analysis basic tests: Basic set of component analysis REST API tests (list of scenarios).
- Component analysis: 100 most popular PyPi packages: Component analysis for the 100 most popular PyPi packages (list of scenarios).
- Component analysis: 1000 most popular PyPi packages: Component analysis for the 1000 most popular PyPi packages (list of scenarios).
- Component analysis: 100 most popular NPM packages: Component analysis for the 100 most popular NPM packages (list of scenarios).
- Component analysis: 960 most popular NPM packages: Component analysis for the 960 most popular NPM packages (list of scenarios).
- Component analysis: 100 most popular Maven packages: Component analysis for the 100 most popular Maven packages (list of scenarios).
- Resilient infrastructure: Tests that check the resiliency of the entire infrastructure. These tests use the OpenShift client (`oc`), which needs to be installed (list of scenarios).
- Reproducers for auth issues: Reproducers for authorization issues (list of scenarios).
- Regression tests: All regression tests (list of scenarios).
- Three scale basic tests: Basic tests for the Three scale gateway (list of scenarios).
- Three scale component analyses: Component analysis run via the Three scale gateway (list of scenarios).
- Three scale stack analyses: Stack analysis run via the Three scale gateway (list of scenarios).
- Gremlin: Checks of the Gremlin instance and its behaviour (list of scenarios).
- Analysis to Gremlin: End-to-end tests, from the start of an analysis to the graph database (list of scenarios).
- Gremlin DB content: Checks of the content written into the graph database (list of scenarios).
- Stack analysis: API tests for the stack analysis endpoint `/api/v1/stack-analyses/` (list of scenarios).
- Known ecosystems: API tests for the known ecosystems endpoint `/api/v1/ecosystems/` (list of scenarios).
- Known packages: API tests for the per-ecosystem known packages endpoints under `/api/v1/packages/` (list of scenarios).
- Known versions: API tests for the per-package known versions endpoints under `/api/v1/versions/` (list of scenarios).
- User feedback: Basic tests for the user feedback feature (list of scenarios).
- User intent: Basic tests for the user intent feature (list of scenarios).
- User tag: Tests for the user tagging feature (list of scenarios).
- Disabled: Tests that are disabled (empty at the moment) (list of scenarios).
When you add a new feature file, you must also add it to the `feature_list.txt` file, as that file determines the set of features executed by the `runtest.sh` script.
Documentation for the module with test steps is automatically generated into the common.html file. To learn more about the available test steps, see the existing scenario definitions for usage examples, or the step definitions in `features/steps/common.py` and the adjacent step files.
When you add a new test step file, no additional changes are needed, as behave automatically checks all Python files in the `steps` directory for step definitions.
Note that a single step definition can be shared among multiple steps by stacking decorators. For example:
    @when('I wait {num:d} seconds')
    @then('I wait {num:d} seconds')
    def pause_scenario_execution(context, num):
        time.sleep(num)
This allows client-side pauses to be inserted into both `Then` and `When` clauses when defining a test scenario.
The behave hooks in features/environment.py
and some of the common step definitions add a number of useful attributes and methods to the behave context.
The available methods include the following (a usage sketch follows the list):

- `is_running()`: Indicates whether the core API service is running.
- `start_system()`: Starts the API service in its default configuration using Docker Compose.
- `teardown_system()`: Shuts down the API service and removes all related container volumes.
- `restart_system()`: Tears down and restarts the API service in its default configuration.
- `run_command_in_service`: See `features/environment.py` for more information.
- `exec_command_in_container`: See `features/environment.py` for more information.
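For illustration, a step definition could combine these methods like this (a sketch only; the step wording is hypothetical):

    # hypothetical step built on the context helpers described above
    from behave import given

    @given('a freshly started local deployment')
    def ensure_fresh_system(context):
        if context.is_running():
            # tear down and start again in the default configuration
            context.restart_system()
        else:
            context.start_system()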
The available attributes include the following (see the sketch after this list):

- `response`: A `requests.Response` instance containing the most recent response retrieved from the server API. Steps that make requests to the API set this attribute; steps that check responses from the server query it.
- `resource_manager`: A `contextlib.ExitStack` instance for registering resources to be cleaned up at the end of the current test scenario.
- `docker_compose_path`: A list of Docker Compose files defining the default configuration when running under Docker Compose.
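A pair of hypothetical steps might use the `response` attribute as follows (a sketch only; the base URL attribute and step texts are assumptions, not existing steps of this suite):

    # sketch: one step stores the response, another step checks it
    import requests
    from behave import when, then

    @when('I access the {endpoint} endpoint of the core API')
    def access_endpoint(context, endpoint):
        # context.coreapi_url is assumed to hold the API base URL
        context.response = requests.get(context.coreapi_url + endpoint)

    @then('I should get a successful response')
    def check_successful_response(context):
        assert context.response.status_code == 200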
The context life cycle policies defined by behave
ensure that any changes to these attributes in step definitions remain in effect only until the end of the current scenario.
The host environment must be configured with `docker-compose`, the behave behavior-driven development testing framework, and a few other dependencies needed for particular behavioral checks.
You can configure the host environment in either of the following ways:
- Install the required components for the current user:

      $ pip install --user -r requirements.txt

- Set up a Python virtual environment (either Python 2 or 3) and install the necessary components:

      $ pip install -r requirements.txt
The test suite is executed as follows:
$ ./runtest.sh <arguments>
Note that arguments passed to the test runner are passed through to the underlying behave invocation. See the behave docs for the full list of available flags.
The following custom configuration settings are available (a sketch of how such settings can be read follows the list):

- `-D dump_logs=true` (optional, default is not to print container logs): Requests display of container logs via `docker-compose logs` at the end of each test scenario.
- `-D dump_errors=true` (optional, default is not to print container logs): Provides `dump_logs` only for scenarios that fail.
- `-D tail_logs=50` (optional, default is to print 50 lines): Specifies the number of log lines to print for each container when dumping container logs. Implies `dump_errors=true` if neither `dump_logs` nor `dump_errors` is specified.
- `-D coreapi_server_image=bayesian/bayesian-api` (optional, default is `bayesian/bayesian-api`): Name of the Bayesian core API server image.
- `-D coreapi_worker_image=bayesian/cucos-worker` (optional, default is `bayesian/cucos-worker`): Name of the Bayesian worker image.
- `-D coreapi_url=http://1.2.3.4:32000` (optional, default is `http://localhost:32000`): Core API URL.
- `-D breath_time=10` (optional, default is `5`): Time to wait before testing.
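These `-D` options are passed to behave as so-called userdata values; hook code in a file like `features/environment.py` can read them roughly as follows (a sketch under that assumption, not the suite's actual code):

    # sketch: reading the -D userdata options inside a behave hook
    def before_all(context):
        userdata = context.config.userdata
        context.dump_logs = userdata.getbool("dump_logs", False)
        context.dump_errors = userdata.getbool("dump_errors", False)
        context.tail_logs = int(userdata.get("tail_logs", 50))
        context.coreapi_url = userdata.get("coreapi_url", "http://localhost:32000")
        context.breath_time = int(userdata.get("breath_time", 5))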
Important: Running with non-default image settings will force-retag the given images as bayesian/bayesian-api and bayesian/worker so that docker-compose can find them. This may affect subsequent docker and docker-compose calls.
Some of the tests may be quite slow; you can skip them by passing the `--tags=-slow` option to behave.
The following packages need to be imported into the database for a successful test run:
sequence array-differ array-flatten array-map array-parallel array-reduce array-slice array-union array-uniq array-unique lodash lodash.assign lodash.assignin lodash._baseuniq lodash.bind lodash.camelcase lodash.clonedeep lodash.create lodash._createset lodash.debounce lodash.defaults lodash.filter lodash.findindex lodash.flatten lodash.foreach lodash.isplainobject lodash.mapvalues lodash.memoize lodash.mergewith lodash.once lodash.pick lodash._reescape lodash._reevaluate lodash._reinterpolate lodash.reject lodash._root lodash.some lodash.tail lodash.template lodash.union lodash.without npm underscore
clojure_py requests scrapy Pillow SQLAlchemy Twisted mechanize pywinauto click scikit-learn coverage cycler numpy mock nose scipy matplotlib nltk pandas parsimonious httpie six wheel pygments setuptools
io.vertx:vertx-core io.vertx:vertx-web io.vertx:vertx-jdbc-client io.vertx:vertx-rx-java io.vertx:vertx-web-client io.vertx:vertx-web-templ-freemarker io.vertx:vertx-web-templ-handlebars io.vertx:vertx-web org.springframework:spring-websocket org.springframework:spring-messaging org.springframework.boot:spring-boot-starter-web org.springframework.boot:spring-boot-starter org.springframework:spring-websocket org.springframework:spring-messaging
Run the resilient infrastructure tests as follows:

- Ensure that you have logged into OpenShift before the tests are run. These tests use the OpenShift command-line client, i.e. the `oc` command.
- Switch to the right project.

  Important: These tests restart different pods, so ensure that you do not run them against the production environment. To make sure you are switched to the right project in OpenShift, use:

      $ oc projects

  The selected project is marked by `*`, for example:

      * my-test-project
        bayesian-preview
        yet-another-project

  To switch to another project, use the following command:

      $ oc project <project-name>

  For example:

      $ oc project bayesian-preview

- Start the resilient infrastructure tests using:

      $ ./runtest.sh --tags resilient.infrastructure
A brief note about setting up security tokens for end-to-end tests follows. Currently we use the following user as the test account: ptisnovs-preview-osiotest1
Caution: As the offline token feature turned out to be a point of vulnerability (a potential attacker may exploit a stolen token over an extensive period of time, without concern for the token expiring), we now recommend using standard access tokens, obtained using the standard OAuth flow, instead.
The process looks like this:

- Log in to OSIO and acquire the encoded token.
- Decode the refresh token.
- Store the refresh token into Vault.
- Set up CI jobs to put the refresh token into an environment variable with a known name.
- Use this environment variable.
Important: Please choose the right system - production or pre-production!
To get the token for the production system, open the following page:
To get the token for prod-preview, open the following page:
After logging in, you will be redirected to another URL.
Look at the URL of the new page.
Copy the <JSON> part from the URL; it will look like this:
    %7B%22access_token%22%3A%22foobar%22%2C%22expires_in%22%3A2592000%2C%22not-before-policy%22%3Anull%2C%22refresh_expires_in%22%3A2592000%2C%22refresh_token%22%3A%22foobar%22%2C%22token_type%22%3A%22Bearer%22%7D
Use the following conversion function to convert this data into JSON format:

    urldecode() { : "${*//+/ }"; echo -e "${_//%/\\x}"; }

Usage:

    urldecode `cat url_part.txt` > url_part.json
The result should look like this:

    {
      "access_token": "foobar",
      "expires_in": 2592000,
      "not-before-policy": null,
      "refresh_expires_in": 2592000,
      "refresh_token": "foobar",
      "token_type": "Bearer"
    }
Take just the `refresh_token` value and store it into a file named refresh_token.txt (one possible way to do this is sketched below).
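For example, the decoding and extraction can also be done with a few lines of Python (an illustrative sketch; the script and file names are made up, and url_part.txt is assumed to contain the copied URL fragment):

    # decode_token.py - illustrative helper, not part of the test suite
    import json
    from urllib.parse import unquote

    # read the URL-encoded <JSON> part copied from the redirect URL
    with open("url_part.txt") as f:
        encoded = f.read().strip()

    data = json.loads(unquote(encoded))

    # write the refresh token without a trailing newline, so that the
    # Vault CLI does not pick up the newline as part of the secret
    with open("refresh_token.txt", "w") as out:
        out.write(data["refresh_token"])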
Caution: Make sure that the file does not end with a newline. A trailing newline causes problems because the Vault CLI tool uses the whole content of the file, including the newline, which is not correct.
Tip for Vim users: use the following settings to remove the trailing end-of-line character:

    :set binary
    :set noendofline
For CI, please refer to CI_README.adoc.
Please look into the Standard operating procedures document for an explanation of the most common issues.