From 44378993628481b69eb6bbd15061d37d85808555 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]" All of the tests being executed in this runset are passing. All of the tests being executed in this runset are passing. RFC0023 is disabled due to inconsistent results. RFC0793 is also being investigated: https://github.com/hyperledger/aries-vcx/issues/1252
-Results last updated: Mon Nov 11 23:20:47 UTC 2024
+Results last updated: Wed Nov 13 16:42:11 UTC 2024

This website shows the current status of Aries interoperability between Aries frameworks and agents. The latest interoperability test results are below. The following test agents are currently being tested:

In the table above, each row is a test agent, and its columns are the results of tests executed in combination with other test agents. The last column ("All Tests") shows the results of all tests run for the given test agent in any role. The link on each test agent name provides more details about results for all test combinations for that test agent. On that page are links to a full history of the test runs and full details on every executed test. Notes:

Runset Name: ACA-PY to ACA-Py
All of the tests being executed in this runset are passing.
Status Note Updated: 2021.03.18

Runset Name: ACA-PY to ACA-Py
All of the tests being executed in this runset are passing.
Status Note Updated: 2021.03.16

Runset Name: acapy to aries-vcx
RFC0023 is disabled due to inconsistent results. RFC0793 is also being investigated: https://github.com/hyperledger/aries-vcx/issues/1252
Status Note Updated: 2024.07.05

Runset Name: ACA-PY to Credo
Most of the tests are running. The tests not passing are being investigated.
Status Note Updated: 2024.09.06

Runset Name: aries-vcx to acapy
Most tests are currently struggling, due to aries-vcx reporting the wrong connection state to the backchannel. Being resolved here: https://github.com/hyperledger/aries-vcx/issues/1253
@RFC0793 has relatively low success due to aries-vcx not supporting the full range of DID methods in these tests.
Status Note Updated: 2024.07.05

Runset Name: Credo to ACA-PY
All AIP10 tests are currently running.
Status Note Updated: 2024.09.06

Jump back to the interoperability summary.

This website reports on the interoperability between different Hyperledger Aries agents. Interoperability includes how seamlessly the agents work together, and how well each agent adheres to community-agreed standards such as Aries Interop Profile (AIP) 1.0 and AIP 2.0.

As Digital Trust ecosystems evolve they will naturally require many technologies to coexist and cooperate. Worldwide projects will get larger and will start to overlap. Also, stakeholders and users will not care about incompatibilities; they will simply wish to take advantage of Digital Trust benefits. Interoperability ultimately means more than just Aries agents working with each other, as it covers worldwide standards and paves the way for broader compatibility. For all these reasons, interoperability is incredibly important if Hyperledger Aries is to continue to flourish.

Aries agents are the pieces of software that provide Digital Trust services such as issuing and receiving verifiable credentials and verifying presentations of verifiable credentials. Many Aries agents are built on Aries frameworks -- common components that make it easier to create agents; developers need only add the business logic on top of a framework to make their agent. Agents can be written in different programming languages, and designed for different devices or for use in the cloud. What unites Aries agents are the standards and protocols they aim to adhere to, and the underlying technologies (cryptography, DIDs, DID utility ledgers and verifiable credentials).
The Aries frameworks and agents currently tested for interoperability with AATH are:

The Aries frameworks and agents formerly tested for interoperability with AATH are:

Aries Agent Test Harness (AATH) is open-source software that runs a series of Aries interoperability tests and delivers the test results data to this website. AATH uses a Behavior-Driven Development (BDD) framework to run tests that are designed to exercise the community-designed Aries protocols, as defined in the Aries RFC specifications.

The tests are executed by starting up four Test Agents ("Acme" is an issuer, "Bob" a holder/prover, "Faber" a verifier and "Mallory" a sometimes malicious holder/prover), and having the test harness send instructions to the Test Agents to execute the steps of the BDD tests. Each Test Agent is a container that contains the "component under test" (an Aries agent or framework), along with a web server that communicates (using HTTP) with the test harness to receive instructions and report status, and that translates and passes on those instructions to the "component under test" using whatever method works for that component. This is pictured in the diagram below, and is covered in more detail in the AATH Architecture section of the repo's README.

A runset is a named set of tests (e.g. "all AIP 1.0 tests") and test agents (e.g. "ACA-Py and Aries Framework JavaScript") that are run on a periodic basis via GitHub Actions -- for example, every day. The results of each run of a runset are recorded to a test results repository for analysis and summarized on this site. In general, the order of the Test Agent names indicates the roles played, with the first playing all roles except Bob (the holder/prover). However, the exact details of which Test Agents play which roles can be found in the runset details page.
The set of tests run (the scope) per runset varies by the combined state of the agents involved in a test. For example:

For these reasons it's not possible to say that, for example, an 80% pass result is "good" or 50% is "bad". The numbers need to be understood in context. The scope and exceptions columns in the summary, and the summary statement found on each runset detail page on this website, document the scope and expectations of the runset. Each runset detail page also provides narrative on the current status of the runset -- for example, why some tests of a runset are failing, what issues have been opened and where to address the issue.

Tests can fail for many reasons, and much of the work of maintaining the tests and runsets is staying on top of the failures. The following are some notes about failing tests and what to do about them:

The Allure reports accessible from this site provide a lot of information about failing tests and are a good place to start in figuring out what is happening. Here's how to follow the links to get to the test failure details:

In addition to drilling into a specific test scenario (aka "stories")/case (aka "behavior")/step, you can look at the recent runset history (last 20 runs). On the left side menu, click on "Overview", and then take a look at the big "history" graph in the top right, showing how the runset execution has varied over time. Ideally, it's all green, but since you started from a runset that had failures, it won't be. Pretty much every part of the overview page is a drill-down link into more and more detailed information about the runset, a specific run of the runset, a specific test case and so on. Lots to look at!

Aries Interop Profile (AIP) is a set of concepts and protocols that every Aries agent that wants to be interoperable should implement.
Specific Aries agents may implement additional capabilities and protocols, but for interoperability, they must implement those defined in an AIP. AIP currently has two versions:

AIP versions go through a rigorous community process of discussion and refinement before being agreed upon. During that process, the RFCs that go into each AIP are debated, and the specific version of each included RFC is locked down. AIPs are available for anyone to review (and potentially contribute to) in the Aries RFC repo.

For developers improving an Aries agent or framework, each runset's page has a link to a detailed report in Allure. This allows the specific tests and results to be explored in detail. If you are a stakeholder interested in improving the results for an agent, this website (and the Allure links, described above) should have enough material for your teams to take action. Finally, if you want your Aries agent to be added to this website, or wish to expand the tests covered for your agent, your developers can reference the extensive information in the Aries Agent Test Harness repo on GitHub. In addition, an API reference for backchannels can be found here.

Runset Name: acapy to aries-vcx
RFC0023 is disabled due to inconsistent results. RFC0793 is also being investigated: https://github.com/hyperledger/aries-vcx/issues/1252
Status Note Updated: 2024.07.05

Runset Name: aries-vcx to acapy
Most tests are currently struggling, due to aries-vcx reporting the wrong connection state to the backchannel. Being resolved here: https://github.com/hyperledger/aries-vcx/issues/1253
@RFC0793 has relatively low success due to aries-vcx not supporting the full range of DID methods in these tests.
Status Note Updated: 2024.07.05

Runset Name: aries-vcx to aries-vcx
@RFC0793 has some failures due to aries-vcx not supporting the full range of DID methods in these tests.
Status Note Updated: 2024.07.05

Runset Name: aries-vcx to credo
Runset Name: credo to aries-vcx

Jump back to the interoperability summary.

Runset Name: ACA-PY to Credo
Most of the tests are running. The tests not passing are being investigated.
Status Note Updated: 2024.09.06

Runset Name: aries-vcx to credo
Runset Name: Credo to ACA-PY
All AIP10 tests are currently running.
Status Note Updated: 2024.09.06

Runset Name: credo to aries-vcx
Runset Name: Credo to Credo
All of the tests being executed in this runset are passing.
Status Note Updated: 2024.07.29

Jump back to the interoperability summary.

The Aries Agent Test Harness (AATH) is a BDD-based test execution engine and set of tests for evaluating the interoperability of Aries agents and agent frameworks. The tests are agnostic to the components under test; rather, they are designed based on the Aries RFCs and the interaction protocols documented there. The AATH enables the creation of an interop lab much like the labs used by the telcos when introducing new hardware into the markets -- routers, switches and the like. Aries agent and agent framework builders can easily incorporate these tests into their CI/CD pipelines to ensure that interoperability is core to the development process.

Want to see the Aries Agent Test Harness in action? Give it a try using a git, docker and bash enabled system. Once you are in a bash shell, run the following commands to execute a set of RFC tests using the Aries Cloud Agent - Python:

The commands take a while to run (you know... building modern apps always means downloading half the internet...), so while you wait, here's what's happening:

It's that last part that makes the AATH powerful. On every run, different AATH-enabled components can be assigned any role (Acme, Bob, Faber, Mallory). For some initial pain (AATH-enabling a component), interoperability testing becomes routine, and we can hit our goal: to make interoperability boring. Interesting to you?
Read on for more about the architecture, how to build tests, how to AATH-enable the Aries agents and agent frameworks that you are building, and how you can run these tests on a continuous basis. For a brief set of slides covering the process and goals, check this out. We'd love to have help in building out a full Aries interoperability lab.

The following diagram provides an overview of the architecture of the AATH. There are a couple of layers of abstraction involved in the test harness architecture, so it's worth formalizing some terminology to make it easier to communicate about what's what when we are running tests.

AATH test scripts are written in the Gherkin language, using the Python behave framework. Guidelines for writing test scripts are located here.

Backchannels are the challenging part of the AATH. In order to participate in the interoperability testing, each CUT builder must create and maintain a backchannel that converts requests from the test harness into commands for the component under test. In some cases, that's relatively easy, such as with Aries Cloud Agent - Python. An ACA-Py controller uses an HTTP interface to control an ACA-Py instance, so the ACA-Py backchannel is "just another" ACA-Py controller. In other cases, it may be more difficult, calling for the component under test to be embedded into a web service. We have created a proof-of-concept Test Agent to support manual testing with mobile agents, described here.

A further complication is that as tests are added to the test suite, the backchannel interface expands, requiring that backchannel maintainers extend their implementation to be able to run the new tests. Note that the test engine doesn't stop if the backchannel steps are not implemented; however, such tests will be marked as failing on test runs. Backchannels can be found in the aries-backchannels folder of the repo. A number of backchannels have been implemented, with a subset being regularly run for testing the ACA-Py, Aries VCX and Credo-TS Aries agent frameworks.
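As a rough illustration of the pattern just described -- and only an illustration: the real backchannels implement the HTTP API documented in the aries-backchannels README, and the topic/operation names below are simplified assumptions -- a backchannel is essentially a dispatcher from harness instructions to commands for the component under test:

```python
# Illustrative sketch of a backchannel's dispatch layer. Not the actual
# AATH interface; route and operation names here are invented for clarity.
class StubBackchannel:
    """Translates test-harness instructions into calls on the component under test."""

    def __init__(self):
        # Real handlers would drive the component under test (e.g. via an
        # ACA-Py admin API); these just echo, to show the dispatch shape.
        self.handlers = {
            ("connection", "create-invitation"): self.create_invitation,
        }

    def create_invitation(self, payload):
        return {"status": 200, "result": {"state": "invitation-sent", **payload}}

    def handle(self, topic, operation, payload):
        handler = self.handlers.get((topic, operation))
        if handler is None:
            # Unimplemented steps don't stop the engine; the affected
            # tests simply fail.
            return {"status": 404, "error": f"not implemented: {topic}/{operation}"}
        return handler(payload)


if __name__ == "__main__":
    bc = StubBackchannel()
    print(bc.handle("connection", "create-invitation", {"label": "Acme"}))
    print(bc.handle("issue-credential", "send-offer", {}))
```

The key point is the last branch: as the test suite grows, new (topic, operation) pairs appear, and a backchannel that has not yet implemented them causes those tests to fail rather than halting the run.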
The ACA-Py backchannel is built on a common Python base (https://github.com/hyperledger/aries-agent-test-harness/blob/main/aries-backchannels/python/aries_backchannel.py) that sets up the backchannel API listener and performs some basic request validation and dispatching. Aries VCX, on the other hand, is built in that project's preferred language (Rust). The ACA-Py (https://github.com/hyperledger/aries-agent-test-harness/blob/main/aries-backchannels/acapy/acapy_backchannel.py) and Aries VCX (https://github.com/hyperledger/aries-agent-test-harness/blob/main/aries-backchannels/aries-vcx) implementations are good examples of extending the base to add support for their respective agent frameworks.

There is also a backchannel to support (manual) testing with mobile agents. This backchannel doesn't control the mobile agent directly; rather, it prompts the tester to manually accept connection requests, credential offers and so on. Use of the mobile backchannel is described here.

Before running tests, you must build the Test Agent and harness docker images using the `./manage build` command. There are two options for building and running ACA-Py for testing. To run the tests, use the `./manage run` command.

There are two ways to control the behave test engine's selection of test cases to run. First, you can specify one or more `-t` tag options. To enable full control over behave's behavior (if you will...), further options can be passed through to behave. For a full inventory of tests available to run, use the `manage` script.

You may need to utilize the agents and their controllers/backchannels separately from running interop tests with them. This can be for debugging AATH test code, or for something outside of AATH, like Aries Mobile Test Harness (AMTH) tests. To assist in this requirement, the manage script can start 1-n agents of any Aries framework that exists in AATH. This is done as follows:

The command above will only start Acme as ACA-Py. No other agents (Bob, Faber, etc.) will be started.
The second command above will start Acme as AFGO and Bob as ACA-Py, utilizing an external ledger and tails server, with a custom configuration to start ACA-Py with. It will also start ngrok, which is usually needed for mobile testing in AMTH. To stop any agents started in this manner, use the corresponding stop command.

When running test code in a debugger, you may not always want or need all the agents running. Your test may only utilize Acme and Bob, and have no need for Faber and Mallory. This feature allows you to start only the agents needed by the test you are debugging. The following example will run ACA-Py as Acme and Bob, with no other agents running.

Aries Mobile Test Harness (AMTH) is a testing stack used to test mobile Aries wallets. To do this end to end, mobile tests need issuers, verifiers, and maybe mediators. Instead of AMTH managing a set of controllers and agents, AMTH can point to an issuer or verifier controller/agent URL. AMTH can take advantage of the work done across Aries frameworks and backchannels to assign AATH agents as issuers or verifiers when testing Aries wallets. For example, the BC Wallet tests in AMTH utilize ACA-Py agents in AATH as an issuer and verifier. This is done by executing the following.

From within aries-agent-test-harness:

From within aries-mobile-test-harness:

The URLs for issuer and verifier are pointers to the backchannel controllers for Acme and Bob in AATH, so that these tests take advantage of the work done there.

You can pass backchannel-specific parameters as follows. The environment variable name follows a set format, and the contents of the environment variable are backchannel-specific. For aca-py it is a JSON structure containing parameters to use for agent startup. The example above runs all the tests using the given parameters.

As an alternative to the extra backchannel-specific parameters above, you can also pass a configuration file through to your agent when it starts (this only works if your agent is started by your backchannel).
The AATH tests have a predefined set of options needed for the test flow to function properly, so adding this configuration to AATH test execution may have side effects causing the interop tests to fail. However, this is helpful when using the agents as services outside of AATH tests, like with mobile wallet tests in Aries Mobile Test Harness, where the agents usually benefit from having auto options turned on. You can pass through your config file using the environment variable AGENT_CONFIG_FILE as follows:

The config file should live in the location the backchannel expects. When using AATH agents as a service for AMTH, these agent services will need to be started with different or extra parameters than AATH starts them with by default. Mobile test issuers and verifiers may need the auto parameters turned on.

From within aries-agent-test-harness:

From within aries-mobile-test-harness:

The test harness uses tags in the BDD feature files to be able to narrow down the test set to be executed at runtime. The general AATH tags currently utilized are as follows:

Proposed Connection Protocol Tags

To get a list of all the tags in the current test suite, run the command:

To get a list of the tests (scenarios) and the associated tags, run the command:

Using tags, one can just run Acceptance Tests... or all Priority 1 Acceptance Tests, but not the ones flagged Work In Progress... or derived functional tests or all the Exception Tests... Tags can also be strung together in one command. So the command above will run tests from RFC0453 or RFC0454, without the wip tag, and without the CredFormat_JSON-LD tag. To read more on how one can control the execution of test sets based on tags, see the behave documentation.

To read about what protocols and features from Aries Interop Profile 1.0 are covered, see the Test Coverage Matrix. For information on enhanced test reporting with the Aries Agent Test Harness, see Advanced Test Reporting.
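The tag semantics described above -- repeated `-t` options are ANDed together, comma-separated tags within one option are ORed, and `~` negates a tag -- can be sketched in a few lines of Python. This is only an illustration of behave's (v1) tag-expression rules, not behave's actual implementation:

```python
def matches(scenario_tags, tag_options):
    """Illustrative sketch of behave v1 tag-expression filtering:
    every -t option must match (AND); within one option, comma-separated
    tags are alternatives (OR); a leading '~' negates a tag."""
    tags = {t.lstrip("@") for t in scenario_tags}

    def alt_ok(alt):
        if alt.startswith("~"):
            return alt[1:].lstrip("@") not in tags
        return alt.lstrip("@") in tags

    return all(any(alt_ok(a) for a in opt.split(",")) for opt in tag_options)


# The example from the text: run RFC0453 or RFC0454, but not @wip
# and not @CredFormat_JSON-LD.
opts = ["@RFC0453,@RFC0454", "~@wip", "~@CredFormat_JSON-LD"]
print(matches(["@RFC0453", "@AcceptanceTest"], opts))  # True: RFC0453, no excluded tags
print(matches(["@RFC0454", "@wip"], opts))             # False: carries @wip
```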
Runsets are GHA-based workflows that automate the execution of your interop tests and the reporting of the results. These workflows are contained in the .github/workflows folder and must be named following the existing pattern (for example, test-harness-ariesvcx-credo.yml). Test execution is controlled by the workflow's configuration.

The Aries Agent Test Harness defines multiple Dev Containers to aid the test developer and the backchannel/controller developer. This allows developers to write code for these areas without having to install all the libraries and configure a local dev machine. At the time of writing there are three Dev Containers in AATH:

- A Test Development Dev Container
- An ACA-Py Backchannel Development Dev Container
- An Aries Framework JavaScript/CREDO-TS Backchannel Dev Container (still in development)

To get started, make sure you have installed the Dev Containers extension in VSCode. Clone the Aries Agent Test Harness repository and open the root folder in VS Code. Once opened, VS Code will detect the available dev containers and prompt you to open them; selecting this option will display all the dev containers that you can choose from. The other way to open a Dev Container is to select the corresponding command in VS Code, then select the container you want.

The first time a specific Dev Container is opened, the container will be built. If a change is made to any of the dev container configurations, the dev container will have to be rebuilt. VSCode should sense a change to these files and prompt a rebuild, but if not, or if you don't accept the rebuild prompt when it appears, a rebuild can be initiated within the dev container by clicking on the dev container name in the bottom left corner of VSCode and selecting Rebuild. The dev container json files are located in

The dev containers use an existing Dockerfile to build the image. These are not the regular docker files that are built with the AATH manage script.
There are specific Dockerfiles for each dev container that are based on those original docker files, but modified to work better with the dev container configurations. The Dockerfiles are named the same as the original files except with

These dev containers are named in Docker to allow for identification and better communication between agents. If you want an agent dev container to represent one of acme, bob, faber, or mallory, make sure the devcontainer.json is changed to the name you want the agent to represent. All dev containers are on the same Docker network.

Many times in a single test scenario there may be 1-n connections to be aware of between the players involved in the scenario. Acme is connected to Bob, and different connection ids are used for each direction, depending on which player is acting at the time: Acme to Bob, and Bob to Acme. The connections may extend to other participating players as well: Acme to Faber, Bob to Faber. With those relationships alone, the tests have to manage six connection ids.

The connection tests use a dictionary of dictionaries to store these relationships. When a new connection is made between two parties, the tests will create a dictionary keyed by the first player, containing another dictionary keyed by the second player, which holds the connection id for the relationship. The same is done for the other direction of the relationship, in order to get the connection id for that direction. The dictionary for the Bob-Acme relationship will look like this: `{"Bob": {"Acme": "<Bob's connection id for Acme>"}, "Acme": {"Bob": "<Acme's connection id for Bob>"}}`.

Runset acapy-aip10
**Latest results: 29 out of 35 (82%)**
-*Last run: Mon Nov 11 00:41:59 UTC 2024*
+*Last run: Wed Nov 13 00:41:43 UTC 2024*
Current Runset Status
Runset acapy-aip20
**Latest results: 61 out of 61 (100%)**
-*Last run: Mon Nov 11 01:15:17 UTC 2024*
+*Last run: Wed Nov 13 01:14:27 UTC 2024*
Current Runset Status
Runset acapy-ariesvcx
**Latest results: 19 out of 28 (67%)**
-*Last run: Mon Nov 11 01:33:18 UTC 2024*
+*Last run: Wed Nov 13 01:31:51 UTC 2024*
Current Runset Status
Runset acapy-credo
**Latest results: 38 out of 38 (100%)**
-*Last run: Mon Nov 11 02:04:35 UTC 2024*
+*Last run: Wed Nov 13 02:02:20 UTC 2024*
Current Runset Status
Runset ariesvcx-acapy
**Latest results: 11 out of 28 (39%)**
-*Last run: Mon Nov 11 03:24:27 UTC 2024*
+*Last run: Wed Nov 13 03:24:09 UTC 2024*
Current Runset Status
Runset credo-acapy
**Latest results: 23 out of 28 (82%)**
-*Last run: Mon Nov 11 04:11:39 UTC 2024*
+*Last run: Wed Nov 13 04:11:03 UTC 2024*
Current Runset Status
Runset ariesvcx-ariesvcx
**Latest results: 27 out of 32 (84%)**
-*Last run: Mon Nov 11 03:37:33 UTC 2024*
+*Last run: Wed Nov 13 03:37:10 UTC 2024*
Current Runset Status
Runset ariesvcx-credo
**Latest results: 8 out of 20 (40%)**
-*Last run: Mon Nov 11 03:52:00 UTC 2024*
+*Last run: Wed Nov 13 03:51:37 UTC 2024*
Current Runset Status
No test status note is available for this runset. Please update: .github/workflows/test-harness-ariesvcx-credo.yml.
@@ -1380,7 +1380,7 @@
Runset credo-ariesvcx
**Latest results: 6 out of 18 (33%)**
-*Last run: Mon Nov 11 04:25:35 UTC 2024*
+*Last run: Wed Nov 13 04:24:06 UTC 2024*
Current Runset Status
No test status note is available for this runset. Please update: .github/workflows/test-harness-credo-ariesvcx.yml.
diff --git a/main/credo/index.html b/main/credo/index.html
index 09179abf..5d9ddaec 100644
--- a/main/credo/index.html
+++ b/main/credo/index.html
@@ -1376,7 +1376,7 @@
Runset credo
**Latest results: 27 out of 28 (96%)**
-*Last run: Mon Nov 11 04:47:17 UTC 2024*
+*Last run: Wed Nov 13 04:45:41 UTC 2024*
Current Runset Status
Latest Interoperability Results

Wondering what the results mean? Please read the brief introduction to Aries interoperability for some background.

| Test Agent | Scope | Exceptions | ACA-Py | Credo | VCX | All Tests |
| --- | --- | --- | --- | --- | --- | --- |
| ACA-Py | AIP 1, 2 | None | 90 / 96 (93%) | 61 / 66 (92%) | 30 / 56 (53%) | 181 / 218 (83%) |
| Credo | AIP 1 | Revocation | 61 / 66 (92%) | 27 / 28 (96%) | 14 / 38 (36%) | 102 / 132 (77%) |
| VCX | AIP 1 | Revocation | 30 / 56 (53%) | 14 / 38 (36%) | 27 / 32 (84%) | 71 / 126 (56%) |
"},{"location":"acapy/#current-runset-status","title":"Current Runset Status","text":"**Latest results: 29 out of 35 (82%)**\n\n\n*Last run: Mon Nov 11 00:41:59 UTC 2024*\n
"},{"location":"acapy/#runset-acapy-aip20","title":"Runset acapy-aip20","text":"
"},{"location":"acapy/#current-runset-status_1","title":"Current Runset Status","text":"**Latest results: 61 out of 61 (100%)**\n\n\n*Last run: Mon Nov 11 01:15:17 UTC 2024*\n
"},{"location":"acapy/#runset-acapy-ariesvcx","title":"Runset acapy-ariesvcx","text":"
"},{"location":"acapy/#current-runset-status_2","title":"Current Runset Status","text":"**Latest results: 19 out of 28 (67%)**\n\n\n*Last run: Mon Nov 11 01:33:18 UTC 2024*\n
"},{"location":"acapy/#runset-acapy-credo","title":"Runset acapy-credo","text":"
"},{"location":"acapy/#current-runset-status_3","title":"Current Runset Status","text":"**Latest results: 38 out of 38 (100%)**\n\n\n*Last run: Mon Nov 11 02:04:35 UTC 2024*\n
"},{"location":"acapy/#runset-ariesvcx-acapy","title":"Runset ariesvcx-acapy","text":"
"},{"location":"acapy/#current-runset-status_4","title":"Current Runset Status","text":"**Latest results: 11 out of 28 (39%)**\n\n\n*Last run: Mon Nov 11 03:24:27 UTC 2024*\n
"},{"location":"acapy/#runset-credo-acapy","title":"Runset credo-acapy","text":"
"},{"location":"acapy/#current-runset-status_5","title":"Current Runset Status","text":"**Latest results: 23 out of 28 (82%)**\n\n\n*Last run: Mon Nov 11 04:11:39 UTC 2024*\n
"},{"location":"aries-interop-intro/#how-is-interoperability-assessed","title":"How is interoperability assessed?","text":""},{"location":"aries-interop-intro/#the-aries-agent-test-harness","title":"The Aries Agent Test Harness","text":"
"},{"location":"aries-interop-intro/#investigating-failing-tests","title":"Investigating Failing Tests","text":"
"},{"location":"aries-vcx/#current-runset-status","title":"Current Runset Status","text":"**Latest results: 19 out of 28 (67%)**\n\n\n*Last run: Mon Nov 11 01:33:18 UTC 2024*\n
"},{"location":"aries-vcx/#runset-ariesvcx-acapy","title":"Runset ariesvcx-acapy","text":"
"},{"location":"aries-vcx/#current-runset-status_1","title":"Current Runset Status","text":"**Latest results: 11 out of 28 (39%)**\n\n\n*Last run: Mon Nov 11 03:24:27 UTC 2024*\n
"},{"location":"aries-vcx/#runset-ariesvcx-ariesvcx","title":"Runset ariesvcx-ariesvcx","text":"
"},{"location":"aries-vcx/#current-runset-status_2","title":"Current Runset Status","text":"**Latest results: 27 out of 32 (84%)**\n\n\n*Last run: Mon Nov 11 03:37:33 UTC 2024*\n
"},{"location":"aries-vcx/#runset-ariesvcx-credo","title":"Runset ariesvcx-credo","text":"
"},{"location":"aries-vcx/#current-runset-status_3","title":"Current Runset Status","text":"**Latest results: 8 out of 20 (40%)**\n\n\n*Last run: Mon Nov 11 03:52:00 UTC 2024*\n
"},{"location":"aries-vcx/#runset-details_3","title":"Runset Details","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-ariesvcx-credo.yml.\n
"},{"location":"aries-vcx/#runset-credo-ariesvcx","title":"Runset credo-ariesvcx","text":"
"},{"location":"aries-vcx/#current-runset-status_4","title":"Current Runset Status","text":"**Latest results: 6 out of 18 (33%)**\n\n\n*Last run: Mon Nov 11 04:25:35 UTC 2024*\n
"},{"location":"aries-vcx/#runset-details_4","title":"Runset Details","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-credo-ariesvcx.yml.\n
"},{"location":"credo/#current-runset-status","title":"Current Runset Status","text":"**Latest results: 38 out of 38 (100%)**\n\n\n*Last run: Mon Nov 11 02:04:35 UTC 2024*\n
"},{"location":"credo/#runset-ariesvcx-credo","title":"Runset ariesvcx-credo","text":"
"},{"location":"credo/#current-runset-status_1","title":"Current Runset Status","text":"**Latest results: 8 out of 20 (40%)**\n\n\n*Last run: Mon Nov 11 03:52:00 UTC 2024*\n
"},{"location":"credo/#runset-details_1","title":"Runset Details","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-ariesvcx-credo.yml.\n
"},{"location":"credo/#runset-credo-acapy","title":"Runset credo-acapy","text":"
"},{"location":"credo/#current-runset-status_2","title":"Current Runset Status","text":"**Latest results: 23 out of 28 (82%)**\n\n\n*Last run: Mon Nov 11 04:11:39 UTC 2024*\n
"},{"location":"credo/#runset-credo-ariesvcx","title":"Runset credo-ariesvcx","text":"
"},{"location":"credo/#current-runset-status_3","title":"Current Runset Status","text":"**Latest results: 6 out of 18 (33%)**\n\n\n*Last run: Mon Nov 11 04:25:35 UTC 2024*\n
"},{"location":"credo/#runset-details_3","title":"Runset Details","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-credo-ariesvcx.yml.\n
"},{"location":"credo/#runset-credo","title":"Runset credo","text":"
"},{"location":"credo/#current-runset-status_4","title":"Current Runset Status","text":"**Latest results: 27 out of 28 (96%)**\n\n\n*Last run: Mon Nov 11 04:47:17 UTC 2024*\n
git clone https://github.com/hyperledger/aries-agent-test-harness\ncd aries-agent-test-harness\n./manage build -a acapy -a javascript\n./manage run -d acapy -b javascript -t @AcceptanceTest -t ~@wip\n
./manage build
command builds Test Agent docker images for the Aries Cloud Agent Python (ACA-Py) and Aries Framework JavaScript (AFJ) agent frameworks and the test harness../manage run
command executes a set of tests (those tagged \"AcceptanceTest\" but not tagged \"@wip\") with the ACA-Py test agent playing most of the roles\u2014Acme, Faber and Mallory, while the AFJ test agent plays the role of Bob.
"},{"location":"guide/#architecture","title":"Architecture","text":"manage
bash script
"},{"location":"guide/#aries-agent-test-harness-terminology","title":"Aries Agent Test Harness Terminology","text":"
./manage
) processes the command line options and orchestrates the docker image building and test case running../manage
script also supports running the services needed by the tests, such as a von-network Indy instance, an Indy tails service, a universal resolver and a did:orb
instance.mobile
can be used in the Bob
role to test mobile wallet apps on phones. See this document for details.remote
option.
"},{"location":"guide/#test-script-guidelines","title":"Test Script Guidelines","text":"fail
ing on test runs, usually with an HTTP 404
error.aries-backchannels
folder of this repo. For more information on building a backchannel, see the documentation in the aries-backchannels
README, and look at the code of the existing backchannels. To get help in building a backchannel for a component you want tested, please use GitHub issues and/or ask questions on the Hyperledger Discord #aries-agent-test-harness
channel.manage
bash script","text":"./manage
script in the repo root folder is used to manage running builds of TA images and initiate test runs. Run the script with no arguments or with just help
to see the script's usage information. The following summarizes the key concepts../manage
is a bash script, so you must be in a bash compatible shell to run the AATH. You must also have an operational docker installation and git installed. Pretty normal stuff for Aries Agent development. As well, the current AATH requires access to a running Indy network. A locally running instance of VON-Network is one option, but you can also pass in environment variables for the LEDGER_URL, GENESIS_URL or GENESIS_FILE to use a remote network. For example LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io
./manage build -a <TA>
to build the docker images for a TA, and the test harness itself. You may specify multiple -a
parameters to build multiple TAs at the same time. Leaving off the -a
option builds docker images for all of the TAs found in the repo. It takes a long time to run...acapy
, which builds the backchannel based on the latest released code, or you can build and run acapy-main
, which builds the backchannel based on the latest version of the main
branch. (Note that to build the backchannel based on a different repo/branch, edit this file to specify the repo/branch you want to test, and then build/run acapy-main
.)./manage run...
sub-command. The run
command requires defining what TAs will be used for Acme (-a <TA>
), Bob (-b <TA>
) and Mallory (-m <TA>
). To default all the agents to use a single component, use -d <TA>
. Parameters are processed in order, so you can use -d
to default the agents to one, and then use -b
to use a different TA for Bob.-t <tag>
options to select the tests associated with specific tags. See the guidance on using tags with behave here. Note that each -t
option is passed to behave as a --tags <tag>
parameter, enabling control of the ANDs and ORs handling of tags. Specifically, each separate -t
option is ANDed with the rest of the -t
options. To OR tags, use a single -t
option with commas (,
) between the tags. For example, specifying the options -t @t1,@t2 -t @f1
means to use \"tests tagged with (t1 or t2) AND f1
.\" To get a full list of possible tags to use in this run command, use the ./manage tags
command.<tag>
arguments passed in on the command line cannot have a space, even if you double-quote the tag or escape the space. This is because the args are going through multiple layers of shells (the script, calling docker, calling a script in the docker instance that in turn calls behave...). In all that argument passing, the wrappers around the args get lost. That should be OK in most cases, but if it is a problem, we have the -i
option as follows...-i <ini file>
option can be used to pass a behave \"ini\" format file into the test harness container. The ini file enables full control over the behave engine, and handles the shortcoming of not being able to pass tag arguments with spaces in them. See the behave configuration file options here. Note that the file name can be whatever you want. When it lands in the test harness container, it will be called behave.ini
. There is a default ini file located in aries-agent-test-harness/aries-test-harness/behave.ini
. This ini file is picked up and used by the test harness without the -i option. To run the tests with a custom behave ini file, follow this example,./manage run -d acapy -t @AcceptanceTest -t ~@wip -i aries-test-harness/MyNewBehaveConfig.ini\n
./manage tests
. Note that tests in the list tagged @wip are works in progress and should generally not be run../manage start -a acapy-main\n
NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io AGENT_CONFIG_FILE=/aries-backchannels/acapy/auto_issuer_config.yaml ./manage start -a afgo-interop -b acapy-main -n\n
./manage stop
.
"},{"location":"guide/#aries-mobile-test-harness","title":"Aries Mobile Test Harness","text":"./manage start -a acapy-main -b acapy-main\n
./manage start -a acapy-main -b acapy-main\n
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io REGION=us-west-1 ./manage run -d SauceLabs -u <device-cloud-username> -k <device-cloud-access-key> -p iOS -a AriesBifold-114.ipa -i http://0.0.0.0:9020 -v http://0.0.0.0:9030 -t @bc_wallet -t @T001-Connect\n
BACKCHANNEL_EXTRA_acapy_main=\"{\\\"wallet-type\\\":\\\"indy\\\"}\" ./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n
-<agent_name>
, where <agent_name>
is the name of the agent (e.g. acapy-main
) with hyphens replaced with underscores (i.e. acapy_main
).indy
wallet type (vs askar
, which is the default).NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io AGENT_CONFIG_FILE=/aries-backchannels/acapy/auto_issuer_config.yaml ./manage start -b acapy-main -n\n
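The hyphen-to-underscore naming rule for the BACKCHANNEL_EXTRA variables can be sketched in Python (an illustrative helper, not part of the harness):

```python
def backchannel_extra_var(agent_name: str) -> str:
    """Build the env var name used to pass extra JSON arguments to an
    agent's backchannel: BACKCHANNEL_EXTRA_<agent_name>, with hyphens
    in the agent name replaced by underscores."""
    return "BACKCHANNEL_EXTRA_" + agent_name.replace("-", "_")

print(backchannel_extra_var("acapy-main"))  # BACKCHANNEL_EXTRA_acapy_main
```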
aries-backchannels/<agent>
folder so it gets copied into the agent container automatically. Currently only the acapy backchannel supports this custom configuration in this manner. --auto-accept-requests
, --auto-respond-credential-proposal
, etc. The only way to do this when using the AATH agents is through using this configuration file handling. There is an existing file in aries-backchannels/acapy
called auto_issuer_config.yaml that is there to support this requirement for the BC wallet. This works in BC Wallet as follows:NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io AGENT_CONFIG_FILE=/aries-backchannels/acapy/auto_issuer_config.yaml ./manage start -a acapy-main -b acapy-main -n\n
"},{"location":"guide/#test-tags","title":"Test Tags","text":"LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io REGION=us-west-1 ./manage run -d SauceLabs -u <device-cloud-username> -k <device-cloud-access-key> -p iOS -a AriesBifold-114.ipa -i http://0.0.0.0:9020 -v http://0.0.0.0:9030 -t @bc_wallet -t @T001-Connect\n
./manage tags
./manage tests
./manage run -d acapy -t @AcceptanceTest\n
./manage run -d acapy -t @P1 -t @AcceptanceTest -t ~@wip\n
./manage run -d acapy -t @DerivedFunctionalTest\n
"},{"location":"guide/#using-and-or-in-test-execution-tags","title":"Using AND, OR in Test Execution Tags","text":"./manage run -t @ExceptionTest\n
-t
with commas as separators is equivalent to an OR
. The separate -t
options are equivalent to an AND
../manage run -d acapy-main -t @RFC0453,@RFC0454 -t ~@wip -t ~@CredFormat_JSON-LD\n
-i <inifile>
can be used to pass a file in the behave.ini
format into behave. With that, any behave configuration settings can be specified to control how behave behaves. See the behave documentation about the behave.ini
configuration file here.test-harness-<name>.yml
. Refer to the existing files for examples on how to create one specific to your use case. In most cases you will be able to copy an existing file and change a few parameters.test-harness-runner
. This workflow will dynamically pick up and run any workflow conforming to the test-harness-*.yml
naming convention. Specific test harnesses can be excluded by adding their file name pattern to the ignore_files_starts_with
list separated by a ,
. The test harnesses are run by the Run Test Harness job which uses a throttled matrix strategy. The number of concurrent test harness runs can be controlled by setting the max-parallel
parameter to an appropriate number.
"},{"location":"guide/AATH_DEV_CONTAINERS/#dev-containers-in-aath","title":"Dev Containers in AATH","text":"Open a Remote Window
option in the bottom of VSCode. Reopen in Container
.devcontainer\\
. This is where enhancements to existing dev containers and adding new dev containers for other Aries Frameworks would take place following the conventions already laid out in that folder.dev
in the name. For example, the Dockerfile.dev-acapy-main
was based off of Dockerfile.acapy-main
. \"runArgs\": [\n \"--network=aath_network\",\n \"--name=acme_agent\"\n ],\n
aath_network
in Docker, which corresponds to the network that the regular agent containers are on when running the manage script. This allows the developer to run tests in a dev container against agents run by the manage script, and to have an agent running in a dev container communicate with other agents run by the manage script.
With all three players mentioned above participating in one scenario, the dictionary will look like this once all connections have been established through the connection step definitions: ['Bob']['Acme']['30e86995-a2f7-442c-942c-96497aefad8d']\n['Acme']['Bob']['9c0d9f2c-23c1-4384-b89e-950f97a7f173']\n
If the connection step definitions are used in other non-connection-related tests, like issue credential or proof, to establish the connection between two players, then those tests are taking advantage of this relationship storage mechanism. ['Bob']['Acme']['30e86995-a2f7-442c-942c-96497aefad8d']\n['Bob']['Faber']['2c75d023-91dc-43b6-9103-b25af582fc6c']\n['Acme']['Bob']['9c0d9f2c-23c1-4384-b89e-950f97a7f173']\n['Acme']['Faber']['3514daa2-f9a1-492f-94f5-386b03fb8d31']\n['Faber']['Bob']['f907c1e2-abe1-4c27-b9e2-e19f403cdfb5']\n['Faber']['Acme']['b1faea96-84bd-4c3c-b4a9-3d99a6d51030']\n
This connection id dictionary is actually stored in the context
object in test harness, and because the context
object is passed into every step definition in the test scenario, it can be accessed from anywhere in the test scenario. Retrieving the connection id for a relationship is done like this:
connection_id = context.connection_id_dict['player1']['player2']\n
Let's say you are writing a step definition where Bob is going to make a request to Acme, like Bob proposes a credential to Acme
. The call may need Bob's connection id for the relationship to Acme. Doing this would look like the following: connection_id = context.connection_id_dict['Bob']['Acme']\n
Since player names are always passed into step definitions as variables representing their roles, i.e. holder proposes a credential to issuer
, the code will actually look like this. connection_id = context.connection_id_dict[holder][issuer]\n
Connection IDs are always needed at the beginning of a protocol, if not throughout other parts of the protocol as well. Having all ids necessary within the scenario easily accessible at any time, will make writing and maintaining agent tests simpler. "},{"location":"guide/CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"guide/CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"guide/CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"guide/CODE_OF_CONDUCT/#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
"},{"location":"guide/CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at angelika.ehlers@gov.bc.ca. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
"},{"location":"guide/CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4/
"},{"location":"guide/CONFIGURE-CRED-TYPES/","title":"Configuring Tests with Credential Types and Proofs","text":""},{"location":"guide/CONFIGURE-CRED-TYPES/#contents","title":"Contents","text":"Initially the Aries Agent Interop Tests were written with hard coded Credential Type Definitions, Credential Data for issued credentials, and a canned Proof Request and Presentation of that Proof. This default behaviour for the tests is fine for a quick cursory assessment of the protocol; however, it was always a goal to provide a method of having this credential and proof input external to the tests, and to be able to quickly construct tests with different credential and proof data, driven from that external data. Tests still remain that use the default hard coded credential input. Tests like the Proof test below make no mention of specific credentials or proofs.
@T001-AIP10-RFC0037 @P1 @AcceptanceTest @Indy\nScenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged\nGiven \"2\" agents\n| name | role |\n| Faber | verifier |\n| Bob | prover |\nAnd \"Faber\" and \"Bob\" have an existing connection\nAnd \"Bob\" has an issued credential from <issuer>\nWhen \"Faber\" sends a request for proof presentation to \"Bob\"\nAnd \"Bob\" makes the presentation of the proof\nAnd \"Faber\" acknowledges the proof\nThen \"Bob\" has the proof acknowledged\n\nExamples:\n| issuer |\n| Acme |\n| Faber |\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#defining-tests-in-feature-files-with-externalized-credential-info","title":"Defining Tests in Feature Files with Externalized Credential Info","text":"Tests that have externalized input data for credentials and proofs look noticeably different from the test above. They use Scenario tags and Example Data Tables to feed the test with input data. This input data is contained in json files located in /aries-agent-test-harness/aries-test-harness/features/data
.
schema_driverslicense.json
file in /aries-agent-test-harness/aries-test-harness/features/data
.cred_data_schema_driverslicense.json
file in /aries-agent-test-harness/aries-test-harness/features/data
. This file will contain to sections, one for \"Data_DL_MaxValues\" and one for \"Data_DL_MinValues\". proof_request_DL_address.json
and a proof_request_DL_age_over_19.json
file in /aries-agent-test-harness/aries-test-harness/features/data
.presentation_DL_address.json
and presentation_DL_age_over_19.json
in /aries-agent-test-harness/aries-test-harness/features/data
.Some conventions are in place here that make it workable.
Proof Requests can contain requests from multiple credentials from the holder. The Test Harness will create the credential types for as many credential types listed as tags for the scenario. For example, below is an example of a scenario that will utilize two credentials in its proofs; Biological Indicators and Health Consent.
@T001.4-AIP10-RFC0037 @P1 @AcceptanceTest @Schema_Biological_Indicators @Schema_Health_Consent @Indy\nScenario Outline: Present Proof of specific types and proof is acknowledged\nGiven \"2\" agents\n| name | role |\n| Faber | verifier |\n| Bob | prover |\nAnd \"Faber\" and \"Bob\" have an existing connection\nAnd \"Bob\" has an issued credential from <issuer> with <credential_data>\nWhen \"Faber\" sends a <request for proof> presentation to \"Bob\"\nAnd \"Bob\" makes the <presentation> of the proof\nAnd \"Faber\" acknowledges the proof\nThen \"Bob\" has the proof acknowledged\n\nExamples:\n| issuer | credential_data | request for proof | presentation |\n| Faber | Data_BI_HealthValues | proof_request_health_consent | presentation_health_consent |\n
In this scenario, before it starts, 2 credential types are created for the issuer to be able to issue. The credential_data
points to a section named Data_BI_HealthValues in each cred_data_\\.json file, and those two credentials are issued to the holder on step And \"Bob\" has an issued credential from <issuer> with <credential_data>
The request for proof
points to one json file that holds the request that contains data from both credentials. The presentation
obviously is the presentation by the holder of the proof using the 2 credentials. This pattern will work and can be extended for as many credentials as are needed for a presentation test.
"},{"location":"guide/CONFIGURE-CRED-TYPES/#credential-type-definitions","title":"Credential Type Definitions","text":"The following are the basics in defining a Credential Type. It really just consists of a name, a version and the actual attributes of the credential. It also contains a section to set the credential definition revocation support if needed. To reiterate, this is contained in a Schema_\.json file in /aries-agent-test-harness/aries-test-harness/features/data
. Follow this pattern to create new tests with different credential types.
{\n\"schema\":{\n \"schema_name\":\"Schema_DriversLicense\",\n \"schema_version\":\"1.0.1\",\n \"attributes\":[\n \"address\",\n \"DL_number\",\n \"expiry\",\n \"age\"\n ]\n},\n\"cred_def_support_revocation\":false\n}\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#credential-data","title":"Credential Data","text":"The credential data json file references the credential type name in the main scenario tag, i.e. cred_data_\.json. This file holds sections of data that reference the name in the examples data table. There needs to be a section that references this name, for every name mentioned in the test or across other tests that use that credential.
{\n\"Data_DL_MaxValues\":{\n \"cred_name\":\"Data_DriversLicense_MaxValues\",\n \"schema_name\":\"Schema_DriversLicense\",\n \"schema_version\":\"1.0.1\",\n \"attributes\":[\n {\n \"name\":\"address\",\n \"value\":\"947 this street, Kingston Ontario Canada, K9O 3R5\"\n },\n {\n \"name\":\"DL_number\",\n \"value\":\"09385029529385\"\n },\n {\n \"name\":\"expiry\",\n \"value\":\"10/12/2022\"\n },\n {\n \"name\":\"age\",\n \"value\":\"30\"\n }\n ]\n},\n\"Data_DL_MinValues\":{\n \"cred_name\":\"Data_DriversLicense_MaxValues\",\n \"schema_name\":\"Schema_DriversLicense\",\n \"schema_version\":\"1.0.1\",\n \"attributes\":[\n {\n \"name\":\"address\",\n \"value\":\"9\"\n },\n {\n \"name\":\"DL_number\",\n \"value\":\"0\"\n },\n {\n \"name\":\"expiry\",\n \"value\":\"10/12/2022\"\n },\n {\n \"name\":\"age\",\n \"value\":\"20\"\n }\n ]\n}\n}\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#proof-requests","title":"Proof Requests","text":"The following is an example of a simple proof request for one attribute with some restrictions.
{\n \"presentation_request\": {\n \"requested_attributes\": {\n \"address_attrs\": {\n \"name\": \"address\",\n \"restrictions\": [\n {\n \"schema_name\": \"Schema_DriversLicense\",\n \"schema_version\": \"1.0.1\"\n }\n ]\n }\n },\n \"version\": \"0.1.0\"\n }\n}\n
The following is an example of a proof request using more than one credential. {\n \"presentation_request\": {\n \"name\": \"Health Consent Proof\",\n \"requested_attributes\": {\n \"bioindicators_attrs\": {\n \"names\": [\n \"name\",\n \"range\",\n \"concentration\",\n \"unit\",\n \"concentration\",\n \"collected_on\"\n ],\n \"restrictions\": [\n {\n \"schema_name\": \"Schema_Biological_Indicators\",\n \"schema_version\": \"0.2.0\"\n }\n ]\n },\n \"consent_attrs\": {\n \"name\": \"jti_id\",\n \"restrictions\": [\n {\n \"schema_name\": \"Schema_Health_Consent\",\n \"schema_version\": \"0.2.0\"\n }\n ]\n }\n },\n \"requested_predicates\": {},\n \"version\": \"0.1.0\"\n }\n}\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#proof-presentations","title":"Proof Presentations","text":"Proof Presentations are straightforward, as in the example below. The only thing to note is that the cred_id is created during the execution of the test scenario, so there is no way to know it beforehand and have it inside the json file. The test harness takes care of this and swaps in the actual credential id from the credential issued to the holder. To make this swap, the test harness needs to know the credential type name in order to pick the correct cred_id for the credential.
{\n \"presentation\": {\n \"comment\": \"This is a comment for the send presentation.\",\n \"requested_attributes\": {\n \"address_attrs\": {\n \"cred_type_name\": \"Schema_DriversLicense\",\n \"revealed\": true,\n \"cred_id\": \"replace_me\"\n }\n }\n }\n}\n
The following is an example of a presentation with two credentials. {\n \"presentation\": {\n \"comment\": \"This is a comment for the send presentation for the Health Consent Proof.\",\n \"requested_attributes\": {\n \"bioindicators_attrs\": {\n \"cred_type_name\": \"Schema_Biological_Indicators\",\n \"cred_id\": \"replace me\",\n \"revealed\": true\n },\n \"consent_attrs\": {\n \"cred_type_name\": \"Schema_Health_Consent\",\n \"cred_id\": \"replace me\",\n \"revealed\": true\n }\n },\n \"requested_predicates\": {},\n \"self_attested_attributes\": {}\n }\n}\n
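The cred_id swap described above can be sketched like this (`fill_cred_ids` and the `issued` mapping are illustrative; the harness's own bookkeeping may differ):

```python
def fill_cred_ids(presentation: dict, issued: dict) -> dict:
    """Replace placeholder cred_id values with the ids of the credentials
    actually issued during the scenario, matched by cred_type_name."""
    for attrs in presentation["presentation"]["requested_attributes"].values():
        attrs["cred_id"] = issued[attrs["cred_type_name"]]
    return presentation

presentation = {"presentation": {"requested_attributes": {
    "address_attrs": {"cred_type_name": "Schema_DriversLicense",
                      "revealed": True, "cred_id": "replace_me"}}}}
filled = fill_cred_ids(presentation, {"Schema_DriversLicense": "cred-1234"})
print(filled["presentation"]["requested_attributes"]["address_attrs"]["cred_id"])
# cred-1234
```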
"},{"location":"guide/CONFIGURE-CRED-TYPES/#conclusion","title":"Conclusion","text":"With the constructs above it should be very easy to add new tests based on new credentials. Essentially, once the json files are created, opening the present proof feature file, copying and pasting one of these tests, incrementing the test id, and replacing the credential type tags and data table names should give you a running test scenario with those new credentials.
As we move forward and non-ACA-Py agents are used in the test harness, some of the nomenclature may change in the json file to generalize those names; however, it will still be essential for the agent's backchannel to translate that json into whatever the agent is expecting in order to accomplish the goals of the test steps.
"},{"location":"guide/CONNECTION-REUSE/","title":"Taking Advantage of Connection Reuse in AATH","text":"The Issue Credential and Proof tests that use DID Exchange Connections will attempt to reuse an existing connection if one was established between the agents involved in a previous test. This not only tests native connection reuse functionality in the agents, but also saves execution time.
There are three conditions an agent and backchannel can be in when executing these Issue Cred and Proof tests that support connection reuse.
1/ An agent supports public DIDs, and connection reuse.
A connection was made in a previous test that used a public DID for the connection.
A follow-up test for either Issue Credential or Proof that has the And requester and responder have an existing connection Given (precondition clause) as part of the test, and is tagged with @DIDExchangeConnection
, will attempt to reuse the previous connection.
A call to out-of-band/send-invitation-message
is made with \"use_public_did\": True
in the payload.
The backchannel, if needed, can use this to create an invitation that contains the public_did. The invitation returned must contain the Public DID for the responder.
The test harness then calls out-of-band/receive-invitation
with use_existing_connection: true
in the payload.
The backchannel can use this to trigger the agent to reuse an existing connection if one exists. The connection record returned to the test harness contains a state of completed, the requester's connection_id, and the did (my_did) of the requester.
The test harness recognizes that we have a completed connection and calls GET active-connection
on the responder with an id of the requester's DID.
GET active-connection
in the backchannel should query the agent for an active connection that contains the requester's DID. Then return the active connection record that contains the connection_id for the responder.
The test harness at this point has all the info needed to continue the test scenario using that existing connection.
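The steps above can be sketched end to end (a hedged illustration: the `requester`/`responder` client objects and their post()/get() wrappers are hypothetical, while the endpoint names come from the text):

```python
def reuse_connection(requester, responder):
    """Sketch of the connection-reuse flow. Returns the (requester,
    responder) connection_ids if an existing connection was reused,
    or None if a fresh DID Exchange is needed."""
    # 1. Responder creates an invitation carrying its public DID.
    invitation = responder.post("out-of-band/send-invitation-message",
                                {"use_public_did": True})
    # 2. Requester receives it, asking its agent to reuse an existing
    #    connection with that DID if one exists.
    record = requester.post("out-of-band/receive-invitation",
                            dict(invitation, use_existing_connection=True))
    if record.get("state") != "completed":
        return None  # no reusable connection
    # 3. Fetch the responder's side of the connection via the requester's DID.
    responder_record = responder.get("active-connection", record["my_did"])
    return record["connection_id"], responder_record["connection_id"]
```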
2/ An agent doesn't officially support public DIDs in connections, but has a key in the invite (which can be a public DID) that can be used to query the existing connection.
A connection was made in a previous test.
A follow-up test for either Issue Credential or Proof that has the And requester and responder have an existing connection Given (precondition clause) as part of the test, and is tagged with @DIDExchangeConnection
, will attempt to reuse the previous connection.
A call to out-of-band/send-invitation-message is made with \"use_public_did\": True
in the payload.
The backchannel can ignore the use_public_did
flag and remove it from the payload if it interferes with the creation of the invitation. An invitation is returned in the response.
A call is then made to out-of-band/receive-invitation
with use_existing_connection: true
in the payload.
The backchannel can use this as a trigger to search for an existing connection based on some key that is available in the invitation.
The connection record returned to the test harness contains a state of completed
, the requester's connection_id, and the did (my_did) of the requester.
The test harness recognizes that we have a completed connection and calls GET active-connection
on the responder with an id of the requester's DID.
GET active-connection
in the backchannel should query the agent for an active connection that contains the requester's DID, then return the active connection record that contains the connection_id for the responder.
The test harness at this point has all the info needed to continue the test scenario based on that existing connection.
3/ An agent doesn't support public DIDs in Connections, and cannot reuse a connection in AATH.
Tests for either Issue Credential or Proof that have the And requester and responder have an existing connection
Given (precondition clause) as part of the test, and are tagged with @DIDExchangeConnection
, will attempt to reuse the previous connection.
A call to out-of-band/send-invitation-message
is made with \"use_public_did\": True
in the payload.
The backchannel should ignore the use_public_did
flag and remove it from the payload if it interferes with the creation of the invitation. An invitation is returned in the response.
A call is then made to out-of-band/receive-invitation
with use_existing_connection: true
in the payload.
The backchannel should ignore this flag and remove it from the data if it interferes with the operation.
completed
.You are encouraged to contribute to the repository by forking and submitting a pull request.
For significant changes, please open an issue first to discuss the proposed changes to avoid re-work.
(If you are new to GitHub, you might start with a basic tutorial and check out a more detailed guide to pull requests.)
Pull requests will be evaluated by the repository guardians on a schedule and if deemed beneficial will be committed to the main
branch. Pull requests should have a descriptive name and include a summary of all changes made in the pull request description.
If you would like to propose a significant change, please open an issue first to discuss the work with the community.
Contributions are made pursuant to the Developer's Certificate of Origin, available at https://developercertificate.org, and licensed under the Apache License, version 2.0 (Apache-2.0).
"},{"location":"guide/Debugging/","title":"Debugging Backchannels","text":""},{"location":"guide/Debugging/#vscode","title":"VSCode","text":""},{"location":"guide/Debugging/#net","title":".NET","text":"$DOCKERHOST
variable, and currently VSCode doesn't offer a way to do this dynamically.cp .env.example .env
DOCKERHOST
with the output of ./manage dockerhost
, also replace the IP in LEDGER_URL
with the output../manage run
script are the same as the backchannels started with the debugger../manage run -d dotnet -t @T001-AIP10-RFC0160
it will run the tests using the backchannels you started from the debugger.For more information on debugging in VSCode see the docs.
"},{"location":"guide/Debugging/#troubleshooting","title":"Troubleshooting","text":""},{"location":"guide/Debugging/#process-dotnet-dev-certs-https-check-trust","title":"Process 'dotnet dev-certs https --check --trust'","text":"If you get the following error:
Error: Process 'dotnet dev-certs https --check --trust' exited with code 9\nError:\n
This means the ASP.NET Core development certificate is not quite working. Running the following should fix the problem:
dotnet dev-certs https --clean\ndotnet dev-certs https --trust\n
See: https://github.com/microsoft/vscode-docker/issues/1761
"},{"location":"guide/LICENSE/","title":"License","text":" Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
Definitions.
\"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
\"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
\"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
\"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.
\"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
\"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
\"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
\"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
\"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"
\"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n
Copyright 2019 Province of British Columbia Copyright 2017-2019 Government of Canada
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0\n
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
"},{"location":"guide/MAINTAINERS/","title":"Maintainers","text":""},{"location":"guide/MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"name Github Discord Stephen Curran swcurran Ian Costanzo ianco Wade Barnes WadeBarnes Andrew Whitehead andrewwhitehead Timo Glastra TimoGlastra Sheldon Regular nodlesh"},{"location":"guide/MOBILE_AGENT_TESTING/","title":"Mobile Agent (Manual) Testing","text":"Aries Agent Test Harness includes the \"mobile\" Test Agent that supports the manual testing of some mobile agents. The mobile Test Agent doesn't control the mobile app directly but rather prompts the user to interact with the wallet app on their phone to scan a QR code to establish a connection, respond to a credential offer, etc.
Before executing a test run, you have to build the Test Agents you are going to use. For example, the following builds the \"mobile\" and \"acapy-main\" Test Agents:
./manage build -a mobile -a acapy-main\n
Remember to build any other Test Agents you are going to run with the mobile tests.
There are several options to the ./manage run
script that must be used when testing a mobile wallet:
-n
option tells the ./manage
script to start ngrok services for each agent (issuer, verifier) to provide the mobile app an Internet accessible endpoint for each of those agents. You will need to provide an ngrok AuthToken, either free or paid to use this feature. Pass it as an environment variable when calling manage
like NGROK_AUTHTOKEN=YourAuthTokenHere ./manage ...
-b mobile
option to use the mobile Test Agent for the Bob
role (the only one that makes sense for a mobile app). The -t @MobileTest
option to run only the tests that have been tagged as \"working\" with the mobile test agent@MobileTest
tag should be added to those test scenarios. Another requirement for using the mobile Test Agent is that you have to use an Indy ledger that is publicly accessible, does not have a Transaction Author Agreement (TAA), and is \"known\" by the mobile wallet app you are testing. That generally means you must use the \"BCovrin Test\" network. Also needed is a public Indy tails file for running revocation tests.
Before you run the tests, you have to have a mobile wallet to test (here are some instructions for getting a mobile wallet app), and if necessary, you must use the wallet app settings to use the \"BCovrin Test\" ledger.
Put together, that gives us the following command to test a mobile wallet with Aries Cloud Agent Python (main branch) running in all other roles.
NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io ./manage run -d acapy-main -b mobile -n -t @MobileTest\n
The mobile agent is in \"proof-of-concept\" status and some tests are not 100% reliable with all mobile agents. If things don't work, take a look at the logs (in the ./logs
folder) to try to understand what went wrong.
You can try to run other test scenarios by adjusting the tags (-t
options) you select when running tests, per these instructions. If you do find other test scenarios that work with the mobile Test Agent, please add an issue or PR to add the \"@MobileTest\" tag to the test scenario.
While this gives us one way to test mobile agent interoperability, it would be really nice to be able to run the mobile wallets without human intervention so that we can include mobile wallets in the continuous integration testing. Those working on the Aries Agent Test Harness haven't looked into how that could be done, so if you have any ideas, please let us know.
Another thing that would be nice to have supported is capturing the mobile wallet (brand and version) and test run results in a way that we could add the test run to the https://aries-interop.info page. Do you have any ideas for that? Let us know!
"},{"location":"guide/REMOTE_AGENT_TESTING/","title":"Remote Agent Testing in OATH","text":"OWL Agent Test Harness is a powerful tool for running verifiable credential and decentralized identity interoperability tests. It supports a variety of agent configurations, including running agents locally that are test harness managed, or remotely, unmanaged by the test harness. This guide covers the remote option, allowing you to execute interoperability tests with agents running on remote servers in development, test, staging, or production environments, communicating with other remote agents or test harness managed agents.
"},{"location":"guide/REMOTE_AGENT_TESTING/#prerequisites","title":"Prerequisites","text":"Before using the remote
option, make sure you have:
When running the test harness with remote agents, the basic command structure for setting remote agents is as follows:
./manage run -a remote --aep <acme_endpoint> -b remote --bep <bob_endpoint> -f remote --fep <faber_endpoint> -m remote --mep <mallory_endpoint> \n
For any of the agent flags, -a
, -b
, -f
, -m
, if the agent is set to remote
then the test harness will look for the long option of --aep
, --bep
, --fep
, and --mep
for the endpoint of that particular remote agent.
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io \\\nTAILS_SERVER_URL_CONFIG=https://tails.vonx.io \\\n./manage run \\\n -a remote --aep http://remote-acme.com \\\n -b acapy-main -f acapy-main -m acapy-main \\\n -t @T002-RFC0160\n
This example command will test a remote agent in the role of Acme (an issuer/verifier), in conjunction with test harness managed acapy agents playing the other roles of Bob, Faber, and Mallory. Any combination of remote and test harness managed agents is testable, including all remote if one is so inclined.
"},{"location":"guide/REMOTE_AGENT_TESTING/#local-example","title":"Local Example","text":"To verify and see the remote implementation in the test harness working locally, you will need to run one of the test harness agents outside of the OATH docker network. Then use that agent as a remote agent.
Build the local agents:
./manage build -a acapy-main\n
Run a remote agent locally:
./start-remote-agent-demo.sh\n
Run the tests:
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io ./manage run -a acapy-main -b remote --bep http://0.0.0.0:9031 -f acapy-main -m acapy-main -t @T002-RFC0160\n
Shutdown the remote agent
./end-remote-agent-demo.sh\n
"},{"location":"guide/REMOTE_AGENT_TESTING/#handling-errors","title":"Handling Errors","text":"If you encounter any issues while using the remote option, check the following:
The remote option in the Test Harness allows you to test verifiable credential interactions with agents running in remote environments. This flexibility essentially allows you to verify that your agent(s) can successfully interop with other agents for the implemented protocols.
For any extra troubleshooting please consult with the OWL maintainers on Discord.
"},{"location":"guide/RETRY-FAILED-SCENARIOS/","title":"Retry Failed Test Scenarios","text":"This feature introduces the ability to retry failed test scenarios in your test runs. It provides flexibility in managing the number of retry attempts for scenarios that fail during testing.
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#table-of-contents","title":"Table of Contents","text":"This feature addresses the issue of retrying failed test scenarios. It implements a mechanism to automatically rerun failed scenarios to improve the stability of test results.
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#changes-made","title":"Changes Made","text":"The following changes have been made to implement the retry functionality:
before_feature
hook in \\features\\environment.py
to handle retrying failed scenarios. TEST_RETRY_ATTEMPTS_OVERRIDE
is passed via manage.py
to the Docker environment. There are two ways to override the number of attempts for retrying failed scenarios:
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#1-using-behaveini","title":"1. Usingbehave.ini
","text":"Add the following variable to the [behave.userdata]
section of the behave.ini
file:
[behave.userdata]\ntest_retry_attempts = 2\n
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#2-using-environment-variable","title":"2. Using Environment Variable","text":"Pass the TEST_RETRY_ATTEMPTS_OVERRIDE
variable as an environment variable while running tests or through deployment YAML files.
Example:
TEST_RETRY_ATTEMPTS_OVERRIDE=2 ./manage run -d acapy -b javascript -t @AcceptanceTest\n
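The override precedence described above (environment variable wins over `behave.ini` userdata) can be sketched in plain Python. This is a hypothetical helper for illustration, not the actual AATH code; the real logic lives in the `before_feature` hook in `features/environment.py`:

```python
import os

def resolve_retry_attempts(userdata, default=1):
    """Resolve the retry count for failed scenarios.

    The TEST_RETRY_ATTEMPTS_OVERRIDE environment variable takes
    precedence over the test_retry_attempts value from behave.ini
    userdata; otherwise fall back to the given default (no retries).
    """
    override = os.environ.get("TEST_RETRY_ATTEMPTS_OVERRIDE")
    if override is not None:
        return int(override)
    return int(userdata.get("test_retry_attempts", default))
```

A hook could then use the resolved count to patch each failing scenario for re-runs (e.g. via Behave's autoretry contrib).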
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#feedback-and-contributions","title":"Feedback and Contributions","text":"Your feedback and contributions are welcome! If you encounter any issues or have suggestions for improvement, please feel free to open an issue or submit a pull request.
"},{"location":"guide/RunningLocally/","title":"Running OATH Locally","text":""},{"location":"guide/RunningLocally/#running-locally-bare-metal-not-recommended","title":"Running Locally (Bare Metal) - NOT RECOMMENDED","text":"Note: this is not recommended; however, it may be desirable if you want to run outside of Docker containers. While this repo is in early iteration, we can only provide limited support in using this. These instructions cover what was done in initially setting up the ACA-Py and VCX backchannels before they were standardized. As such, they are included for historical purposes only, and may or may not still be accurate.
The backchannel for Aries Framework .NET only supports the standardized dockerized method for setting up backchannels. However the backchannel does support debugging the backchannel from inside the docker container, which is the most common reason for running locally. See DEBUGGING.md for more info on debugging.
We would far prefer help documenting the use of a debugger with the docker containers over documentation on running the test harness on bare metal.
To run each agent, install the appropriate pre-requisites (the VCX adapter requires a local install of indy-sdk and VCX) and then run as follows.
Setup - you need to run an Indy ledger and a ledger browser. One way to run locally is to run the Indy ledger from the indy-sdk, and the browser from von-network.
In one shell, run the ledger (the nodes will be available on localhost):
git clone https://github.com/hyperledger/indy-sdk.git\ncd indy-sdk\ndocker build -f ci/indy-pool.dockerfile -t indy_pool .\ndocker run -itd -p 9701-9708:9701-9708 indy_pool\n
(Note that you will need the indy-sdk to build the Indy and VCX libraries to run the VCX backchannel.)
... and in a second shell, run the ledger browser:
git clone https://github.com/bcgov/von-network.git\ncd von-network\n# run a python virtual environment\nvirtualenv venv\nsource ./venv/bin/activate\n# install the pre-requisites and then run the ledger browser\npip install -r server/requirements.txt\nGENESIS_FILE=<your path>/aries-agent-test-harness/aries-backchannels/data/local-genesis.txt REGISTER_NEW_DIDS=true PORT=9000 python -m server.server\n
Open additional shells to run the agents.
For ACA-PY:
# install the pre-requisites:\ncd aries-agent-test-harness/aries-backchannels\npip install -r requirements.txt\n
Note that this installs the aca-py and vcx python libraries from ianco
forks of the GitHub repositories.
cd aries-agent-test-harness/aries-backchannels\nLEDGER_URL=http://localhost:9000 python acapy_backchannel.py -p 8020\n
-p
specifies the backchannel port that the test harness will talk to. The backchannel adaptor starts up an ACA-PY agent as a sub-process, and will use additional ports for communication between the adaptor and agent. In general make sure there is a range of 10
free ports (i.e. in the example above reserve ports 8020 to 8029).
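The port reservation described above can be checked up front. The following is a small illustrative helper (not part of AATH) that verifies a consecutive range of TCP ports is free to bind locally before starting a backchannel:

```python
import socket

def ports_free(start, count=10):
    """Return True if `count` consecutive TCP ports beginning at
    `start` can all be bound on localhost (i.e. are currently free).

    For the example above, ports_free(8020) checks 8020-8029.
    """
    for port in range(start, start + count):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
            except OSError:
                return False  # something is already listening/bound here
    return True
```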
For VCX:
# install the pre-requisites:\ncd aries-agent-test-harness/aries-backchannels\npip install -r requirements-vcx.txt\n
cd aries-agent-test-harness/aries-backchannels\nLEDGER_URL=http://localhost:9000 python vcx_backchannel.py -p 8030\n
Note that you can run multiple instances of these agents.
Note also for VCX you need to install the Indy dependencies locally - libindy, libvcx, libnullpay - and you also need to run a dummy-cloud-agent
server. You need to install these from ianco
's fork and branch of the indy-sdk: https://github.com/ianco/indy-sdk/tree/vcx-aries-support
See the instructions in the indy-sdk for more details.
"},{"location":"guide/TEST-COVERAGE/","title":"Aries Agent Test Harness: Test Coverage","text":"The following test coverage is as of September 1, 2020.
AIP 1.0 Status:
Terminology Directly Tested: There is a Test Scenario that has as its goal to test that protocol feature. Tested as Inclusion: The Test Scenario's focus is not on this test case; however, it uses all or portions of other tests that use the protocol feature. Tested Indirectly: The Test Scenario is testing a protocol that is using, as part of its operations, another protocol.
The Google Sheets Version of Coverage Matrix is also made available for better viewing.
Tested Indirectly Tested Indirectly Tested Indirectly Tested Indirectly RFC Feature Variation Test Type(s) Directly Tested Tested as Inclusion RFC0056 Service Decorator RFC0035 Report Problem RFC0025 DIDComm Transports RFC0015 Acks RFC0160 - Connection Protocol Establish Connection w/ Trust Ping Functional T001-AIP10-RFC0160 T001-AIP10-RFC0036 X T002-AIP10-RFC0036 X T003-AIP10-RFC0036 X T004-AIP10-RFC0036 X T001-AIP10-RFC0037 X T001.2-AIP10-RFC0037 X T001.3-AIP10-RFC0037 X T001.4-AIP10-RFC0037 X T002-AIP10-RFC0037 X T003-AIP10-RFC0037 X T003.1-AIP10-RFC0037 X T006-AIP10-RFC0037 X Establish Connection w/ Acks Functional X Establish Connection Reversed Roles Functional T001.2-AIP10-RFC0160 X Establish Connection final acknowledgment comes from inviter Functional T002-AIP10-RFC0160 X Establish Connection Single Use Invite Functional, Exception T003-AIP10-RFC0160 X T004-AIP10-RFC0160 X Establish Connection Mult Use Invite Functional T005-AIP10-RFC0160 (wip) X Establish Multiple Connections Between the Same Agents Functional T006-AIP10-RFC0160 X Establish Connection Single Try on Exception Funtional, Exception T007-AIP10-RFC0160 (wip) X X RFC0036 - Issue Credential Issue Credential Start w/ Proposal Functional T001-AIP10-RFC0036 T001-AIP10-RFC0037 X T001.2-AIP10-RFC0037 X T001.3-AIP10-RFC0037 X T001.4-AIP10-RFC0037 X T002-AIP10-RFC0037 X T003-AIP10-RFC0037 X T003.1-AIP10-RFC0037 X T006-AIP10-RFC0037 X Issue Credential Negotiated w/ Proposal Functional T002-AIP10-RFC0036 X Issue Credential Start w/ Offer Functional T003-AIP10-RFC0036 X Issue Credential w/ Offer w/ Negotiation Functional T004-AIP10-RFC0036 X Issue Credential Start w/ Request w/ Negotiation Functional T005-AIP10-RFC0036 (wip) X Issue Credential Start w/ Request Functional T006-AIP10-RFC0036 (wip) X RFC0037 - Present Proof Present Proof w/o Proposal, Verifier is not the Issuer, 1 Cred Type Functional T001-AIP10-RFC0037 X X T001.2-AIP10-RFC0037 X X T001.3-AIP10-RFC0037 X X Present Proof w/o Proposal, 
Verifier is the Issuer, 1 Cred Type Functional T001-AIP10-RFC0037 X X T001.2-AIP10-RFC0037 X X T001.3-AIP10-RFC0037 X X Present Proof w/o Proposal, Verifier is the Issuer, Multi Cred Types Functional T001.4-AIP10-RFC0037 X X Present Proof w/o Proposal, Verifier is not the Issuer, Multi Cred Types Functional T001.4-AIP10-RFC0037 X X Present Proof Connectionless w/o Proposal Functional T002-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Same Cred Type Different Attribute, Verifier is the Issuer Functional T003-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Same Cred Type Different Attribute, Verifier is not the Issuer Functional T003-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Different Cred Type, Verifier is the Issuer Functional T003.1-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Different Cred Type, Verifier is not the Issuer Functional T003.1-AIP10-RFC0037 X X Present Proof Connectionless w/ Proposal, Verifier is the Issuer Functional T004-AIP10-RFC0037 (wip) X X X Present Proof Connectionless w/ Proposal, Verifier is not the Issuer Functional T004-AIP10-RFC0037 (wip) X X X Present Proof w/o Proposal, Verifier Rejects Presentation Functional, Exception T005-AIP10-RFC0037 (wip) X X Present Proof Start w/ Proposal Functional T006-AIP10-RFC0037 X X"},{"location":"guide/TEST_DEV_GUIDE/","title":"Test Development Guidelines","text":""},{"location":"guide/TEST_DEV_GUIDE/#contents","title":"Contents","text":"The Aries Agent Test Harness utilizes a behavioral driven approach to testing. The Python toolset Behave is used to actualize this approach. [Gherkin] is the language syntax used to define test preconditions and context, actions and events, and expected results and outcomes.
The first step in developing a suite of tests for an Aries RFC is to write plain english Gherkin definitions, before any code is written. The only input to the test cases should be the RFC. The test cases should not be driven by agent or agent framework implementations.
The priority is to do \"happy path\" type tests first, leaving the exception & negative testing until there are multiple suites across protocols of happy path acceptance tests. Write one main scenario then get peers and others familiar with the RFC to review the test. This is important because the structure and language of this initial test may guide the rest of the tests in the suite.
Initial writing of the Gherkin tests themselves are done in a .feature file or in a GitHub issue detailing the test development work to be accomplished. If no GitHub issue exists for the test development work, create one.
To keep test definitions immune to code changes or nomenclatures in code, it is best to express the RFC in high level terms from the user level based on predefined persona, currently Acme
, Bob
and Mallory
, that can be interpreted at the business level without revealing implementation details. For example, When Acme requests a connection with Bob
instead of When Acme sends a connection-request to Bob
. Sometimes this may be cumbersome, so just make it as high level as makes sense. A full example from the connection protocol might look something like this:
Scenario Outline: establish a connection between two agents\nGiven we have \"2\" agents\n| name | role |\n| Acme | inviter |\n| Bob | invitee |\nWhen \"Acme\" generates a connection invitation\nAnd \"Bob\" receives the connection invitation\nAnd \"Bob\" sends a connection request to \"Acme\"\nAnd \"Acme\" receives the connection request\nAnd \"Acme\" sends a connection response to \"Bob\"\nAnd \"Bob\" receives the connection response\nAnd \"Bob\" sends <message> to \"Acme\"\nThen \"Acme\" and \"Bob\" have a connection\n\nExamples:\n| message |\n| trustping |\n| ack |\n
Utilize data tables and examples in the Gherkin definition where possible.
The test cases should use the test persona as follows:
Acme
: an enterprise agent with issuer and verifier capabilities
Bob
: a holder/prover person
Faber
: another enterprise agent with issuer and verifier capabilities
Mallory
: a malicious holder/prover person
As necessary, other personas will be added. We expect adding Carol (another holder/prover person) and perhaps Thing (an IoT thing, likely an issuer and verifier). Note that each additional persona requires updates to the running of tests (via the ./manage
script) and introduces operational overhead, so thought should be given before introducing new characters into the test suite.
There will be cases where there will be a need for Protocol specific tags. This will usually reveal itself when there are optional implementations or where implementations can diverge into 2 or more options. Tests will need to be tagged with the least common option, where no tag means the other option. For example in the connection protocol there are specific tests that exercise the behavior of the protocol using a Multi Use Invite and a Single Use Invite. The tag @MultiUseInvite is used to differentiate the two, and by default it is expected that MultiUseInvite is the least common option.
Currently Existing Connection Protocol Tags
Defining specific tags should be discussed with the Aries Protocol test community.
"},{"location":"guide/TEST_DEV_GUIDE/#defining-backchannel-operations","title":"Defining Backchannel Operations","text":"Defining test steps require using and extending the commands and operations to be implemented by the backchannels. The commands and operations are documented in an OpenAPI specification located here. A rendered version of the OpenApi spec (from the main branch) can be viewed on the Aries Interop page here. As test developers add new steps to test cases, document the new operations on which they depend in the OpenAPI spec.
During development (and if using VSCode) there are some tools that can make it easier to work with the OpenAPI spec:
Defining a new operation is as simple as adding a new path to the OpenAPI spec file. If you're adding a new topic, make sure to add a new entry to the tags
at the top of the OpenAPI file. When adding a new endpoint try to group it with the existing commands, so proof commands should be grouped with other proof commands. When adding a new path, it is easiest to copy an already existing path.
Follow standard best practices for implementing test steps in Behave, writing the test steps as if the feature is fully supported and then adding code at all levels to support the new test. The process is something like this:
steps
Python code. Existing backchannels will throw a \"NotImplementedException\" for any steps that are not implemented in the backchannels, and should include information from the above-mentioned data file.
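The unimplemented-operation behavior described above can be illustrated with a minimal, hypothetical sketch. The names and dispatch structure here are illustrative only, not the actual backchannel code:

```python
class NotImplementedException(Exception):
    """Raised by a backchannel for an operation it does not support."""

def dispatch(handlers, topic, operation, payload=None):
    """Route a test-harness command to a backchannel handler.

    `handlers` maps (topic, operation) pairs to callables; any
    command with no registered handler fails loudly so the test
    run reports the gap rather than silently passing.
    """
    handler = handlers.get((topic, operation))
    if handler is None:
        raise NotImplementedException(
            f"Operation '{operation}' on topic '{topic}' is not implemented")
    return handler(payload)
```

A backchannel built this way surfaces unimplemented steps immediately, which is the behavior the test harness relies on when scoring runsets.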
"},{"location":"guide/TEST_DEV_GUIDE/#github-actions-and-comparing-test-results-day-to-day","title":"Github Actions and Comparing Test Results Day-To-Day","text":"AATH can check whether the test results change from day to day (in addition to checking that all tests have passed).
To enable this checking, run AATH as follows:
PROJECT_ID=acapy ./manage run -d acapy-main -r allure -e comparison -t @AcceptanceTest -t ~@wip\n
In the above, PROJECT_ID
is the name of the Allure project (acapy
in the example above), the parameter -e comparison
is what invokes the comparison (can only be used with the -r allure
option) and the test scope (the -t
parameters) must match what is expected for the specified PROJECT_ID
(as used in the automated GitHub actions).
This comparison is done using a \"Known Good Results\" (\"KGR\") file that is checked into GitHub.
When adding a new test, or if a different set of tests is expected to pass or fail, this KGR file must be updated.
The KGR files are checked into this folder.
To update the file, run the test suite locally (as in the above command) - it will create a \"NEW-\" KGR file in this folder - just copy this file to replace the existing \"The-KGR-File-\" for the PROJECT_ID
under test, and check into GitHub.
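Conceptually, the comparison reduces each run to a map of test id to status and reports any deltas against the checked-in KGR file. The sketch below illustrates that logic only — it is an assumption for explanation, not the actual manage-script code, and the test ids and statuses are invented.

```python
# Conceptual sketch (assumed logic, not the actual manage-script code): a
# "Known Good Results" comparison flags regressions, fixes, and tests that
# appeared or disappeared between the KGR file and the latest run.

def compare_to_kgr(kgr: dict, latest: dict) -> dict:
    """Return the deltas between a KGR run and the latest run."""
    return {
        "regressed": sorted(t for t in kgr
                            if kgr[t] == "passed" and latest.get(t) == "failed"),
        "fixed": sorted(t for t in kgr
                        if kgr[t] == "failed" and latest.get(t) == "passed"),
        "new": sorted(set(latest) - set(kgr)),
        "missing": sorted(set(kgr) - set(latest)),
    }

kgr = {"T001": "passed", "T002": "passed", "T003": "failed"}
latest = {"T001": "passed", "T002": "failed", "T004": "passed"}
print(compare_to_kgr(kgr, latest))
# {'regressed': ['T002'], 'fixed': [], 'new': ['T004'], 'missing': ['T003']}
```

Any non-empty delta is exactly the situation that requires copying the newly generated KGR file over the checked-in one, as described above.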
See the README in the aries-backchannels
folder for details on writing backchannels.
The Aries Agent Test Harness (AATH), using the Behave test engine, produces default test output that shows passed/failed steps along with a test/feature summary after an execution run. This output can, of course, be piped to a file as the user sees fit.
There is also a need for more formal, graphical reporting, along with keeping historical trends of test executions under a given configuration. The AATH integrates with the Allure reporting framework to fulfill this requirement.
"},{"location":"guide/TEST_REPORTING/#allure-integration","title":"Allure Integration","text":"The AATH uses Allure to the extent that Behave and Allure integrate. See Behave with Allure for details.
"},{"location":"guide/TEST_REPORTING/#using-allure-with-manage-script","title":"Using Allure with manage Script","text":"The test execution container started by the manage script has the Allure framework installed for use inside the container. To execute the tests and have Allure generate report files, use the -r allure
option on the manage
script.
cd aries-agent-test-harness\n./manage run -d acapy -r allure -t @AcceptanceTest -t ~@wip\n
If running locally, rather than in a build pipeline/continuous integration system, you will need to install the Allure framework in order to generate and display the HTML report. You will also need the allure command line toolset. The brew example below is for Mac OS X; if on a different platform, see the other options here. $ pip install allure-behave\n$ brew install allure\n
To generate the HTML report and start an Allure report server, use any IP or port in the open
command.
cd aries-test-harness\nallure generate --clean ./reports\nallure open -h 192.168.2.141 -p 54236\n
If keeping a history and reporting trends over time locally is important, the history folder inside the allure-report folder generated by Allure will have to be copied into the reports folder before the next execution of the allure generate
command after another test run. cd aries-test-harness\n$ cp -r ./allure-report/history ./reports\nallure generate --clean ./reports\n
Allure reports with the Aries Agent Test Harness will resemble the following, "},{"location":"guide/TEST_REPORTING/#using-allure-at-the-command-line","title":"Using Allure at the Command Line","text":"For debugging or development purposes you may not want to always run the test containers with the manage script, but you may still wish to maintain the reporting locally. To do that, just use the standard behave command line options with custom formatters and reporters. To run this command you will need to have Allure installed locally, as above.
behave -f allure_behave.formatter:AllureFormatter -o ./reports -t @AcceptanceTest -t ~@wip --no-skipped -D Acme=http://0.0.0.0:8020 -D Bob=http://0.0.0.0:8030 -D Faber=http://0.0.0.0:8050\n
"},{"location":"guide/TEST_REPORTING/#using-allure-with-ci","title":"Using Allure with CI","text":"The AATH is executed with varying configurations and Aries Agent types at pre-determined intervals to find issues and track deltas between builds of these agents. You can find the Allure reports for these test runs at the following links.
If your build pipeline is using junit style test results for reporting purposes, the AATH supports this, as behave does. To use junit style report data, add the following to the behave.ini file, or create your own ini file to use with behave that includes the following:
[behave]\njunit = true\njunit_directory = ./junit-reports\n
The above junit reports cannot be used in conjunction with Allure. "},{"location":"guide/TEST_REPORTING/#references","title":"References","text":"Behave formatters and reporters
Allure Framework
Allure for Python/Behave
This folder contains the Aries backchannels that have been added to the Aries Agent Test Harness, each in their own folder, plus some shared files that may be useful to specific backchannel implementations. As noted in the main repo readme, backchannels receive requests from the test harness and convert those requests into instructions for the component under test (CUT). Within the component backchannel folders there may be more than one Dockerfile, to build different Test Agents sharing a single backchannel, perhaps for different versions of the CUT or different configurations.
"},{"location":"guide/aries-backchannels/#writing-a-new-backchannel","title":"Writing a new Backchannel","text":"If you are writing a backchannel using Python, you're in luck! Just use either the ACA-Py
or VCX
backchannels as a model. They sub-class from a common base class (in the python
folder), which implements the common backchannel features. The Python implementation is data driven, using the txt file in the data
folder.
If you are implementing from scratch, you need to implement a backchannel which:
Once you have the backchannel, you need to define one or more docker files to create docker images of Test Agents to deploy in an AATH run. To do that, you must create a Dockerfile that builds a Docker image for the Test Agent (TA), including the backchannel, the CUT and anything else needed to operate the TA. The resulting docker image must be able to be launched by the common ./manage
bash script so the new TA can be included in the standard test scenarios.
The test harness interacts with each backchannel using a small, standard set of web services. Endpoints are here:
That's all of the endpoints your agent has to handle. Of course, your backchannel also has to be able to communicate with the CUT (the agent or agent framework being tested). Likely that means being able to generate requests to the CUT (mostly based on requests from the endpoints above) and monitor events from the CUT.
See the OpenAPI definition located here for an overview of all current topics and operations.
"},{"location":"guide/aries-backchannels/#standard-backchannel-topics-and-operations","title":"Standard Backchannel Topics and Operations","text":"Although the number of endpoints is small, the number of topic and operation parameters is much larger. That list of operations drives the effort in building and maintaining the backchannel. The list of operations to be supported can be found in this OpenAPI spec. It lists all of the possible topic
values, the related operations
and information about each one, including such things as:
A rendered version of the OpenAPI spec can be found here. We recommend that, in writing a backchannel, any Not Implemented
commands and operations return an HTTP 501
result code (\"Not Implemented\").
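The skeleton of a from-scratch backchannel web server following that recommendation might look like the sketch below. The endpoint shape `/agent/command/{topic}/{operation}` and the response bodies are assumptions for illustration (check the OpenAPI spec for the real contract); the point is answering 501 for anything not yet supported.

```python
# Minimal sketch of a backchannel web server (assumed endpoint shape):
# implemented topic/operation pairs answer 200, everything else 501.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Topic/operation pairs this (toy) backchannel actually supports.
IMPLEMENTED = {("connection", "create-invitation")}

class BackchannelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body so the connection stays well-behaved.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        parts = self.path.strip("/").split("/")
        # Assumed path shape: /agent/command/{topic}/{operation}
        if len(parts) >= 4 and parts[0] == "agent" and parts[1] == "command":
            topic, operation = parts[2], parts[3]
            if (topic, operation) in IMPLEMENTED:
                status, body = 200, {"state": "invitation-sent"}
            else:
                status, body = 501, {"error": f"{topic}/{operation} not implemented"}
        else:
            status, body = 404, {"error": "unknown path"}
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

def run(port: int = 9022):
    HTTPServer(("localhost", port), BackchannelHandler).serve_forever()
```

Returning 501 rather than an error page lets the test engine mark the step as not implemented instead of obscuring the failure.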
Support for testing new protocols will extend the OpenAPI spec with additional topics
and related operations
, adding to the workload of the backchannel maintainer.
The test harness interacts with each published backchannel API using the following common Python functions. Pretty simple, eh?
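A hedged sketch of what one such helper might look like follows. The real functions live in the test-harness code; the function name and URL layout here are assumptions for illustration only.

```python
# Hypothetical sketch of a harness-side helper (names and URL layout are
# assumptions, not the real test-harness API).
import json
import urllib.request

def backchannel_command_url(base_url: str, topic: str, operation: str = "") -> str:
    """Map a topic/operation pair onto a backchannel command endpoint."""
    url = f"{base_url.rstrip('/')}/agent/command/{topic}/"
    return url + f"{operation}/" if operation else url

def agent_backchannel_POST(base_url, topic, operation="", data=None):
    """POST a command to a Test Agent backchannel; returns (status, body)."""
    req = urllib.request.Request(
        backchannel_command_url(base_url, topic, operation),
        data=json.dumps(data or {}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.getcode(), json.loads(resp.read() or b"{}")

print(backchannel_command_url("http://0.0.0.0:9020", "connection", "accept-invitation"))
# http://0.0.0.0:9020/agent/command/connection/accept-invitation/
```

Because the whole harness-to-backchannel surface is a handful of wrappers like this, the effort lives in the backchannels, not in the harness.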
"},{"location":"guide/aries-backchannels/#docker-build-script","title":"Docker Build Script","text":"Each backchannel should provide one or more Docker scripts, each of which build a self-contained Docker image for the backchannel, the CUT and anything else needed to run the TA.
The following lists the requirements for building AATH compatible docker images:

- Dockerfiles must be named Dockerfile.<TA>, for example Dockerfile.acapy or Dockerfile.vcx. The ./manage script uses the <TA> to validate command line arguments, to tag the agent, and for invoking docker build and run operations.
- A backchannel may provide several Dockerfiles, such as the acapy backchannel, where Dockerfile.acapy builds the latest released version of ACA-Py, while Dockerfile.acapy-main builds from the ACA-Py main branch. Each built image is tagged with its <TA> name.
- Dockerfiles must be located in the component's folder within aries-backchannels; the ./manage script looks for the TA Dockerfiles in those folders.

See examples of this for aca-py (Dockerfile.acapy) and aries-vcx (Dockerfile.vcx).
./manage
Script Integration","text":"The ./manage
script builds images and runs those images as containers in test runs. This integration applies some constraints on the docker images used. Most of those constraints are documented in the previous section, but the following provides some additional context.
An image is built for each backchannel using the following command:
echo \"Building ${agent}-agent-backchannel ...\"\n docker build \\\n ${args} \\\n $(initDockerBuildArgs) \\\n -t \"${agent}-agent-backchannel\" \\\n -f \"${BACKCHANNEL_FOLDER}/Dockerfile.${agent}\" \"aries-backchannels/\"\n
where:
${agent}
is the name of the component under test (CUT)$(initDockerBuildArgs)
picks up any HTTP_PROXY environment variables, and${args}
are any extra arguments on the command line after standard options processing.Once built, the selected TAs for the run are started for the test roles (currently Acme, Bob and Mallory) using the following commands:
echo \"Starting Acme Agent ...\"\n docker run -d --rm --name acme_agent --expose 9020-9029 -p 9020-9029:9020-9029 -e \"DOCKERHOST=${DOCKERHOST}\" -e \"LEDGER_URL=http://${DOCKERHOST}:9000\" ${ACME_AGENT} -p 9020 -i false >/dev/null\n echo \"Starting Bob Agent ...\"\n docker run -d --rm --name bob_agent --expose 9030-9039 -p 9030-9039:9030-9039 -e \"DOCKERHOST=${DOCKERHOST}\" -e \"LEDGER_URL=http://${DOCKERHOST}:9000\" ${BOB_AGENT} -p 9030 -i false >/dev/null\n echo \"Starting Mallory Agent ...\"\n docker run -d --rm --name mallory_agent --expose 9040-9049 -p 9040-9049:9040-9049 -e \"DOCKERHOST=${DOCKERHOST}\" -e \"LEDGER_URL=http://${DOCKERHOST}:9000\" ${MALLORY_AGENT} -p 9040 -i false >/dev/null\n
Important things to note from the script snippet:

- Each agent is provided a range of ports (the --expose parameter), which are mapped to localhost.
- The backchannel must use the ports passed to it via the -p parameter, and not hard code them in the container.
- The selection of the agents to run (acapy or aries-vcx, etc.) is done earlier in the script by setting the ${ACME_AGENT} etc. environment variables.
- The containers are passed the IP address of the Docker host (DOCKERHOST) and a url to the ledger genesis transactions (LEDGER_URL), with the LEDGER_URL assumed to be for a locally running instance of von-network.
- Parameters are passed to the backchannel specifying the base port to use (-p port) and to use non-interactive mode (-i false).

Many of the BDD feature steps (and hence, backchannel requests) in the initial test cases map very closely to the ACA-Py \"admin\" API used by a controller to control an instance of an ACA-Py agent. This makes sense, because both the ACA-Py admin API and the AATH test cases were defined based on the Aries RFCs. However, we are aware the alignment between the two might be too close, and we welcome recommendations for making the backchannel API more agnostic and easier for other CUTs to implement. Likewise, as the test suite becomes ledger- and verifiable credential format-agnostic, we anticipate abstracting away the Indy-isms in the current test cases, making them test parameters rather than explicit steps.
The Google Sheet list of operations has that same influence, referencing things like connection_id, cred_exchange_id and so on. As new backchannels are developed, we welcome feedback on how to make the list of operations easier for maintaining backchannels.
This web site shows the current status of Aries Interoperability between Aries frameworks and agents. The latest interoperability test results are below.
The following test agents are currently being tested:
In the table above, each row is a test agent, its columns the results of tests executed in combination with other test agents. The last column (\"All Tests\") shows the results of all tests run for the given test agent in any role. The link on each test agent name provides more details about results for all test combinations for that test agent. On that page are links to a full history of the test runs and full details on every executed test.
Notes:
Results last updated: Wed Nov 13 16:42:11 UTC 2024
"},{"location":"acapy/","title":"Aries Cloud Agent Python Interoperability","text":""},{"location":"acapy/#runsets-with-aca-py","title":"Runsets with ACA-Py","text":"

| Runset | ACME (Issuer) | Bob (Holder) | Faber (Verifier) | Mallory (Holder) | Scope | Results |
| --- | --- | --- | --- | --- | --- | --- |
| acapy-aip10 | acapy-main 1.1.0 | acapy-main 1.1.0 | acapy-main 1.1.0 | acapy-main 1.1.0 | AIP 1.0 | 29 / 35 (82%) |
| acapy-aip20 | acapy-main 1.1.0 | acapy-main 1.1.0 | acapy-main 1.1.0 | acapy-main 1.1.0 | AIP 2.0 | 61 / 61 (100%) |
| acapy-ariesvcx | acapy-main 1.1.0 | aries-vcx 0.65.0 | acapy-main 1.1.0 | acapy-main 1.1.0 | AIP 1.0 | 19 / 28 (67%) |
| acapy-credo | acapy-main 1.1.0 | credo 0.5.13 | acapy-main 1.1.0 | acapy-main 1.1.0 | AIP 1.0 | 38 / 38 (100%) |
| ariesvcx-acapy | aries-vcx 0.65.0 | acapy-main 1.1.0 | aries-vcx 0.65.0 | aries-vcx 0.65.0 | AIP 1.0 | 11 / 28 (39%) |
| credo-acapy | credo 0.5.13 | acapy-main 1.1.0 | credo 0.5.13 | credo 0.5.13 | AIP 1.0 | 23 / 28 (82%) |

"},{"location":"acapy/#runset-notes","title":"Runset Notes","text":""},{"location":"acapy/#runset-acapy-aip10","title":"Runset acapy-aip10","text":"Runset Name: ACA-PY to ACA-Py
**Latest results: 29 out of 35 (82%)**\n\n\n*Last run: Wed Nov 13 00:41:43 UTC 2024*\n
"},{"location":"acapy/#current-runset-status","title":"Current Runset Status","text":"All of the tests being executed in this runset are passing.
Status Note Updated: 2021.03.18
"},{"location":"acapy/#runset-details","title":"Runset Details","text":"Runset Name: ACA-PY to ACA-Py
**Latest results: 61 out of 61 (100%)**\n\n\n*Last run: Wed Nov 13 01:14:27 UTC 2024*\n
"},{"location":"acapy/#current-runset-status_1","title":"Current Runset Status","text":"All of the tests being executed in this runset are passing.
Status Note Updated: 2021.03.16
"},{"location":"acapy/#runset-details_1","title":"Runset Details","text":"Runset Name: acapy to aries-vcx
**Latest results: 19 out of 28 (67%)**\n\n\n*Last run: Wed Nov 13 01:31:51 UTC 2024*\n
"},{"location":"acapy/#current-runset-status_2","title":"Current Runset Status","text":"RFC0023 is disabled due to inconsistent results. RFC0793 is also being investigated: https://github.com/hyperledger/aries-vcx/issues/1252 Status Note Updated: 2024.07.05
"},{"location":"acapy/#runset-details_2","title":"Runset Details","text":"Runset Name: ACA-PY to Credo
**Latest results: 38 out of 38 (100%)**\n\n\n*Last run: Wed Nov 13 02:02:20 UTC 2024*\n
"},{"location":"acapy/#current-runset-status_3","title":"Current Runset Status","text":"Most of the tests are running. The tests not passing are being investigated.
Status Note Updated: 2024.09.06
"},{"location":"acapy/#runset-details_3","title":"Runset Details","text":"Runset Name: aries-vcx to acapy
**Latest results: 11 out of 28 (39%)**\n\n\n*Last run: Wed Nov 13 03:24:09 UTC 2024*\n
"},{"location":"acapy/#current-runset-status_4","title":"Current Runset Status","text":"Most tests are currently failing due to aries-vcx reporting the wrong connection state to the backchannel. Being resolved here: https://github.com/hyperledger/aries-vcx/issues/1253 @RFC0793 has relatively low success due to aries-vcx not supporting the full range of DID methods in these tests. Status Note Updated: 2024.07.05
"},{"location":"acapy/#runset-details_4","title":"Runset Details","text":"Runset Name: Credo to ACA-PY
**Latest results: 23 out of 28 (82%)**\n\n\n*Last run: Wed Nov 13 04:11:03 UTC 2024*\n
"},{"location":"acapy/#current-runset-status_5","title":"Current Runset Status","text":"All AIP10 tests are currently running.
Status Note Updated: 2024.09.06
"},{"location":"acapy/#runset-details_5","title":"Runset Details","text":"Jump back to the interoperability summary.
"},{"location":"aries-interop-intro/","title":"Introduction to Aries Interoperability","text":"This website reports on the interoperability between different Hyperledger Aries agents. Interoperability includes how seamlessly the agents work together, and how well each agent adheres to community-agreed standards such as Aries Interop Profile (AIP) 1.0 and AIP 2.0.
"},{"location":"aries-interop-intro/#why-is-interoperability-important","title":"Why is interoperability important?","text":"As Digital Trust ecosystems evolve they will naturally require many technologies to coexist and cooperate. Worldwide projects will get larger and will start to overlap. Also, stakeholders and users will not care about incompatibilities; they will simply wish to take advantage of Digital Trust benefits. Interoperability ultimately means more than just Aries agents working with each other, as it covers worldwide standards and paves the way for broader compatibility.
For all these reasons interoperability is incredibly important if Hyperledger Aries is to continue to flourish.
"},{"location":"aries-interop-intro/#what-are-hyperledger-aries-agents-and-frameworks","title":"What are Hyperledger Aries agents and frameworks?","text":"Aries agents are the pieces of software that provide Digital Trust services such as issuing and receiving verifiable credentials and verifying presentations of verifiable credentials. Many Aries agents are built on Aries Frameworks, common components that make it easier to create agents -- developers need only add the business logic on top of a framework to make their agent. Agents can be written in different programming languages, and designed for different devices or for use in the cloud.
What unites Aries agents are the standards and protocols they aim to adhere to, and the underlying technologies (cryptography, DIDs and DID utility ledgers and verifiable credentials).
The Aries frameworks and agents currently tested for interoperability with AATH are:
The Aries frameworks and agents formerly tested for interoperability with AATH are:
Aries Agent Test Harness (AATH) is open-source software that runs a series of Aries interoperability tests and delivers the test results data to this website.
AATH uses a Behavior Driven-Development (BDD) framework to run tests that are designed to exercise the community-designed Aries Protocols, as defined in the Aries RFC specifications.
The tests are executed by starting up four Test Agents (\u201cAcme\u201d is an issuer, \u201cBob\u201d a holder/prover, \u201cFaber\u201d a verifier and Mallory, a sometimes malicious holder/prover), and having the test harness send instructions to the Test Agents to execute the steps of the BDD tests. Each Test Agent is a container that contains the \u201ccomponent under test\u201d (an Aries agent or framework), along with a web server that communicates (using HTTP) with the test harness to receive instructions and report status, and translates and passes on those instructions to the \u201ccomponent under test\u201d using whatever method works for that component. This is pictured in the diagram below, and is covered in more detail in the AATH Architecture section of the repo\u2019s README.
"},{"location":"aries-interop-intro/#runsets","title":"Runsets","text":"A runset is a named set of tests (e.g. \u201call AIP 1.0 tests\u201d) and test agents (e.g. \u201cACA-Py and Aries Framework JavaScript\u201d) that are run on a periodic basis via GitHub Actions \u2014 for example, every day. The results of each run of a runset are recorded to a test results repository for analysis and summarized on this site. In general, the order of the Test Agent names indicates the roles played, with the first playing all roles except Bob (the holder/prover). However, the exact details of which Test Agents play which roles can be found in the runset details page.
The set of tests run (the scope) per runset varies by the combined state of the agents involved in a test. For example:
For these reasons it\u2019s not possible to say that, for example, an 80% pass result is \u201cgood\u201d or 50% is \u201cbad\u201d. The numbers need to be understood in context.
The scope and exceptions columns in the summary, and the summary statement found on each runset detail page on this website, document the scope and expectations of the runset.
Each runset detail page also provides narrative on the current status of the runset \u2014 for example, why some tests of a runset are failing, what issues have been opened and where to address the issue.
"},{"location":"aries-interop-intro/#failing-tests","title":"Failing Tests","text":"Tests can fail for many reasons, and much of the work of maintaining the tests and runsets is staying on top of the failures. The following are some notes about failing tests and what to do about them:
The Allure reports accessible from this site provide a lot of information about failing tests and are a good place to start in figuring out what is happening. Here's how to follow the links to get to the test failure details:
In addition to drilling into a specific test scenario (aka \"stories\")/case (aka \"behavior\")/step, you can look at the recent runset history (last 20 runs). On the left side menu, click on \"Overview\", and then take a look at the big \"history\" graph in the top right, showing how the runset execution has varied over time. Ideally, it's all green, but since you started from a runset that had failures, it won't be. Pretty much every part of the overview page is a drill down link into more and more detailed information about the runset, a specific run of the runset, a specific test case and so on. Lots to look at!
"},{"location":"aries-interop-intro/#what-is-aries-interop-profile","title":"What is Aries Interop Profile?","text":"Aries Interop Profile (AIP) is a set of concepts and protocols that every Aries agent that wants to be interoperable should implement. Specific Aries agents may implement additional capabilities and protocols, but for interoperability, they must implement those defined in an AIP.
AIP currently has two versions:
AIP versions go through a rigorous community process of discussion and refinement before being agreed upon. During that process, the RFCs that go into each AIP are debated and the specific version of each included RFC is locked down. AIPs are available for anyone to review (and potentially contribute to) in the Aries RFC repo.
"},{"location":"aries-interop-intro/#how-can-i-contribute","title":"How can I contribute?","text":"For developers improving an Aries agent or framework, each runset's page has a link to a detailed report in Allure. This allows the specific tests and results to be explored in detail.
If you are a stakeholder interested in improving the results for an agent, this website (and the Allure links, described above) should have enough material for your teams to take action.
Finally, if you want your Aries agent to be added to this website, or wish to expand the tests covered for your agent, your developers can reference the extensive information in the Aries Agent Test Harness repo on GitHub.
In addition, an API reference for backchannels can be found here.
"},{"location":"aries-vcx/","title":"Aries VCX Interoperability","text":""},{"location":"aries-vcx/#runsets-with-vcx","title":"Runsets with VCX","text":"

| Runset | ACME (Issuer) | Bob (Holder) | Faber (Verifier) | Mallory (Holder) | Scope | Results |
| --- | --- | --- | --- | --- | --- | --- |
| acapy-ariesvcx | acapy-main 1.1.0 | aries-vcx 0.65.0 | acapy-main 1.1.0 | acapy-main 1.1.0 | AIP 1.0 | 19 / 28 (67%) |
| ariesvcx-acapy | aries-vcx 0.65.0 | acapy-main 1.1.0 | aries-vcx 0.65.0 | aries-vcx 0.65.0 | AIP 1.0 | 11 / 28 (39%) |
| ariesvcx-ariesvcx | aries-vcx 0.65.0 | aries-vcx 0.65.0 | aries-vcx 0.65.0 | aries-vcx 0.65.0 | AIP 1.0 | 27 / 32 (84%) |
| ariesvcx-credo | aries-vcx 0.65.0 | credo 0.5.13 | aries-vcx 0.65.0 | aries-vcx 0.65.0 | AIP 1.0 | 8 / 20 (40%) |
| credo-ariesvcx | credo 0.5.13 | aries-vcx 0.65.0 | credo 0.5.13 | credo 0.5.13 | AIP 1.0 | 6 / 18 (33%) |

"},{"location":"aries-vcx/#runset-notes","title":"Runset Notes","text":""},{"location":"aries-vcx/#runset-acapy-ariesvcx","title":"Runset acapy-ariesvcx","text":"Runset Name: acapy to aries-vcx
**Latest results: 19 out of 28 (67%)**\n\n\n*Last run: Wed Nov 13 01:31:51 UTC 2024*\n
"},{"location":"aries-vcx/#current-runset-status","title":"Current Runset Status","text":"RFC0023 is disabled due to inconsistent results. RFC0793 is also being investigated: https://github.com/hyperledger/aries-vcx/issues/1252 Status Note Updated: 2024.07.05
"},{"location":"aries-vcx/#runset-details","title":"Runset Details","text":"Runset Name: aries-vcx to acapy
**Latest results: 11 out of 28 (39%)**\n\n\n*Last run: Wed Nov 13 03:24:09 UTC 2024*\n
"},{"location":"aries-vcx/#current-runset-status_1","title":"Current Runset Status","text":"Most tests are currently failing due to aries-vcx reporting the wrong connection state to the backchannel. Being resolved here: https://github.com/hyperledger/aries-vcx/issues/1253 @RFC0793 has relatively low success due to aries-vcx not supporting the full range of DID methods in these tests. Status Note Updated: 2024.07.05
"},{"location":"aries-vcx/#runset-details_1","title":"Runset Details","text":"Runset Name: aries-vcx to aries-vcx
**Latest results: 27 out of 32 (84%)**\n\n\n*Last run: Wed Nov 13 03:37:10 UTC 2024*\n
"},{"location":"aries-vcx/#current-runset-status_2","title":"Current Runset Status","text":"@RFC0793 has some failures due to aries-vcx not supporting the full range of DID methods in these tests. Status Note Updated: 2024.07.05
"},{"location":"aries-vcx/#runset-details_2","title":"Runset Details","text":"Runset Name: aries-vcx to credo
**Latest results: 8 out of 20 (40%)**\n\n\n*Last run: Wed Nov 13 03:51:37 UTC 2024*\n
"},{"location":"aries-vcx/#current-runset-status_3","title":"Current Runset Status","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-ariesvcx-credo.yml.\n
"},{"location":"aries-vcx/#runset-details_3","title":"Runset Details","text":"Runset Name: credo to aries-vcx
**Latest results: 6 out of 18 (33%)**\n\n\n*Last run: Wed Nov 13 04:24:06 UTC 2024*\n
"},{"location":"aries-vcx/#current-runset-status_4","title":"Current Runset Status","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-credo-ariesvcx.yml.\n
"},{"location":"aries-vcx/#runset-details_4","title":"Runset Details","text":"Jump back to the interoperability summary.
"},{"location":"credo/","title":"Credo-TS Interoperability","text":""},{"location":"credo/#runsets-with-credo","title":"Runsets with Credo","text":"

| Runset | ACME (Issuer) | Bob (Holder) | Faber (Verifier) | Mallory (Holder) | Scope | Results |
| --- | --- | --- | --- | --- | --- | --- |
| acapy-credo | acapy-main 1.1.0 | credo 0.5.13 | acapy-main 1.1.0 | acapy-main 1.1.0 | AIP 1.0 | 38 / 38 (100%) |
| ariesvcx-credo | aries-vcx 0.65.0 | credo 0.5.13 | aries-vcx 0.65.0 | aries-vcx 0.65.0 | AIP 1.0 | 8 / 20 (40%) |
| credo-acapy | credo 0.5.13 | acapy-main 1.1.0 | credo 0.5.13 | credo 0.5.13 | AIP 1.0 | 23 / 28 (82%) |
| credo-ariesvcx | credo 0.5.13 | aries-vcx 0.65.0 | credo 0.5.13 | credo 0.5.13 | AIP 1.0 | 6 / 18 (33%) |
| credo | credo 0.5.13 | credo 0.5.13 | credo 0.5.13 | credo 0.5.13 | AIP 1.0 | 27 / 28 (96%) |

"},{"location":"credo/#runset-notes","title":"Runset Notes","text":""},{"location":"credo/#runset-acapy-credo","title":"Runset acapy-credo","text":"Runset Name: ACA-PY to Credo
**Latest results: 38 out of 38 (100%)**\n\n\n*Last run: Wed Nov 13 02:02:20 UTC 2024*\n
"},{"location":"credo/#current-runset-status","title":"Current Runset Status","text":"Most of the tests are running. The tests not passing are being investigated.
Status Note Updated: 2024.09.06
"},{"location":"credo/#runset-details","title":"Runset Details","text":"Runset Name: aries-vcx to credo
**Latest results: 8 out of 20 (40%)**\n\n\n*Last run: Wed Nov 13 03:51:37 UTC 2024*\n
"},{"location":"credo/#current-runset-status_1","title":"Current Runset Status","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-ariesvcx-credo.yml.\n
"},{"location":"credo/#runset-details_1","title":"Runset Details","text":"Runset Name: Credo to ACA-PY
**Latest results: 23 out of 28 (82%)**\n\n\n*Last run: Wed Nov 13 04:11:03 UTC 2024*\n
"},{"location":"credo/#current-runset-status_2","title":"Current Runset Status","text":"All AIP10 tests are currently running.
Status Note Updated: 2024.09.06
"},{"location":"credo/#runset-details_2","title":"Runset Details","text":"Runset Name: credo to aries-vcx
**Latest results: 6 out of 18 (33%)**\n\n\n*Last run: Wed Nov 13 04:24:06 UTC 2024*\n
"},{"location":"credo/#current-runset-status_3","title":"Current Runset Status","text":"No test status note is available for this runset. Please update: .github/workflows/test-harness-credo-ariesvcx.yml.\n
"},{"location":"credo/#runset-details_3","title":"Runset Details","text":"Runset Name: Credo to Credo
**Latest results: 27 out of 28 (96%)**\n\n\n*Last run: Wed Nov 13 04:45:41 UTC 2024*\n
"},{"location":"credo/#current-runset-status_4","title":"Current Runset Status","text":"All of the tests being executed in this runset are passing.
Status Note Updated: 2024.07.29
"},{"location":"credo/#runset-details_4","title":"Runset Details","text":"Jump back to the interoperability summary.
"},{"location":"guide/","title":"Aries Agent Test Harness: Smashing Complexity in Interoperability Testing","text":"The Aries Agent Test Harness (AATH) is a BDD-based test execution engine and set of tests for evaluating the interoperability of Aries Agents and Agent Frameworks. The tests are agnostic to the components under test, being designed instead around the Aries RFCs and the interaction protocols documented there. The AATH enables the creation of an interop lab much like the labs used by the telcos when introducing new hardware into the market\u2014routers, switches and the like. Aries agent and agent framework builders can easily incorporate these tests into their CI/CD pipelines to ensure that interoperability is core to the development process.
Want to see the Aries Agent Test Harness in action? Give it a try using a git, docker and bash enabled system. Once you are in a bash shell, run the following commands to execute a set of RFC tests using the Aries Cloud Agent - Python:
git clone https://github.com/hyperledger/aries-agent-test-harness\ncd aries-agent-test-harness\n./manage build -a acapy -a javascript\n./manage run -d acapy -b javascript -t @AcceptanceTest -t ~@wip\n
The commands take a while to run (you know...building modern apps always means downloading half the internet...), so while you wait, here's what's happening:
./manage build
command builds Test Agent docker images for the Aries Cloud Agent Python (ACA-Py) and Aries Framework JavaScript (AFJ) agent frameworks and the test harness../manage run
command executes a set of tests (those tagged \"AcceptanceTest\" but not tagged \"@wip\") with the ACA-Py test agent playing most of the roles\u2014Acme, Faber and Mallory, while the AFJ test agent plays the role of Bob.It's that last part makes the AATH powerful. On every run, different AATH-enabled components can be assigned any role (Acme, Bob, Faber, Mallory). For some initial pain (AATH-enabling a component), interoperability testing becomes routine, and we can make hit our goal: to make interoperability boring.
Interesting to you? Read on for more about the architecture, how to build tests, how to AATH-enable the Aries agents and agent frameworks that you are building and how you can run these tests on a continuous basis. For a brief set of slides covering the process and goals, check this out.
We'd love to have help in building out a full Aries interoperability lab.
"},{"location":"guide/#contents","title":"Contents","text":"manage
bash script. The following diagram provides an overview of the architecture of the AATH.
./manage
) processes the command line options and orchestrates the docker image building and test case running. The ./manage
script also supports running the services needed by the tests, such as a von-network Indy instance, an Indy tails service, a universal resolver and a did:orb
instance. The mobile
can be used in the Bob
role to test mobile wallet apps on phones. See this document for details. The remote
option. There are a couple of layers of abstraction involved in the test harness architecture, and it's worth formalizing some terminology to make it easier to communicate about what's what when we are running tests.
AATH test scripts are written in the Gherkin language, using the python behave framework. Guidelines for writing test scripts are located here.
"},{"location":"guide/#aries-agent-backchannels","title":"Aries Agent Backchannels","text":"Backchannels are the challenging part of the AATH. In order to participate in the interoperability testing, each CUT builder must create and maintain a backchannel that converts requests from the test harness into commands for the component under test. In some cases, that's relatively easy, such as with Aries Cloud Agent - Python. An ACA-Py controller uses an HTTP interface to control an ACA-Py instance, so the ACA-Py backchannel is \"just another\" ACA-Py controller. In other cases, it may be more difficult, calling for the component under test to be embedded into a web service.
We have created a proof-of-concept Test Agent to support manual testing with mobile agents, described here.
A further complication is that as tests are added to the test suite, the backchannel interface expands, requiring that backchannel maintainers extend their implementation to be able to run the new tests. Note that the test engine doesn't stop if the backchannel steps are not implemented; however, such tests will be marked as failing on test runs, usually with an HTTP 404 error.
Backchannels can be found in the aries-backchannels
folder of this repo. For more information on building a backchannel, see the documentation in the aries-backchannels
README, and look at the code of the existing backchannels. To get help in building a backchannel for a component you want tested, please use GitHub issues and/or ask questions on the Hyperledger Discord #aries-agent-test-harness
channel.
A number of backchannels have been implemented, with a subset being regularly run for testing the ACA-Py, Aries VCX and Credo-TS Aries agent frameworks. The ACA-Py backchannel is built on a common Python base (https://github.com/hyperledger/aries-agent-test-harness/blob/main/aries-backchannels/python/aries_backchannel.py) that sets up the backchannel API listener and performs some basic request validation and dispatching. The Aries VCX backchannel, on the other hand, is built in that framework's preferred language (Rust). The ACA-Py (https://github.com/hyperledger/aries-agent-test-harness/blob/main/aries-backchannels/acapy/acapy_backchannel.py) and Aries VCX (https://github.com/hyperledger/aries-agent-test-harness/blob/main/aries-backchannels/aries-vcx) implementations are good examples of extending the base to add support for their respective agent frameworks.
There is also a backchannel to support (manual) testing with mobile agents. This backchannel doesn't control the mobile agent directly, rather it will prompt the tester to manually accept connection requests, credential offers etc. Use of the mobile backchannel is described here.
"},{"location":"guide/#the-manage-bash-script","title":"The manage
bash script","text":"The AATH ./manage
script in the repo root folder is used to manage building TA images and initiating test runs. Run the script with no arguments or with just help
to see the script's usage information. The following summarizes the key concepts.
./manage
is a bash script, so you must be in a bash compatible shell to run the AATH. You must also have an operational docker installation and git installed. Pretty normal stuff for Aries Agent development. As well, the current AATH requires access to a running Indy network. A locally running instance of VON-Network is one option, but you can also pass in environment variables for the LEDGER_URL, GENESIS_URL or GENESIS_FILE to use a remote network. For example LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io
Before running tests, you must build the TA and harness docker images. Use ./manage build -a <TA>
to build the docker images for a TA, and the test harness itself. You may specify multiple -a
parameters to build multiple TAs at the same time. Leaving off the -a
option builds docker images for all of the TAs found in the repo. It takes a long time to run...
There are two options for testing ACA-PY - you can build and run acapy
, which builds the backchannel based on the latest released code, or you can build and run acapy-main
, which builds the backchannel based on the latest version of the main
branch. (Note that to build the backchannel based on a different repo/branch, edit this file to specify the repo/branch you want to test, and then build/run acapy-main
.)
To run the tests, use the ./manage run...
sub-command. The run
command requires defining what TAs will be used for Acme (-a <TA>
), Bob (-b <TA>
) and Mallory (-m <TA>
). To default all the agents to use a single component, use -d <TA>
. Parameters are processed in order, so you can use -d
to default the agents to one, and then use -b
to use a different TA for Bob.
There are two ways to control the behave test engine's selection of test cases to run. First, you can specify one or more -t <tag>
options to select the tests associated with specific tags. See the guidance on using tags with behave here. Note that each -t
option is passed to behave as a --tags <tag>
parameter, enabling control of the ANDs and ORs handling of tags. Specifically, each separate -t
option is ANDed with the rest of the -t
options. To OR tags, use a single -t
option with commas (,
) between the tags. For example, specify the options -t @t1,@t2 -t @f1
means to use \"tests tagged with (t1 or t2) AND f1
.\" To get a full list of possible tags to use in this run command, use the ./manage tags
command.
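The AND/OR combination of -t options can be sketched in Python. The helper below is purely illustrative (it is not part of the harness); it mirrors how multiple --tags parameters combine, where each option's comma-separated tags are ORed and the separate options are ANDed:

```python
def selected(test_tags, tag_groups):
    """Return True if a test's tags satisfy every tag group.

    tag_groups is a list of lists: the options
    -t @t1,@t2 -t @f1 become [["@t1", "@t2"], ["@f1"]].
    A leading "~" negates a tag (matches when the tag is absent).
    """
    def matches(tag):
        if tag.startswith("~"):
            return tag[1:].lstrip("@") not in test_tags
        return tag.lstrip("@") in test_tags

    # Every group (AND) must have at least one matching tag (OR).
    return all(any(matches(t) for t in group) for group in tag_groups)

# "tests tagged with (t1 or t2) AND f1"
print(selected({"t1", "f1"}, [["@t1", "@t2"], ["@f1"]]))  # True
print(selected({"t2"}, [["@t1", "@t2"], ["@f1"]]))        # False: f1 missing
```

This is why -t @AcceptanceTest -t ~@wip selects tests that are tagged @AcceptanceTest and not tagged @wip.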
Note that the <tag>
arguments passed in on the command line cannot have a space, even if you double-quote the tag or escape the space. This is because the args are going through multiple layers of shells (the script, calling docker, calling a script in the docker instance that in turn calls behave...). In all that argument passing, the wrappers around the args get lost. That should be OK in most cases, but if it is a problem, we have the -i
option as follows...
To enable full control over behave's behavior (if you will...), the -i <ini file>
option can be used to pass a behave \"ini\" format file into the test harness container. The ini file enables full control over the behave engine, and handles the shortcoming of not being able to pass tag arguments with spaces in them. See the behave configuration file options here. Note that the file name can be whatever you want. When it lands in the test harness container, it will be called behave.ini
. There is a default ini file located in aries-agent-test-harness/aries-test-harness/behave.ini
. This ini file is picked up and used by the test harness without the -i option. To run the tests with a custom behave ini file, follow this example:
./manage run -d acapy -t @AcceptanceTest -t ~@wip -i aries-test-harness/MyNewBehaveConfig.ini\n
For a full inventory of tests available to run, use the ./manage tests
command. Note that tests in the list tagged @wip are works in progress and should generally not be run.
You may need to use the agents and their controllers/backchannels separately from running interop tests with them. This can be for debugging AATH test code, or for something outside of AATH, like Aries Mobile Test Harness (AMTH) tests. To support this, the manage script can start 1-n agents of any Aries framework that exists in AATH. This is done as follows:
./manage start -a acapy-main\n
The command above will only start Acme as ACA-py. No other agents (Bob, Faber, etc.) will be started.
NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io AGENT_CONFIG_FILE=/aries-backchannels/acapy/auto_issuer_config.yaml ./manage start -a afgo-interop -b acapy-main -n\n
The second command above will start Acme as AFGO and Bob as ACA-Py, utilizing an external ledger and tails server, with a custom configuration for starting ACA-Py. It will also start ngrok, which is usually needed for mobile testing in AMTH.
To stop any agents started in this manner just run ./manage stop
.
When running test code in a debugger, you may not always want or need all the agents running. Your test may only utilize Acme and Bob, and have no need for Faber and Mallory. This feature allows you to start only the agents needed by the test you are debugging. The following example will run ACA-Py as Acme and Bob with no other agents running.
./manage start -a acapy-main -b acapy-main\n
"},{"location":"guide/#aries-mobile-test-harness","title":"Aries Mobile Test Harness","text":"Aries Mobile Test Harness (AMTH) is a testing stack used to test mobile Aries wallets. To do this end to end, mobile tests need issuers, verifiers, and maybe mediators. Instead of AMTH managing a set of controllers and agents, AMTH can point to an Issuer or Verifier controller/agent URL. AMTH can take advantage of the work done across Aries frameworks and backchannels to assign AATH agents as issuers or verifiers when testing Aries wallets. For example, the BC Wallet tests in AMTH utilize ACA-Py agents in AATH as an issuer and verifier. This is done by executing the following:
From within aries-agent-test-harness
./manage start -a acapy-main -b acapy-main\n
From within aries-mobile-test-harness
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io REGION=us-west-1 ./manage run -d SauceLabs -u <device-cloud-username> -k <device-cloud-access-key> -p iOS -a AriesBifold-114.ipa -i http://0.0.0.0:9020 -v http://0.0.0.0:9030 -t @bc_wallet -t @T001-Connect\n
The URLs for issuer and verifier are pointers to the backchannel controllers for Acme and Bob in AATH, so that these tests take advantage of the work done there.
"},{"location":"guide/#extra-backchannel-specific-parameters","title":"Extra Backchannel-Specific Parameters","text":"You can pass backchannel-specific parameters as follows:
BACKCHANNEL_EXTRA_acapy_main=\"{\\\"wallet-type\\\":\\\"indy\\\"}\" ./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n
The environment variable name is of the format BACKCHANNEL_EXTRA_<agent_name>
, where <agent_name>
is the name of the agent (e.g. acapy-main
) with hyphens replaced with underscores (i.e. acapy_main
).
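The naming rule can be sketched as follows (an illustrative helper, not harness code):

```python
def backchannel_extra_var(agent_name):
    """Build the env var name for backchannel-specific parameters:
    hyphens in the agent name are replaced with underscores."""
    return "BACKCHANNEL_EXTRA_" + agent_name.replace("-", "_")

print(backchannel_extra_var("acapy-main"))  # BACKCHANNEL_EXTRA_acapy_main
```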
The contents of the environment variable are backchannel-specific. For aca-py it is a JSON structure containing parameters to use for agent startup.
The above example runs all the tests using the indy
wallet type (vs askar
, which is the default).
Alternatively to the Extra Backchannel-Specific Parameters above, you can also pass a configuration file through to your agent when it starts (this only works if your agent is started by your backchannel). The AATH tests have a predefined set of options needed for the test flow to function properly, so adding this configuration to AATH test execution may have side effects that cause the interop tests to fail. However, this is helpful when using the agents as services outside of AATH tests, like with Mobile Wallet tests in Aries Mobile Test Harness, where the agents will usually benefit from having auto options turned on. You can pass through your config file using the environment variable AGENT_CONFIG_FILE as follows:
NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io AGENT_CONFIG_FILE=/aries-backchannels/acapy/auto_issuer_config.yaml ./manage start -b acapy-main -n\n
The config file should live in the aries-backchannels/<agent>
folder so it gets copied into the agent container automatically. Currently only the acapy backchannel supports this custom configuration in this manner.
When using AATH agents as a service for AMTH, these agent services will need to be started with different or extra parameters than AATH starts them with by default. Mobile test issuers and verifiers may need the auto parameters turned on, like --auto-accept-requests
, --auto-respond-credential-proposal
, etc. The only way to do this when using the AATH agents is through using this configuration file handling. There is an existing file in aries-backchannels/acapy
called auto_issuer_config.yaml that is there to support this requirement for the BC Wallet. This works in BC Wallet as follows:
From within aries-agent-test-harness
NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io AGENT_CONFIG_FILE=/aries-backchannels/acapy/auto_issuer_config.yaml ./manage start -a acapy-main -b acapy-main -n\n
From within aries-mobile-test-harness
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io REGION=us-west-1 ./manage run -d SauceLabs -u <device-cloud-username> -k <device-cloud-access-key> -p iOS -a AriesBifold-114.ipa -i http://0.0.0.0:9020 -v http://0.0.0.0:9030 -t @bc_wallet -t @T001-Connect\n
"},{"location":"guide/#test-tags","title":"Test Tags","text":"The test harness uses tags in the BDD feature files to narrow down the test set executed at runtime. The general AATH tags currently utilized are as follows:
Proposed Connection Protocol Tags
To get a list of all the tags in the current test suite, run the command: ./manage tags
To get a list of the tests (scenarios) and the associated tags, run the command: ./manage tests
Using tags, one can just run Acceptance Tests...
./manage run -d acapy -t @AcceptanceTest\n
or all Priority 1 Acceptance Tests, but not the ones flagged Work In Progress...
./manage run -d acapy -t @P1 -t @AcceptanceTest -t ~@wip\n
or derived functional tests
./manage run -d acapy -t @DerivedFunctionalTest\n
or all the ExceptionTests...
./manage run -t @ExceptionTest\n
"},{"location":"guide/#using-and-or-in-test-execution-tags","title":"Using AND, OR in Test Execution Tags","text":"Stringing tags together in one -t
with commas as separators is equivalent to an OR
. Separate -t
options are equivalent to an AND
.
./manage run -d acapy-main -t @RFC0453,@RFC0454 -t ~@wip -t ~@CredFormat_JSON-LD\n
So the command above will run tests from RFC0453 or RFC0454, without the wip tag, and without the CredFormat_JSON-LD tag.
To read more on how one can control the execution of test sets based on tags, see the behave documentation.
The option -i <inifile>
can be used to pass a file in the behave.ini
format into behave. With that, any behave configuration settings can be specified to control how behave behaves. See the behave documentation about the behave.ini
configuration file here.
To see which protocols and features from Aries Interop Profile 1.0 are covered by the tests, see the Test Coverage Matrix.
"},{"location":"guide/#test-reporting","title":"Test Reporting","text":"For information on enhanced test reporting with the Aries Agent Test Harness, see Advanced Test Reporting.
"},{"location":"guide/#adding-runsets","title":"Adding Runsets","text":"Runsets are GHA-based workflows that automate the execution of your interop tests and the reporting of results.
These workflows are contained in the .github/workflows folder and must be named test-harness-<name>.yml
. Refer to the existing files for examples on how to create one specific to your use case. In most cases you will be able to copy an existing file and change a few parameters.
Test execution is controlled by the test-harness-runner
. This workflow will dynamically pick up and run any workflow conforming to the test-harness-*.yml
naming convention. Specific test harnesses can be excluded by adding their file name pattern to the ignore_files_starts_with
list separated by a ,
. The test harnesses are run by the Run Test Harness job which uses a throttled matrix strategy. The number of concurrent test harness runs can be controlled by setting the max-parallel
parameter to an appropriate number.
The Aries Agent Test Harness defines multiple Dev Containers to aid the test developer and the Backchannel/Controller developer. This allows developers to write code for these areas without having to install all the libraries and configure their local dev machines to write tests or update an Aries framework backchannel.
At the time of writing this document there are three Dev Containers in AATH: a Test Development Dev Container, an ACA-Py Backchannel Development Dev Container, and an Aries Framework JavaScript/CREDO-TS Backchannel Dev Container (still in development).
"},{"location":"guide/AATH_DEV_CONTAINERS/#getting-started","title":"Getting Started","text":"To get started make sure you have installed the Dev Containers VSCode extension in VSCode.
Clone the Aries Agent Test Harness repository and open the root folder in VS Code. Once opened, VS Code will detect the available dev containers and prompt you to open them. Selecting this option will display all the dev containers that you can choose from.
The other way to open the Dev Container is to select the Open a Remote Window
option in the bottom of VSCode.
Then select Reopen in Container
The first time you open a specific Dev Container, the container will be built. If a change is made to any of the dev container configurations, the dev container will have to be rebuilt. VSCode should sense a change to these files and prompt a rebuild, but if it doesn't, or you don't accept the rebuild prompt when it appears, a rebuild can be initiated within the dev container by clicking on the dev container name in the bottom left corner of VSCode and selecting Rebuild.
"},{"location":"guide/AATH_DEV_CONTAINERS/#dev-container-configuration","title":"Dev Container Configuration","text":"The dev container json files are located in .devcontainer/
. This is where enhancements to existing dev containers and adding new dev containers for other Aries Frameworks would take place following the conventions already laid out in that folder.
The dev containers use an existing Dockerfile to build the image. These are not the regular Dockerfiles that are built with the AATH manage script; there are specific Dockerfiles for each dev container, based on those original files but modified to work better with the dev container configurations. The Dockerfiles are named the same as the original files except with dev
in the name. For example, the Dockerfile.dev-acapy-main
was based off of Dockerfile.acapy-main
.
These dev containers are named in Docker to allow for identification and better communications between agents. If you want an agent dev container to represent one of acme, bob, faber, or mallory, make sure the devcontainer.json is changed to the name you want the agent to represent.
\"runArgs\": [\n \"--network=aath_network\",\n \"--name=acme_agent\"\n ],\n
All dev containers are on the aath_network
in Docker, which corresponds to the network that the regular agent containers are on when running the manage script. This allows the developer to run tests in a dev container against agents run by the manage script, and to have an agent running in a dev container communicate with other agents run by the manage script.
A single test scenario may involve 1-n connections between the players involved. Acme is connected to Bob, and a different connection id is used for each direction of the relationship depending on which player is acting at the time: Acme to Bob, and Bob to Acme. The connections may extend to other participating players as well: Acme to Faber, Bob to Faber. With those relationships alone, the tests have to manage six connection ids.
The connection tests use a dictionary of dictionaries to store these relationships. When a new connection is made between two parties, the tests create a dictionary keyed by the first player that contains another dictionary keyed by the second player, which holds the connection id for that direction of the relationship. The same is done for the other direction of the relationship. The dictionary for the Bob-Acme relationship will look like this:
['Bob']['Acme']['30e86995-a2f7-442c-942c-96497aefad8d']\n['Acme']['Bob']['9c0d9f2c-23c1-4384-b89e-950f97a7f173']\n
With all three players mentioned above, participating in one scenario, the dictionary will look like this once all connections have been established through the connection steps definitions; ['Bob']['Acme']['30e86995-a2f7-442c-942c-96497aefad8d']\n['Bob']['Faber']['2c75d023-91dc-43b6-9103-b25af582fc6c']\n['Acme']['Bob']['9c0d9f2c-23c1-4384-b89e-950f97a7f173']\n['Acme']['Faber']['3514daa2-f9a1-492f-94f5-386b03fb8d31']\n['Faber']['Bob']['f907c1e2-abe1-4c27-b9e2-e19f403cdfb5']\n['Faber']['Acme']['b1faea96-84bd-4c3c-b4a9-3d99a6d51030']\n
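A minimal sketch of this storage scheme (the helper name is an assumption, not the harness's actual code):

```python
# Illustrative sketch: record both directions of a connection
# in a dict of dicts, keyed first by one player, then by the other.
connection_id_dict = {}

def store_connection(player1, player2, id_1_to_2, id_2_to_1):
    """Record the connection id for each direction of the relationship."""
    connection_id_dict.setdefault(player1, {})[player2] = id_1_to_2
    connection_id_dict.setdefault(player2, {})[player1] = id_2_to_1

store_connection("Bob", "Acme",
                 "30e86995-a2f7-442c-942c-96497aefad8d",
                 "9c0d9f2c-23c1-4384-b89e-950f97a7f173")

# Bob's connection id for the Acme relationship:
print(connection_id_dict["Bob"]["Acme"])
```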
If the connection step definitions are used in other, non-connection-related tests, like issue credential or proof, to establish the connection between two players, then those tests are taking advantage of this relationship storage mechanism. "},{"location":"guide/ACCESS-CONNECTION-IDS/#accessing","title":"Accessing","text":"This connection id dictionary is stored in the context
object in the test harness, and because the context
object is passed into every step definition in the test scenario, it can be accessed from anywhere in the test scenario. Retrieving the connection id for a relationship is done like this:
connection_id = context.connection_id_dict['player1']['player2']\n
Let's say you are writing a step definition where Bob is going to make a request to Acme, like Bob proposes a credential to Acme
. The call may need Bob's connection id for the relationship to Acme. Doing this would look like the following: connection_id = context.connection_id_dict['Bob']['Acme']\n
Since player names are always passed into step definitions as variables representing their roles, i.e. holder proposes a credential to issuer
, the code will actually look like this: connection_id = context.connection_id_dict[holder][issuer]\n
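A decorator-free sketch of such a step definition's core logic (the function and context contents here are hypothetical; a real step would be registered with behave and pass the id to the backchannel):

```python
from types import SimpleNamespace

def propose_credential_step(context, holder, issuer):
    """Core of a step like 'holder proposes a credential to issuer':
    the role names arrive as variables, so the dictionary is indexed
    with them directly."""
    # The holder's connection id for its relationship to the issuer.
    connection_id = context.connection_id_dict[holder][issuer]
    return connection_id  # a real step would send this to the backchannel

# Stand-in for the behave context object passed into every step definition.
context = SimpleNamespace(connection_id_dict={
    "Bob": {"Acme": "30e86995-a2f7-442c-942c-96497aefad8d"},
    "Acme": {"Bob": "9c0d9f2c-23c1-4384-b89e-950f97a7f173"},
})

print(propose_credential_step(context, "Bob", "Acme"))
# → 30e86995-a2f7-442c-942c-96497aefad8d
```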
Connection IDs are always needed at the beginning of a protocol, if not throughout other parts of the protocol as well. Having all ids necessary within the scenario easily accessible at any time will make writing and maintaining agent tests simpler. "},{"location":"guide/CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"guide/CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"guide/CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"guide/CODE_OF_CONDUCT/#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
"},{"location":"guide/CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at angelika.ehlers@gov.bc.ca. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
"},{"location":"guide/CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4/
"},{"location":"guide/CONFIGURE-CRED-TYPES/","title":"Configuring Tests with Credential Types and Proofs","text":""},{"location":"guide/CONFIGURE-CRED-TYPES/#contents","title":"Contents","text":"Initially the Aries Agent Interop Tests were written with hard coded Credential Type Definitions, Credential Data for issued credentials, and a canned Proof Request and Presentation of that Proof. This default behaviour for the tests is fine for quick cursory assessment of the protocol; however, it was always a goal to provide a method of having this credential and proof input external to the tests, and to be able to quickly construct tests with different credential and proof data, driven from that external data. Tests still remain that use the default hard coded credential input. Tests like the Proof test below make no mention of specific credentials or proofs.
@T001-AIP10-RFC0037 @P1 @AcceptanceTest @Indy\nScenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged\nGiven \"2\" agents\n| name | role |\n| Faber | verifier |\n| Bob | prover |\nAnd \"Faber\" and \"Bob\" have an existing connection\nAnd \"Bob\" has an issued credential from <issuer>\nWhen \"Faber\" sends a request for proof presentation to \"Bob\"\nAnd \"Bob\" makes the presentation of the proof\nAnd \"Faber\" acknowledges the proof\nThen \"Bob\" has the proof acknowledged\n\nExamples:\n| issuer |\n| Acme |\n| Faber |\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#defining-tests-in-feature-files-with-externalized-credential-info","title":"Defining Tests in Feature Files with Externalized Credential Info","text":"Tests that have externalized input data for credentials and proofs look obviously different from the test above. They use Scenario tags and Example Data Tables to feed the test with input data. This input data is contained in JSON files located in /aries-agent-test-harness/aries-test-harness/features/data
.
schema_driverslicense.json
file in /aries-agent-test-harness/aries-test-harness/features/data
.cred_data_schema_driverslicense.json
file in /aries-agent-test-harness/aries-test-harness/features/data
. This file will contain two sections, one for \"Data_DL_MaxValues\" and one for \"Data_DL_MinValues\". proof_request_DL_address.json
and a proof_request_DL_age_over_19.json
file in /aries-agent-test-harness/aries-test-harness/features/data
.presentation_DL_address.json
and presentation_DL_age_over_19.json
in /aries-agent-test-harness/aries-test-harness/features/data
.Some conventions are in place here that make it workable.
Proof Requests can contain requests from multiple credentials from the holder. The Test Harness will create credential types for as many credential types as are listed as tags for the scenario. For example, below is a scenario that will utilize two credentials in its proofs: Biological Indicators and Health Consent.
@T001.4-AIP10-RFC0037 @P1 @AcceptanceTest @Schema_Biological_Indicators @Schema_Health_Consent @Indy\nScenario Outline: Present Proof of specific types and proof is acknowledged\nGiven \"2\" agents\n| name | role |\n| Faber | verifier |\n| Bob | prover |\nAnd \"Faber\" and \"Bob\" have an existing connection\nAnd \"Bob\" has an issued credential from <issuer> with <credential_data>\nWhen \"Faber\" sends a <request for proof> presentation to \"Bob\"\nAnd \"Bob\" makes the <presentation> of the proof\nAnd \"Faber\" acknowledges the proof\nThen \"Bob\" has the proof acknowledged\n\nExamples:\n| issuer | credential_data | request for proof | presentation |\n| Faber | Data_BI_HealthValues | proof_request_health_consent | presentation_health_consent |\n
In this scenario, before the scenario starts, two credential types are created for the issuer to be able to issue. The credential_data
points to a section named Data_BI_HealthValues in each cred_data_\\.json file, and those two credentials are issued to the holder on step And \"Bob\" has an issued credential from <issuer> with <credential_data>
The request for proof
points to one JSON file that holds the request containing data from both credentials. The presentation
is the presentation by the holder of the proof using the two credentials. This pattern can be extended for as many credentials as are needed for a presentation test.
"},{"location":"guide/CONFIGURE-CRED-TYPES/#credential-type-definitions","title":"Credential Type Definitions","text":"The following are the basics of defining a Credential Type. It consists of a name, a version and the actual attributes of the credential. It also contains a section to set the credential definition revocation support if needed. To reiterate, this is contained in a Schema_\.json file in /aries-agent-test-harness/aries-test-harness/features/data
. Follow this pattern to create new tests with different credential types.
{\n\"schema\":{\n \"schema_name\":\"Schema_DriversLicense\",\n \"schema_version\":\"1.0.1\",\n \"attributes\":[\n \"address\",\n \"DL_number\",\n \"expiry\",\n \"age\"\n ]\n},\n\"cred_def_support_revocation\":false\n}\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#credential-data","title":"Credential Data","text":"The credential data JSON file references the credential type name in the main scenario tag, i.e. cred_data_\.json. This file holds sections of data referenced by the names in the Examples data table. There needs to be a section for every name mentioned in the test, or across other tests that use that credential.
{\n\"Data_DL_MaxValues\":{\n \"cred_name\":\"Data_DriversLicense_MaxValues\",\n \"schema_name\":\"Schema_DriversLicense\",\n \"schema_version\":\"1.0.1\",\n \"attributes\":[\n {\n \"name\":\"address\",\n \"value\":\"947 this street, Kingston Ontario Canada, K9O 3R5\"\n },\n {\n \"name\":\"DL_number\",\n \"value\":\"09385029529385\"\n },\n {\n \"name\":\"expiry\",\n \"value\":\"10/12/2022\"\n },\n {\n \"name\":\"age\",\n \"value\":\"30\"\n }\n ]\n},\n\"Data_DL_MinValues\":{\n \"cred_name\":\"Data_DriversLicense_MaxValues\",\n \"schema_name\":\"Schema_DriversLicense\",\n \"schema_version\":\"1.0.1\",\n \"attributes\":[\n {\n \"name\":\"address\",\n \"value\":\"9\"\n },\n {\n \"name\":\"DL_number\",\n \"value\":\"0\"\n },\n {\n \"name\":\"expiry\",\n \"value\":\"10/12/2022\"\n },\n {\n \"name\":\"age\",\n \"value\":\"20\"\n }\n ]\n}\n}\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#proof-requests","title":"Proof Requests","text":"The following is an example of a simple proof request for one attribute with some restrictions.
{\n \"presentation_request\": {\n \"requested_attributes\": {\n \"address_attrs\": {\n \"name\": \"address\",\n \"restrictions\": [\n {\n \"schema_name\": \"Schema_DriversLicense\",\n \"schema_version\": \"1.0.1\"\n }\n ]\n }\n },\n \"version\": \"0.1.0\"\n }\n}\n
The following is an example of a proof request using more than one credential. {\n \"presentation_request\": {\n \"name\": \"Health Consent Proof\",\n \"requested_attributes\": {\n \"bioindicators_attrs\": {\n \"names\": [\n \"name\",\n \"range\",\n \"concentration\",\n \"unit\",\n \"concentration\",\n \"collected_on\"\n ],\n \"restrictions\": [\n {\n \"schema_name\": \"Schema_Biological_Indicators\",\n \"schema_version\": \"0.2.0\"\n }\n ]\n },\n \"consent_attrs\": {\n \"name\": \"jti_id\",\n \"restrictions\": [\n {\n \"schema_name\": \"Schema_Health_Consent\",\n \"schema_version\": \"0.2.0\"\n }\n ]\n }\n },\n \"requested_predicates\": {},\n \"version\": \"0.1.0\"\n }\n}\n
"},{"location":"guide/CONFIGURE-CRED-TYPES/#proof-presentations","title":"Proof Presentations","text":"Proof Presentations are straight forward as in the example below. The only thing to note is the cred_id is created during the execution of the test scenario, so there is no way to know it beforehand and have it inside the json file. The test harness takes care of this and swaps in the actual credential id from the credential issued to the holder. To do this change the test harness needs to know what the credential type name is in order to pick the correct cred_id for credential.
{\n \"presentation\": {\n \"comment\": \"This is a comment for the send presentation.\",\n \"requested_attributes\": {\n \"address_attrs\": {\n \"cred_type_name\": \"Schema_DriversLicense\",\n \"revealed\": true,\n \"cred_id\": \"replace_me\"\n }\n }\n }\n}\n
The following is an example of a presentation with two credentials. {\n \"presentation\": {\n \"comment\": \"This is a comment for the send presentation for the Health Consent Proof.\",\n \"requested_attributes\": {\n \"bioindicators_attrs\": {\n \"cred_type_name\": \"Schema_Biological_Indicators\",\n \"cred_id\": \"replace me\",\n \"revealed\": true\n },\n \"consent_attrs\": {\n \"cred_type_name\": \"Schema_Health_Consent\",\n \"cred_id\": \"replace me\",\n \"revealed\": true\n }\n },\n \"requested_predicates\": {},\n \"self_attested_attributes\": {}\n }\n}\n
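The substitution step described above can be sketched in a few lines of Python. This is an illustration only, not the harness's actual code: the helper name inject_cred_ids, the issued-credential mapping, and the id value "dl-cred-001" are all hypothetical.

```python
import json

def inject_cred_ids(presentation: dict, issued_by_type: dict) -> dict:
    """Replace placeholder cred_ids with the ids of the credentials
    issued earlier in the scenario, matched by cred_type_name."""
    for attr in presentation["presentation"]["requested_attributes"].values():
        if attr["cred_id"] in ("replace_me", "replace me"):
            attr["cred_id"] = issued_by_type[attr["cred_type_name"]]
    return presentation

presentation = json.loads("""
{
  "presentation": {
    "requested_attributes": {
      "address_attrs": {
        "cred_type_name": "Schema_DriversLicense",
        "revealed": true,
        "cred_id": "replace_me"
      }
    }
  }
}
""")

# "dl-cred-001" stands in for the id of the credential issued to the holder.
swapped = inject_cred_ids(presentation, {"Schema_DriversLicense": "dl-cred-001"})
```

After the swap, the presentation can be sent to the agent with a real cred_id in place of the placeholder.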
"},{"location":"guide/CONFIGURE-CRED-TYPES/#conclusion","title":"Conclusion","text":"With the constructs above is should be very easy to add new tests based off of new credentials. Essentially, once the json files are created, opening the present proof feature file, copying and pasting one of these tests, incrementing the test id, and replacing the credential type tags and data table names, should have a running test scenario with those new credentials.
As we move forward and non-ACA-Py agents are used in the test harness, some of the nomenclature in the json files may change to generalize those names; however, it will still be essential for the agent's backchannel to translate that json into whatever the agent expects in order to accomplish the goals of the test steps.
"},{"location":"guide/CONNECTION-REUSE/","title":"Taking Advantage of Connection Reuse in AATH","text":"The Issue Credential and Proof tests that use DID Exchange Connections will attempt to reuse an existing connection if one was established between the agents involved from a subsequent test. This not only tests native connection reuse functionality in the agents, but also saves execution time.
There are three conditions an agent and backchannel can be in when executing these Issue Cred and Proof tests that supports connection reuse.
1/ An agent supports public DIDs, and connection reuse.
A connection was made in a previous test that used a public DID for the connection.
A followup test for either Issue Credential or Proof that has the And requester and responder have an existing connection Given (precondition clause) as part of the test, and is tagged with @DIDExchangeConnection
, will attempt to reuse the previous connection.
A call to out-of-band/send-invitation-message
is made with \"use_public_did\": True
in the payload.
The backchannel, if needed, can use this to create an invitation that contains the public DID. The invitation returned must contain the public DID for the responder.
The test harness then calls out-of-band/receive-invitation
with use_existing_connection: true
in the payload.
The backchannel can use this to trigger the agent to reuse an existing connection if one exists. The connection record returned to the test harness contains a state of completed, the requester's connection_id, and the did (my_did) of the requester.
The test harness recognizes that we have a completed connection and calls GET active-connection
on the responder with an id of the requester's DID.
GET active-connection
in the backchannel should query the agent for an active connection that contains the requester's DID, then return the active connection record that contains the connection_id for the responder.
The test harness at this point has all the info needed to continue the test scenario using that existing connection.
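The condition-1 steps above can be sketched end to end. The endpoint paths and payload flags come from the text; `post` and `get` are hypothetical stand-ins for the harness's real backchannel transport, so this is an illustration of the flow, not the harness's implementation.

```python
def reuse_existing_connection(post, get):
    """Sketch of the connection-reuse flow; post/get are hypothetical
    stand-ins for the harness's backchannel HTTP calls."""
    # 1. Ask the responder for an invitation carrying its public DID.
    invitation = post("out-of-band/send-invitation-message",
                      {"use_public_did": True})
    # 2. Hand the invitation to the requester, asking it to reuse
    #    an existing connection if one exists.
    conn = post("out-of-band/receive-invitation",
                {"invitation": invitation, "use_existing_connection": True})
    # 3. On a completed connection, look up the responder's side of it
    #    by the requester's DID.
    if conn["state"] == "completed":
        responder = get("active-connection", conn["my_did"])
        return conn["connection_id"], responder["connection_id"]
    return None
```

If the requester does not report a completed connection, the test falls back to establishing a new one.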
2/ An agent doesn't officially support public DIDs in connections, but has a key in the invite (which can be a public DID) that can be used to query the existing connection.
A connection was made in a previous test.
A followup test for either Issue Credential or Proof that has the And requester and responder have an existing connection Given (precondition clause) as part of the test, and is tagged with @DIDExchangeConnection
, will attempt to reuse the previous connection.
A call to out-of-band/send-invitation-message is made with \"use_public_did\": True
in the payload.
The backchannel can ignore the use_public_did
flag and remove it from the payload if it interferes with the creation of the invitation. An invitation is returned in the response.
A call is then made to out-of-band/receive-invitation
with use_existing_connection: true
in the payload.
The backchannel can use this as a trigger to search for an existing connection based on some key that is available in the invitation.
The connection record is returned to the test harness with a state of completed
, the requester's connection_id, and the did (my_did) of the requester.
The test harness recognizes that we have a completed connection and calls GET active-connection
on the responder with an id of the requester's DID.
GET active-connection
in the backchannel should query the agent for an active connection that contains the requester's DID, then return the active connection record that contains the connection_id for the responder.
The test harness at this point has all the info needed to continue the test scenario based on that existing connection.
3/ An agent doesn't support public DIDs in Connections, and cannot reuse a connection in AATH.
Tests for either Issue Credential or Proof that have the And requester and responder have an existing connection
Given (precondition clause) as part of the test and are tagged with @DIDExchangeConnection
, will attempt to reuse the previous connection.
A call to out-of-band/send-invitation-message
is made with \"use_public_did\": True
in the payload.
The backchannel should ignore the use_public_did
flag and remove it from the payload if it interferes with the creation of the invitation. An invitation is returned in the response.
A call is then made to out-of-band/receive-invitation
with use_existing_connection: true
in the payload.
The backchannel should ignore this flag and remove it from the data if it interferes with the operation.
completed.
You are encouraged to contribute to the repository by forking and submitting a pull request.
For significant changes, please open an issue first to discuss the proposed changes to avoid re-work.
(If you are new to GitHub, you might start with a basic tutorial and check out a more detailed guide to pull requests.)
Pull requests will be evaluated by the repository guardians on a schedule and if deemed beneficial will be committed to the main
branch. Pull requests should have a descriptive name and include a summary of all changes made in the pull request description.
If you would like to propose a significant change, please open an issue first to discuss the work with the community.
Contributions are made pursuant to the Developer's Certificate of Origin, available at https://developercertificate.org, and licensed under the Apache License, version 2.0 (Apache-2.0).
"},{"location":"guide/Debugging/","title":"Debugging Backchannels","text":""},{"location":"guide/Debugging/#vscode","title":"VSCode","text":""},{"location":"guide/Debugging/#net","title":".NET","text":"$DOCKERHOST
variable, and currently VSCode doesn't offer a way to do this dynamically. Copy the example environment file: cp .env.example .env
. Replace DOCKERHOST
with the output of ./manage dockerhost
, and also replace the IP in LEDGER_URL
with that output. The backchannels started with the ./manage run
script are the same as the backchannels started with the debugger, so running ./manage run -d dotnet -t @T001-AIP10-RFC0160
will run the tests using the backchannels you started from the debugger. For more information on debugging in VSCode see the docs.
"},{"location":"guide/Debugging/#troubleshooting","title":"Troubleshooting","text":""},{"location":"guide/Debugging/#process-dotnet-dev-certs-https-check-trust","title":"Process 'dotnet dev-certs https --check --trust'","text":"If you get the following error:
Error: Process 'dotnet dev-certs https --check --trust' exited with code 9\nError:\n
This means the ASP.NET Core development certificate is not quite working. Running the following should fix the problem:
dotnet dev-certs https --clean\ndotnet dev-certs https --trust\n
See: https://github.com/microsoft/vscode-docker/issues/1761
"},{"location":"guide/LICENSE/","title":"License","text":" Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
Definitions.
\"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
\"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
\"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
\"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.
\"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
\"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
\"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
\"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
\"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"
\"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n
Copyright 2019 Province of British Columbia Copyright 2017-2019 Government of Canada
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0\n
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
"},{"location":"guide/MAINTAINERS/","title":"Maintainers","text":""},{"location":"guide/MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"name Github Discord Stephen Curran swcurran Ian Costanzo ianco Wade Barnes WadeBarnes Andrew Whitehead andrewwhitehead Timo Glastra TimoGlastra Sheldon Regular nodlesh"},{"location":"guide/MOBILE_AGENT_TESTING/","title":"Mobile Agent (Manual) Testing","text":"Aries Agent Test Harness includes the \"mobile\" Test Agent that supports the manual testing of some mobile agents. The mobile Test Agent doesn't control the mobile app directly but rather prompts the user to interact with the wallet app on their phone to scan a QR code to establish a connection, respond to a credential offer, etc.
Before executing a test run, you have to build the Test Agents you are going to use. For example, the following builds the \"mobile\" and \"acapy-main\" Test Agents:
./manage build -a mobile -a acapy-main\n
Remember to build any other Test Agents you are going to run with the mobile tests.
There are several options to the ./manage run
script that must be used when testing a mobile wallet:
-n
option tells the ./manage
script to start ngrok services for each agent (issuer, verifier) to provide the mobile app an Internet accessible endpoint for each of those agents. You will need to provide an ngrok AuthToken, either free or paid to use this feature. Pass it as an environment variable when calling manage
like NGROK_AUTHTOKEN=YourAuthTokenHere ./manage ...
-b mobile
option to use the mobile Test Agent for the Bob
role (the only one that makes sense for a mobile app). The -t @MobileTest
option runs only the tests that have been tagged as \"working\" with the mobile test agent; the @MobileTest
tag should be added to those test scenarios. Another requirement for using the mobile Test Agent is that you have to use an Indy ledger that is publicly accessible, does not have a Transaction Author Agreement (TAA), and is \"known\" by the mobile wallet app you are testing. That generally means you must use the \"BCovrin Test\" network. Also needed is a public Indy tails file for running revocation tests.
Before you run the tests, you have to have a mobile wallet to test (here are some instructions for getting a mobile wallet app), and if necessary, you must use the wallet app settings to use the \"BCovrin Test\" ledger.
Put together, that gives us the following command to test a mobile wallet with Aries Cloud Agent Python (main branch) running in all other roles.
NGROK_AUTHTOKEN=2ZrwpFakeAuthToken_W4VDBxavAzdB5K3wsDGz LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io ./manage run -d acapy-main -b mobile -n -t @MobileTest\n
The mobile agent is in \"proof-of-concept\" status and some tests are not 100% reliable with all mobile agents. If things don't work, take a look at the logs (in the ./logs
folder) to try to understand what went wrong.
You can try to run other test scenarios by adjusting the tags (-t
options) you select when running tests, per these instructions. If you do find other test scenarios that work with the mobile Test Agent, please add an issue or PR to add the \"@MobileTest\" tag to the test scenario.
While this gives us one way to test mobile agent interoperability, it would be really nice to be able to run the mobile wallets without human intervention so that we can include mobile wallets in the continuous integration testing. Those working on the Aries Agent Test Harness haven't looked into how that could be done, so if you have any ideas, please let us know.
Another thing that would be nice to have supported is capturing the mobile wallet (brand and version) and test run results in a way that we could add the test run to the https://aries-interop.info page. Do you have any ideas for that? Let us know!
"},{"location":"guide/REMOTE_AGENT_TESTING/","title":"Remote Agent Testing in OATH","text":"OWL Agent Test Harness is a powerful tool for running verifiable credential and decentralized identity interoperability tests. It supports a variety of agent configurations, including running agents locally that are test harness managed, or remotely, unmanaged by the test harness. This guide covers the remote option, allowing you to execute interoperability tests with agents running on remote servers in development, test, staging, or production environments, communicating with other remote agents or test harness managed agents.
"},{"location":"guide/REMOTE_AGENT_TESTING/#prerequisites","title":"Prerequisites","text":"Before using the remote
option, make sure you have:
When running the test harness with remote agents, the basic command structure for setting remote agents is as follows:
./manage run -a remote --aep <acme_endpoint> -b remote --bep <bob_endpoint> -f remote --fep <faber_endpoint> -m remote --mep <mallory_endpoint> \n
For any of the agent flags, -a
, -b
, -f
, -m
, if the agent is set to remote
then the test harness will look for the long option of --aep
, --bep
, --fep
, and --mep
for the endpoint of that particular remote agent.
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io \\\nTAILS_SERVER_URL_CONFIG=https://tails.vonx.io \\\n./manage run \\\n -a remote --aep http://remote-acme.com \\\n -b acapy-main -f acapy-main -m acapy-main \\\n -t @T002-RFC0160\n
This example command will test a remote agent in the role of Acme, an issuer/verifier, in conjunction with test harness managed acapy agents playing the other roles of Bob, Faber, and Mallory. Any combination of remote and test harness managed agents is testable, including all remote if one is so inclined.
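The pairing of short agent flags with their long endpoint flags can be illustrated with a small parser. The flag mapping comes from the text above; the parser itself is a sketch for illustration only, not the manage script's real argument handling.

```python
# Maps each agent flag to its endpoint flag, per the convention above.
FLAG_TO_ENDPOINT = {"-a": "--aep", "-b": "--bep", "-f": "--fep", "-m": "--mep"}

def resolve_remote_endpoints(argv):
    """Return {agent_flag: endpoint} for every agent set to 'remote'."""
    opts = dict(zip(argv[::2], argv[1::2]))  # naive flag/value pairing
    return {
        agent: opts.get(ep)
        for agent, ep in FLAG_TO_ENDPOINT.items()
        if opts.get(agent) == "remote"
    }
```

For example, the Acme-only remote command above would resolve to a single endpoint for the `-a` agent.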
"},{"location":"guide/REMOTE_AGENT_TESTING/#local-example","title":"Local Example","text":"To verify and see the remote implementation in the test harness working locally, you will need to run one of the test harness agents outside of the OATH docker network. Then use that agent as a remote agent.
Build the local agents:
./manage build -a acapy-main\n
Run a remote agent locally:
./start-remote-agent-demo.sh\n
Run the tests:
LEDGER_URL_CONFIG=http://test.bcovrin.vonx.io TAILS_SERVER_URL_CONFIG=https://tails.vonx.io ./manage run -a acapy-main -b remote --bep http://0.0.0.0:9031 -f acapy-main -m acapy-main -t @T002-RFC0160\n
Shut down the remote agent:
./end-remote-agent-demo.sh\n
"},{"location":"guide/REMOTE_AGENT_TESTING/#handling-errors","title":"Handling Errors","text":"If you encounter any issues while using the remote option, check the following:
The remote option in the Test Harness allows you to test verifiable credential interactions with agents running in remote environments. This flexibility essentially allows you to verify that your agent(s) can successfully interop with other agents for the implemented protocols.
For any extra troubleshooting please consult with the OWL maintainers on Discord.
"},{"location":"guide/RETRY-FAILED-SCENARIOS/","title":"Retry Failed Test Scenarios","text":"This feature introduces the ability to retry failed test scenarios in your test runs. It provides flexibility in managing the number of retry attempts for scenarios that fail during testing.
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#table-of-contents","title":"Table of Contents","text":"This feature addresses the issue of retrying failed test scenarios. It implements a mechanism to automatically rerun failed scenarios to improve the stability of test results.
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#changes-made","title":"Changes Made","text":"The following changes have been made to implement the retry functionality:
before_feature
hook in \\features\\environment.py
to handle retrying failed scenarios.TEST_RETRY_ATTEMPTS_OVERRIDE
via manage.py
to the Docker environment.There are two ways to override the number of attempts for retrying failed scenarios:
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#1-using-behaveini","title":"1. Usingbehave.ini
","text":"Add the following variable to the [behave.userdata]
section of the behave.ini
file:
[behave.userdata]\ntest_retry_attempts = 2\n
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#2-using-environment-variable","title":"2. Using Environment Variable","text":"Pass the TEST_RETRY_ATTEMPTS_OVERRIDE
variable as an environment variable while running tests or through deployment YAML files.
Example:
TEST_RETRY_ATTEMPTS_OVERRIDE=2 ./manage run -d acapy -b javascript -t @AcceptanceTest\n
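The retry semantics can be pictured as a plain-Python decorator: rerun a failing scenario up to the configured number of extra attempts, and surface the last failure only if every attempt fails. The real mechanism lives in the before_feature hook; this decorator is only an illustration of the behavior, and the name with_retries is hypothetical.

```python
import functools

def with_retries(retry_attempts: int):
    """Rerun a failing callable up to retry_attempts extra times,
    raising the last failure if every attempt fails."""
    def wrap(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            last_err = None
            for _ in range(1 + retry_attempts):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as err:
                    last_err = err  # scenario failed; retry if attempts remain
            raise last_err
        return run
    return wrap
```

With retry_attempts set to 2, a scenario that fails twice and then passes on the third run is reported as passing.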
"},{"location":"guide/RETRY-FAILED-SCENARIOS/#feedback-and-contributions","title":"Feedback and Contributions","text":"Your feedback and contributions are welcome! If you encounter any issues or have suggestions for improvement, please feel free to open an issue or submit a pull request.
"},{"location":"guide/RunningLocally/","title":"Running OATH Locally","text":""},{"location":"guide/RunningLocally/#running-locally-bare-metal-not-recommended","title":"Running Locally (Bare Metal) - NOT RECOMMENDED","text":"Note this is not recommended, however it may be desirable if you want to run outside of Docker containers. While this repo is in early iteration, we can only provide limited support in using this. These instructions cover what was done in initially setting up the ACA-Py and VCX backchannels before they were standardized. As such, they are included for historical purposes only, and may or may not still be accurate.
The backchannel for Aries Framework .NET only supports the standardized dockerized method for setting up backchannels. However the backchannel does support debugging the backchannel from inside the docker container, which is the most common reason for running locally. See DEBUGGING.md for more info on debugging.
We would FAR prefer help in being able in documenting the use of a debugger with the docker containers vs. documentation on running the test harness on bare-metal.
To run each agent, install the appropriate pre-requisites (the VCX adapter requires a local install of indy-sdk and VCX) and then run as follows.
Setup - you need to run an Indy ledger and a ledger browser. One way to run locally is to run the Indy ledger from the indy-sdk, and the browser from von-network.
In one shell, run the ledger (the nodes will be available on localhost):
git clone https://github.com/hyperledger/indy-sdk.git\ncd indy-sdk\ndocker build -f ci/indy-pool.dockerfile -t indy_pool .\ndocker run -itd -p 9701-9708:9701-9708 indy_pool\n
(Note that you will need the indy-sdk to build the Indy and VCX libraries to run the VCX backchannel.)
... and in a second shell, run the ledger browser:
git clone https://github.com/bcgov/von-network.git\ncd von-network\n# run a python virtual environment\nvirtualenv venv\nsource ./venv/bin/activate\n# install the pre-requisites and then run the ledger browser\npip install -r server/requirements.txt\nGENESIS_FILE=<your path>/aries-agent-test-harness/aries-backchannels/data/local-genesis.txt REGISTER_NEW_DIDS=true PORT=9000 python -m server.server\n
Open additional shells to run the agents.
For ACA-PY:
# install the pre-requisites:\ncd aries-agent-test-harness/aries-backchannels\npip install -r requirements.txt\n
Note that this installs the aca-py and vcx python libraries from ianco
forks of the github repositories.
cd aries-agent-test-harness/aries-backchannels\nLEDGER_URL=http://localhost:9000 python acapy_backchannel.py -p 8020\n
`-p` specifies the backchannel port that the test harness will talk to. The backchannel adaptor starts up an ACA-PY agent as a sub-process, and will use additional ports for communication between the adaptor and the agent. In general, make sure there is a range of 10 free ports (i.e. in the example above, reserve ports 8020 to 8029).
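The "10 free ports per agent" convention can be captured in a tiny helper. This is a hypothetical illustration, not part of the test harness:

```python
def reserved_ports(backchannel_port: int, block_size: int = 10) -> range:
    """Return the block of ports an agent reserves, by convention.

    The backchannel listens on the first port of the block; the adaptor
    and the agent sub-process use the remaining ports for internal
    communication between themselves.
    """
    return range(backchannel_port, backchannel_port + block_size)

# For the ACA-PY example above (-p 8020), ports 8020-8029 must be free.
acapy_ports = reserved_ports(8020)
```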
For VCX:
```bash
# install the pre-requisites:
cd aries-agent-test-harness/aries-backchannels
pip install -r requirements-vcx.txt
```
```bash
cd aries-agent-test-harness/aries-backchannels
LEDGER_URL=http://localhost:9000 python vcx_backchannel.py -p 8030
```
Note that you can run multiple instances of these agents.
Note also that for VCX you need to install the Indy dependencies locally - libindy, libvcx, libnullpay - and you also need to run a dummy-cloud-agent server. You need to install these from ianco's fork and branch of the indy-sdk: https://github.com/ianco/indy-sdk/tree/vcx-aries-support
See the instructions in the indy-sdk for more details.
"},{"location":"guide/TEST-COVERAGE/","title":"Aries Agent Test Harness: Test Coverage","text":"The following test coverage is as of September 1, 2020.
AIP 1.0 Status:
Terminology:

- Directly Tested: there is a Test Scenario that has as its goal to test that protocol feature.
- Tested as Inclusion: the Test Scenario's focus is not on this test case, however it uses all or portions of other tests that use the protocol feature.
- Tested Indirectly: the Test Scenario is testing a protocol that is using, as part of its operations, another protocol.
The Google Sheets Version of Coverage Matrix is also made available for better viewing.
Tested Indirectly Tested Indirectly Tested Indirectly Tested Indirectly RFC Feature Variation Test Type(s) Directly Tested Tested as Inclusion RFC0056 Service Decorator RFC0035 Report Problem RFC0025 DIDComm Transports RFC0015 Acks RFC0160 - Connection Protocol Establish Connection w/ Trust Ping Functional T001-AIP10-RFC0160 T001-AIP10-RFC0036 X T002-AIP10-RFC0036 X T003-AIP10-RFC0036 X T004-AIP10-RFC0036 X T001-AIP10-RFC0037 X T001.2-AIP10-RFC0037 X T001.3-AIP10-RFC0037 X T001.4-AIP10-RFC0037 X T002-AIP10-RFC0037 X T003-AIP10-RFC0037 X T003.1-AIP10-RFC0037 X T006-AIP10-RFC0037 X Establish Connection w/ Acks Functional X Establish Connection Reversed Roles Functional T001.2-AIP10-RFC0160 X Establish Connection final acknowledgment comes from inviter Functional T002-AIP10-RFC0160 X Establish Connection Single Use Invite Functional, Exception T003-AIP10-RFC0160 X T004-AIP10-RFC0160 X Establish Connection Mult Use Invite Functional T005-AIP10-RFC0160 (wip) X Establish Multiple Connections Between the Same Agents Functional T006-AIP10-RFC0160 X Establish Connection Single Try on Exception Funtional, Exception T007-AIP10-RFC0160 (wip) X X RFC0036 - Issue Credential Issue Credential Start w/ Proposal Functional T001-AIP10-RFC0036 T001-AIP10-RFC0037 X T001.2-AIP10-RFC0037 X T001.3-AIP10-RFC0037 X T001.4-AIP10-RFC0037 X T002-AIP10-RFC0037 X T003-AIP10-RFC0037 X T003.1-AIP10-RFC0037 X T006-AIP10-RFC0037 X Issue Credential Negotiated w/ Proposal Functional T002-AIP10-RFC0036 X Issue Credential Start w/ Offer Functional T003-AIP10-RFC0036 X Issue Credential w/ Offer w/ Negotiation Functional T004-AIP10-RFC0036 X Issue Credential Start w/ Request w/ Negotiation Functional T005-AIP10-RFC0036 (wip) X Issue Credential Start w/ Request Functional T006-AIP10-RFC0036 (wip) X RFC0037 - Present Proof Present Proof w/o Proposal, Verifier is not the Issuer, 1 Cred Type Functional T001-AIP10-RFC0037 X X T001.2-AIP10-RFC0037 X X T001.3-AIP10-RFC0037 X X Present Proof w/o Proposal, 
Verifier is the Issuer, 1 Cred Type Functional T001-AIP10-RFC0037 X X T001.2-AIP10-RFC0037 X X T001.3-AIP10-RFC0037 X X Present Proof w/o Proposal, Verifier is the Issuer, Multi Cred Types Functional T001.4-AIP10-RFC0037 X X Present Proof w/o Proposal, Verifier is not the Issuer, Multi Cred Types Functional T001.4-AIP10-RFC0037 X X Present Proof Connectionless w/o Proposal Functional T002-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Same Cred Type Different Attribute, Verifier is the Issuer Functional T003-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Same Cred Type Different Attribute, Verifier is not the Issuer Functional T003-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Different Cred Type, Verifier is the Issuer Functional T003.1-AIP10-RFC0037 X X Present Proof w/ Proposal as Response to a Request w/ Different Cred Type, Verifier is not the Issuer Functional T003.1-AIP10-RFC0037 X X Present Proof Connectionless w/ Proposal, Verifier is the Issuer Functional T004-AIP10-RFC0037 (wip) X X X Present Proof Connectionless w/ Proposal, Verifier is not the Issuer Functional T004-AIP10-RFC0037 (wip) X X X Present Proof w/o Proposal, Verifier Rejects Presentation Functional, Exception T005-AIP10-RFC0037 (wip) X X Present Proof Start w/ Proposal Functional T006-AIP10-RFC0037 X X

## Test Development Guidelines

The Aries Agent Test Harness utilizes a behavioral driven approach to testing. The Python toolset Behave is used to actualize this approach. Gherkin is the language syntax used to define test preconditions and context, actions and events, and expected results and outcomes.
The first step in developing a suite of tests for an Aries RFC is to write plain English Gherkin definitions, before any code is written. The only input to the test cases should be the RFC; the test cases should not be driven by agent or agent framework implementations.
The priority is to do "happy path" tests first, leaving exception and negative testing until there are multiple suites of happy path acceptance tests across protocols. Write one main scenario, then get peers and others familiar with the RFC to review the test. This is important because the structure and language of this initial test may guide the rest of the tests in the suite.
Initial writing of the Gherkin tests themselves are done in a .feature file or in a GitHub issue detailing the test development work to be accomplished. If no GitHub issue exists for the test development work, create one.
To keep test definitions immune to code changes or nomenclatures in code, it is best to express the RFC in high level terms from the user level, based on predefined persona (currently Acme, Bob and Mallory) that can be interpreted at the business level without revealing implementation details. For example, "When Acme requests a connection with Bob" instead of "When Acme sends a connection-request to Bob". Sometimes this may be cumbersome, so just make it as high level as makes sense. A full example from the connection protocol might look something like this:
```gherkin
Scenario Outline: establish a connection between two agents
  Given we have "2" agents
    | name | role    |
    | Acme | inviter |
    | Bob  | invitee |
  When "Acme" generates a connection invitation
  And "Bob" receives the connection invitation
  And "Bob" sends a connection request to "Acme"
  And "Acme" receives the connection request
  And "Acme" sends a connection response to "Bob"
  And "Bob" receives the connection response
  And "Bob" sends <message> to "Acme"
  Then "Acme" and "Bob" have a connection

  Examples:
    | message   |
    | trustping |
    | ack       |
```
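Behave binds each Gherkin step to a Python function whose pattern captures the quoted persona names. The matching can be sketched with the standard library alone; this mimics Behave's parse-style matcher conceptually and is not the actual Behave API or AATH step code:

```python
import re

def compile_step(pattern: str):
    """Convert a parse-style step pattern like
    '"{inviter}" generates a connection invitation'
    into a compiled regex with named groups."""
    parts = re.split(r"\{(\w+)\}", pattern)
    regex = ""
    for i, part in enumerate(parts):
        if i % 2 == 0:
            regex += re.escape(part)          # literal text between placeholders
        else:
            regex += f"(?P<{part}>.+?)"       # placeholder becomes a named group
    return re.compile("^" + regex + "$")

step = compile_step('"{inviter}" generates a connection invitation')
match = step.match('"Acme" generates a connection invitation')
```

A real step implementation would then use the captured persona name to pick which backchannel to call.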
Utilize data tables and examples in the Gherkin definition where possible.
The test cases should use the test persona as follows:
- Acme: an enterprise agent with issuer and verifier capabilities
- Bob: a holder/prover person
- Faber: another enterprise agent with issuer and verifier capabilities
- Mallory: a malicious holder/prover person

As necessary, other persona will be added. We expect to add Carol (another holder/prover person) and perhaps Thing (an IoT thing, likely an issuer and verifier). Note that each additional persona requires updates to the running of tests (via the ./manage script) and introduces operational overhead, so thought should be given before introducing new characters into the test suite.
The test harness run script supports the use of tags in the feature files to be able to narrow down a test set to be executed. The general tags currently utilized are as follows:
There will be cases where there will be a need for Protocol specific tags. This will usually reveal itself when there are optional implementations or where implementations can diverge into 2 or more options. Tests will need to be tagged with the least common option, where no tag means the other option. For example in the connection protocol there are specific tests that exercise the behavior of the protocol using a Multi Use Invite and a Single Use Invite. The tag @MultiUseInvite is used to differentiate the two, and by default it is expected that MultiUseInvite is the least common option.
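The tag semantics used by the run script (multiple `-t` options are ANDed; a `~` prefix negates) can be illustrated with a small stdlib-only filter. This is a sketch of the concept, not Behave's actual tag-expression implementation:

```python
def matches(scenario_tags, tag_args):
    """True if a scenario's tags satisfy every -t argument.

    Each tag_arg is either '@Tag' (the scenario must carry it) or '~@Tag'
    (the scenario must not carry it); multiple arguments are ANDed,
    mirroring './manage run ... -t @AcceptanceTest -t ~@wip'.
    """
    for arg in tag_args:
        if arg.startswith("~"):
            if arg[1:] in scenario_tags:      # negated tag present: excluded
                return False
        elif arg not in scenario_tags:        # required tag absent: excluded
            return False
    return True

# A finished acceptance test runs; a work-in-progress one is skipped.
run = matches({"@AcceptanceTest", "@RFC0160"}, ["@AcceptanceTest", "~@wip"])
skipped = not matches({"@AcceptanceTest", "@wip"}, ["@AcceptanceTest", "~@wip"])
```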
Currently Existing Connection Protocol Tags
Defining specific tags should be discussed with the Aries Protocol test community.
"},{"location":"guide/TEST_DEV_GUIDE/#defining-backchannel-operations","title":"Defining Backchannel Operations","text":"Defining test steps require using and extending the commands and operations to be implemented by the backchannels. The commands and operations are documented in an OpenAPI specification located here. A rendered version of the OpenApi spec (from the main branch) can be viewed on the Aries Interop page here. As test developers add new steps to test cases, document the new operations on which they depend in the OpenAPI spec.
During development (and if using VSCode) there are some tools that can make it easier to work with the OpenAPI spec:
Defining a new operation is as simple as adding a new path to the OpenAPI spec file. If you're adding a new topic, make sure to add a new entry to the `tags` at the top of the OpenAPI file. When adding a new endpoint, try to group it with the existing commands (for example, proof commands should be grouped with other proof commands). When adding a new path, it is easiest to copy an already existing path.
Follow standard best practices for implementing test steps in Behave: write the test steps as if the feature is fully supported, then add code at all levels to support the new test. The process is roughly to implement the new steps in the `steps` Python code, then extend the backchannels to support any new operations those steps need. Existing backchannels will throw a "NotImplementedException" for any steps that are not implemented in the backchannels, and should include information from the above-mentioned data file.

## GitHub Actions and Comparing Test Results Day-to-Day

AATH has the capability of checking whether the test results change from day to day (in addition to checking that all tests have passed).
To enable this checking run AATH as follows:
```bash
PROJECT_ID=acapy ./manage run -d acapy-main -r allure -e comparison -t @AcceptanceTest -t ~@wip
```
In the above, `PROJECT_ID` is the name of the Allure project (`acapy` in the example above), the parameter `-e comparison` is what invokes the comparison (it can only be used with the `-r allure` option), and the test scope (the `-t` parameters) must match what is expected for the specified `PROJECT_ID` (as used in the automated GitHub actions).
This comparison is done using a \"Known Good Results\" (\"KGR\") file that is checked into GitHub.
When adding a new test, or if a different set of tests is expected to pass or fail, this KGR file must be updated.
The KGR files are checked into this folder.
To update the file, run the test suite locally (as in the above command) - it will create a "NEW-" KGR file in this folder. Copy this file to replace the existing "The-KGR-File-" for the `PROJECT_ID` under test, and check it into GitHub.
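Conceptually, the comparison reduces to diffing two test-name to status maps. The sketch below is a hypothetical illustration of the idea, not the actual AATH comparison code:

```python
def compare_to_kgr(known_good: dict, current: dict) -> dict:
    """Report deltas between a Known Good Results map and a current run.

    Both maps go from test id to 'passed'/'failed'. Any difference -
    status changes, new tests, or missing tests - should fail the check
    until the KGR file is deliberately updated.
    """
    return {
        "changed": {t: (known_good[t], current[t])
                    for t in known_good.keys() & current.keys()
                    if known_good[t] != current[t]},
        "added": sorted(current.keys() - known_good.keys()),
        "missing": sorted(known_good.keys() - current.keys()),
    }

# Hypothetical test ids, for illustration only.
delta = compare_to_kgr(
    {"T001-RFC0160": "passed", "T002-RFC0160": "failed"},
    {"T001-RFC0160": "failed", "T003-RFC0160": "passed"},
)
```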
See the README in the `aries-backchannels` folder for details on writing backchannels.
The Aries Agent Test Harness (AATH), in utilizing the Behave test engine, has default test output that shows passed/failed steps along with a test/feature summary after an execution run. This output can of course be piped to a file as the user sees fit.
There is also a need for more formal, graphical reporting, along with keeping historical trends of test executions under a certain configuration. The AATH integrates with the Allure reporting framework to fulfill this requirement.

## Allure Integration

The AATH utilizes Allure in as much as Behave and Allure integrate. See Behave with Allure for details.

## Using Allure with the manage Script

The test execution container that is ramped up with the manage script gets the Allure framework installed for use inside the container. So to execute the tests and have Allure generate report files, use the `-r allure` option on the `manage` script.
```bash
cd aries-agent-test-harness
./manage run -d acapy -r allure -t @AcceptanceTest -t ~@wip
```
When running locally and not in a build pipeline/continuous integration system, you will need to install the Allure framework in order to generate and display the HTML report, along with the Allure command line toolset. The brew example below is for Mac OS X; if on a different platform see the other options here.

```bash
pip install allure-behave
brew install allure
```
To generate the HTML report and start an Allure report server, use any IP or port in the `open` command:
```bash
cd aries-test-harness
allure generate --clean ./reports
allure open -h 192.168.2.141 -p 54236
```
If keeping a history and reporting trends over time locally is important, the history folder inside the allure-report folder generated by Allure will have to be copied into the reports folder before the next execution of the `allure generate` command after another test run:

```bash
cd aries-test-harness
cp -r ./allure-report/history ./reports
allure generate --clean ./reports
```
Allure reports with the Aries Agent Test Harness will resemble the following.

## Using Allure at the Command Line

For debugging or development purposes you may not want to always run the test containers with the manage script, but you may still wish to maintain the reporting locally. To do that, just follow the standard command line options used with behave with custom formatters and reporters. To run this command you will need to have Allure installed locally, as above.
```bash
behave -f allure_behave.formatter:AllureFormatter -o ./reports -t @AcceptanceTest -t ~@wip --no-skipped -D Acme=http://0.0.0.0:8020 -D Bob=http://0.0.0.0:8030 -D Faber=http://0.0.0.0:8050
```

## Using Allure with CI

The AATH is executed with varying configurations and Aries agent types at pre-determined intervals to find issues and track deltas between builds of these agents. You can find the Allure reports for these test runs at the following links.
If your build pipeline uses junit style test results for reporting purposes, the AATH supports this, as behave does. To use junit style report data, add the following to the behave.ini file, or create your own ini file to use with behave that includes the following:
```ini
[behave]
junit = true
junit_directory = ./junit-reports
```
Note that the above junit reports cannot be used in conjunction with Allure.

## References

- Behave formatters and reporters
- Allure Framework
- Allure for Python/Behave
This folder contains the Aries backchannels that have been added to the Aries Agent Test Harness, each in their own folder, plus some shared files that may be useful to specific backchannel implementations. As noted in the main repo readme, backchannels receive requests from the test harness and convert those requests into instructions for the component under test (CUT). Within the component backchannel folders there may be more than one Dockerfile, used to build different Test Agents sharing a single backchannel, perhaps for different versions of the CUT or different configurations.
"},{"location":"guide/aries-backchannels/#writing-a-new-backchannel","title":"Writing a new Backchannel","text":"If you are writing a backchannel using Python, you're in luck! Just use either the ACA-Py
or VCX
backchannels as a model. They sub-class from a common base class (in the python
folder), which implements the common backchannel features. The Python implementation is data driven, using the txt file in the data
folder.
If you are implementing from scratch, you need to implement a backchannel which:
Once you have the backchannel, you need to define one or more docker files to create docker images of Test Agents to deploy in an AATH run. To do that, you must create a Dockerfile that builds a Docker image for the Test Agent (TA), including the backchannel, the CUT and anything else needed to operate the TA. The resulting docker image must be able to be launched by the common `./manage` bash script so the new TA can be included in the standard test scenarios.
The test harness interacts with each backchannel using a small set of standard web services. Endpoints are here:
That's all of the endpoints your agent has to handle. Of course, your backchannel also has to be able to communicate with the CUT (the agent or agent framework being tested). Likely that means being able to generate requests to the CUT (mostly based on requests from the endpoints above) and monitor events from the CUT.
See the OpenAPI definition located here for an overview of all current topics and operations.

## Standard Backchannel Topics and Operations

Although the number of endpoints is small, the number of topic and operation parameters is much larger. That list of operations drives the effort in building and maintaining the backchannel. The list of operations to be supported can be found in this OpenAPI spec. It lists all of the possible `topic` values, the related `operations`, and information about each one, including such things as:
A rendered version of the OpenAPI spec can be found here. We recommend that in writing a backchannel, any Not Implemented commands and operations return an HTTP `501` result code ("Not Implemented").
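That recommendation can be sketched as a dispatch table in the backchannel. This is an illustrative fragment with made-up handler names and responses, not code from any existing backchannel:

```python
# Map (topic, operation) to a handler; anything absent is Not Implemented.
HANDLERS = {
    ("connection", "create-invitation"):
        lambda payload: (200, {"state": "invitation-sent"}),
    ("connection", "accept-invitation"):
        lambda payload: (200, {"state": "request-sent"}),
}

def handle_command(topic: str, operation: str, payload: dict):
    """Route a test-harness command, returning (http_status, body).

    Unrecognized topic/operation pairs get HTTP 501, so the test harness
    can distinguish 'not implemented' from a real failure.
    """
    handler = HANDLERS.get((topic, operation))
    if handler is None:
        return 501, {"error": f"Not Implemented: {topic}/{operation}"}
    return handler(payload)

status, body = handle_command("proof", "send-request", {})
```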
Support for testing new protocols will extend the OpenAPI spec with additional `topics` and related `operations`, adding to the workload of the backchannel maintainer.
The test harness interacts with each published backchannel API using the following common Python functions. Pretty simple, eh?
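Those harness-side helpers are essentially thin HTTP wrappers around the backchannel API. The sketch below imitates the idea with the standard library only; the real harness functions use async HTTP, and the endpoint path and function name here are illustrative assumptions, not the actual AATH code:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

def agent_backchannel_post(base_url, topic, operation, data=None):
    """POST a command to a backchannel; return (status, parsed JSON body)."""
    req = Request(
        f"{base_url}/agent/command/{topic}/{operation}",
        data=json.dumps(data or {}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.status, json.loads(resp.read())

class _StubBackchannel(BaseHTTPRequestHandler):
    """A stand-in backchannel that acknowledges every command."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        reply = json.dumps(
            {"path": self.path, "echo": json.loads(body or b"{}")}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep output quiet
        pass

# Exercise the helper against the stub on an ephemeral port.
server = ThreadingHTTPServer(("127.0.0.1", 0), _StubBackchannel)
threading.Thread(target=server.serve_forever, daemon=True).start()
status, body = agent_backchannel_post(
    f"http://127.0.0.1:{server.server_port}", "connection", "create-invitation"
)
server.shutdown()
```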
"},{"location":"guide/aries-backchannels/#docker-build-script","title":"Docker Build Script","text":"Each backchannel should provide one or more Docker scripts, each of which build a self-contained Docker image for the backchannel, the CUT and anything else needed to run the TA.
The following lists the requirements for building AATH compatible docker images:

- Each Dockerfile must be named `Dockerfile.<TA>`, for example `Dockerfile.acapy` or `Dockerfile.vcx`. The `./manage` script uses the `<TA>` to validate command line arguments, to tag the agent, and for invoking docker build and run operations.
- More than one Dockerfile may share a backchannel; an example is the `acapy` backchannel, where `Dockerfile.acapy` builds the latest released version of ACA-Py, while `Dockerfile.acapy-main` builds from the ACA-Py `main` branch.
- The resulting docker image is tagged with the `<TA>` name.
- The Dockerfiles must be in the `aries-backchannels` folder; the `./manage` script looks for the TA Dockerfiles in those folders.

See examples of this for aca-py (`Dockerfile.acapy`) and aries-vcx (`Dockerfile.vcx`).
## ./manage Script Integration

The `./manage` script builds images and runs those images as containers in test runs. This integration applies some constraints on the docker images used. Most of those constraints are documented in the previous section, but the following provides some additional context.
An image is built for each backchannel using the following command:
echo \"Building ${agent}-agent-backchannel ...\"\n docker build \\\n ${args} \\\n $(initDockerBuildArgs) \\\n -t \"${agent}-agent-backchannel\" \\\n -f \"${BACKCHANNEL_FOLDER}/Dockerfile.${agent}\" \"aries-backchannels/\"\n
where:

- `${agent}` is the name of the component under test (CUT),
- `$(initDockerBuildArgs)` picks up any HTTP_PROXY environment variables, and
- `${args}` are any extra arguments on the command line after standard options processing.
echo \"Starting Acme Agent ...\"\n docker run -d --rm --name acme_agent --expose 9020-9029 -p 9020-9029:9020-9029 -e \"DOCKERHOST=${DOCKERHOST}\" -e \"LEDGER_URL=http://${DOCKERHOST}:9000\" ${ACME_AGENT} -p 9020 -i false >/dev/null\n echo \"Starting Bob Agent ...\"\n docker run -d --rm --name bob_agent --expose 9030-9039 -p 9030-9039:9030-9039 -e \"DOCKERHOST=${DOCKERHOST}\" -e \"LEDGER_URL=http://${DOCKERHOST}:9000\" ${BOB_AGENT} -p 9030 -i false >/dev/null\n echo \"Starting Mallory Agent ...\"\n docker run -d --rm --name mallory_agent --expose 9040-9049 -p 9040-9049:9040-9049 -e \"DOCKERHOST=${DOCKERHOST}\" -e \"LEDGER_URL=http://${DOCKERHOST}:9000\" ${MALLORY_AGENT} -p 9040 -i false >/dev/null\n
Important things to note from the script snippet:
-expose
parameter), which are mapped to localhost-p
parameter, and not hard code them in the container.acapy
or aries-vcx
, etc.) is done earlier in the script by setting the ${ACME_AGENT}
etc. environment variablesDOCKERHOST
) and a url to the ledger genesis transactions (LEDGER_URL
)LEDGER_URL
assumed to be for a locally running instance of von-network
-p port
) and to use non-interactive mode (-i false
)Many of the BDD feature steps (and hence, backchannel requests) in the initial test cases map very closely to the ACA-Py \"admin\" API used by a controller to control an instance of an ACA-Py agent. This makes sense because both the ACA-Py admin API and the AATH test cases were defined based on the Aries RFCs. However, we are aware the alignment between the two might be too close and welcome recommendations for making the backchannel API more agnostic, easier for other CUTs. Likewise, as the test suite becomes ledger- and verifiable credential format-agnostic, we anticipate abstracting away the Indy-isms that are in the current test cases, making them test parameters versus explicit steps.
The Google Sheet list of operations has that same influence, referencing things like `connection_id`, `cred_exchange_id` and so on. As new backchannels are developed, we welcome feedback on how to make the list of operations easier for backchannel maintainers to work with.