arkXtest User Guide

Overview

arkXtest is an automated test framework that supports both the JavaScript (JS) and TypeScript (TS) programming languages. It consists of JsUnit and UiTest.

JsUnit is a unit test framework. It provides basic APIs for writing test cases that verify system and application APIs, and it generates test reports.

UiTest is a UI test framework. It provides simple, easy-to-use APIs for finding and operating UI components, allowing you to develop automated test scripts driven by GUI operations.

This document describes the main functions and implementation principles of arkXtest, how to set up the environment, and how to write and run test scripts.

Implementation

arkXtest is divided into two parts: unit test framework and UI test framework.

As the backbone of arkXtest, the unit test framework identifies, schedules, and executes test scripts, and summarizes the test script execution results.

The UI test framework provides UiTest APIs for you to call in different test scenarios. The UI test scripts are executed on top of the unit test framework.

Unit Test Framework

Figure 1 Main functions of the unit test framework

Figure 2 Basic script process

NOTE

For details about the API in the unit test framework, see Function Definition.
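
As a concrete illustration of the basic script process in Figure 2, the following minimal hypium test skeleton shows the typical structure of a test suite. It is only a sketch: the suite name calcTest and the addition check are illustrative and not part of arkXtest.

import { describe, beforeAll, afterAll, it, expect } from '@ohos/hypium';

export default function calcTest() {
  // A test suite groups related test cases.
  describe('calcTest', () => {
    beforeAll(() => {
      // Runs once before all test cases in this suite, for example to prepare test data.
    });
    afterAll(() => {
      // Runs once after all test cases in this suite, for example to release resources.
    });
    // 'assertAddition' is the case name; 0 is the case attribute used for filtering.
    it('assertAddition', 0, () => {
      // Test code: invoke the logic under test, then set a checkpoint with an assertion.
      let result: number = 1 + 1;
      expect(result).assertEqual(2);
    });
  });
}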

UI Test Framework

Figure 3 Main functions of the UI test framework

Constraints

  • The features of the UI test framework are available only in OpenHarmony 3.1 Release and later versions.

  • The feature availability of the unit test framework varies by version. For details about the mappings between the features and versions, see arkXtest.

Preparing the Environment

Environment Requirements

Software: DevEco Studio 3.0 or later

Hardware: PC connected to an OpenHarmony device, such as the RK3568 development board

Setting Up the Environment

Download DevEco Studio and set it up as instructed on the official website.

Creating and Compiling a Test Script

Creating a Test Script

  1. Open DevEco Studio and create a project. The test scripts are stored in the ohos directory of the project.
  2. Open the .ets file of the module to be tested in the project directory. Place the cursor anywhere in the code, then right-click and choose Show Context Actions > Create Ohos Test, or press Alt+Enter and choose Create Ohos Test, to create a test class.

Writing a Unit Test Script

The unit test script must contain the following basic elements:

  1. Import of the dependencies so that the dependent test APIs can be used.

  2. Test code, which contains the test logic, such as invoking the APIs under test.

  3. Invocation of the assertion APIs to set checkpoints. A test script without checkpoints is considered incomplete.

The following sample code starts the ability under test and checks whether the page displayed on the device is the expected one.

import { describe, it, expect } from '@ohos/hypium';
import abilityDelegatorRegistry from '@ohos.app.ability.abilityDelegatorRegistry';
import { BusinessError } from '@ohos.base';
import UIAbility from '@ohos.app.ability.UIAbility';

const delegator: abilityDelegatorRegistry.AbilityDelegator = abilityDelegatorRegistry.getAbilityDelegator();

function sleep(time: number) {
  return new Promise<void>((resolve: Function) => setTimeout(resolve, time));
}

export default function abilityTest() {
  describe('ActsAbilityTest', () => {
    it('testUiExample', 0, async (done: Function) => {
      console.info('uitest: TestUiExample begin');
      // Start the ability under test.
      await delegator.executeShellCommand('aa start -b com.ohos.uitest -a EntryAbility').then((result: abilityDelegatorRegistry.ShellCmdResult) => {
        console.info('Uitest, start ability finished: ' + result);
      }).catch((err: BusinessError) => {
        console.info('Uitest, start ability failed: ' + err);
      });
      await sleep(1000);
      // Check the ability displayed on top.
      await delegator.getCurrentTopAbility().then((ability: UIAbility) => {
        console.info('get top ability');
        expect(ability.context.abilityInfo.name).assertEqual('EntryAbility');
      });
      done();
    });
  });
}

Writing a UI Test Script

The UI test is based on the unit test. A UI test script adds calls to the UiTest APIs on top of a unit test script to perform the corresponding test activities. In this example, the UI test script extends the preceding unit test script: it clicks a button on the started application page and checks whether the page changes as expected.

  1. Import the dependency.

import { Driver, ON } from '@ohos.UiTest';

  2. Write the test code.
import { describe, it, expect } from '@ohos/hypium';
import abilityDelegatorRegistry from '@ohos.app.ability.abilityDelegatorRegistry';
import { Driver, ON } from '@ohos.UiTest';
import { BusinessError } from '@ohos.base';
import UIAbility from '@ohos.app.ability.UIAbility';

const delegator: abilityDelegatorRegistry.AbilityDelegator = abilityDelegatorRegistry.getAbilityDelegator();

function sleep(time: number) {
  return new Promise<void>((resolve: Function) => setTimeout(resolve, time));
}

export default function abilityTest() {
  describe('ActsAbilityTest', () => {
    it('testUiExample', 0, async (done: Function) => {
      console.info('uitest: TestUiExample begin');
      // Start the ability under test.
      await delegator.executeShellCommand('aa start -b com.ohos.uitest -a EntryAbility').then((result: abilityDelegatorRegistry.ShellCmdResult) => {
        console.info('Uitest, start ability finished: ' + result);
      }).catch((err: BusinessError) => {
        console.info('Uitest, start ability failed: ' + err);
      });
      await sleep(1000);
      // Check the ability displayed on top.
      await delegator.getCurrentTopAbility().then((ability: UIAbility) => {
        console.info('get top ability');
        expect(ability.context.abilityInfo.name).assertEqual('EntryAbility');
      });
      // UI test code.
      // Initialize the driver.
      let driver = await Driver.create();
      await driver.delayMs(1000);
      // Find the button with the text 'Next'.
      let button = await driver.findComponent(ON.text('Next'));
      // Click the button.
      await button.click();
      await driver.delayMs(1000);
      // Check that the expected text is displayed after the click.
      await driver.assertComponentExist(ON.text('after click'));
      await driver.pressBack();
      done();
    });
  });
}
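
Beyond clicking a component and checking that it exists, a UI test case often needs other interactions. The following sketch shows a few additional Driver and Component operations (finding a component by ID, typing text, and swiping); the component ID 'inputBox', the input text, and the swipe coordinates are assumptions for illustration and must be adapted to the page under test. The function would be called from within an it() case.

import { Driver, ON } from '@ohos.UiTest';

async function moreUiOperations(): Promise<void> {
  // Initialize the driver.
  let driver = await Driver.create();
  await driver.delayMs(1000);
  // Find a component by its ID and type text into it (assumes an input field with ID 'inputBox').
  let input = await driver.findComponent(ON.id('inputBox'));
  await input.inputText('hello');
  // Swipe from (100, 800) to (100, 200) to scroll the page.
  await driver.swipe(100, 800, 100, 200);
  // Check that the expected text is displayed.
  await driver.assertComponentExist(ON.text('hello'));
}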

Running the Test Script

In DevEco Studio

You can run a test script in DevEco Studio in any of the following modes:

  1. Test package level: All test cases in the test package are executed.

  2. Test suite level: All test cases defined in the describe method are executed.

  3. Test method level: The specified it method, that is, a single test case, is executed.

Viewing the Test Result

After the test is complete, you can view the test result in DevEco Studio.

Viewing the Test Case Coverage

After the test is complete, you can view the test case coverage.

In the CLI

Install the test package of the application on the test device, and run aa test commands with different execution control keywords in the CLI.

NOTE

Before running commands in the CLI, make sure hdc-related environment variables have been configured.

The table below lists the keywords in aa test commands.

| Keyword | Abbreviation | Description | Example |
| ------- | ------------ | ----------- | ------- |
| --bundleName | -b | Application bundle name. | -b com.test.example |
| --packageName | -p | Application module name, applicable to applications developed in the FA model. | -p com.test.example.entry |
| --moduleName | -m | Application module name, applicable to applications developed in the stage model. | -m entry |
| NA | -s | <key, value> pair. | -s unittest OpenHarmonyTestRunner |

The framework supports multiple test case execution modes, which are triggered by the key-value pair following the -s keyword. The table below lists the available keys and values.

| Key | Description | Value | Example |
| --- | ----------- | ----- | ------- |
| unittest | OpenHarmonyTestRunner object used for test case execution. | OpenHarmonyTestRunner or custom runner name | -s unittest OpenHarmonyTestRunner |
| class | Test suite or test case to be executed. | {describeName}#{itName}, {describeName} | -s class attributeTest#testAttributeIt |
| notClass | Test suite or test case that does not need to be executed. | {describeName}#{itName}, {describeName} | -s notClass attributeTest#testAttributeIt |
| itName | Test case to be executed. | {itName} | -s itName testAttributeIt |
| timeout | Timeout interval for executing a test case. | Positive integer (unit: ms). Default: 5000 | -s timeout 15000 |
| breakOnError | Whether to enable break-on-error mode. When this mode is enabled, test execution exits if an assertion failure or error occurs. | true, false (default) | -s breakOnError true |
| random | Whether to execute test cases in random order. | true, false (default) | -s random true |
| testType | Type of the test cases to be executed. | function, performance, power, reliability, security, global, compatibility, user, standard, safety, resilience | -s testType function |
| level | Level of the test cases to be executed. | 0, 1, 2, 3, 4 | -s level 0 |
| size | Size of the test cases to be executed. | small, medium, large | -s size small |
| stress | Number of times that the test cases are executed. | Positive integer | -s stress 1000 |

Running Commands

  1. Open the CLI.
  2. Run the aa test commands.

Example 1: Execute all test cases.

 hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner

Example 2: Execute the test cases in the specified test suites. Separate multiple test suites with commas (,).

  hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s class s1,s2

Example 3: Execute the specified test cases in the specified test suites. Separate multiple test cases with commas (,).

  hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s class testStop#stop_1,testStop1#stop_0

Example 4: Execute all test cases except the specified test suites or test cases. Separate multiple entries with commas (,).

  hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s notClass testStop

Example 5: Execute the specified test cases. Separate multiple test cases with commas (,).

  hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s itName stop_0

Example 6: Set the timeout interval for executing a test case.

 hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s timeout 15000

Example 7: Enable break-on-error mode.

 hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s breakOnError true

Example 8: Execute test cases of the specified type.

 hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s testType function

Example 9: Execute test cases at the specified level.

 hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s level 0

Example 10: Execute test cases of the specified size.

 hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s size small

Example 11: Execute test cases for a specified number of times.

 hdc shell aa test -b xxx -p xxx -s unittest OpenHarmonyTestRunner -s stress 1000

Viewing the Test Result

  • During test execution in the CLI, log information similar to the following is displayed:
OHOS_REPORT_STATUS: class=testStop
OHOS_REPORT_STATUS: current=1
OHOS_REPORT_STATUS: id=JS
OHOS_REPORT_STATUS: numtests=447
OHOS_REPORT_STATUS: stream=
OHOS_REPORT_STATUS: test=stop_0
OHOS_REPORT_STATUS_CODE: 1

OHOS_REPORT_STATUS: class=testStop
OHOS_REPORT_STATUS: current=1
OHOS_REPORT_STATUS: id=JS
OHOS_REPORT_STATUS: numtests=447
OHOS_REPORT_STATUS: stream=
OHOS_REPORT_STATUS: test=stop_0
OHOS_REPORT_STATUS_CODE: 0
OHOS_REPORT_STATUS: consuming=4

| Log Field | Description |
| --------- | ----------- |
| OHOS_REPORT_SUM | Total number of test cases in the current test suite. |
| OHOS_REPORT_STATUS: class | Name of the test suite that is being executed. |
| OHOS_REPORT_STATUS: id | Case execution language. The default value is JS. |
| OHOS_REPORT_STATUS: numtests | Total number of test cases in the test package. |
| OHOS_REPORT_STATUS: stream | Error information of the current test case. |
| OHOS_REPORT_STATUS: test | Name of the current test case. |
| OHOS_REPORT_STATUS_CODE | Execution result of the current test case. 0: pass; 1: error; 2: fail. |
| OHOS_REPORT_STATUS: consuming | Time spent executing the current test case, in milliseconds. |

  • After the commands are executed, log information similar to the following is displayed:
OHOS_REPORT_RESULT: stream=Tests run: 447, Failure: 0, Error: 1, Pass: 201, Ignore: 245
OHOS_REPORT_CODE: 0

OHOS_REPORT_RESULT: breakOnError model, Stopping whole test suite if one specific test case failed or error
OHOS_REPORT_STATUS: taskconsuming=16029

| Log Field | Description |
| --------- | ----------- |
| run | Total number of test cases in the current test package. |
| Failure | Number of failed test cases. |
| Error | Number of test cases whose execution encountered errors. |
| Pass | Number of passed test cases. |
| Ignore | Number of test cases that were not executed. |
| taskconsuming | Total time spent executing the test cases, in milliseconds. |

If an error occurs in break-on-error mode, check the Ignore count and the interruption information in the log.

Recording User Operations

Using the Recording Feature

Run the following command to record the operations performed on the current page to /data/local/tmp/layout/record.csv. To stop recording, press Ctrl+C.

 hdc shell uitest uiRecord record

Viewing Recording Data

You can view the recording data in either of the following ways.

Reading and Printing Recording Data

 hdc shell uitest uiRecord read

Exporting the record.csv File

hdc file recv /data/local/tmp/layout/record.csv D:\tool  # D:\tool indicates the local save path, which can be customized.
  • The following describes the fields in the recording data:

{
  "ABILITY": "com.ohos.launcher.MainAbility", // Foreground application page.
  "BUNDLE": "com.ohos.launcher", // Application.
  "CENTER_X": "", // X-coordinate of the center of the pinch gesture.
  "CENTER_Y": "", // Y-coordinate of the center of the pinch gesture.
  "EVENT_TYPE": "pointer",
  "LENGTH": "0", // Total length.
  "OP_TYPE": "click", // Event type. Currently, click, double-click, long-press, drag, pinch, swipe, and fling are supported.
  "VELO": "0.000000", // Hands-off velocity.
  "direction.X": "0.000000", // Movement along the x-axis.
  "direction.Y": "0.000000", // Movement along the y-axis.
  "duration": 33885000.0, // Gesture duration.
  "fingerList": [{
    "LENGTH": "0", // Total length.
    "MAX_VEL": "40000", // Maximum velocity.
    "VELO": "0.000000", // Hands-off velocity.
    "W1_BOUNDS": "{"bottom":361,"left":37,"right":118,"top":280}", // Bounds of the starting component.
    "W1_HIER": "ROOT,3,0,0,0,0,0,0,0,0,0,0,5,0,0,0,0,0,0,0,0", // Hierarchy of the starting component.
    "W1_ID": "", // ID of the starting component.
    "W1_Text": "", // Text of the starting component.
    "W1_Type": "Image", // Type of the starting component.
    "W2_BOUNDS": "{"bottom":361,"left":37,"right":118,"top":280}", // Bounds of the ending component.
    "W2_HIER": "ROOT,3,0,0,0,0,0,0,0,0,5,0,0,0,0,0,0,0", // Hierarchy of the ending component.
    "W2_ID": "", // ID of the ending component.
    "W2_Text": "", // Text of the ending component.
    "W2_Type": "Image", // Type of the ending component.
    "X2_POSI": "47", // X-coordinate of the ending point.
    "X_POSI": "47", // X-coordinate of the starting point.
    "Y2_POSI": "301", // Y-coordinate of the ending point.
    "Y_POSI": "301", // Y-coordinate of the starting point.
    "direction.X": "0.000000", // Movement along the x-axis.
    "direction.Y": "0.000000" // Movement along the y-axis.
  }],
  "fingerNumber": "1" // Number of fingers.
}

FAQs

FAQs About Unit Test Cases

The logs in the test case are printed after the test case result

Problem

The logs added to the test case are displayed after the test case execution, rather than during the test case execution.

Possible Causes

More than one asynchronous API is called in the test case. In principle, the logs in a test case should be printed before the test case execution is complete.

Solution

If more than one asynchronous API is called, you are advised to encapsulate each API call in a promise and await it, as shown in the sketch below.
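
A minimal sketch of this pattern follows. asyncApi is a hypothetical callback-style API used only to illustrate the wrapping; once each call is wrapped in a promise and awaited, the calls run one after another and their logs appear in execution order.

import { describe, it, expect } from '@ohos/hypium';

// Hypothetical callback-style API, shown only to illustrate the wrapping pattern.
function asyncApi(callback: (value: number) => void): void {
  setTimeout(() => callback(1), 100);
}

// Wrap the callback-style API in a promise so that it can be awaited.
function asyncApiPromise(): Promise<number> {
  return new Promise<number>((resolve: (value: number) => void) => {
    asyncApi((value: number) => resolve(value));
  });
}

export default function promiseModeTest() {
  describe('promiseModeTest', () => {
    it('logsInOrder', 0, async (done: Function) => {
      let first = await asyncApiPromise();   // The first call completes before the next line runs.
      console.info('first call result: ' + first);
      let second = await asyncApiPromise();  // The second call starts only after the first finishes.
      console.info('second call result: ' + second);
      expect(first + second).assertEqual(2);
      done();
    });
  });
}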

Error “fail to start ability” is reported during test case execution

Problem

When a test case is executed, the console returns the error message “fail to start ability”.

Possible Causes

An error occurs during the packaging of the test package, and the test framework dependency file is not included in the test package.

Solution

Check whether the test package contains the OpenHarmonyTestRunner.abc file. If the file does not exist, rebuild and repackage the test package, and then run the test again.

Test case execution timeout

Problem

After the test case execution is complete, the console displays the error message “execute time XXms”, indicating that the case execution times out.

Possible Causes

  1. The test case is executed through an asynchronous interface, but the done function is not executed during the execution. As a result, the test case execution does not end until it times out.

  2. The time taken for API invocation is longer than the timeout interval set for test case execution.

  3. Test assertion fails, and a failure exception is thrown. As a result, the test case execution does not end until it times out.

Solution

  1. Check the code logic of the test case to ensure that the done function is executed even if the assertion fails, as shown in the sketch after this list.

  2. Modify the case execution timeout settings under Run/Debug Configurations in DevEco Studio.

  3. Check the code logic and assertion result of the test case and make sure that the assertion is passed.
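
For item 1, the following minimal sketch (using an illustrative asynchronous call) shows how placing done() in a finally block guarantees that the case ends even when the assertion throws:

import { describe, it, expect } from '@ohos/hypium';

export default function doneAlwaysRunsTest() {
  describe('doneAlwaysRunsTest', () => {
    it('assertWithDone', 0, async (done: Function) => {
      try {
        // Replace with the real asynchronous call and checkpoint of your test case.
        let value: number = await Promise.resolve(1);
        expect(value).assertEqual(1);
      } finally {
        // done() runs even if the assertion above throws, so the case ends instead of timing out.
        done();
      }
    });
  });
}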

FAQs About UI Test Cases

The failure log contains “Get windows failed/GetRootByWindow failed”

Problem

The UI test case fails to be executed. The HiLog file contains the error message “Get windows failed/GetRootByWindow failed.”

Possible Causes

The ArkUI feature is disabled. As a result, the component tree information is not generated on the test page.

Solution

Run the following command, restart the device, and execute the test case again:

hdc shell param set persist.ace.testmode.enabled 1

The failure log contains “uitest-api does not allow calling concurrently”

Problem

The UI test case fails to be executed. The HiLog file contains the error message “uitest-api does not allow calling concurrently.”

Possible Causes

  1. In the test case, the await operator is not added to the asynchronous API provided by the UI test framework.

  2. The UI test case is executed in multiple processes. As a result, multiple UI test processes are started. The framework does not support multi-process invoking.

Solution

  1. Check the case implementation and add the await operator to every asynchronous UiTest API call, as shown in the sketch after this list.
  2. Do not execute UI test cases in multiple processes.
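
For item 1, the following minimal sketch (reusing the 'Next' button from the earlier example) contrasts a missing await with the correct usage:

import { describe, it } from '@ohos/hypium';
import { Driver, ON } from '@ohos.UiTest';

export default function awaitUsageTest() {
  describe('awaitUsageTest', () => {
    it('awaitEveryUiTestCall', 0, async (done: Function) => {
      let driver = await Driver.create();
      // Incorrect: without await, the next UiTest call would run while this one is still pending
      // and trigger "uitest-api does not allow calling concurrently".
      // driver.findComponent(ON.text('Next'));

      // Correct: await every asynchronous UiTest API call so that the calls execute one at a time.
      let button = await driver.findComponent(ON.text('Next'));
      await button.click();
      done();
    });
  });
}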

The failure log contains “does not exist on current UI! Check if the UI has changed after you got the widget object”

Problem

The UI test case fails to be executed. The HiLog file contains the error message “does not exist on current UI! Check if the UI has changed after you got the widget object.”

Possible Causes

After the target component is found in the test case code, the device UI changes. As a result, the previously found component is lost and the simulated operation cannot be performed.

Solution

Run the UI test case again.
