
Testing

xfq opened this issue 2 years ago • 11 comments

We need to have a cross-vendor test suite for MiniApp specs. The tests can be used as proof that the MiniApp user agents have implemented the W3C specs. As a result, MiniApp developers can also write standard MiniApps with greater confidence.

If possible, we can design a framework to run tests automatically. If that's not possible, we need to write documentation for running tests manually.

xfq avatar Mar 28 '22 06:03 xfq

As an example, browsers use the web-platform-tests project, maintained by the Interop team, as the test suite for the web platform stack.

web-platform-tests.org contains an introduction to the test suite.

wpt.fyi is an archive of test results collected from a few web browsers on a regular basis.

xfq avatar Mar 28 '22 06:03 xfq

And here are some non-browser standard test examples:

  • Verifiable Credentials
  • ARIA in HTML
  • Publication Manifest and Audiobooks
  • JSON-LD
  • IMSC

xfq avatar Apr 12 '22 09:04 xfq

    I've been checking how similar specifications have dealt with tests and I like how the EPUB group does it. We cannot reuse the web-platform-tests directly because of the different nature of the user agents, so I propose to define something like the EPUB tests.

    It would be a dedicated GitHub repository (e.g., w3c/miniapp-tests/) (see EPUB tests repo), where we might include the tests. Basically, this would be the structure:

    • tests/ --> directory to include the unit tests (properly identified + metadata) used to generate the documentation (example).

      • Each test is identified by a homogeneous naming pattern.
      • The test itself is based on a template, a basic miniapp structure (example).
      • We can include a specific document to host the metadata that will define the test (i.e., description, author, etc.).
    • reports/ --> directory to host the results of the tests we've performed. The reports are in the form of JSON files, one per miniapp implementation. These JSON reports will be used to generate the documentation (example).

    The scripts auto-generate the documentation and the reports in a human-readable format, including information about the test suite. Everything is maintained in the repository.
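
    For illustration, the repository layout could look roughly like this (only tests/ and reports/ come from the proposal above; the other names are placeholders):

    miniapp-tests/
        tests/
            <test-id>/              one directory per test, identified by a homogeneous pattern
                ...                 basic miniapp structure used by the test (template-based)
                ...                 metadata document describing the test
        reports/
            <implementation>.json   one JSON report with results per miniapp implementation
        scripts/                    placeholder for the scripts that generate the documentation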

    Another important point is a clear methodology for how to contribute (i.e., prerequisites, an issue-based workflow, templates to use, etc.).

    Of course, we don't require an automated process (we could just use tables or a spreadsheet), but this could help maintain the tests in the mid to long term. Comments? If you like the approach, I can initiate the first proposal based on it.

    espinr avatar May 23 '22 08:05 espinr

    Thank you for the proposal. Sounds like a good plan to me.

    xfq avatar May 24 '22 05:05 xfq

    I've worked on a proof of concept to show and explain what this approach would be like. As mentioned in my previous comment, this methodology and system are based on the EPUB tests. The methodology and tooling are open to any contributor, so that anyone can create tests for specific parts of the specifications.

    All the maintenance would be done on GitHub, and the documentation updates are performed using GitHub Actions (CI); these workflows are already included in the repository example.

    The final result is something like this: https://espinr.github.io/miniapp-tests/

    How does it work?

    • Every mandatory point of the specification (i.e., marked as MUST) should be covered by at least one test case. We can also create tests for the recommended features (i.e., SHOULD), but the mandatory ones should be the top priority.
    • Each test case can be linked to a specific section of the specification (there is a data attribute we can include in the markup to define the relation). This will enable us to link to the tests from the specs, which is good for easy tracking.

    Every test case:

    • includes the structure and content of the MiniApp to be tested (e.g., a sample miniapp without manifest.json to test that it should fail);
    • includes a metadata file with the description of the test (e.g., title, expected result...);
    • is classified by scope or specification: Manifest, Lifecycle, Packaging, Content...
    • will be packed (as a ZIP package) using a script, so the resulting code can be loaded directly on MiniApp user agents (see the sketch after this list);
    • the MiniApp code and test definition will be in specific directories (named after their ID) under tests/.
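
    Purely as an illustration of the packaging step mentioned in the list above, here is a minimal sketch in Python (this is not the repository's actual script; the packages/ output directory is just a placeholder):

    import zipfile
    from pathlib import Path

    TESTS_DIR = Path("tests")     # one sub-directory per test, named after its ID
    OUT_DIR = Path("packages")    # placeholder output location, not part of the proposal

    def pack_test(test_dir: Path) -> Path:
        """Zip one test directory into a package that a MiniApp user agent can load."""
        OUT_DIR.mkdir(exist_ok=True)
        archive = OUT_DIR / f"{test_dir.name}.zip"
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in sorted(test_dir.rglob("*")):
                if path.is_file():
                    # Keep paths relative to the test root so the ZIP unpacks
                    # to the expected MiniApp structure (manifest.json at the top).
                    zf.write(path, path.relative_to(test_dir))
        return archive

    if __name__ == "__main__":
        for test_dir in sorted(TESTS_DIR.iterdir()):
            if test_dir.is_dir():
                print("packed", pack_test(test_dir))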

    For instance, a simple test for the MiniApp Manifest's window.fullscreen member:

    • Test: if the window's fullscreen member is set to true, the app must be shown in full-screen mode.
    • Identified as mnf-window-fullscreen-true; you can find the sample code (a basic miniapp with a manifest.json containing the proper configuration) at https://github.com/espinr/miniapp-tests/tree/main/tests/mnf-window-fullscreen-true
    • Is related to this algorithm in the specification.

    The definition of the test (see test.jsonld) would be something like this:

    {
        "@context": { },
        "dcterms:rights": "https://www.w3.org/Consortium/Legal/2015/copyright-software-and-document",
        "dcterms:rightsHolder": "https://www.w3.org",
        "@type": "earl:TestCase",
        "dc:coverage": "Manifest",
        "dc:creator": ["Martin Alvarez"],
        "dc:date": "2022-05-25",
        "dc:title": "Fullscreen enabled in manifest",
        "dc:identifier": "mnf-window-fullscreen-true",
        "dc:description": "The window's fullscreen member is set to true in the manifest. The app must be shown in fullscreen.",
        "dcterms:isReferencedBy": [
          "https://www.w3.org/TR/miniapp-manifest/#dfn-process-the-window-s-fullscreen-member"
        ],
        "dcterms:modified": "2022-05-25T00:00:00Z"
    }
    

    This definition uses JSON-LD but we can simplify it.

    After updating the repository, the GitHub CI action will generate the documentation, resulting in something like this: https://espinr.github.io/miniapp-tests/#sec-manifest-data . As you can see, I've only included examples for three sections: packaging, content, and manifest. The documentation organizes the content accordingly.

    In the generated documentation, each test case is represented as a row, linked to the code itself (including the metadata that describes the test case), the specification feature being tested, and the test results.
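
    As a rough sketch of what the documentation-generation step could do (illustrative only; the repository has its own scripts, and the table output below is just an example), assuming each test directory contains a test.jsonld like the one shown above:

    import json
    from pathlib import Path

    def load_tests(tests_dir: Path = Path("tests")):
        """Yield (test_id, metadata) for each test directory containing a test.jsonld."""
        for meta_file in sorted(tests_dir.glob("*/test.jsonld")):
            with meta_file.open(encoding="utf-8") as f:
                yield meta_file.parent.name, json.load(f)

    if __name__ == "__main__":
        # One row per test case: identifier, title, coverage, and spec references.
        print("| Identifier | Title | Coverage | Spec reference |")
        print("|---|---|---|---|")
        for test_id, meta in load_tests():
            refs = ", ".join(meta.get("dcterms:isReferencedBy", []))
            print(f"| {test_id} | {meta.get('dc:title', '')} "
                  f"| {meta.get('dc:coverage', '')} | {refs} |")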

    How to perform tests?

    Every test should be run on each MiniApp platform, one by one; for instance, running the miniapp from the previous example and noting whether the result is the expected one. Results can be pass, fail, or N/A.

    The testing results for each platform are specified in a simple JSON file like this:

    {
        "name": "Mini Program #2",
        "ref": "https://example.org/",
        "variant" : "Cross Platform",
        "tests": {
            "cnt-css-scoped-support": true,
            "mnf-window-fullscreen-default": true,
            "mnf-window-fullscreen-true": true,
            "mnf-window-orientation-default": true,
            "mnf-window-orientation-landscape": true,
            "mnf-window-orientation-portrait": true,
            "pkg-pages-same-filenames": false,
            "pkg-root-app-css-empty": true        
        }
    }
    

    This sample platform (called Mini Program #2) passes all the tests except one. The results, linked to the documentation, are represented visually in a table.
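
    As an illustration of how such per-platform reports could be aggregated into a summary (a sketch only, assuming the reports live in a reports/ directory and follow the JSON structure above; this is not the repository's actual tooling):

    import json
    from pathlib import Path

    def summarize(reports_dir: Path = Path("reports")) -> None:
        """Print a pass/fail summary for every per-platform report in reports_dir."""
        for report_file in sorted(reports_dir.glob("*.json")):
            with report_file.open(encoding="utf-8") as f:
                report = json.load(f)
            results = report.get("tests", {})
            passed = sum(1 for ok in results.values() if ok)
            print(f"{report['name']} ({report.get('variant', 'n/a')}): "
                  f"{passed}/{len(results)} tests passed")
            for test_id, ok in sorted(results.items()):
                if not ok:
                    print(f"  FAIL {test_id}")

    if __name__ == "__main__":
        summarize()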

    The testing results for two different miniapp vendors (see all the sample reports) are in this document https://espinr.github.io/miniapp-tests/results.html

    I'll be happy to present this idea at the next meeting. If you have suggestions, I'll be glad to update this proposal.

    Please note that this testing methodology is complementary to a MiniApp validator, as proposed in the previous meeting.

    EDIT: I've created an example that shows how to link to the tests from the specifications (see the links to the tests in this section of the packaging spec).

    espinr avatar Jun 22 '22 10:06 espinr

    This proposal was presented during the last CG and WG meetings. No objections were raised, so I suggest we move forward with this proposal so we can start testing as soon as possible and detect the weakest points in the specs.

    I think the best way is to organize all the miniapp tests under the same repository. We can use something like w3c/miniapp-tests/. In the documentation, we will be able to define a taxonomy to classify the tests by topic or specification (Content, Packaging, Lifecycle...). @xfq, do you think we could have this repository? Any other suggestions?

    espinr avatar Jul 04 '22 10:07 espinr

    Sounds good to me. Do you want me to create the repo?

    xfq avatar Jul 05 '22 05:07 xfq

    Sounds good to me. Do you want me to create the repo?

    Yes, please.

    espinr avatar Jul 05 '22 07:07 espinr

    Done: https://github.com/w3c/miniapp-tests

    xfq avatar Jul 06 '22 11:07 xfq

    Great, thank you!! We can leave this issue open to collect and discuss ideas for the MiniApp validator raised in previous meetings.

    espinr avatar Jul 08 '22 08:07 espinr

    In my discussions with some vendors (e.g., Alibaba, Baidu, Huawei), they expressed a preference for setting up a formal open-source project, supervised by a professional open-source community. Such a community can coordinate more resources to participate and can facilitate the organization, supervision, and management of testing, especially for developers, who would then have relevant test references to guide their practice.

    Therefore, at the last WG meeting, we discussed some proposals to set up this project. The following issues may need further discussion (some food for thought attached):

    • The community. Considering that most MiniApp vendors currently come from China, here are some open-source communities based in China that are also friendly to other countries:
    1. Mulan, an open-source community in China that is good at incubating small- and medium-sized projects and uses GitHub/Gitee to host code.
    2. Open Atom Foundation
    3. OW2
    • The project’s name

    1. Mustard, from the Buddhist sutra phrase "纳须弥于芥子" ("containing Mount Sumeru within a mustard seed"), whose meaning is similar to "Little Things Make Big Things Happen".
    2. MAPT (MiniApp Platform Test). Simplicity is beauty.

    • The members. Working group members, and all interested organizations and individuals.

    • The basic functions and milestones. Some functions are described in https://github.com/w3c/miniapp-tests.

    • Milestones: [TBD]

    • The License

    1. BSD?
    2. Apache 2.0?
    3. A more permissive license defined by some other organization?

    Others?

    MichaelWangzitao avatar Dec 07 '22 09:12 MichaelWangzitao