
Conversation

@lucasssvaz
Member

Description of Change

This pull request adds robust support for multi-device tests in the CI build and run scripts. The changes introduce new functions for detecting, building, and running multi-device tests, and ensure that these tests are handled separately from regular single-device tests. The updates also improve argument handling, error reporting, and test selection logic for both build and run workflows.

Multi-device test support:

  • Added is_multi_device_test functions to both .github/scripts/tests_build.sh and .github/scripts/tests_run.sh to detect tests configured for multiple devices via multi_device in ci.yml.
  • Implemented build_multi_device_sketch and build_multi_device_test functions in tests_build.sh to build all required device sketches for multi-device tests, using custom build directories and handling errors per device.
  • Added run_multi_device_test function in tests_run.sh to run multi-device tests with proper argument construction, port assignment, and result aggregation, including retries and platform checks.
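As a rough illustration of the detection step, the sketch below shows how such a check might look, assuming ci.yml marks multi-device suites with a top-level `multi_device` key. The function name comes from the description above; the actual parsing in the PR may differ (e.g. it may use yq instead of grep).

```shell
#!/usr/bin/env bash
# Hypothetical sketch: print 1 if the given test directory contains a
# ci.yml with a top-level "multi_device" key, 0 otherwise.
is_multi_device_test() {
    local test_dir=$1
    if [ -f "$test_dir/ci.yml" ] && grep -qE '^multi_device:' "$test_dir/ci.yml"; then
        echo 1
    else
        echo 0
    fi
}

# Demo against a throwaway directory layout
tmp=$(mktemp -d)
mkdir -p "$tmp/wifi_ap" "$tmp/uart"
printf 'multi_device:\n  device0: ap\n  device1: client\n' > "$tmp/wifi_ap/ci.yml"
printf 'targets:\n  esp32: true\n' > "$tmp/uart/ci.yml"
echo "wifi_ap: $(is_multi_device_test "$tmp/wifi_ap")"   # multi-device suite
echo "uart: $(is_multi_device_test "$tmp/uart")"         # regular suite
rm -rf "$tmp"
```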

Build and run workflow changes:

  • Updated both scripts to search for and handle multi-device test directories when a sketch name is provided, falling back to regular sketch lookup if not found.
  • Modified chunked and regular build/run logic to process all multi-device tests first, then proceed with regular single-device tests, ensuring proper error handling and reporting for each.

General improvements and bug fixes:

  • Improved argument parsing and error messages for missing targets and ports, including default port assignment for multi-device scenarios.
  • Refactored test invocation to consistently use arrays for pytest command construction, improving quoting and argument handling.
  • Updated count_sketches in sketch_utils.sh to skip sketches that are part of multi-device tests, preventing duplicate builds.
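For the array-based pytest invocation mentioned above, a minimal sketch looks like the following. The paths and flags here are illustrative examples, not the scripts' actual arguments; the point is that expanding `"${pytest_args[@]}"` preserves arguments containing spaces as single words.

```shell
#!/usr/bin/env bash
# Build the pytest command as an array so arguments with embedded spaces
# survive word-splitting intact. Flag values are illustrative only.
sketch_dir="/tmp/tests/wifi ap"          # note the embedded space
report_file="result with space.xml"

pytest_args=("pytest" "$sketch_dir" "--junit-xml=$report_file")
pytest_args+=("-k" "not flaky")

# A real script would now run: "${pytest_args[@]}"
# Here we just show that each element expands as exactly one word:
printf '%s\n' "${pytest_args[@]}"
```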

Test Scenarios

Tested locally

Related links

Requires espressif/pytest-embedded#393

@lucasssvaz lucasssvaz requested a review from me-no-dev November 24, 2025 23:55
@lucasssvaz lucasssvaz self-assigned this Nov 24, 2025
@lucasssvaz lucasssvaz added the Type: CI & Testing and Status: Blocked upstream 🛑 labels Nov 24, 2025
@github-actions
Contributor

github-actions bot commented Nov 24, 2025

Messages
📖 This PR seems to be quite large (total lines of code: 1578), you might consider splitting it into smaller PRs

👋 Hello lucasssvaz, we appreciate your contribution to this project!


📘 Please review the project's Contributions Guide for key guidelines on code, documentation, testing, and more.

🖊️ Please also make sure you have read and signed the Contributor License Agreement for this project.

Click to see more instructions ...


This automated output is generated by the PR linter DangerJS, which checks if your Pull Request meets the project's requirements and helps you fix potential issues.

DangerJS is triggered with each push event to a Pull Request and modifies the contents of this comment.

Please consider the following:
- Danger mainly focuses on the PR structure and formatting and can't understand the meaning behind your code or changes.
- Danger is not a substitute for human code reviews; it's still important to request a code review from your colleagues.
- Addressing info messages (📖) is strongly recommended; they're less critical but valuable.
- To manually retry these Danger checks, please navigate to the Actions tab and re-run the last Danger workflow.

Review and merge process you can expect ...


We do welcome contributions in the form of bug reports, feature requests and pull requests.

1. An internal issue is created for the PR and assigned to the relevant engineer.
2. They review the PR and either approve it or ask you for changes or clarifications.
3. Once the GitHub PR is approved we do the final review, collect approvals from core owners and make sure all the automated tests are passing.
- At this point we may make some adjustments to the proposed change, or extend it by adding tests or documentation.
4. If the change is approved and passes the tests it is merged into the default branch.

Generated by 🚫 dangerJS against 38b9808

@github-actions
Contributor

github-actions bot commented Nov 25, 2025

Test Results

0 tests   0 ✅  0s ⏱️
0 suites  0 💤
0 files    0 ❌

Results for commit 38b9808.

♻️ This comment has been updated with latest results.

@github-actions
Contributor

github-actions bot commented Nov 25, 2025

Memory usage test (comparing PR against master branch)

The table below shows the summary of memory usage change (decrease - increase) in bytes and percentage for each target.

| Target  | FLASH DEC/INC [bytes] | FLASH DEC/INC [%] | RAM DEC/INC [bytes] | RAM DEC/INC [%] |
|---------|-----------------------|-------------------|---------------------|-----------------|
| ESP32C5 | 0 / 0                 | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
| ESP32P4 | 0 / 0                 | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
| ESP32S3 | 💚 -16 / 0            | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
| ESP32S2 | 0 / 0                 | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
| ESP32C3 | 0 / 0                 | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
| ESP32C6 | 0 / 0                 | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
| ESP32H2 | 0 / 0                 | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
| ESP32   | 0 / 0                 | 0.00 / 0.00       | 0 / 0               | 0.00 / 0.00     |
Click to expand the detailed deltas report [usage change in BYTES]
Each cell shows the FLASH / RAM delta in bytes; `--` means the example is not built for that target.

| Example | ESP32C5 | ESP32P4 | ESP32S3 | ESP32S2 | ESP32C3 | ESP32C6 | ESP32H2 | ESP32 |
|---|---|---|---|---|---|---|---|---|
| libraries/BLE/examples/Server | 0 / 0 | 0 / 0 | 0 / 0 | -- | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 |
| libraries/Insights/examples/MinimalDiagnostics | 0 / 0 | -- | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | -- | 0 / 0 |
| libraries/NetworkClientSecure/examples/WiFiClientSecure | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | -- | 0 / 0 |
| libraries/ESP32/examples/Camera/CameraWebServer | -- | -- | 0 / 0 | 0 / 0 | -- | -- | -- | 0 / 0 |
| ESP32/examples/Camera/CameraWebServer (2) | -- | -- | 💚 -16 / 0 | 0 / 0 | -- | -- | -- | 0 / 0 |
| ESP32/examples/Camera/CameraWebServer (3) | -- | -- | 0 / 0 | -- | -- | -- | -- | -- |

@lucasssvaz lucasssvaz added the CI Failure Expected label Nov 25, 2025
@me-no-dev me-no-dev requested a review from Copilot November 26, 2025 10:13
Copilot finished reviewing on behalf of me-no-dev November 26, 2025 10:16
Contributor

Copilot AI left a comment


Pull request overview

This pull request adds comprehensive support for multi-device testing in the CI infrastructure, enabling tests that require communication between multiple ESP32 devices (e.g., WiFi AP/client, BLE server/client). The changes introduce detection, building, and running capabilities for multi-device tests while maintaining backward compatibility with single-device tests.

Key Changes

  • Added multi-device test infrastructure with functions to detect, build, and run tests requiring multiple devices
  • Implemented two example multi-device test suites: WiFi AP/client and BLE server/client communication tests
  • Enhanced documentation with comprehensive guide for creating, building, and running multi-device tests

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 18 comments.

Show a summary per file
| File | Description |
|---|---|
| tests/validation/wifi_ap/test_wifi_ap.py | Python test orchestrating WiFi AP and client communication |
| tests/validation/wifi_ap/ap/ap.ino | Arduino sketch for WiFi Access Point device |
| tests/validation/wifi_ap/client/client.ino | Arduino sketch for WiFi client device |
| tests/validation/wifi_ap/ci.yml | Configuration defining multi-device setup and platform support |
| tests/validation/ble/test_ble.py | Python test coordinating BLE server/client pairing and secure communication |
| tests/validation/ble/server/server.ino | BLE server sketch with secure characteristic support |
| tests/validation/ble/client/client.ino | BLE client sketch with service discovery and authentication |
| tests/validation/ble/ci.yml | BLE test configuration with multi-device specification |
| tests/conftest.py | Added helper functions for IP validation and random string generation |
| .github/scripts/tests_run.sh | Enhanced with multi-device test detection, port management, and execution logic |
| .github/scripts/tests_build.sh | Updated with multi-device build functions and custom build directory handling |
| .github/scripts/sketch_utils.sh | Modified sketch counting to skip multi-device test sketches |
| docs/en/contributing.rst | Comprehensive documentation for creating and running multi-device tests |


Testing and CI
--------------

In our repository, we have a Continuous Integration (CI) system that runs tests and fixes on every Pull Request. This system will run the tests

Copilot AI Nov 26, 2025


Missing word: "In our repository, we have a Continuous Integration (CI) system that runs tests and fixes on every Pull Request." should be "...runs tests and applies fixes..." or "...runs tests and code formatting fixes..." to clarify what "fixes" means in this context.

Suggested change
In our repository, we have a Continuous Integration (CI) system that runs tests and fixes on every Pull Request. This system will run the tests
In our repository, we have a Continuous Integration (CI) system that runs tests and code formatting fixes on every Pull Request. This system will run the tests

############
""""""""""""

The ``ci.yml`` file is used to specify how the test suite and sketches will handled by the CI system. It can contain the following fields:

Copilot AI Nov 26, 2025


Grammar error: "will handled" should be "will be handled".

Suggested change
The ``ci.yml`` file is used to specify how the test suite and sketches will handled by the CI system. It can contain the following fields:
The ``ci.yml`` file is used to specify how the test suite and sketches will be handled by the CI system. It can contain the following fields:

The ``ci.yml`` file is used to specify how the test suite and sketches will handled by the CI system. It can contain the following fields:

* ``multi_device``: A dictionary that matches the device names to their sketch directories. Each device must have its own sketch folder.
There are two devices in a multi-device test suite: ``device0`` and ``device1``. This field is only valid for multi-device test suites.

Copilot AI Nov 26, 2025


The documentation states "There are two devices in a multi-device test suite: device0 and device1", but this is overly restrictive. The code in .github/scripts/tests_run.sh (line 109-116) shows error handling for device counts != 2, suggesting the infrastructure was designed to potentially support more than 2 devices in the future. The documentation should clarify that currently only 2-device tests are supported, rather than implying this is a hard limitation of the system design.

Suggested change
There are two devices in a multi-device test suite: ``device0`` and ``device1``. This field is only valid for multi-device test suites.
Currently, only two devices are supported in a multi-device test suite: ``device0`` and ``device1``. This is a current limitation; the infrastructure may allow for more devices in the future. This field is only valid for multi-device test suites.

Comment on lines +176 to +241
    target=""
    remaining_args=()
    while [ -n "$1" ]; do
        case $1 in
        -t )
            shift
            target=$1
            remaining_args+=("-t" "$target")
            ;;
        * )
            remaining_args+=("$1")
            ;;
        esac
        shift
    done

    if [ -z "$target" ]; then
        echo "ERROR: Target (-t) is required for chunk builds"
        exit 1
    fi

    # Find and build all multi-device tests in the test folder
    multi_device_error=0
    if [ -d "$test_folder" ]; then
        for test_dir in "$test_folder"/*; do
            if [ -d "$test_dir" ] && [ "$(is_multi_device_test "$test_dir")" -eq 1 ]; then
                build_multi_device_test "$test_dir" "$target" "${args[@]}" "${remaining_args[@]}"
                result=$?
                if [ $result -ne 0 ]; then
                    multi_device_error=$result
                fi
            fi
        done
    fi

    # Now build regular (non-multi-device) tests using chunk_build
    BUILD_CMD="${SCRIPTS_DIR}/sketch_utils.sh chunk_build"
    args+=("-p" "$test_folder" "-i" "0" "-m" "1")
    args+=("-p" "$test_folder")
    ${BUILD_CMD} "${args[@]}" "${remaining_args[@]}"
    regular_error=$?

    # Return error if either multi-device or regular builds failed
    if [ $multi_device_error -ne 0 ]; then
        exit $multi_device_error
    fi
    exit $regular_error
else
    BUILD_CMD="${SCRIPTS_DIR}/sketch_utils.sh build"
    args+=("-s" "$test_folder/$sketch")
fi

# Check if this is a multi-device test
test_dir="$test_folder/$sketch"
if [ -d "$test_dir" ] && [ "$(is_multi_device_test "$test_dir")" -eq 1 ]; then
    # Extract target from remaining arguments
    target=""
    remaining_args=()
    while [ -n "$1" ]; do
        case $1 in
        -t )
            shift
            target=$1
            remaining_args+=("-t" "$target")
            ;;
        * )
            remaining_args+=("$1")
            ;;
        esac
        shift
    done

Copilot AI Nov 26, 2025


The code duplicates the target extraction logic (lines 176-190 and lines 226-241). This duplication could be avoided by extracting this logic into a separate function that returns the target and remaining arguments, improving maintainability.
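One way to factor this out, sketched with a hypothetical helper name (not code from this PR), is to have a single function populate `target` and `remaining_args` for both call sites:

```shell
#!/usr/bin/env bash
# Hypothetical helper that splits "-t <target>" out of an argument list.
# Callers read the results from the globals `target` and `remaining_args`,
# mirroring the variables the duplicated blocks already use.
parse_target_args() {
    target=""
    remaining_args=()
    while [ -n "$1" ]; do
        case $1 in
        -t )
            shift
            target=$1
            remaining_args+=("-t" "$target")
            ;;
        * )
            remaining_args+=("$1")
            ;;
        esac
        shift
    done
}

parse_target_args -e -t esp32s3 --some-flag value
echo "target=$target"
echo "remaining=${remaining_args[*]}"
```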

Comment on lines +429 to +446
# For multi-device tests, we need to find the test directory, not the device sketch directory
# First, try to find a test directory with this name
if [ -d "tests/validation/$sketch" ] && [ -f "tests/validation/$sketch/ci.yml" ]; then
    test_type="validation"
    test_folder="$PWD/tests/$test_type"
elif [ -d "tests/performance/$sketch" ] && [ -f "tests/performance/$sketch/ci.yml" ]; then
    test_type="performance"
    test_folder="$PWD/tests/$test_type"
else
    # Fall back to finding by .ino file (for regular tests)
    tmp_sketch_path=$(find tests -name "$sketch".ino | head -1)
    if [ -z "$tmp_sketch_path" ]; then
        echo "ERROR: Sketch $sketch not found"
        exit 1
    fi
    test_type=$(basename "$(dirname "$(dirname "$tmp_sketch_path")")")
    test_folder="$PWD/tests/$test_type"
fi

Copilot AI Nov 26, 2025


The code duplicates the logic to detect if a sketch is a multi-device test (lines 429-446 are nearly identical to lines 145-162 in tests_build.sh). This logic should be extracted into a shared utility function to avoid duplication and ensure consistency between build and run scripts.
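A shared utility along these lines could serve both scripts. The function name and demo layout below are hypothetical, assuming the `tests/<type>/<name>/ci.yml` (multi-device) and `tests/<type>/<name>/<name>.ino` (regular) conventions described in this PR:

```shell
#!/usr/bin/env bash
# Hypothetical shared lookup: given a test name, print its test type
# ("validation" or "performance"), preferring directories that carry a
# ci.yml (multi-device suites) over a plain .ino search.
find_test_type() {
    local sketch=$1 base=$2
    local t ino
    for t in validation performance; do
        if [ -d "$base/tests/$t/$sketch" ] && [ -f "$base/tests/$t/$sketch/ci.yml" ]; then
            echo "$t"
            return 0
        fi
    done
    # Fall back to locating <name>.ino; the type is two directories up.
    ino=$(find "$base/tests" -name "$sketch.ino" 2>/dev/null | head -1)
    if [ -n "$ino" ]; then
        basename "$(dirname "$(dirname "$ino")")"
        return 0
    fi
    return 1
}

# Demo against a throwaway tree
tmp=$(mktemp -d)
mkdir -p "$tmp/tests/validation/wifi_ap" "$tmp/tests/performance/fps"
touch "$tmp/tests/validation/wifi_ap/ci.yml" "$tmp/tests/performance/fps/fps.ino"
find_test_type wifi_ap "$tmp"   # found via ci.yml
find_test_type fps "$tmp"       # found via .ino fallback
rm -rf "$tmp"
```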

# Wait for server to be ready
LOGGER.info("Waiting for server to start advertising...")
server.expect_exact("[SERVER] Characteristics configured", timeout=10)
m = server.expect(rf"\[SERVER\] Advertising started with name: {re.escape(server_name)}", timeout=10)

Copilot AI Nov 26, 2025


This assignment to 'm' is unnecessary as it is redefined before this value is used.

Suggested change
m = server.expect(rf"\[SERVER\] Advertising started with name: {re.escape(server_name)}", timeout=10)
server.expect(rf"\[SERVER\] Advertising started with name: {re.escape(server_name)}", timeout=10)

@@ -0,0 +1,156 @@
import re
import pytest

Copilot AI Nov 26, 2025


Import of 'pytest' is not used.

Suggested change
import pytest

@@ -0,0 +1,156 @@
import re
import pytest
import random

Copilot AI Nov 26, 2025


Import of 'random' is not used.

Suggested change
import random

import re
import pytest
import random
import string

Copilot AI Nov 26, 2025


Import of 'string' is not used.

Suggested change
import string

@@ -0,0 +1,80 @@
import re
import pytest

Copilot AI Nov 26, 2025


Import of 'pytest' is not used.

Suggested change
import pytest


Labels

  • CI Failure Expected (For PRs where CI failure is expected)
  • Status: Blocked upstream 🛑 (PR is waiting on upstream changes to be merged first)
  • Type: CI & Testing (Related to continuous integration, automated testing, or test infrastructure)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants