commit    5f21f71aa09c83569c0bba316cc4dff726a57ec4
author    Kevin Lund <kglund@google.com>          Wed Aug 18 20:38:42 2021 +0000
committer Commit Bot <commit-bot@chromium.org>    Thu Aug 19 01:37:58 2021 +0000
tree      5bf31ec3cc6c82f79c3f9989198c00a749d89998
parent    2b936a245d76d7279888c4eda5c1b4a6dba95249
Revert "autotest: Three way Perf setup"

This reverts commit 320f70e6f8be50c312100404022b0876f4e7a2ee.

Reason for revert: This change caused some devices to end up in a bad state
with ip routes that interfered with other tests. After this revert, the test
should be updated to call the ip teardown code even when the test fails.

Original change's description:
> autotest: Three way Perf setup
>
> Change the WiFi_Perf tests to utilize the pcap device as a Netperf
> endpoint in throughput testing. This setup utilizes some hard-coded
> values for ethernet interface names and IP addresses for the LAN ports
> on the pcap and router Gale devices. This is necessary because the
> router device is not configured with the daemons to handle the
> assignment automatically. This also means that some IP routes need to be
> set up manually on the devices to allow traffic to move as needed.
>
> Specific arguments for the LAN interface device names and IP addresses
> may be provided at the command line. If not provided, default values
> will be used instead.
>
> The throughput results after making this change are improved as
> expected, but not completely as high as expected. There is still a
> bottleneck on the router CPU in some cases. In other cases, we are
> measuring the total transmitted data from the source device, rather than
> measuring the received data on the target device. This results in some
> test cases that have artificially inflated throughput values. These
> problems could all be resolved by transitioning from Netperf to Iperf.
>
> BUG=b:172211699
> TEST=test_that $DUT suite:wifi_perf
> test_that --fast $DUT network_WiFi_Perf.vht80 --args='router_lan_ip_addr="192.168.1.49"'
>
> Change-Id: I73bd1e955611547611187a4d0a3fbcf14e0063e0
> Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/third_party/autotest/+/3064788
> Tested-by: Kevin Lund <kglund@google.com>
> Commit-Queue: Kevin Lund <kglund@google.com>
> Reviewed-by: Arowa Suliman <arowa@chromium.org>

BUG=b:172211699, b:197104627
TEST=Perf test fails as expected, but does not leave DUT in bad state.

Change-Id: I798eacb41f0371c697aa55f9756fdccdaa19253b
Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/third_party/autotest/+/3104725
Commit-Queue: Kevin Lund <kglund@google.com>
Tested-by: Kevin Lund <kglund@google.com>
Reviewed-by: Arowa Suliman <arowa@chromium.org>
Autotest is a framework for fully automated testing. It was originally designed to test the Linux kernel, and expanded by the Chrome OS team to validate complete system images of Chrome OS and Android.
Autotest is composed of a number of modules that help you run standalone tests or set up a fully automated test grid, depending on your needs. A non-exhaustive list of functionality:
A body of code to run tests on the device under test. In this setup, test logic executes on the machine being tested, and results are written to files for later collection from a development machine or lab infrastructure.
A body of code to run tests against a remote device under test. In this setup, test logic executes on a development machine or piece of lab infrastructure, and the device under test is controlled remotely via SSH, adb, or some combination of the two.
Developer tools to execute one or more tests. test_that for Chrome OS and test_droid for Android allow developers to run tests against a device connected to their development machine at their desk. These tools are written so that the same test logic that runs in the lab also runs at the desk, reducing the number of configurations under which tests are run.
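As an illustrative sketch of desk-based testing with test_that: the device address below is a placeholder, and the test name and --args value are taken from the TEST= line of the commit message earlier on this page, not a recommendation of what to run.

```shell
# Address of the device under test (DUT) -- placeholder value.
DUT_IP=192.168.1.2

# Run a single Chrome OS test by name against the DUT at your desk.
# --fast skips some setup steps to shorten iteration time.
test_that --fast "${DUT_IP}" network_WiFi_Perf.vht80

# Extra test-specific arguments can be passed through with --args,
# as shown in this repository's commit history:
test_that "${DUT_IP}" network_WiFi_Perf.vht80 \
    --args='router_lan_ip_addr="192.168.1.49"'
```

These commands require a provisioned device reachable from the development machine, so they cannot be run standalone.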
Lab infrastructure to automate the running of tests. This infrastructure is capable of managing and running tests against thousands of devices in various lab environments. This includes code for both synchronous and asynchronous scheduling of tests. Tests are run against this hardware daily to validate every build of Chrome OS.
Infrastructure to set up miniature replicas of a full lab. A full lab entails a certain amount of administrative work which isn't appropriate for a work group interested in automated tests against a small set of devices. Since this scale is common during device bringup, a special setup, called Moblab, allows a natural progression from desk -> mini lab -> full lab.
See the guides to test_that and test_droid.
See the best practices guide, existing tests, and comments in the code.
git clone https://chromium.googlesource.com/chromiumos/third_party/autotest
See the coding style guide for guidance on submitting patches.
You need to run utils/build_externals.py to set up the dependencies for pre-upload hook tests.
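Putting the checkout and setup steps above together, a minimal sequence looks like the following (the destination directory name is just git's default for this URL):

```shell
# Fetch the autotest source tree.
git clone https://chromium.googlesource.com/chromiumos/third_party/autotest
cd autotest

# Install the external dependencies required by the pre-upload hook tests.
# build_externals.py lives in the utils/ directory of the checkout.
utils/build_externals.py
```

The clone step needs network access to chromium.googlesource.com, so this is a sketch of the workflow rather than something runnable in isolation.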