firmware_TouchMTB: add the metric for CountTrackingIDValidator

The metrics of some validators will be hidden by default unless
there are failures. Such validators include CountTrackingIDValidator,
CountPacketsValidator, and PinchValidator.

Display the metric for CountTrackingIDValidator if there are failures.
Otherwise, hide the metric unless the "-m a" flag is specified. The
metric shows the percentage of incorrect test cases over the total
test cases.
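The metric above can be sketched as follows; this is a minimal,
hypothetical illustration (function and variable names are not from the
actual validator code), assuming a test case is "incorrect" when its
actual tracking-ID count differs from the expected count:

```python
# Hypothetical sketch of the hidden-by-default metric: the percentage
# of incorrect test cases over the total test cases.
def pct_incorrect(cases):
    """cases: list of (actual, expected) tracking-ID counts per test case."""
    if not cases:
        return 0.0
    incorrect = sum(1 for actual, expected in cases if actual != expected)
    return 100.0 * incorrect / len(cases)

# Example: 1 incorrect case out of 21 total cases gives ~4.76%,
# matching the sample output below.
cases = [(1, 1)] * 20 + [(5, 1)]
print(round(pct_incorrect(cases), 2))  # 4.76
```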

The printing format with failures looks like:

Metrics statistics by validator: fw_12  description
------------------------------------------------------------------------------
 CountTrackingIDValidator
  pct of incorrect cases (%)   :  4.76  pct of incorrect cases over total cases
 ...

The detailed raw metric values are printed if the "-m f" flag is
specified. Each raw metric value pair represents
(actual tracking IDs, expected tracking IDs). The format looks like:

 CountTrackingIDValidator
  pct of incorrect cases (%)
   ** Note: value below represents (actual tracking IDs, expected tracking IDs)
    drag_edge_thumb.left_to_right (20130506_032458)        : (5, 1)
    one_finger_physical_click.bottom_left (20130506_032458): (1, 1)
    ...

  Note that in the drag_edge_thumb gesture above, the expected number
of tracking IDs (i.e., finger IDs) is 1. However, 5 tracking IDs were
actually observed due to a firmware problem.

  For the one_finger_physical_click gesture, exactly 1 tracking ID is
observed, matching the expected count of 1.
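The two examples above can be checked with a tiny predicate; this is a
hypothetical sketch (the helper name is illustrative, not the actual
validator API), assuming a raw metric pair is incorrect exactly when
the two counts differ:

```python
# A raw metric value pair is (actual tracking IDs, expected tracking IDs).
# The pair counts as an incorrect case when the two counts differ.
def is_incorrect(pair):
    actual, expected = pair
    return actual != expected

print(is_incorrect((5, 1)))  # True: drag_edge_thumb saw 5 IDs, expected 1
print(is_incorrect((1, 1)))  # False: one_finger_physical_click matched
```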

  We also unify the "-m" (i.e., metrics) flag in the summary.sh script.
  $ ./summary.sh -m p     # show the statistics of the primary metrics
  $ ./summary.sh -m a     # show the statistics of all metrics, where
                          # all metrics = the primary metrics + the hidden ones
  $ ./summary.sh -m f     # show the full raw metric values on a per-file basis
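The three "-m" modes listed above could be dispatched as in this
hypothetical sketch (the mapping and function names are illustrative,
not taken from the actual summary code):

```python
# Hypothetical mapping of the unified "-m" flag values to display modes,
# mirroring the summary.sh usage above.
METRIC_FLAGS = {
    'p': 'primary',  # statistics of the primary metrics only
    'a': 'all',      # primary metrics + the hidden ones
    'f': 'full',     # full raw metric values on a per-file basis
}

def parse_metric_flag(value):
    """Validate an "-m" flag value and return its display mode."""
    if value not in METRIC_FLAGS:
        raise ValueError('unsupported -m value: %s' % value)
    return METRIC_FLAGS[value]

print(parse_metric_flag('a'))  # all
```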

BUG=chromium:235753
TEST=Run the commands below on a Chromebook to generate a summary report.
$ cd /usr/local/autotest/tests/firmware_TouchMTB

Replay the logs.
$ tools/machine_replay.sh -b tests/logs/lumpy -s

Generate the summary report with the metrics statistics.
$ ./summary.sh -d /tmp -m a

Generate the summary report with the full raw metric values on a per-file basis.
$ ./summary.sh -d /tmp -m f

Also make sure all unit tests pass.
$ python tests/run_all_unittests.py

Change-Id: Ie385bfd54bac0a45211fc2a678ad57d9472e4da2
Reviewed-on: https://gerrit.chromium.org/gerrit/62827
Commit-Queue: Joseph Shyh-In Hwang <josephsih@chromium.org>
Reviewed-by: Joseph Shyh-In Hwang <josephsih@chromium.org>
Tested-by: Joseph Shyh-In Hwang <josephsih@chromium.org>
19 files changed