The Chromium OS “metrics” package contains utilities for client-side user metric collection.
When Chromium is installed, it takes care of aggregating and uploading the metrics to the UMA server.
When Chromium is not installed (e.g. an embedded/headless build) and the metrics_uploader USE flag is set,
metrics_daemon will aggregate and upload the metrics itself.
libmetrics implements the basic C and C++ API for metrics collection. All metrics collection is funneled through this library. The easiest and recommended way for a client-side module to collect user metrics is to link libmetrics and use its APIs to send metrics to Chromium for transport to UMA. In order to use the library in a module, you need to do the following:
1. Add a dependency (DEPEND and RDEPEND) on chromeos-base/metrics to the module's ebuild.
2. Link the module with libmetrics (for example, by passing -lmetrics to the module's link command). Both libmetrics.so and libmetrics.a are built and installed into the sysroot libdir (e.g. $SYSROOT/usr/lib/). By default -lmetrics links against libmetrics.so, which is preferred.
3. To access the metrics library API in the module, include the <metrics/metrics_library.h> header file. The file is installed in $SYSROOT/usr/include/ when the metrics library is built and installed.
The API is documented in metrics_library.h. Before using the API methods, a MetricsLibrary object needs to be constructed and initialized through its Init method.
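As a minimal sketch, assuming the MetricsLibrary API as documented in metrics_library.h (the histogram names and parameter values below are illustrative, not real histograms):

```cpp
// Sketch of reporting samples through libmetrics.
// Build by linking with -lmetrics, as described in the steps above.
#include <metrics/metrics_library.h>

int main() {
  MetricsLibrary metrics;
  metrics.Init();  // Must be called before any Send* method.

  // Timing sample of 250 ms into an exponential histogram with
  // range 1..10000 ms and 50 buckets. The name is illustrative.
  metrics.SendToUMA("Platform.ExampleLatency", 250, 1, 10000, 50);

  // Enumerated sample: value 2 out of the possible values 0..4.
  metrics.SendEnumToUMA("Platform.ExampleEnum", 2, 5);
  return 0;
}
```

The library itself checks whether reporting is allowed (see AreMetricsEnabled), so callers normally just send samples unconditionally.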
For more information on the C API, see c_metrics_library.h.
Samples are sent to Chromium only if the
/home/chronos/Consent To Send Stats file exists or the metrics are declared enabled in the policy file (see the AreMetricsEnabled API method).
On the target platform, shortly after a sample is sent, it should be visible in Chromium through chrome://histograms.
The library includes a CumulativeMetrics class which can be used for histograms whose samples represent accumulation of quantities on the same device across a period of time: for instance, how much time was spent playing music on each device and each day of use. Please see the CumulativeMetrics section below.
metrics_client is a command-line utility for sending histogram samples and user actions. It is installed under /usr/bin on the target platform and uses libmetrics. It is typically used for generating metrics from shell scripts.
For usage information and command-line options, run
metrics_client on the target platform or look for
Usage: in metrics_client.cc.
metrics_daemon is a daemon that runs in the background on the target platform and is intended for passive or ongoing metrics collection, or metrics collection requiring input from other modules. For example, it listens to D-Bus signals related to the user session and screen-saver states to determine whether the user is actively using the device, and generates the corresponding data. The metrics daemon also uses libmetrics.
The recommended way to generate metrics data from a module is to link and use libmetrics directly. However, the module can instead send signals to, or communicate in some other way with, the metrics daemon; in that case the metrics daemon needs to monitor for the relevant events and take appropriate action, for example aggregating data and sending the histogram samples.
The CumulativeMetrics class in libmetrics helps keep track of quantities across boot sessions, so that the quantities can be accumulated over stretches of time (for instance, a day or a week) without concern for intervening reboots or version changes, and then reported as samples. For this purpose, some persistent state (i.e. the partial accumulations) is maintained as files on the device. These “backing files” are typically placed in a per-daemon directory under /var/lib (for instance /var/lib/<daemon-name>/metrics). The metrics daemon itself is an exception, with its backing files in /var/lib/metrics.
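A rough sketch of the pattern follows. The constructor parameter list, callback shape, and method names here are assumptions based on metrics/cumulative_metrics.h; check that header before use. The histogram name, backing-file path, and periods are illustrative:

```cpp
// Sketch: accumulate a per-day quantity across reboots with
// CumulativeMetrics. Signatures are assumptions; see
// metrics/cumulative_metrics.h for the real API.
#include <base/files/file_path.h>
#include <base/functional/bind.h>
#include <base/time/time.h>
#include <metrics/cumulative_metrics.h>

namespace {

// Runs every update period while the daemon is up: add newly
// elapsed usage time (computation of the delta omitted here).
void Update(chromeos_metrics::CumulativeMetrics* cm) {
  cm->Add("MusicTime", /* ms elapsed since last update */ 0);
}

// Runs once per accumulation period (here, one day): report the
// accumulated total, e.g. via MetricsLibrary::SendToUMA, after
// which the backing state is reset for the next period.
void ReportDaily(chromeos_metrics::CumulativeMetrics* cm) {
  // metrics_lib.SendToUMA("Platform.DailyMusicTime",
  //                       cm->Get("MusicTime"), ...);
}

}  // namespace

void SetUpCumulativeMetrics() {
  static chromeos_metrics::CumulativeMetrics cm(
      base::FilePath("/var/lib/my-daemon/metrics"),  // backing files
      {"MusicTime"},                                 // tracked quantities
      base::Minutes(5),                              // update period
      base::BindRepeating(&Update),
      base::Days(1),                                 // accumulation period
      base::BindRepeating(&ReportDaily));
}
```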
The memd subdirectory contains a daemon that collects data at high frequency during episodes of heavy memory pressure.
See the Chromium histogram guidelines for more information on choosing the name, type, and other parameters of new histograms. The rest of this README is a super-short synopsis of that document, and with some luck it won't be too out of date.
Use the TrackerArea.MetricName format.
If the histogram data is visible in
chrome://histograms, it will be sent by an official Chromium build to UMA, assuming the user has opted into metrics collection. To make the histogram visible on “chromedashboard”, the histogram description XML file needs to be updated (steps 2 and 3 after following the “Details on how to add your own histograms” link under the Histograms tab). Include the string “Chrome OS” in the histogram description so that it's easier to distinguish Chromium OS specific metrics from general Chromium histograms.
The UMA server logs and keeps the collected field data even if the metric's name has not been added to the histogram XML. However, the dashboard histogram for that metric will only show field data from the histogram XML update date onward; it will not include data for older dates. If past data needs to be displayed, manual server-side intervention is required. In other words, one should assume that field data collection starts only after the histogram XML has been updated.
You should set the values to a range that covers the vast majority of samples that would appear in the field. Values below |min| are collected in the “underflow bucket” and values above |max| end up in the “overflow bucket”. The reported mean of the data is precise, i.e. it does not depend on range and number of buckets.
You should allocate as many buckets as necessary to perform proper analysis on the collected data. Most data is fairly noisy: 50 buckets are plenty, 100 buckets are probably overkill. Also consider that the memory allocated in Chromium for each histogram is proportional to the number of buckets, so don't waste it.
Enumeration histograms should really be used only for sampling enumerated events and, in some cases, percentages. Normally, you should use a regular histogram with exponential bucket layout that provides higher resolution at the low end of the range and lower resolution at the high end. Regular histograms are generally used for collecting performance data (e.g., timing, memory usage, power) as well as aggregated event counts.