For each major class of crash reports, we define a dedicated collector. This is a simple way to encapsulate all related logic in a single module.
When we run crash_reporter, depending on its mode, it simply iterates through all registered collectors.
The crash_collector.cc code isn't a real collector; it's the base class that holds logic common to all collectors. Similarly, user_collector_base.cc isn't a real collector; it's the base class that holds logic common to all user-related collectors.
The core_collector program is just a utility tool, not a collector in the sense used here. It probably should have used a different naming convention.
Each collector is designed to generate and queue crash reports. They get uploaded periodically by crash_sender.
## Boot Collectors

These are the collectors that run once at boot. They are triggered via the crash-boot-collect.conf init service. By design, they do not block the boot of the system; they run in the background as a non-critical service.
### ec_collector

This collects EC (Chrome OS Embedded Controller) failures.

The program name is `embedded-controller` and might be referred to as `eccrash`.

*   The EC driver exposes state via debugfs under `/sys/kernel/debug/cros_ec/`.
*   At boot, a report is generated if `/sys/kernel/debug/cros_ec/panicinfo` is created.

### kernel_collector

This collects kernel crashes that caused the system to reboot.
It is built on top of pstore and doesn't support any other data source. We currently support the `ramoops` and `efi` backend drivers.

The program name is `kernel` and might be referred to as `kcrash`.
*   For `ramoops`, CrOS firmware (e.g. coreboot) dedicates a chunk of RAM.
*   For `efi`, the EFI firmware provides data in its own NVRAM space.
*   The kernel persists its console output (what `printk` writes to and what `dmesg` reads from); at boot, pstore exposes any saved records under `/sys/fs/pstore/`.
*   We read `dmesg-ramoops-*` & `dmesg-efi-*`, and generate a report for each one.
*   If we find a report, we also attach the last bits of `console-ramoops-*` and hope the events just before the reset are enough to triage the problem.

### unclean_shutdown_collector

Collects unclean shutdown events.
*   At boot, crash_reporter creates a file on the stateful partition (`/var/lib/crash_reporter/pending_clean_shutdown`) to indicate that the system hasn't gone through a clean shutdown.
*   When the system shuts down cleanly, crash_reporter is run with the `--clean_shutdown` flag. The stateful partition file is removed to indicate the system has gone through a clean shutdown.

## Runtime Collectors

Here are the collectors that are triggered on demand while the OS is running. They are invoked either by the kernel or by other programs.
### arc_collector

Collects ARC++ crashes from programs inside the ARC++ container. This handles Android NDK and Java crashes. It does not handle crashes from ARC++ support daemons that run outside of the container, as those are collected like any other userland crash via the main user_collector.
arc_collector shares a lot of code with user_collector so it can overlay ARC++-specific processing details.
### chrome_collector

Collects Chrome browser crashes. The browser hands us the minidump directly, so we only attach system metadata and queue it.
crash_reporter will be called by the kernel for Chrome crashes like any other user_collector crash, but we actually ignore these. Chrome is supposed to catch the crash in its parent process and handle it itself; it links in Google Breakpad directly to do so. Chrome is better placed to know which memory regions to skip (e.g. large heaps, file memory maps, or graphics buffers), as well as what metadata to attach (e.g. the last URL visited; whether the process was a renderer, browser, plugin, or other kind of process; `chrome://flags`; etc.). Otherwise, Chrome coredumps can easily consume 3GB+ of memory!
This does mean the system may miss crashes if Chrome's handling itself is buggy.
### udev_collector

This collects crash/error events triggered by udev events. It is invoked via udev rules and relies heavily on callbacks in the crash_reporter_logs.conf file.
The program name is `udev`.
These reports are largely device specific as they try to capture whatever state the device/firmware needs to triage.
TODO: Add devcoredump details if we ever enable them.
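For illustration, a udev rule that hands a device error event to crash_reporter looks roughly like this. The match keys, device, and flag syntax below are paraphrased examples, not copied from the actual rules file:

```
# Hypothetical rule: on a drm error event, invoke crash_reporter in udev
# mode, passing the matched keys so crash_reporter_logs.conf can select
# the right log-collection command.
ACTION=="change", SUBSYSTEM=="drm", KERNEL=="card0", ENV{ERROR}=="1", \
  RUN+="/sbin/crash_reporter --udev=KERNEL=card0:SUBSYSTEM=drm:ACTION=change"
```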
### user_collector

Collects all userland crashes where the kernel dumps core. Basically, any program that segfaults, aborts, violates a seccomp policy, or is otherwise unceremoniously killed.
*   We convert the core file into a minidump (`.dmp`). This process involves reading the core file contents to determine the number of threads, the register sets of all threads, and the contents of the threads' stacks. This is fundamental to our out-of-process design.
*   If the crashed process was running as `chronos`, we enqueue its crash to `/home/user/<user_hash>/crash/`, which is on the user-specific cryptohome when a user is logged in, since the report might contain user PII. If the crashed process was running as any other user, we enqueue the crash in `/var/spool/crash`.
*   Relevant log output is attached as a `.log` file.

## Anomaly Detectors

Here are the collectors that anomaly_collector runs on syslog. That service is spawned early during boot via anomaly-collector.conf.
*   State is kept under `/run/anomaly-collector/` based on the specific collector.

### kernel_warning_collector

Collects WARN() messages from anywhere in the depths of the kernel. Could be drivers, subsystems, or core logic.
The program name is `kernel-warning` or `kernel-xxx-warning` (where `xxx` is a common subsystem/area) and might be referred to as `kcrash`.
*   Whenever the kernel hits `WARN()` or `WARN_ON(...)` or any similar helper, it generates a standard log message including stack traces.
*   By default, `kernel-warning` is used everywhere, but the locations of drivers in the backtrace are used to further refine the name.

### selinux_violation_collector

Collects SELinux policy violations.
The program name is `selinux-violation`.

*   Key fields are extracted from each violation report (e.g. `name=` and `scontext=`) and used to create the magic signature.

### service_failure_collector

Collects warnings from the init system (e.g. Upstart) for services that failed to start up or exited unexpectedly at runtime. This catches syntax errors in init scripts and daemons that simply exit non-zero but don't otherwise trigger an abort or crash.
The program name is `service-failure`.

*   Only lines prefixed with `init:` are processed.
*   The signature is built from messages of the form `<daemon> <job phase> process (<pid>) terminated with status <status>`.