# Chrome OS Machine Learning Service

## Summary

The Machine Learning (ML) Service provides a common runtime for evaluating machine learning models on device. The service wraps the TensorFlow Lite runtime and provides infrastructure for deployment of trained models. The TFLite runtime runs in a sandboxed process. Chromium communicates with ML Service via a Mojo interface.

## How to use ML Service

You first need to make your trained model available to ML Service; you can then load and use it from Chromium via the client library provided at //chromeos/services/machine_learning/public/cpp/. See this doc for more detailed instructions; the sketch below shows the rough shape of the client-side flow.
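
A minimal sketch of loading a builtin model from the Chromium side. The exact class and method names used here (`ServiceConnection::LoadBuiltinModel`, the `BuiltinModelId` value, include paths) are assumptions that vary across Chromium versions, so treat this as illustrative rather than copy-paste ready:

```cpp
// Illustrative only: names, signatures, and include paths are assumptions
// and may differ from the client library in your Chromium checkout.
#include <utility>

#include "base/bind.h"  // "base/functional/bind.h" in newer checkouts.
#include "chromeos/services/machine_learning/public/cpp/service_connection.h"
#include "chromeos/services/machine_learning/public/mojom/machine_learning_service.mojom.h"
#include "mojo/public/cpp/bindings/remote.h"

namespace ml_mojom = chromeos::machine_learning::mojom;

void LoadModelExample() {
  // Describe which builtin model to load; TEST_MODEL is a placeholder id.
  auto spec = ml_mojom::BuiltinModelSpec::New();
  spec->id = ml_mojom::BuiltinModelId::TEST_MODEL;

  // NB: in real code, keep |model| alive (e.g. as a class member) for as
  // long as you want to run inference against it.
  mojo::Remote<ml_mojom::Model> model;
  chromeos::machine_learning::ServiceConnection::GetInstance()
      ->LoadBuiltinModel(
          std::move(spec), model.BindNewPipeAndPassReceiver(),
          base::BindOnce([](ml_mojom::LoadModelResult result) {
            // On LoadModelResult::OK, create a GraphExecutor from the model
            // and call Execute() on it to run inference.
          }));
}
```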

Optional: Later, if you find a need for it, you can add your model to the ML Service internals page.

Note: The sandboxed process hosting TFLite models is currently shared between all users of ML Service. If this isn't acceptable from a security perspective for your model, follow this bug about switching ML Service to having a separate sandboxed process per loaded model.

## Metrics

The following metrics are currently recorded by the daemon process in order to understand its resource costs in the wild:

*   MachineLearningService.MojoConnectionEvent: Success/failure of the D-Bus->Mojo bootstrap.
*   MachineLearningService.TotalMemoryKb: Total (shared+unshared) memory footprint, sampled every 5 minutes.
*   MachineLearningService.PeakTotalMemoryKb: Peak value of MachineLearningService.TotalMemoryKb per 24-hour period. Daemon code can also call ml::Metrics::UpdateCumulativeMetricsNow() at any time to take a peak-memory observation, in order to catch short-lived memory usage spikes.
*   MachineLearningService.CpuUsageMilliPercent: Fraction of total CPU resources consumed by the daemon, sampled every 5 minutes, in units of milli-percent (1/100,000).
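
As an illustration, daemon-side metrics like these are typically reported through the Chrome OS metrics library (libmetrics). The histogram bucket parameters below are made-up placeholders, not the values used in metrics.cc:

```cpp
// Hypothetical sketch: reporting a memory sample to UMA via libmetrics.
// The histogram parameters (min/max/nbuckets) are invented for illustration.
#include <metrics/metrics_library.h>

void ReportTotalMemoryKb(int total_memory_kb) {
  MetricsLibrary metrics;
  // SendToUMA(name, sample, min, max, nbuckets) records the sample into an
  // exponential histogram with the given bucket layout.
  metrics.SendToUMA("MachineLearningService.TotalMemoryKb", total_memory_kb,
                    /*min=*/1, /*max=*/4000000, /*nbuckets=*/100);
}
```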

Additional metrics are recorded in order to understand the resource costs of each request to a particular model:

*   MachineLearningService.|MetricsModelName|.|request|.Event: OK/ErrorType of the request.
*   MachineLearningService.|MetricsModelName|.|request|.TotalMemoryDeltaKb: Total (shared+unshared) memory delta caused by the request.
*   MachineLearningService.|MetricsModelName|.|request|.CpuTimeMicrosec: CPU time usage of the request, scaled to one CPU core, i.e. the units are CPU-core*microsec (10 CPU cores for 1 microsec = 1 CPU core for 10 microsec = recorded value of 10).

|MetricsModelName| is specified in the model's metadata for builtin models, and by the client in |FlatBufferModelSpec| for flatbuffer models. The |request| above can be one of the following:

*   LoadModelResult
*   CreateGraphExecutorResult
*   ExecuteResult (model inference)

The request name “LoadModelResult” is used whether the model is loaded by |LoadBuiltinModel| or by |LoadFlatBufferModel|. This is unambiguous because any particular model is loaded by either |LoadBuiltinModel| or |LoadFlatBufferModel|, never both.
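
As a concrete illustration of how these pieces compose into full histogram names, here is a hypothetical helper, loosely modeled on request_metrics.{h,cc} in this directory; the real implementation is shaped differently:

```cpp
// Hypothetical helper: composes the per-request histogram names described
// above. The real implementation lives in request_metrics.{h,cc}.
#include <string>

std::string RequestHistogramName(const std::string& metrics_model_name,
                                 const std::string& request,
                                 const std::string& suffix) {
  return "MachineLearningService." + metrics_model_name + "." + request +
         "." + suffix;
}

// For a model whose |MetricsModelName| is "MyModel" (hypothetical),
// RequestHistogramName("MyModel", "ExecuteResult", "CpuTimeMicrosec")
// yields "MachineLearningService.MyModel.ExecuteResult.CpuTimeMicrosec".
```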

There is also an enum histogram, “MachineLearningService.LoadModelResult”, that records a generic model specification error event when a |LoadBuiltinModel| or |LoadFlatBufferModel| request names an unknown model.

## Original design docs

Note that aspects of the design may have evolved since the original design docs were written.