
1 Darwin Phones: the Evolution of Sensing and Inference on Mobile Phones. Emiliano Miluzzo*, Cory T. Cornelius*, Ashwin Ramaswamy*, Tanzeem Choudhury*, Zhigang Liu**, Andrew T. Campbell* (*CS Department, Dartmouth College; **Nokia Research Center, Palo Alto)

3 evolution of sensing and inference on mobile phones

4 PR time

10 ok… so what??

11 density

12 sensing: accelerometer, digital compass, microphone, WiFi/Bluetooth, GPS, light sensor/camera, …

13 sensing: accelerometer, digital compass, microphone, WiFi/Bluetooth, GPS, light sensor/camera, gyroscope, air quality/pollution sensor, …

14 programmability: free SDK, multitasking

15 hardware: computation capability is increasing (600 MHz CPU, up to 1 GB application memory)

16 application distribution

17 application distribution: deploy apps onto millions of phones in the blink of an eye

18 application distribution: deploy apps onto millions of phones in the blink of an eye; collect huge amounts of data for research purposes

19 cloud infrastructure: backend support

21 cloud infrastructure: backend support; we want to push intelligence to the phone

22 cloud infrastructure: backend support; preserve the phone user experience (battery lifetime, ability to make calls, etc.)

23 cloud infrastructure (backend support); phone: sensing, run machine learning algorithms locally (feature extraction + inference)

24 cloud infrastructure (backend support): run machine learning algorithms (learning); phone: sensing, run machine learning algorithms locally (feature extraction + inference)

25 cloud infrastructure (backend support): run machine learning algorithms (learning), store and crunch big data (fusion); phone: sensing, run machine learning algorithms locally (feature extraction + inference)

26 cloud infrastructure (backend support): run machine learning algorithms (learning), store and crunch big data (fusion); phone: sensing, run machine learning algorithms locally (feature extraction + inference); 3 to 5 years from now our phones will be as powerful as a …

29 cloud infrastructure (backend support): run machine learning algorithms (learning), store and crunch big data (fusion); phone: sensing, run machine learning algorithms locally (feature extraction + learning + inference); 3 to 5 years from now our phones will be as powerful as a …

30 sensing, programmability, cloud infrastructure

31 sensing, programmability, cloud infrastructure … ??

32 societal scale sensing, global mobile sensor network: reality mining using mobile phones will play a big role in the future

33 end of PR – now darwin

34 a small building block towards the big vision

35 from motes to mobile phones

36 evolution of sensing and inference on mobile phones: from motes to mobile phones

37 evolution of sensing and inference on mobile phones: from motes to mobile phones. darwin: classification model evolution, classification model pooling, collaborative inference

38 sensing (microphone, camera, GPS/WiFi/cellular, air quality/pollution) feeding apps (social context, audio/pollution/RF fingerprinting, image/video manipulation). darwin applies distributed computing and collaborative inference concepts to mobile sensing systems: classification model evolution, classification model pooling, collaborative inference

39 why darwin? mobile phone sensing today

40 why darwin? mobile phone sensing today: train classification model X in the lab

41 why darwin? mobile phone sensing today: train classification model X in the lab, deploy classifier X

44 why darwin? mobile phone sensing today: train classification model X in the lab, deploy classifier X. a fully supervised approach doesn't scale!

45 why darwin? the same classifier does not scale to multiple environments (e.g., quiet and noisy environments)

48 why darwin? the same classifier does not scale to multiple environments; darwin creates new classification models transparently to the user (classification model evolution)

49 why darwin? ability for an application to rapidly scale to many devices

50 why darwin? ability for an application to rapidly scale to many devices; darwin re-uses classification models when possible (classification model pooling)

51 why darwin? leverage the large ensemble of in-situ resources

52 why darwin? leverage the large ensemble of in-situ resources; darwin exploits spatial diversity, and phones cooperate to alleviate the sensing context problem (collaborative inference)

53 darwin design

54 speaker recognition (subject to audio noise, sensing context, etc.)

55 darwin phases

56 darwin phases: initial training (derive model seed) – supervised

57 darwin phases: initial training (derive model seed) – supervised; classification model evolution – unsupervised

58 darwin phases: initial training (derive model seed) – supervised; classification model evolution, classification model pooling – unsupervised

59 darwin phases: initial training (derive model seed) – supervised; classification model evolution, classification model pooling, collaborative inference – unsupervised

60 classification model training: sensed event

61 classification model training: sensed event → filtering (silence suppression + voicing)

62 classification model training: sensed event → filtering (silence suppression + voicing) → feature extraction (MFCC)

63 classification model training: sensed event → filtering (silence suppression + voicing) → feature extraction (MFCC) → send MFCC to the backend to train the model → model training (GMM) on the backend → send model + baseline back to the phone

64 classification model training: the phone does feature extraction (low computation), the backend does model training (high computation)
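
To make the phone/backend split concrete, here is a minimal sketch of the training pipeline in Python, assuming librosa for MFCC extraction and scikit-learn for the GMM; the energy-based voicing filter, frame sizes, and 20 mixture components are illustrative choices, not parameters taken from the paper.

```python
# Sketch of the training split (assumptions: librosa + scikit-learn, 16 kHz mono audio,
# a crude energy threshold standing in for silence suppression/voicing).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def phone_extract_features(audio, sr=16000, energy_thresh=0.01):
    """Runs on the phone: cheap filtering + MFCC feature extraction."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13, hop_length=160).T  # (frames, 13)
    rms = librosa.feature.rms(y=audio, hop_length=160)[0]                     # per-frame energy
    n = min(len(mfcc), len(rms))
    voiced = mfcc[:n][rms[:n] > energy_thresh]       # drop silent frames
    return voiced          # only these features (not raw audio) go to the backend

def backend_train_model(mfcc_frames, n_components=20):
    """Runs on the backend: expensive GMM training, returns model + baseline."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(mfcc_frames)
    baseline = gmm.score(mfcc_frames)   # mean log-likelihood on the training data
    return gmm, baseline                # both are sent back to the phone
```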

65 classification model evolution: the phone determines when to evolve

66 classification model evolution: the phone determines when to evolve – training data

67 classification model evolution: the phone determines when to evolve – training data vs. newly sampled data

68 classification model evolution: the phone determines when to evolve – match? YES: do not evolve

69 classification model evolution: the phone determines when to evolve – match? NO: evolve (train a new model using the backend as before)
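
A rough sketch of the "match?" decision above, assuming the phone keeps the GMM and baseline returned by the backend (see the training sketch) and compares the model's likelihood on freshly sampled voiced frames against that baseline; the 20% tolerance is an invented threshold for illustration.

```python
def should_evolve(gmm, baseline, new_mfcc_frames, tolerance=0.2):
    """Phone-side check: does newly sampled data still match the current model?

    If the mean log-likelihood of the new data drops well below the training
    baseline, the acoustic environment has likely changed and the model is
    evolved, i.e. retrained on the backend as in the initial training step.
    """
    if len(new_mfcc_frames) == 0:
        return False                        # nothing voiced was sampled
    score = gmm.score(new_mfcc_frames)      # mean log-likelihood of the new data
    return score < baseline - tolerance * abs(baseline)   # NO match -> evolve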

70 classification model evolution: new speaker voice model training

73 classification model pooling

74 classification model pooling: Phone A holds Speaker A's model, Phone B holds Speaker B's model, Phone C holds Speaker C's model

75 classification model pooling: we have two options – 1. train a new classifier for each speaker (costly: power, inference delay)

76 classification model pooling: we have two options – 1. train a new classifier for each speaker (costly: power, inference delay); 2. re-use already available classifiers

79–81 classification model pooling: the phones exchange models so that Phone A, Phone B, and Phone C each end up holding Speaker A's, Speaker B's, and Speaker C's models

82 classification model pooling: ready to run the collaborative inference algorithm – local inference first, final inference later
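
A minimal sketch of the pooling step, assuming each phone keeps a dictionary of speaker models and simply copies from its neighbours the models it is missing instead of training them; the function names and the transport for fetching a neighbour's model are hypothetical.

```python
def pool_models(local_models, neighbour_models, speakers_nearby):
    """Re-use classifiers already trained elsewhere instead of retraining.

    local_models / neighbour_models: dicts mapping speaker_id -> trained model.
    speakers_nearby: speaker ids detected in the current group.
    Returns the ids that still require the costly training path.
    """
    still_missing = []
    for speaker in speakers_nearby:
        if speaker in local_models:
            continue                                            # already have it
        if speaker in neighbour_models:
            local_models[speaker] = neighbour_models[speaker]   # option 2: re-use
        else:
            still_missing.append(speaker)                       # option 1: train
    return still_missing
```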

83 collaborative inference: two phases

84 collaborative inference, two phases: 1. local inference (running independently, in parallel, on each mobile phone)

85 collaborative inference, two phases: 1. local inference (running independently, in parallel, on each mobile phone); 2. final inference (after collecting local inference results, to get better confidence about the final classification result)

86 collaborative inference – local inference (LI): Phone A, Phone B, Phone C

87 collaborative inference – local inference (LI): Phone A, Phone B, Phone C; speaker A speaking!!!

88 collaborative inference – local inference (LI), speaker A speaking. A's LI results: Prob(A speaking) = 0.65, Prob(B speaking) = 0.25, Prob(C speaking) = 0.10. B's LI results: Prob(A speaking) = 0.79, Prob(B speaking) = 0.11, Prob(C speaking) = 0.10. C's LI results: Prob(A speaking) = 0.30, Prob(B speaking) = 0.67, Prob(C speaking) = 0.03.
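
Local inference can be sketched as scoring the captured audio against every pooled speaker model and normalizing into a probability vector like the ones above; the GMM models are assumed to be those from the earlier sketches, and the softmax-style normalization is an illustrative choice rather than the paper's exact formula.

```python
import numpy as np

def local_inference(models, mfcc_frames):
    """One phone's LI step: returns {speaker_id: probability}, computed independently."""
    speakers = list(models)
    # Mean log-likelihood of the observed frames under each speaker's model.
    log_like = np.array([models[s].score(mfcc_frames) for s in speakers])
    # Turn log-likelihoods into a probability vector (softmax).
    p = np.exp(log_like - log_like.max())
    p /= p.sum()
    return dict(zip(speakers, p))
```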

93 collaborative inference – local inference (LI): individual classification can be misleading!

94 collaborative inference – final inference (FI): each phone gathers the LI results from all phones (A's, B's, and C's LI results)

95 collaborative inference – final inference (FI), on each phone: combine A's, B's, and C's LI results

97 collaborative inference – final inference (FI), on each phone: the LI results are multiplied together per speaker and normalized. FI results (normalized): Confidence(A speaking) = 1, Confidence(B speaking) = 0.12, Confidence(C speaking) = 0.002

99 collaborative inference – final inference (FI): collaborative inference compensates for the inaccuracies of the individual inferences
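
The final inference step is then just an element-wise multiplication of the local inference vectors followed by a normalization; the sketch below uses the numbers from the slides and reproduces the 1 / 0.12 / 0.002 confidences (normalizing by the largest product is assumed from the slide's normalized output).

```python
def final_inference(li_results):
    """li_results: one {speaker: probability} dict per phone in the group."""
    speakers = list(li_results[0])
    # Multiply each speaker's probability across all phones...
    product = {s: 1.0 for s in speakers}
    for li in li_results:
        for s in speakers:
            product[s] *= li[s]
    # ...then normalize so the most likely speaker has confidence 1.
    top = max(product.values())
    return {s: product[s] / top for s in speakers}

li_A = {"A": 0.65, "B": 0.25, "C": 0.10}   # phone A's local inference
li_B = {"A": 0.79, "B": 0.11, "C": 0.10}   # phone B's local inference
li_C = {"A": 0.30, "B": 0.67, "C": 0.03}   # phone C's local inference
print(final_inference([li_A, li_B, li_C]))
# -> roughly {'A': 1.0, 'B': 0.12, 'C': 0.002}: A wins even though phone C's local guess was wrong
```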

100 evaluation

101 evaluation: implemented in C/C++ on the Nokia N97 and iPhone in support of a speaker recognition app

102 evaluation: implemented in C/C++ on the Nokia N97 and iPhone, with a unix server, in support of a speaker recognition app

103 evaluation: lightweight reliable protocol to transfer models from the server and between phones

104 evaluation: UDP multicast protocol to distribute local inference results between phones
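
For flavour, a minimal sketch of what such a UDP multicast exchange of LI results could look like in Python; the multicast group, port, and JSON payload format are illustrative assumptions, not details from the paper.

```python
import json, socket, struct

GROUP, PORT = "224.1.1.1", 5007            # illustrative multicast group/port

def send_li(li_result):
    """Multicast this phone's local inference result to nearby phones."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(json.dumps(li_result).encode(), (GROUP, PORT))

def recv_li(timeout=2.0):
    """Collect the local inference results multicast by nearby phones."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    results = []
    try:
        while True:
            data, _ = sock.recvfrom(65535)
            results.append(json.loads(data.decode()))
    except socket.timeout:
        return results
```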

105 experimental scenarios: up to eight people in conversation in three different scenarios (quiet indoor, down the street, in a restaurant)

106 some numerical results

107 need for evolution: train indoor, evaluate outdoor

108 need for evolution: accuracy improvement after evolution (train indoor, evaluate outdoor)

109 indoor quiet scenario: 8 people talking around a table

115 indoor quiet scenario (8 people talking around a table): collaborative inference + classification model evolution boost the performance of a mobile sensing app

116 impact of the number of mobile phones

120 impact of the number of mobile phones: the larger the number of mobile phones collaborating, the better the final inference result

121 battery lifetime vs. inference responsiveness

123–126 battery lifetime vs. inference responsiveness: high responsiveness comes at the cost of short battery life; longer battery duration means low responsiveness

127 battery lifetime vs. inference responsiveness: smart duty-cycling techniques and machine learning algorithms that use energy more efficiently on mobile phones need to be identified
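
As a toy illustration of the tradeoff (not a technique from the paper), a duty-cycling loop that senses for only a fraction of every period; raising the duty cycle improves responsiveness, lowering it stretches battery life.

```python
import time

def duty_cycled_sensing(sense_once, duty_cycle=0.1, period=10.0):
    """Sense for duty_cycle * period seconds out of every period seconds."""
    on_time = duty_cycle * period
    while True:
        start = time.time()
        while time.time() - start < on_time:
            sense_once()                 # e.g. grab an audio frame and run local inference
        time.sleep(period - on_time)     # CPU/radio can idle here to save energy
```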

128–131 a quick recap: smartphones are everywhere – let's exploit their collective sensing and computation capabilities. smartphone sensing opens up new frontiers: applications can be deployed and big data collected at unprecedented scale, enabling endless research opportunities. continuous sensing is still challenging; efficient mobile sensing requires preserving the phone user experience (need for energy-efficient ML algorithms and smart duty-cycling techniques). ML algorithms should perform reliably in the wild.

132 ok I think I'm done…

133 but please bear in mind…

134 Mobile Phone Sensing is the Next Big Thing!

135 Thank you! Emiliano Miluzzo – miluzzo@cs.dartmouth.edu – Mobile Sensing Group – http://sensorlab.cs.dartmouth.edu

