Wearable sensors have permeated people's lives, ushering in impactful applications in interactive systems and activity recognition. However, practitioners face significant obstacles when dealing with sensing heterogeneities, requiring custom models for different platforms. In this paper, we conduct a comprehensive evaluation of the generalizability of motion models across sensor locations. Our analysis highlights this challenge and identifies key on-body locations for building location-invariant models that can be deployed on any device. To this end, we introduce the largest multi-location activity dataset (N=50, 200 cumulative hours), which we make publicly available. We also present deployable on-device motion models that achieve a 91.41% frame-level F1-score from a single model regardless of sensor placement. Finally, we investigate cross-location data synthesis, aiming to alleviate laborious data collection by synthesizing data for one location given data from another. These contributions advance our vision of low-barrier, location-invariant activity recognition systems, catalyzing research in HCI and ubiquitous computing.
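For readers unfamiliar with the reported metric: a frame-level F1-score scores each sensor frame's predicted activity label independently, so long and short activity segments are weighted by duration rather than by segment count. The sketch below is purely illustrative (it is not the authors' evaluation code, and the example labels are invented), showing a macro-averaged frame-level F1 in plain Python:

```python
def frame_level_f1(y_true, y_pred):
    """Macro-averaged F1 over per-frame activity labels.

    Each element is the true/predicted activity for one sensor frame,
    so every frame contributes equally regardless of segment length.
    """
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical example: six frames, two activity classes
y_true = ["walk", "walk", "run", "run", "walk", "run"]
y_pred = ["walk", "run", "run", "run", "walk", "walk"]
print(round(frame_level_f1(y_true, y_pred), 3))  # -> 0.667
```

In practice a library routine such as scikit-learn's `f1_score(..., average="macro")` over flattened per-frame labels computes the same quantity.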