How accurate is Spren? This article details a study that compares its accuracy to DEXA and other smart scales.
Authors: Dr. Kamiar Kordari, Keanu Spies
Date: March 2024
This white paper presents a comprehensive overview and analysis of Spren's body composition algorithm, which employs advanced computer vision and deep learning techniques to estimate body fat percentage accurately from smartphone camera images. Highlighting the importance of body fat measurement in assessing overall health and fitness, the paper contrasts Spren's non-invasive method with traditional body composition measurement techniques, underscoring its advantages in accessibility, safety, and convenience. Through extensive data collection from a diverse participant pool and rigorous testing, the algorithm demonstrates strong accuracy and reliability across demographics, with a mean absolute error (MAE) of 2.27 and a correlation of 0.96 indicating high agreement with gold-standard Dual-energy X-ray Absorptiometry (DXA) measurements. The findings validate the algorithm's potential to make body composition analysis more accessible and engaging for users, promoting informed lifestyle choices and continuous health improvement. Spren's technology enables users to track and understand their body composition changes conveniently from their homes.
The purpose of this white paper is to provide details and analysis of the Spren body composition algorithm. Our algorithm uses advanced computer vision and deep learning techniques to analyze a full body image taken by the camera to estimate body composition and fat percentage with high accuracy. In this paper, we will discuss our deep learning AI model and present the results of the study to establish its accuracy and reliability.
We used deep learning Convolutional Neural Network (CNN) models to train an AI model to estimate body fat percentage and other body composition values from a user's full-body image. The model is trained to match the values generated by DXA scans, ensuring it accommodates variations in body shapes, sizes, ages, genders, and ethnicities. This approach guarantees accuracy and robustness across a diverse population. Data from 5,572 subjects were used to train the Spren body composition model.
Measuring body fat is an important aspect of overall health and fitness. Body fat percentage is a better indicator of health than body weight alone as it reflects the amount of fat versus muscle in the body. High levels of body fat have been linked to an increased risk of various health conditions, including heart disease, stroke, and diabetes.
In addition to these health risks, measuring body fat can provide valuable insights into the effectiveness of diet and exercise regimens. Tracking body fat percentage over time can help individuals understand how their body composition is changing and make adjustments to their lifestyle as needed. BMI and weight are often used interchangeably to gauge an individual's health, particularly their risk of developing obesity-related diseases. However, these metrics do not differentiate between muscle mass and fat, leading to the potential misclassification of individuals with high muscle mass as overweight or obese. Furthermore, BMI and weight alone do not account for the distribution of fat in the body, which is a critical factor in assessing health risks such as cardiovascular disease and type 2 diabetes.
Various methods are employed to measure body fat percentage, each offering a range of accuracies and levels of accessibility. Dual-Energy X-ray Absorptiometry (DXA) stands as the gold standard, capable of providing detailed differentiation between bone mass, fat mass, and lean muscle mass by passing low-dose X-rays through the body while the individual lies on a table. Despite its high accuracy, DXA scans require specialized, costly equipment and are less accessible for regular use. Bioelectrical Impedance Analysis (BIA), found in consumer smart scales and handheld devices, offers a more accessible alternative by estimating body composition through the resistance to a small electrical current passed through the body. However, its accuracy can be affected by hydration levels, recent physical activity, and other factors, making it less reliable than DXA. Skinfold measurement with calipers, which estimates body fat from the thickness of fat at various body sites, and hydrostatic weighing, which calculates body density by measuring a person's mass while submerged in water, are other methods with their own applications and limitations. Each method's accuracy and practicality can vary, with limitations including the need for specialized equipment, potential discomfort, variability in results due to external factors, and the requirement for trained personnel to conduct some measurements accurately. These shortcomings highlight the need for continued innovation in body composition analysis to enhance accuracy, accessibility, and ease of use.
The Spren body fat estimation app revolutionizes the approach to monitoring body composition by leveraging a sophisticated deep learning and computer vision-based algorithm accessible via a smartphone's camera. This method offers a significant leap in convenience and safety by providing highly accurate estimates of body fat percentage without the need for X-ray exposure, as is necessary with DXA scans. By eliminating the need for medical facility visits, our app addresses the cost, accessibility, and time constraints typically associated with traditional body composition measurement methods.
One of the standout features of our app is the ability for users to conduct scans more frequently than would be possible or practical with DXA. This frequent scanning capability is not just a matter of convenience; it allows users to gain much deeper insights into their wellness journey with greater granularity. Such regular monitoring is invaluable for understanding how different dietary, exercise, and lifestyle choices affect body composition over time, offering a granularity of feedback that DXA scans cannot match due to their higher costs, the need for specialized equipment, and the logistical challenges of scheduling and traveling to medical appointments.
Moreover, the non-invasive nature of our app, combined with its avoidance of X-ray exposure, makes it a safer option for regular use. Users can track their progress towards health and fitness goals from the comfort and privacy of their homes, making informed decisions to adjust their wellness strategies as needed. This level of frequent, accessible, and detailed feedback empowers users to stay engaged and motivated on their wellness journey, offering a clear picture of how their efforts are translating into tangible changes in body composition.
Deep learning (DL) and computer vision (CV), combined with a dataset of full-body images and associated DXA values from a diverse group of subjects, can be used to learn the complex patterns and visual cues present in images of the human body and how these patterns relate to body fat percentage. The principle behind estimating body fat percentage (bf%) from images is grounded in the understanding that certain visual factors correlate with body fat levels, muscle mass, and overall body composition.
Several visual factors contribute to the accurate estimation of body fat percentage. These include the visibility of muscle definition, the presence of specific body shape indicators related to fat distribution, and observable physical traits that suggest the percentage of body fat versus lean mass. For instance, well-defined abs or visible muscle separation are indicators of lower body fat levels, whereas a rounder shape might indicate a higher bf%. The distribution of fat in different areas of the body, such as around the waist or on the legs, also provides critical clues.
Deep learning models excel at detecting complex patterns that are not immediately obvious to the human eye. By analyzing vast datasets containing images of individuals with a wide range of ages, genders, races, body shapes, heights, weights, bf%, and varying levels of fitness and exercise routines, DL models can learn to accurately estimate bf% from images. These models account for the nuanced visual factors and the diversity in human bodies to make precise estimates. This process requires a rich and diverse dataset to ensure the algorithm's accuracy and applicability across different populations.
Two individuals with the same body fat percentage can look markedly different due to several factors. Differences in muscle mass, muscle density, and muscle separation can greatly influence body appearance, with more muscular individuals appearing more toned and defined. Additionally, the distribution of fat in the body, which is influenced by genetics, hormonal functions, gender, and race, can alter body shape. Factors such as muscle pump, fat mass storage (e.g., in breasts for women), body water content, body volume, bone structure, and the type of fat (subcutaneous, visceral, intramuscular) further contribute to these differences. For example, someone with more visceral fat may have a larger belly, while another with more subcutaneous fat may have more loose, hanging skin. Posture, core strength, and even medical conditions like diabetes, which tends to promote fat storage in the abdominal area, also play a role in how body fat manifests physically.
Learning these variations is crucial for the models estimating bf% from images. It highlights the importance of not only recognizing the amount of body fat but also understanding its distribution and the interplay with other body composition factors.
We benchmarked the accuracy of the Spren body composition model against a diverse group of 133 participants. The participant gender composition was 52% female and 48% male. The sample was ethnically and racially diverse, comprising 54.8% White, 21.0% Black, 12.0% Asian, 6.0% Hispanic, 4.5% Multiracial, and the remaining 1.5% categorized as 'Other'. The participants' mean age was 35.4 ± 13.8 years (range: 18–82 years), with a BMI of 25.7 ± 4.9 kg/m² (range: 16.6–52.4 kg/m²). The DXA-measured body fat percentage (%BF) was 32.1 ± 8.55% for women and 21.9 ± 7.7% for men.
Data collection was carried out at a clinical laboratory. The process involved participants wearing minimal clothing and standing at a fixed distance of approximately 5 ft from a smartphone camera stationed in the laboratory. Each participant was instructed to stand facing the camera with their hands raised from the elbows.
Each participant's ground-truth body composition, including fat percentage, was measured with a DXA machine.
The images captured in the laboratory were passed to the AI model for processing. Additional demographic information—specifically, age, gender, and race—was also provided to the AI model. The AI model analyzed the images to estimate the body fat percentage of each participant.
To validate the accuracy of the AI model’s body fat percentage estimates, we conducted a comparative analysis with the actual measurements obtained from DXA scans. By comparing the AI-generated estimates against the DXA-measured values, we were able to assess the model's accuracy and reliability in estimating body fat percentage.
The results of our testing study show the model's high accuracy in estimating body fat percentage, with a mean absolute error (MAE) of 2.2 and a standard deviation (STD) of ±2.6, and demonstrate its ability to generalize to subjects never seen during training.
In this analysis, "error" for a subject is defined as the discrepancy between the model's predicted values for body fat percentage and the actual (true) values or ground-truth measure with DXA. Overall error (or overall accuracy) quantifies the model's performance by aggregating the errors across all subjects in the test set. Overall accuracy offers a comprehensive view of how closely the model's predictions align with the actual values.
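As an illustrative sketch of the error definition above (not Spren's actual evaluation code), the MAE and the standard deviation of the absolute errors can be computed from paired arrays of model estimates and DXA ground-truth values:

```python
import numpy as np

def error_metrics(estimates, dxa_values):
    """Mean absolute error (MAE) and standard deviation of the absolute
    errors between model estimates and DXA ground-truth body fat %."""
    abs_err = np.abs(np.asarray(estimates, float) - np.asarray(dxa_values, float))
    return abs_err.mean(), abs_err.std()

# Illustrative values only -- not the study data.
mae, std = error_metrics([20.0, 25.0, 30.0], [22.0, 24.0, 33.0])
```

Aggregating the per-subject absolute errors this way yields the overall accuracy figures reported in this section.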
Below is a detailed breakdown of various metrics that highlight the algorithm's accuracy:
The chart below shows the ground truth versus estimated body fat percentage for the test participants.
This next chart displays the Bland-Altman plot, which illustrates the agreement between the ground truth and estimated body fat percentages for the test participants.
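A Bland-Altman analysis plots, for each participant, the difference between the two methods against their mean, along with the mean difference (bias) and the 95% limits of agreement. A minimal sketch of the underlying computation (illustrative, not the study's actual code):

```python
import numpy as np

def bland_altman(estimates, dxa_values):
    """Quantities underlying a Bland-Altman plot: per-subject means and
    differences, the bias, and the 95% limits of agreement."""
    a = np.asarray(estimates, float)
    b = np.asarray(dxa_values, float)
    diffs = a - b                       # method difference per subject
    means = (a + b) / 2.0               # method mean per subject
    bias = diffs.mean()                 # systematic offset between methods
    sd = diffs.std(ddof=1)              # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return means, diffs, bias, loa
```

Plotting `diffs` against `means` with horizontal lines at `bias` and the two `loa` values reproduces the standard Bland-Altman layout.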
Our algorithm exhibits remarkable consistency and accuracy across a wide array of demographics, demonstrating its effectiveness across genders, BMI categories, and ethnic backgrounds. This adaptability highlights the algorithm's robustness and advanced analytical capabilities, ensuring its applicability to a diverse user base.
This analysis by gender, race, and BMI supports the algorithm's consistency, showing only minor differences in MAE values, which illustrates its broad effectiveness.
To evaluate the variation in the estimated body fat percentage derived from multiple images of the same individual, we calculated the difference between the predicted body fat percentage and the actual (ground truth) body fat percentage for each image. Then, for each individual in the test group, we averaged these differences across all the images associated with that person. This approach provides a measure of how much the predictions for each person deviate, on average, from their true body fat percentage, taking into account all the images provided per individual.
The result of this calculation, the Overall Mean Difference, came out to be 0.75. This figure represents the average deviation of the predictions from the actual body fat percentages across all individuals and all their images. This Overall Mean Difference suggests a relatively small average deviation, demonstrating the algorithm's capability to provide consistent body fat estimates across multiple images of the same person.
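The per-subject averaging described above can be sketched as follows. This is one reading of the procedure; the exact aggregation used (e.g., signed vs. absolute per-subject means) is an assumption, as are the function and variable names:

```python
def overall_mean_difference(predictions_by_subject, dxa_by_subject):
    """predictions_by_subject: {subject_id: [bf% estimate per image, ...]}
    dxa_by_subject: {subject_id: ground-truth bf% from DXA}
    Averages each subject's per-image deviations from ground truth, then
    averages the magnitudes of those per-subject means across subjects."""
    per_subject_means = [
        sum(p - dxa_by_subject[sid] for p in preds) / len(preds)
        for sid, preds in predictions_by_subject.items()
    ]
    return sum(abs(m) for m in per_subject_means) / len(per_subject_means)

# Illustrative values only -- not the study data.
omd = overall_mean_difference(
    {"a": [21.0, 23.0], "b": [30.0, 32.0]},
    {"a": 22.0, "b": 30.0},
)
```

A small value of this statistic indicates that repeated scans of the same person produce estimates clustered near that person's DXA value.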
A recent study (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8122302/) compared the accuracy of smart scales to the DXA method for measuring body fat.
Here is the summary of their comparative analysis for three commercial smart scales, showing the median error and Interquartile Range (IQR) for the fat mass estimation:
Here are those metrics calculated for Spren’s test data:
This analysis shows that the smart scales tend to underestimate fat mass significantly compared to DXA measurements. This underestimation could mislead users about their actual body composition and potentially affect health and fitness decisions.
In contrast, Spren's method exhibits a markedly smaller range of error and a minimal median error. This suggests not only a closer alignment with DXA but also greater reliability and accuracy in fat mass estimation.
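The median error and interquartile range (IQR) used in this comparison can be computed as follows; this is an illustrative sketch, not the study's or Spren's actual analysis code:

```python
import numpy as np

def median_error_iqr(estimates, dxa_values):
    """Median signed error and interquartile range (IQR) of the errors,
    the summary statistics used in the smart-scale comparison."""
    err = np.asarray(estimates, float) - np.asarray(dxa_values, float)
    q1, med, q3 = np.percentile(err, [25, 50, 75])
    return med, q3 - q1
```

Unlike the MAE, the median signed error preserves direction, which is what reveals the systematic underestimation of fat mass by the smart scales.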
This white paper validates Spren's body fat estimation model as a highly accurate, non-invasive, and convenient tool for body composition analysis. It signifies a major advancement in personal health technology, leveraging the widespread use of smartphones.