External validation of ELASTIC NET regression models including newborn metabolomic markers for postnatal gestational age estimation in East and South-East Asian infants [version 1; peer review: 1 approved, 3 approved with reservations]

Background: Postnatal gestational age (GA) algorithms derived from newborn metabolic profiles have emerged as a novel method of acquiring population-level preterm birth estimates in low resource settings. To date, model development and validation have been carried out in North American settings. Validation outside of these settings is warranted. Methods: This was a retrospective database study using data from newborn screening programs in Canada, the Philippines and China. ELASTIC NET machine learning models were developed to estimate GA in a cohort of infants from Canada using sex, birth weight

and metabolomic markers from newborn heel prick blood samples. Final models were internally validated in an independent group of infants, and externally validated in cohorts of infants from the Philippines and China. Results: Cohorts included 39,666 infants from Canada, 82,909 from the Philippines and 4,448 from China. For the full model including sex, birth weight and metabolomic markers, GA estimates were within 5 days of ultrasound values in the Canadian internal validation (mean absolute error (MAE) 0.71, 95% CI: 0.71, 0.72), and within 6 days of ultrasound GA in both the Filipino (0.90 (0.90, 0.91)) and Chinese cohorts (0.89 (0.86, 0.92)). Despite the decreased accuracy in external settings, our models incorporating metabolomic markers performed better than the baseline model, which relied on sex and birth weight alone. In preterm and growth-restricted infants, the accuracy of metabolomic models was markedly higher than the baseline model. Conclusions: Accuracy of metabolic GA algorithms was attenuated when applied in external settings. Models including metabolomic markers demonstrated higher accuracy than models using sex and birth weight alone. As innovators look to take this work to scale, further investigation of modeling and data normalization techniques will be needed to improve robustness and generalizability of metabolomic GA estimates in low resource settings, where this could have the most clinical utility.

Introduction
Global- and population-level surveillance of preterm birth is challenging. Inconsistent use of international standards to define preterm birth and gestational age (GA) categories, the range of methods and timing used for GA assessment, and inadequate jurisdictional or national health data systems all hamper reliable population estimates of preterm birth 1 . As complications related to preterm birth continue to be the most common cause of mortality for children under five 2 , robust data on the burden of preterm birth are needed to maximize the effectiveness of resource allocation and global health interventions.
Newborn screening is a public health initiative that screens infants for rare, serious, but treatable diseases. Most of the target diseases are screened through the analysis of blood spots taken by heel-prick sampling. Samples are typically collected within the first few days after birth, but under special circumstances (e.g., preterm birth, neonatal transfer) may be collected later. Newborn samples are analyzed for a range of diseases, such as inborn errors of metabolism, hemoglobinopathies, and endocrine disorders, using tandem mass spectrometry, colorimetric and immunoassays, and high-performance liquid chromatography 3 . Postnatal GA algorithms derived from newborn characteristics and metabolic profiles have emerged as a novel method of estimating GA after birth. Using anonymized data from state and provincial newborn screening programs, three groups in North America have developed algorithms capable of accurately estimating infant GA to within 1 to 2 weeks 4-6 . Recent work to refine metabolic GA models 7 , as well as internally and externally validate their performance in diverse ethnic groups and in low-income settings, has demonstrated the potential of these algorithms beyond proof-of-concept applications 8,9 .
Published approaches to model development and validation to date have been carried out in cohorts of infants from North American settings 4-6 . Although internal validation of these models has been conducted among infants from diverse ethnic backgrounds 4,8 , external validation of model performance outside of the North American context is essential to evaluate the generalizability of models to low income settings where they would have the most clinical utility. Birth weight, a significant covariate in all published models, is strongly correlated with GA and varies significantly by ethnicity 10 . Metabolic variations in newborn screening profiles that result from variation in genetic and in utero exposures may also affect the performance of established algorithms across ethnic or geographic subpopulations 11 . Importantly, as innovators seek to take this work to scale, validation of metabolic models using data stemming from different laboratories is warranted. In this study, we sought to validate a Canadian metabolic GA estimation algorithm in data derived from newborn screening databases based in the Philippines and China.

Study design
This was a retrospective database study that relied on secondary use of newborn screening data from three established newborn screening programs: Newborn Screening Ontario (Ottawa, Canada); Newborn Screening Reference Centre (Manila, the Philippines); and the Shanghai Neonatal Screening Center (Shanghai, China). Approval for the study was obtained from the Ottawa Health Sciences Network Research Ethics Board (20160056-01H), and research ethics committees at both the University of the Philippines Manila (2016-269-01), and the Xinhua Hospital (XHEC-C-2016). The need for express informed consent from participants was waived by the ethics committees for this retrospective study.

Study population and data sources
Infants whose blood spot samples were collected more than 48 hours after birth were excluded from model development in the Ontario cohort. In the China and Philippines datasets, the age of infant at sample collection was only available to the nearest calendar day. Samples were excluded from analysis if they were collected more than 72 hours after birth as most samples would have been excluded if the >48-hour exclusion were applied to these validation cohorts.
Newborn Screening Ontario (NSO): a provincial newborn screening program that coordinates the screening of infants born in Ontario, Canada. The program screens approximately 145,000 infants (>99% population coverage) annually for 29 rare conditions, including metabolic and endocrine diseases, sickle cell disease, and cystic fibrosis 12 . Newborn screening data collected between January 2012 and December 2014 were used in model building and internal validation.
Newborn Screening Reference Center: coordinates screening across six operations sites in the Philippines. The program screens approximately 1.5 million infants (68%) annually, offering two screening panels, either a basic panel of six disorders or an expanded panel of 28 disorders. Data for this study were obtained from one of the newborn screening centers, the National Institutes of Health at the University of the Philippines Manila. Data were included for infants born between January 2015 and October 2016 who were screened using the expanded panel of 28 disorders. Disorders screened included metabolic disorders and hemoglobinopathies.
Shanghai Neonatal Screening Center, National Research Center for Neonatal Screening and Genetic Metabolic Diseases: coordinates the screening of infants born in Shanghai, China. The program screens approximately 110,000 infants (>98%) annually for between 4 and over 20 rare conditions, including metabolic and endocrine diseases. Four screening tests (for phenylketonuria, congenital adrenal hyperplasia, hypothyroidism and glucose-6-phosphate dehydrogenase deficiency) are funded by the government. Screening tests reliant on tandem mass spectrometry are funded by the newborn's family or the Shanghai neonatal screening center. Data collected from the Shanghai Jiaotong University School of Medicine Xinhua Hospital were used for this study. Infants born between February 2014 and December 2016 and for whom tandem mass spectrometry data were available were included.
Reference GA assessment. In newborn cohorts from Ontario and China, GA was measured using gold-standard first trimester gestational dating ultrasound in approximately 98% of cases, and was reported in weeks and days of gestation (for example 37 weeks and 6 days would be reported as 37.86 weeks). In the Philippines cohort, mothers who delivered in private hospitals generally received gestational dating ultrasounds while other infants' GAs were generally measured using Ballard Scoring. GAs were reported in completed weeks (for example 37 weeks and 6 days would be recorded as 37 weeks). Therefore, for the Philippines cohort only, model-based GA estimates were rounded down in the same way for comparison to reference GA in the presentation of validation results.
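The two GA reporting conventions described above can be sketched in a few lines. This is an illustrative sketch only; the function names are hypothetical and not part of the study's replication code.

```python
import math

def ga_decimal_weeks(weeks: int, days: int) -> float:
    """Convert a GA reported in weeks + days to decimal weeks, as in the
    Ontario and China cohorts (e.g., 37 weeks and 6 days -> 37.86)."""
    return round(weeks + days / 7, 2)

def ga_completed_weeks(ga_weeks: float) -> int:
    """Round a GA estimate down to completed weeks, matching the reporting
    convention of the Philippines cohort (37 weeks and 6 days -> 37)."""
    return math.floor(ga_weeks)

print(ga_decimal_weeks(37, 6))    # 37.86
print(ga_completed_weeks(37.86))  # 37
```

Applying `ga_completed_weeks` to model estimates before comparison against the Philippines reference GAs mirrors the rounding-down step described above.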
Specific data elements used in this study from each respective newborn screening program are provided in Table 1. The Newborn Screening Ontario (Canada) disease panel included the greatest number of analytes. All analytes included in the newborn screening panels of the Newborn Screening Reference Centre (the Philippines) and the Shanghai Newborn Screening Program (China) were also available from Newborn Screening Ontario.

Statistical methods
Data cleaning and normalization. Data from the Ontario cohort were used for model development. Many of the details of the data preparation, model building, and internal validation have been reported previously 7 . A series of steps were taken to prepare the newborn screening analyte data for modeling: 1) In the Ontario cohort, all screen-positive results were excluded from analysis, which had the effect of removing a large proportion of extreme outliers and atypical metabolic profiles. Further, samples used in model development were limited to those collected within 48 hours of birth given that GA estimation is intended to be applied in LMICs where samples are expected to be collected almost exclusively within the first few hours after birth.
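The local standardization referred to in the data preparation steps (z-scoring analyte values and birth weight within each laboratory's own cohort, as noted in the Discussion) can be sketched as follows. The data and function name are illustrative assumptions, not the authors' actual pipeline.

```python
from statistics import mean, stdev

def standardize_locally(values):
    """Z-score a vector of analyte values (or birth weights) within a single
    laboratory's cohort, so that models see each analyte on a comparable
    scale despite between-laboratory differences in assays and equipment."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Each cohort is standardized against its own distribution,
# not against the Ontario reference distribution:
ontario = standardize_locally([10.0, 12.0, 14.0])
shanghai = standardize_locally([20.0, 24.0, 28.0])
```

Note that after local standardization the two toy cohorts above map to identical z-scores even though their raw scales differ, which is the intended effect.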
Statistical modelling. The Ontario-derived dataset was randomly divided into three sub-cohorts: 1) a model development sub-cohort (50%); 2) an internal validation sub-cohort (25%); and 3) a test sub-cohort (25%). Stratified random sampling was used to ensure that these three sub-cohorts retained the same distribution of GA as the overall cohort.
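A stratified 50/25/25 split of the kind described can be sketched with the standard library. The helper and toy cohort below are illustrative assumptions, not the study's code.

```python
import random
from collections import defaultdict

def stratified_split(records, strata, fractions=(0.5, 0.25, 0.25), seed=42):
    """Split records into sub-cohorts (development/validation/test),
    preserving the distribution of the stratification variable
    (here, completed weeks of GA) within each sub-cohort."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for rec, s in zip(records, strata):
        by_stratum[s].append(rec)
    splits = [[] for _ in fractions]
    for group in by_stratum.values():
        rng.shuffle(group)            # random assignment within each stratum
        start, cum = 0, 0.0
        for i, f in enumerate(fractions):
            cum += f
            end = round(cum * len(group))
            splits[i].extend(group[start:end])
            start = end
    return splits

# Toy cohort: 96 infant IDs with GA strata of 34-37 completed weeks
ids = list(range(96))
ga_weeks = [34 + (i % 4) for i in ids]
dev, val, test = stratified_split(ids, ga_weeks)
print(len(dev), len(val), len(test))  # 48 24 24
```

Each GA stratum contributes proportionally to every sub-cohort, so the GA distribution of the full cohort is retained in all three splits.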
A total of 47 newborn screening analytes, as well as sex, birth weight and multiple birth status, were used in our original model development. GA at birth (in weeks) determined by first trimester gestational dating ultrasound was the dependent variable. Multiple birth status and a subset of screening analytes were not available in the external cohorts, therefore we developed restricted Ontario models including those covariates available in each of the two external cohorts. Three main models were derived and evaluated (with variations in the included analyte predictors based on availability in each external cohort) ( Table 2): Model 1: baseline model containing only infant sex, birth weight (grams), and the interaction between sex and birth weight. Root mean square error (RMSE), the square root of the mean squared error, is also expressed in the same units as GA (weeks). Lower values of both mean absolute error (MAE) and RMSE reflect more accurate model-estimated GA. For example, a reported MAE of 1.0 weeks reflects that the average discrepancy between model-estimated GA and reference GA was 7 days. We also calculated the percentage of infants with GAs correctly estimated within 7 and 14 days of reference GA. We assessed model performance overall and in important subgroups: preterm birth (<37 weeks GA), and small-for-gestational age: below the 10th (SGA10) and 3rd (SGA3) percentile for birth weight within categories of gestational week at delivery and infant sex, based on INTERGROWTH-21 weight-for-GA percentiles 13 . Parametric standard error estimates were not readily calculable for all of our performance metrics, therefore we calculated 95% bootstrap percentile confidence intervals for all validation performance metrics, based on the 2.5th and 97.5th percentiles of performance metrics over 1000 bootstrap replicates for each validation cohort 14 . Replication code is available as Extended data 15 .
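The validation metrics described above can be sketched with the standard library. This is a minimal illustration of MAE, RMSE, and the bootstrap percentile CI; the toy GA values and helper names are assumptions, not the study's replication code (which is available as Extended data).

```python
import math
import random

def mae(y_true, y_pred):
    """Mean absolute error, in the same units as GA (weeks)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error, also in weeks."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def bootstrap_ci(y_true, y_pred, metric, n_boot=1000, seed=1):
    """95% bootstrap percentile CI: resample infants with replacement,
    recompute the metric, and take the 2.5th and 97.5th percentiles."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = sorted(
        metric([y_true[i] for i in idx], [y_pred[i] for i in idx])
        for idx in ([rng.randrange(n) for _ in range(n)] for _ in range(n_boot))
    )
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot) - 1]

# Toy example: reference vs. model-estimated GA in weeks
ref = [36.0, 38.5, 40.0, 39.0, 37.5, 41.0]
est = [36.5, 38.0, 39.5, 40.0, 37.0, 40.0]
print(round(mae(ref, est), 2))  # 0.67
lo, hi = bootstrap_ci(ref, est, mae)
```

In the study, the same resampling was applied independently to each validation cohort, which is why the reported CIs differ in width between the large Philippines cohort and the smaller China cohort.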

Cohort characteristics
Cohort characteristics are presented in Table 3. In all, the final infant cohorts for model validation included 39,666 infants from Ontario, Canada, 82,909 infants from Manila, the Philippines, and 4,448 infants from Shanghai, China (Table 4).
Restricted models including the subset of analytes available in the Philippines and China cohorts performed comparably to the unrestricted Ontario models overall. When applied to the Ontario internal validation cohort, accuracy of both the China- and Philippines-restricted models was slightly lower overall and lower in important subgroups, most notably in preterm and growth-restricted infants (Model 2 and Model 3, China-restricted and Philippines-restricted).

Model performance across the spectrum of GA
In all models applied to both external validation cohorts, GA estimates were most accurate in term infants and accuracy tended to be lower in preterm infants ( Figure 1). Across the spectrum of ultrasound-assigned GA, Model 3 provided the most accurate estimates overall.

Discussion
In this study, we demonstrated that the performance of gestational dating algorithms developed in a cohort of infants from Ontario, Canada, including newborn screening metabolomic markers from dried blood-spot samples, was attenuated when the models were applied to data derived from external laboratories and populations. When these Canadian-based models were tailored to the analytes available from newborn screening programs in Shanghai, China and Manila, Philippines, the models were less accurate in estimating absolute GA in infant cohorts from these locations than when the same models were applied to an Ontario infant cohort. Models including analytes generally demonstrated improved accuracy over those relying on sex and birth weight alone, but the added benefit of models including blood-spot metabolomic markers (Model 2 and Model 3) was not substantial when looking at overall accuracy. However, our models that included metabolomic markers did demonstrate markedly improved accuracy over sex and birth weight in important subgroups (preterm and growth restricted infants), with Model 3, which included sex, birth weight and metabolomic markers, demonstrating the best performance in almost all settings. The exception to this observation was in growth restricted infants (SGA10 and SGA3), where Model 2 often performed the best. This is not surprising, as birth weight is clearly a misleading predictor of GA in growth restricted infants, and although Model 3 still outperformed Model 1, its accuracy was impacted by the inclusion of birth weight in addition to metabolomic markers. Therefore, the decision of whether to prefer Model 2 or Model 3 may hinge on whether the prevalence of growth restriction is known to be high in the setting where the GA estimation algorithm is to be deployed. When we compared preterm birth rates (<37 weeks GA) calculated based on model estimates to those calculated based on reference GA in each cohort, the model-based estimates from the best performing model (Model 3) agreed reasonably well with the reference preterm birth rates (4.2% vs 4.8% for China and 4.2% vs 4.6% for the Philippines). Unfortunately, as with any dichotomization of a continuous measure (GA), there are significant edge effects that can contribute to perceived misclassification (e.g. a GA of 36.9 weeks is classified as preterm while a GA of 37.1 weeks is classified as term, despite a difference in GA of only about 1 day).

Data are presented as the mean and 2.5th and 97.5th bootstrap percentiles for MAE, RMSE and the percentage of model estimates within 1 and 2 weeks of ultrasound GA, for 1000 bootstrap samples generated from each cohort.

Figure 1. Agreement between algorithmic gestational age estimations compared to ultrasound-assigned gestational age. (A) Legend, and overall MAE (95% CI) for each model applied to data from the Philippines and China. Dot size in plots is proportional to sample size in each gestational age category. Performance of each model by ultrasound-assigned gestational age when applied to data from (B) the Philippines and (C) China. MAE, mean absolute error (average absolute deviation of observed vs. predicted gestational age in weeks).
There are several reasons why the metabolic gestational dating algorithm we developed from a North American newborn cohort may not have performed as well using data derived from other infant populations. First, as observed in the differences in performance across the birth weight-only models developed in the three cohorts, the predictive utility of anthropometric measurements for estimating GA may vary across populations. Second, metabolic profiles may be influenced by the differences in genetic and environmental exposures experienced by each cohort, as well as non-biological heterogeneity attributable to different laboratories conducting the screening assays.
Previous validation of our models among infants born to landed-immigrant mothers from eight different countries across Asia and North and Sub-Saharan Africa suggested that inherent biological differences may not be a significant contributor to newborn metabolic data and the performance of our algorithms 8 , but in this study, as well as in an external validation of previously developed GA estimation models in a prospective cohort from South Asia 9 , differences were more pronounced. Third, variations in the clinical measures of GA used across the cohorts may have impeded the accuracy of our algorithms. Our GA models were originally developed with first trimester ultrasound-assigned GA as the dependent variable. Whereas first trimester ultrasounds were the gold standard in the Ontario and China cohorts, GAs for the Philippines cohort were determined by a mixture of gestational dating ultrasound and Ballard scores, and were only available to the nearest completed week of GA. Lastly, and perhaps most importantly, variations in the collection procedures and analytical methods used by each of the newborn screening programs are likely to have impacted the measurable relationship between the analytes and newborn GA. At the newborn screening program in Shanghai, China, samples were collected, on average, about one day later after birth, particularly among preterm infants, with the majority collected between 48 and 72 hours. Variations in temperature, climate, sample handling, and storage among the three newborn screening laboratories may have also contributed to heterogeneity of findings. The screening laboratories in Ontario, Shanghai and Manila also likely relied on different equipment, assays and reagents to quantify the measured analytes. We attempted to address these sources of heterogeneity and bias through our data preparation steps, which involved local standardization of analyte values and birth weight.
Extreme outliers, skewed distributions, heteroscedasticity, and systematic biases within and between laboratories are all factors that may obscure biological signals. Normalization and other data pre-processing steps are therefore crucial to the analysis of metabolomic data, and we continue to investigate the impact of alternative data normalization techniques in improving the generalizability of our GA estimation models, while still taking care to preserve the biological signals of interest. This is an active area of research as it relates to the use of 'omics data in prognostic models more generally 16,17 .
Our study has several strengths and limitations. Notable strengths include the size of our Ontario, China and Philippines cohorts, the commonality of a preponderance of the analytes across populations, the ability to tailor models to the specific analytes available for each cohort, and the methodological rigor we imposed in our modeling and validation. Limitations include the inability to examine the impact of environmental factors (socio-economic conditions, dietary and environmental exposures during pregnancy), variations in approaches to newborn screening that may not have been accounted for in our analyses, and generally smaller sample sizes for more severely preterm children.
While there are numerous options currently available to health care providers to determine postnatal GA, none are as accurate as first trimester dating ultrasound 18 . Where access to antenatal dating technologies is limited, and the reliability of postnatal assessments is variable, there is a recognized need for new and innovative approaches to ascertaining population-level burdens of preterm birth in low resource settings 18,19 . Metabolic GA estimation models have proven particularly promising 19 , and we continue to refine and evaluate these models in a variety of populations 6,7,20 and laboratories in an effort to ready this innovation for broader application. The findings of this study suggest that the accuracy of metabolic gestational dating algorithms may be improved where newborn samples can be analyzed in the same laboratories from which the algorithms were originally derived, and underscore our previous findings of their potential particularly among low birth weight or SGA infants 7 . Validation of our ELASTIC NET machine learning models is also being completed in prospective cohorts of infants from low income settings in Bangladesh and Zambia 20 , with validation of previously developed models already completed in Bangladesh 9 . The effects of laboratory-specific variables are being mitigated through the standardization of collection and analytical procedures applied to newborn samples; preliminary results are promising. As efforts to optimize gestational dating algorithms based on newborn metabolic data continue, and innovators seek to take this work to scale, future work should identify opportunities to develop algorithms locally where newborn screening laboratories exist, and to build capacity in low resource settings for these purposes.

Data availability

Underlying data
The data from Ontario, Canada used to develop models, and the data for the external validation cohorts in which model performance was evaluated were obtained through bilateral data sharing agreements with the Ontario Newborn Screening Program and BORN Ontario, and newborn screening laboratories at Xinhua Hospital in Shanghai, China and University of the Philippines, Manila, Philippines. These data sharing agreements prohibited the sharing of patient-level data beyond our research team.

Ontario data
Those wishing to request access to Ontario screening data can contact newbornscreening@cheo.on.ca, and the request will be assessed as per NSO's data request and secondary use policies. For more information, please visit the NSO website: https://www.newbornscreening.on.ca/en/screening-facts/screening-faq ('What happens when a researcher wants to access stored samples for research'); https://www.newbornscreening.on.ca/en/privacy-and-confidentiality.

Philippines data
Researchers can request access to the de-identified data (sex, birthweight, gestational age and screening analyte levels) from the Philippines for future replication of the study by sending a request letter to the Director of the Newborn Screening Reference Center stating the study objectives, in addition to:
a. A copy of the study protocol approved by a technical and ethics review board that includes methods and statistical analysis plans;
b. Full name, designation, and affiliation of the person with whom the data will be shared; and,
c. Time period during which the data will be accessed.

Authors: Suman Chaurasia, Ramesh Agarwal
We are pleased to go through this interesting article by Hawken et al., dealing with estimation of gestational age among neonates born in low-resource settings, which illustrates novel clinical and metabolomic parameter-based models. We congratulate the authors for taking up this study on account of several outstanding features:

○ Conceptualizing the study idea of estimating gestational age postnatally, using a mix of conventional and novel metabolomics-based objective parameters.

○ Enrolling neonates in large numbers from a population-based cohort to develop the model as well as validating it internally.

○ Stratified random distribution of neonates among the derivation sub-cohorts to match the overall gestational age (GA) distribution of the development cohort.

○ Using an efficient study design of retrospective databases to source the samples, which are often stringently available in neonatal prospective cohort studies.

○ Utilizing machine-learning approaches to refine the algorithm.

○ Undertaking the enormous task of external validation rigorously, involving settings that may find the algorithm most useful, recruiting huge cohorts, harmonizing tools and processes across the sites, etc.

○ Finally, ensuring that the data are accessible to all interested in taking up future studies.
However, we have a few comments to make, especially from the clinical rather than public health viewpoint. Major comments are: the model's performance in term infants seems more reliable than in preterm or SGA infants. However, the latter are the subgroups where gestation estimation may be much more useful in clinical settings. One of the reasons could be that the derivation cohort itself had very few preterm infants (less than 5%), so future studies should be planned for preterm and SGA categories. Also, as elaborated further, there appears to be much scope for conducting a large multi-centric prospective study to remarkably improve and validate the GA algorithm, given that the article presents a promising alternative to grapple with this longstanding issue.
We are of the opinion that GA estimation by the algorithm beyond ± 1 week may not be clinically as useful; so, the model performance in Table 4 in the main text should primarily depict this parameter and avoid parameters like "% ± 14 d". The latter may actually be shifted to the web appendix if needed at all. This may also make the table easier to read.

○ Firstly, birthweight remains the most fundamental factor in predicting GA, with the lion's share of explanatory power; this implies that variability in measuring birthweight, quite plausible in resource-constrained settings, appears to be a crucial factor and should be minimized.

○ Secondly, the addition of analytes may not substantially improve on birthweight's robust contribution to the algorithm unless we consider critical remedies. One likely pointer towards a solution may be the postnatal age cut-off used for collecting the samples for metabolomic studies. The Ontario cohort's sample collection cut-off was 48 h, compared to 72 h for the other two cohorts. Though the authors do mention that in the latter two cohorts "most samples would have been excluded" with a 48 h cut-off, it may be pertinent to give the breakdown behind "most".

○ Further, it will be worthwhile to see how the model performs after removing the samples collected between 48 h and 72 h, and the same should be included in the appendix.

○ If possible, analyses involving samples collected closer to birth, e.g. within 24 h, should also be alluded to, to give a broader understanding to the audience. Such exercises may aid in improving the current performance of the model, at nearly 75% of samples being predicted with GA ± 1 w. We certainly believe that the latter target should be much higher, maybe close to 90%.

○ It will also be worthwhile to have a view of the agreement plots (as in Figure 3) after removing the SGAs (SGA10, SGA3 and both, in that order), especially in the Philippine cohort where SGAs constitute around 13% of infants. This may also improve the average MAEs across Models 1-3. Additionally, separate agreement plots for SGAs should be explored as well, and considered for inclusion in the appendix if they make sense. This may be particularly helpful for several LMIC regions with a high prevalence of IUGR, like the Southeast Asian Region.

○ We have noted a typographical error: the proportion of infants predicted by the Philippines-restricted model for the Ontario cohort within ± 2 weeks is mentioned as 69.9; this should perhaps be 96.9, going by the 95% CIs.

Are sufficient details of methods and analysis provided to allow replication by others? Yes
If applicable, is the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.

Julie Courraud
Section for Clinical Mass Spectrometry, Statens Serum Institut, Copenhagen, Denmark

The authors report the results of a validation study of previously developed algorithms to predict gestational age in post-natal settings. This is a complex task considering the many differences between the countries, and the authors have gathered substantial datasets for this study. One model is based on sex and birthweight only. The two other models include metabolic profiles measured on dried blood spot samples during newborn screening for congenital disorders. The models were applied to data acquired in different laboratories in China and the Philippines, in comparison with data from the Canadian laboratory where they were developed. The ultimate goal is legitimate and, although the results are promising, given the limitations observed for preterm infants, several aspects must be addressed and explored before the method can be used in the clinic.
I could not assess the relevance or correct application of the deep learning method used, as well as bootstrap percentile confidence intervals, as these are not within my area of expertise.

Introduction:
In paragraph 2, you state that "Samples are typically collected within the first few days after birth, but under special circumstances (e.g., preterm birth, neonatal transfer) may be collected later". In your study, you selected samples collected within 48 hours only, and explain that the reason is that LMICs usually collect the samples in this timeframe. However, you also state that "most samples would have been excluded if the >48-hour exclusion were applied to these validation cohorts" (Methods, paragraph 2), so it seems that many samples were collected between 48 and 72 hours, as mentioned in the discussion.
Please discuss whether your algorithm could then be used to target the right population, i.e. preterm birth, when the samples might not be collected within 48 or even 72 hours.

○
In untargeted metabolomics of newborn dried blood spots, it has been shown that the baby's age at sampling is a critical variable when one considers metabolic profiles, and only a few days difference has significant impact. Have you investigated the extent of impact of this variable on your targeted metabolic profiling? How do you intend to address this in your future research or when applying your algorithm? Have you considered integrating age at sampling as a variable in the algorithm (or as a stratification variable for the partitioning into subsets)? See also my comment regarding the discussion.

○
Please discuss the limitation of applying the algorithm outside the age at sampling range on which it was developed (or mention/rephrase this limitation more specifically in the discussion: "one day later after birth than the samples used for model development" as no sample >48 hours was included during model development).

○ In untargeted metabolomics of newborn dried blood spots, another crucial covariate impacting metabolic profiling is month of birth (see Courraud et al. 2021), or so at least in Denmark. Being born in summer or winter is remarkably visible. Such an effect may or may not be visible in various countries. Have you investigated this potential covariate in your targeted profiling and/or considered integrating it in the algorithm?

Methods:
Paragraphs 3-5. Please specify which analytical methods are used in each center included in the study. Is it mass spectrometry everywhere? Do they use a marketed kit or laboratory-developed tests? Consider giving more methodological details as supplementary material, as different platforms may not give the same analytical performance.
○ Paragraph 4. Please clarify why some infants get the expanded screening panel of 28 diseases and discuss the risk of selection bias when choosing these infants for the validation.

○ Paragraph 5. Please discuss the risk of selection bias when choosing to include only infants for whom tandem mass spectrometry data were available. Does this mean that all metabolites have been measured with this method? (Is this the method used to screen for phenylketonuria, congenital adrenal hyperplasia, hypothyroidism and Glucose-6-phosphate dehydrogenase deficiency?)

○ If the applicability of the algorithm is dependent on the family's income (being able to pay for extra screening), will it achieve its goal to reflect preterm birth globally, given that preterm birth is more frequent in families struggling economically? Please discuss.
○ Paragraph 6 on GA assessment: for the Philippines, please indicate the proportion of infants for whom GA was assessed by ultrasound or using Ballard Scoring. A note on the precision of the Ballard Scoring with a relevant reference would help the reader.

○ On the same topic, you later state that "model performance was assessed by comparing the estimated GA from the model to the ultrasound-derived GA". So it is unclear whether or not the infants for whom GA was assessed using Ballard Scoring are included at all. Please clarify.
○ Table 1: Please indicate what "C0", "C2", etc. refer to precisely. It might be obvious to someone in the field, but not to many readers, for whom C18 might just be a free fatty acid and not the acyl-carnitine. You could for instance provide a list in supplementary data with full names and PubChem IDs. It would help bridge to the untargeted metabolomics community, which is also working on this topic.

○ Please be more specific as to which metabolites are included in models 2 and 3. It is not clear, especially considering the "restricted models" in Table 4.
○ Models including newborn screening analytes: How did you cope with the metabolites missing in the validation cohorts? In the results section, you mention "Philippines-restricted" models, etc.; please introduce them in the methods section. Are the equations the same, simply with the missing metabolites removed, or did you "re-develop" the models? Or?
○ Are your models "resistant" to missing values? (In the real world, there will be missing values.)
○ Would it be possible to report which metabolites have the biggest influence in each model?

○ Have you considered a "model 4" restricted to the few metabolites measured for the "basic" screening panels offered in China and the Philippines? It would be accessible to more people, as far as I understand, and might still perform better than just birthweight and sex.

Discussion:
You write: "First, as observed in the differences in performance across the birth weight-only models developed in the three cohorts, the predictive utility of anthropomorphic measurements for estimating GA may vary across populations".
"birth weight-only models developed " Do you mean the unique model 1 (sex + birth weight) applied in the 3 cohorts? This sentence is confusing as it implies that there are several models that were developed, when I had understood that you developed one model 1 based on the Canadian infants and applied "the final equations" to the other cohorts no involved in the development. Please clarify. ○ ○ Also, why do you think that the "predictive utility of anthropomorphic measurements for estimating GA may vary across populations"? It could be that anthropomorphic measurements are indeed too different between Canada and Asian populations, so the models developed with Canadian data are not performing in Chinese infants. But why question the utility of the measurement itself? (To make a comparison with, for instance, month of birth, one could argue that seasonal variation is relevant in some climates but not in others. I'm not sure why birthweight would be more or less relevant and I'm just curious as to whether you have a more specific hypothesis.) ○ Sentence starting with "Previous validation of our models among": Please split this sentence as it is too long and difficult to know what you are referring to when you end with "differences were more pronounced" (between what? These different subgroups? More pronounced compared to?). When you write "inherent biological differences", do you mean both genetic and environmental? Please clarify.

Are sufficient details of methods and analysis provided to allow replication by others? Partly
If applicable, is the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.

Are the conclusions drawn adequately supported by the results? Yes
Competing Interests: No competing interests were disclosed.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
1. In paragraph 2, you state that "Samples are typically collected within the first few days after birth, but under special circumstances (e.g., preterm birth, neonatal transfer) may be collected later". In your study, you selected samples collected within 48 hours only, and explain that the reason is that LMICs usually collect the samples in this timeframe. However, you also state that "most samples would have been excluded if the >48-hour exclusion were applied to these validation cohorts" (Methods, paragraph 2), so it seems that many samples were collected between 48 and 72 hours, as mentioned in the discussion.

Response: Thank you for your careful review of the manuscript. We have removed the second half of the sentence as we now see that it could confuse readers. We have now provided more details on the exclusion criteria applied to the Ontario cohort for model development. The two criteria leading to the most exclusions were 1) requiring gold-standard GA measurement via 1st-trimester dating ultrasound, and 2) screening bloodspot collection within 48 hours of birth. The first criterion may have excluded infants born in rural or underserved areas of the province where access to comprehensive prenatal care was lower. In many cases, however, this is more likely to be a data quality issue, where dating ultrasound was used but not recorded as such. The second criterion led to the disproportionate exclusion of preterm infants, who more often had delayed sample collection, despite this not being recommended practice. Although this exclusion biased the rate of preterm gestation observed in our Ontario study cohort downward, it was unlikely to have had any important impact on GA model development, as we still had a large sample size across the full spectrum of gestational ages at birth to allow robust model development and performance evaluation. Further, the inclusion of samples collected later than 48 hours would introduce a large amount of heterogeneity in analyte levels, which had to be balanced against the impact of exclusions. Although our intention was to exclude samples collected later than 48 hours for the external cohorts as well, it was not possible to take this approach because only the calendar day of sample collection was available. Since hourly data were not available, a 48-hour cut-off would have excluded most samples. Therefore, we relaxed the exclusion criterion to >72 hours. We have reorganized the methods and provided additional details which will clarify some of these points.
Please discuss whether your algorithm could then be used to target the right population, i.e. preterm birth, when the samples might not be collected within 48 or even 72 hours.

Response: We have reviewed data collected outside of the recommended time frame. In our first external validation study in Bangladesh (Murphy et al., eLife 2019), mean sample collection time was ~14 hours, as mothers were often discharged before the 24-hour time frame. If samples are collected too early, hemoglobin values still reflect maternal values, which decreases the accuracy of the algorithm. Ultimately, for the algorithm to be viable, it needs to be effective across a range of sample collection timings to accommodate early or late collection times. For accuracy of newborn screening, it is recommended that samples are collected between 24-48 hours, as heterogeneity begins to appear after 48 hours. Limiting the collection time for algorithm development reduces the variability in the data and improves the accuracy of the algorithm.
2. In untargeted metabolomics of newborn dried blood spots, it has been shown that the baby's age at sampling is a critical variable when one considers metabolic profiles, and only a few days difference has significant impact. Have you investigated the extent of impact of this variable on your targeted metabolic profiling? How do you intend to address this in your future research or when applying your algorithm? Have you considered integrating age at sampling as a variable in the algorithm (or as a stratification variable for the partitioning into subsets)? See also my comment regarding the discussion.

Response: This is an important consideration. We did incorporate time of sample collection in earlier exploratory models, and though it was a significant term retained in the model, the effect of time at collection appeared mostly to add noise/heterogeneity rather than to have a monotonic relationship with gestational age that improved model estimates.
3. Please discuss the limitation of applying the algorithm outside the age at sampling range on which it was developed (or mention/rephrase this limitation more specifically in the discussion: "one day later after birth than the samples used for model development" as no sample >48 hours was included during model development).

Response: We have made the suggested edit in the discussion as follows:
"Another source of bias was that samples in infants born preterm in China, and to a lesser extent the Philippines, were much more likely to be collected later than in term infants, and often after 48-72 hours. Samples collected beyond 48 hours were more heterogenous than samples collected within 48 hours but excluding these would have excluded an unacceptably large proportion of infants overall, especially preterm infants. Our compromise approach of relaxing the criteria to exclude samples that were collected later than 72 hours after birth, included more infants at the cost of increased heterogeneity, but even so, still excluded a disproportionate number of preterm infants in the China and Philippines. A combination of these factors likely contributed to the lower than expected preterm birth rates observed in China and in the Philippines validation cohorts, as well as leading to decreased apparent model performance due to more heterogeneous samples (collected 48-72 hours after birth) being included." 4. In untargeted metabolomics of newborn dried blood spots, another crucial covariate impacting metabolic profiling is month of birth (see Courraud et al. 2021), or so at least in Denmark. Being born in summer or winter is remarkably visible. Such effect might be or not be visible in various countries. Have you investigated this potential covariate in your targeted profiling and/or considered integrating it in the algorithm? ;46(1-2):133-8. doi: 10.1016/j.clinbiochem.2012.09.013

Methods:
1. Paragraphs 3-5. Please specify which analytical methods are used in each center included in the study. Is it mass spectrometry everywhere? Do they use a marketed kit or laboratory-developed tests? Consider giving more methodological details as supplementary material, as different platforms may not give the same analytical performance.
2. Paragraph 4. Please clarify why some infants get the expanded screening panel of 28 diseases and discuss the risk of selection bias when choosing these infants for the validation.

RESPONSE: Although the newborn screening initiatives are meant to be universal in these populations, some tests were paid for by the families. The expanded panel is now covered by National Health Insurance in the Philippines. As our study only included samples for which the full panel of analytes was available, this could have contributed to a selection bias towards more affluent families. We have added the following to the discussion:
"The preterm birth rate that we estimated in the current cohort (Philippines:4.6%, China: 4.8%) was less than previously estimated (Philippines: 15%, China: 7.1%) (Blencowe et al., 2012). Although newborn screening initiatives are meant to be universal in these populations, some tests are paid for by the families. Considering we only tested samples which the full panel was available this could have contributed to selection bias in our sample population towards more affluent families. 3.Paragraph 5. Please discuss the risk of selection bias when choosing to include only infants for whom tandem mass spectrometry data were available. Does this mean that all metabolites have been measured with this method? (Is this the method used to screen for phenylketonuria, congenital adrenal hyperplasia, hypothyroidism and Glucose-6-phosphate dehydrogenase deficiency?). Table 1 and in Figure 1 4.If the applicability of the algorithm is dependent on the family's income (being able to pay for extra screening), will it achieve its goal to reflect preterm birth globally, given that preterm birth is more frequent in families struggling economically? Please discuss.

Response: Universal access and coverage for newborn screening is improving around the world. Since the initiation of this project, newborn screening is now fully funded in the Philippines. The authors agree that if the algorithm is dependent on a family's ability to pay for screening, this could bias preterm birth rates and is not an ideal scenario for implementation of this approach. However, the approach we are evaluating is not intended for implementation in either the Philippines or China; this was an opportunistic use of external cohorts with large retrospective screening databases where we could test our models in different geographic settings with results from local screening labs. Our approach is ultimately targeted to LMICs in Africa and South Asia, based on priorities defined by the Gates Foundation, assuming the necessary panel of analytes would be collected for the purpose of GA estimation.
5. Paragraph 6 on GA assessment: for the Philippines, please indicate the proportion of infants for whom GA was assessed by ultrasound or using Ballard Scoring. A note on the precision of the Ballard scoring with a relevant reference would help the reader.

Response: Although we knew general practice patterns of gestational dating method used in the Philippines, we did not have individual-level data on what method was used in each individual pregnancy, hence we accepted this as an additional source of validation error which would likely have led to larger MAE/RMSE. This was presented as a limitation in the discussion and we have clarified the description in the methods. It now reads:
"In the Philippines cohort, mothers who delivered in private hospitals generally received gestational dating ultrasounds while other infants' GAs were generally measured using Ballard Scoring, however individual-level data identifying which GA measurement method was used was not available." We also added a statement in the discussion about the precision of the Ballard scoring: "Ballard tends to overestimate gestational age with a wide margin of error, particularly in preterm infants (Lee et al., 2016)." 6. On the same topic, you later state that "model performance was assessed by comparing the estimated GA from the model to the ultrasound-derived GA". So it is unclear whether or not the infants for whom GA was assessed using Ballard Scoring are included at all. Please clarify.

Response: Thank you for pointing this out. We added a qualifying statement to correct this in the methods. The sentence now reads as follows.
"For each infant, model performance was assessed by comparing the estimated GA from the model to the ultrasound-derived GA (or ultrasound or Ballard in the Philippines) and calculating validation metrics that reflect the precision of model estimates compared to reference GA values." Table 1: 7. Please indicate what "C0", "C2", etc. refer to precisely. It might be obvious for someone in the field, but not to many readers for whom C18 might just be a free fatty acid and not the acyl-carnitine. You could for instance provide a list in supplementary data with full names and PubChemIDs. It helps bridging with the untargeted metabolomics community who is also working on the topic.

Response: As this journal does not allow supplementary materials, the authors feel that providing the full names of all species would be more detail than is needed. We have added a reference to a previous paper which defines the list more exhaustively. We have also added subheadings which identify the 'C' species as acylcarnitines.
8. Please be more specific as to which metabolites are included in models 2 and 3. It is not clear, especially considering the "restricted models" in Table 4.
Response: We have added a new figure (Figure 1) to make it clearer which analytes are used in the restricted models.
9. Models including newborn screening analytes: How did you cope with the metabolites missing in the validation cohorts? In the result section, you mention "Philippines-restricted" models, etc., please introduce them in the method section. Are the equations the same, just removing the missing metabolites or did you "re-develop" the models? Or?

Response: The internal validation of models 1-3 was conducted on an independent test dataset of Ontario infants. Since not all of the predictors available for the Ontario dataset were also available for the external datasets, we tailored the models 2 and 3 to include the maximum number of available predictors in each of the external datasets (which we called 'restricted models'). The list of analytes available at each site is presented in Table 1. The tailored models were fit in the Ontario dataset, and validated in the external datasets. We have added a figure to clarify the different models tested. The Methods section now states:
"A total of 47 newborn screening analytes, as well as sex, birth weight and multiple birth status, were used in the original Ontario model development. GA at birth (in weeks) determined by first trimester gestational dating ultrasound was the dependent variable. Multiple birth status and a subset of screening analytes were not available in the external cohorts (Table 1). Three main models were developed and evaluated in the Ontario cohort ( Table 2). For models 2 and 3, we also developed restricted models including only the covariates available in each of the two external cohorts (Figure 1). Restricted models were trained on the Ontario datasets but deployed in the external cohorts." 10. Are your models "resistant" to missing values? (In the real world, there will be missing values.)

Response: In the current study the sample size was large enough to exclude any samples with missing values, and thus no imputation of missing values was done. In our external validation studies (Murphy et al., 2019; Bota AB et al., 2020; and Hawken et al., 2021), where sample size was limited, we used multiple imputation for missing analyte values. If these models were to be implemented in real-world settings, we would use the same methods. We have added these details to the methods section, and have now provided formulas (please see manuscript text for formulas):
"Model accuracy metrics were based on residual errors: the difference between modelestimated GA and reference GA. Although mean square error (MSE) is typically the loss function used in maximum-likelihood model fitting for continuous outcomes, it is not necessarily the best metric for assessing average agreement in model validation, as it is based on sum of squared differences, and hence is sensitive to large and small residuals. Therefore, the primary metric we have presented is the mean absolute error (MAE). MAE is the average of absolute values of residuals (values of the model estimate minus the reference GA) across all observations. MAE reflects the average deviation of the model estimate compared to the reference estimate, expressed in the same units as GA (weeks).
For completeness, as well as for comparability to other published validations, we also report the square root of the MSE (RMSE). Also known as the standard error of estimation, RMSE is also expressed in the same units as GA (weeks).
Lower values of both MAE and RMSE reflect more accurate model-estimated GA."
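The metrics described in the passage above, together with a bootstrap percentile confidence interval of the kind reported in the abstract, are straightforward to compute. The following is a minimal sketch; the toy GA values and the resampling settings are illustrative assumptions, not study data or the authors' actual code:

```python
import numpy as np

def validation_metrics(estimated_ga, reference_ga):
    """MAE and RMSE (in weeks) between model-estimated and reference GA."""
    residuals = np.asarray(estimated_ga) - np.asarray(reference_ga)
    mae = np.mean(np.abs(residuals))         # average absolute deviation
    rmse = np.sqrt(np.mean(residuals ** 2))  # penalizes large residuals more heavily
    return mae, rmse

def bootstrap_ci_mae(estimated_ga, reference_ga, n_boot=2000, seed=0):
    """95% bootstrap percentile confidence interval for the MAE."""
    est = np.asarray(estimated_ga)
    ref = np.asarray(reference_ga)
    rng = np.random.default_rng(seed)
    maes = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(est), size=len(est))  # resample infants with replacement
        maes.append(np.mean(np.abs(est[idx] - ref[idx])))
    return np.percentile(maes, [2.5, 97.5])

# Toy data (illustrative values only, in weeks)
est = [38.5, 40.1, 36.0, 39.2]
ref = [39.0, 40.0, 37.0, 39.0]
mae, rmse = validation_metrics(est, ref)  # mae = 0.45, rmse ≈ 0.57
ci_lo, ci_hi = bootstrap_ci_mae(est, ref)
```

The percentile interval simply takes the 2.5th and 97.5th percentiles of the MAEs across resampled datasets, which matches the "bootstrap percentile confidence intervals" the reviewer mentions being outside their expertise.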

Results and discussion:
9. Can you comment on the high percentage of SGA in the Filipino cohort? Could it be that the thresholds used (ref 14) are not applicable to this population? Could this also be why models generally perform better in the Filipino cohort for the SGA infants as compared to the Canadian and Chinese cohorts? (More power.)
10. Same comment for the estimated preterm birth rate in the Chinese cohort using model 2.

Response: We have corrected this in the manuscript, thank you.
11. Figure 1 (A) It would be more informative to describe models as follows: Model 1: sex + birth weight; Model 2: sex + analytes; Model 3: sex + birth weight + analytes. "analyte model" and "full model" are not very clear.

Discussion:
You write: "First, as observed in the differences in performance across the birth weight-only models developed in the three cohorts, the predictive utility of anthropomorphic measurements for estimating GA may vary across populations".
"birth weight-only models developed " Do you mean the unique model 1 (sex + birth weight) applied in the 3 cohorts? This sentence is confusing as it implies that there are several models that were developed, when I had understood that you developed one model 1 based on the Canadian infants and applied "the final equations" to the other cohorts no involved in the development. Please clarify. ○ 1.

Response: We developed 3 models in the Ontario cohort which were applied to the external cohorts. Model 1 was a multivariable regression model including sex, birthweight and their interaction. Model 2 was an ELASTIC NET regression model including sex, analytes and pairwise interactions among predictors, whereas model 3 was an ELASTIC NET regression model including sex, birth weight, analytes and pairwise interactions among predictors.
These were then applied to the external cohorts using analytes available in these settings. To clarify this, we have replaced all references to "birth weight only models" with referenced to Model 1 throughout the manuscript.
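The restricted-model workflow described in these responses (train on Ontario data using only the predictors shared with an external cohort, then deploy the fitted coefficients externally) can be sketched as follows. The synthetic data, the column layout, and the plain least-squares fit standing in for the ELASTIC NET estimator are all illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with an intercept column (illustrative stand-in
    for the penalized ELASTIC NET fit used in the actual models)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def predict(coef, X):
    """Apply a fitted model to new data."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return Xb @ coef

# Synthetic "Ontario" training data: columns = [sex, birth_weight_z, analyte_1, analyte_2]
rng = np.random.default_rng(0)
X_ontario = rng.normal(size=(200, 4))
y_ga = 39 + 0.8 * X_ontario[:, 1] + 0.5 * X_ontario[:, 2] + rng.normal(scale=0.3, size=200)

# Full model: all Ontario predictors.
full_model = fit_linear(X_ontario, y_ga)

# Restricted model: analyte_2 (column 3) is unavailable in the external cohort,
# so the model is re-fit on Ontario data using only the shared predictors ...
shared_cols = [0, 1, 2]
restricted_model = fit_linear(X_ontario[:, shared_cols], y_ga)

# ... and then deployed on external-cohort data that has only those predictors.
X_external = rng.normal(size=(50, 3))
ga_estimates = predict(restricted_model, X_external)
```

The key design point this illustrates is that the restricted equations are not the full equations with terms deleted; they are re-fit from scratch on the Ontario training data using the reduced predictor set, then applied unchanged in the external setting.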
2. Also, why do you think that the "predictive utility of anthropomorphic measurements for estimating GA may vary across populations"? It could be that anthropomorphic measurements are indeed too different between Canada and Asian populations, so the models developed with Canadian data are not performing in Chinese infants. But why question the utility of the measurement itself? (To make a comparison with, for instance, month of birth, one could argue that seasonal variation is relevant in some climates but not in others. I'm not sure why birthweight would be more or less relevant and I'm just curious as to whether you have a more specific hypothesis.)

RESPONSE: This is an important consideration. As our intent was to externally validate models derived in Ontario infants, we applied Model 1 with sex and birthweight coefficients derived in Ontario infants. Although birth weight was locally standardized within each of the three cohorts, it is possible that deriving local models in China and the Philippines and then applying them locally would yield better performance, and we did see evidence of this in previous exploratory modeling. However, our intent here was to deploy pre-trained models for estimating GA that do not require an existing database in each new country large enough to derive a robust country-specific model.
3. Sentence starting with "Previous validation of our models among": Please split this sentence as it is too long and difficult to know what you are referring to when you end with "differences were more pronounced" (between what? These different subgroups? More pronounced compared to?). When you write "inherent biological differences", do you mean both genetic and environmental? Please clarify.

Response: Thank you we have edited the sentence and it now reads:
"Previous validation of our models among infants born in Ontario to landed-immigrant mothers from eight different countries across Asia and North and Sub-Saharan Africa suggested that inherent biological differences may not be a significant contributor to newborn metabolic data and the performance of our algorithms (Hawken et al., 2017). However in an external validation of previously developed GA estimation models in a prospective cohort from South Asia (Murphy et al., 2019), the drop in performance was more pronounced, despite the centralized analysis of samples in the Ontario Newborn Screening lab."

Main comments:
There is no doubt that an accurate estimate of GA is key. However, the authors propose a postnatal estimation of GA, which does very little to advance and encourage determination of GA in early pregnancy. This is the recommendation of the WHO, the Brighton Collaboration GAIA definitions for prematurity and assessment of gestational age, the National Institute for Health and Care Excellence (NICE) Guideline for Routine Antenatal Care (2008), and the International Society of Ultrasound in Obstetrics and Gynaecology (ISUOG).

○ There is a clear gap identified in the accuracy of determining GA, especially in LMICs, and I strongly doubt this approach will help in better characterising the burden of vulnerable newborns or have the potential impact needed to achieve better estimates for population rates of preterm birth, low birth weight, small-for-gestational age, and combinations of these to identify other vulnerable newborn phenotypes.

○ Overall reporting of results warrants improvement; the authors have decided to report overall mean agreement between gold-standard GA and model-predicted GA, and yet there are clearly large differences in precision, and this differs according to GA (Figure 1). The authors should state explicitly what is meant by agreement within 7 days. For example, if on average the model estimate agrees within 7 days, this technically means ±7 days, and therefore for a given fetus, say a model GA estimate of 32 weeks + 0 days, the true GA would range between 31 weeks + 0 days and 33 weeks + 0 days, which is effectively 2 weeks. Following this, the best model estimate (model 3) on average will be accurate to within 10 days at best. The authors should show a plot of true GA vs. predicted GA, as this will evidently show the variability of the prediction as a function of GA, as opposed to the aggregated estimates presented by GA in Figure 1.
○ Across all models, great discrepancies and perhaps unacceptable discrepancies are observed for GA before 39 weeks. I am not convinced this approach offers any benefit/added value/utility compared to other methods in common use such as best obstetric methods for ascertaining GA.

○ The team used blood spot samples collected within 48 hrs of delivery; there is considerable extra effort, time, and cost involved in drawing blood spots and processing analytes. I do not see how this would be a feasible alternative, especially for LMICs where accurate estimation of GA is a key data gap. The merits of the proposed approach have to clearly outweigh the performance of other known methods for postnatal GA determination, such as the Ballard Score.
Statement: In the Ontario cohort, all screen-positive results were excluded from analysis, which had the effect of removing a large proportion of extreme outliers and atypical metabolic profiles.
Q: What is the meaning of screen positive? Does this mean that all children who had a metabolic disorder which might have abnormal values for some metabolites were removed? If so, was it done in the other two datasets and will the model then be not applicable to children who show abnormal values for the metabolites.

If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? No
Response: This exclusion biased the rate of preterm gestation observed in our Ontario study cohort downward, but it was unlikely to have had any important impact on GA model development, as we still had a large sample size across the full spectrum of gestational ages at birth to allow robust model development and performance evaluation. Further, the inclusion of samples collected later than 48 hours would introduce a large amount of heterogeneity in analyte levels, which had to be balanced against the impact of exclusions. We have added more details to the methods and discussion reflecting these considerations.

Reviewer:
The full model seems to include additional Hb ratios and analyses that are not routinely part of newborn screening; the implications of this need to be better discussed. In the results, the estimate obtained with the metabolic screen plus clinical variables routinely available, such as birth weight, needs to be the key primary estimate, and other estimates need to be provided as secondary exploratory results. The distinction as currently provided is blurred and confuses the reader.

Response:
Hemoglobin types (e.g., HbF and HbA, the fetal and adult hemoglobin types) are measured during routine NBS in Ontario and the Philippines, in the course of identifying mutant types associated with hemoglobinopathies. These are reported as "peak percentages" with respect to total hemoglobin. We have used the peak percentages for normal HbF, HbF1 and HbA to construct a ratio of fetal to adult hemoglobin by calculating (HbF+HbF1)/(HbF+HbF1+HbA), to measure the proportion of normal fetal hemoglobin relative to the proportion of total normal fetal + adult hemoglobin types. This is strongly predictive of gestational age, as the transition from fetal to adult hemoglobin occurs apace with fetal development. We have added these details to the Methods.
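The ratio defined in this response is a simple function of the three reported peak percentages. A minimal sketch follows; the peak-percentage values are illustrative assumptions, not patient data:

```python
def fetal_hb_ratio(hbf: float, hbf1: float, hba: float) -> float:
    """Proportion of normal fetal hemoglobin relative to total normal
    fetal + adult hemoglobin: (HbF + HbF1) / (HbF + HbF1 + HbA)."""
    fetal = hbf + hbf1
    return fetal / (fetal + hba)

# Illustrative peak percentages: a more preterm-like profile retains more HbF,
# so its ratio is higher.
ratio_term = fetal_hb_ratio(hbf=60.0, hbf1=10.0, hba=30.0)     # 0.70
ratio_preterm = fetal_hb_ratio(hbf=80.0, hbf1=10.0, hba=10.0)  # 0.90
```

Because the fetal-to-adult hemoglobin switch tracks fetal maturation, a higher ratio corresponds to a lower postnatal gestational age estimate in the models.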

Reviewer:
Other concerns: Reference GA assessment. Statement: "In the Philippines cohort, mothers who delivered in private hospitals generally received gestational dating ultrasounds while other infants' GAs were generally measured using Ballard Scoring." Q: This is a discrepancy, since the Ontario-based models were trained against USG-confirmed GA. Under the external validation where Philippines samples were used, how can both ultrasound and Ballard Scoring be used under the same bracket?

Response:
Although we knew general practice patterns of the gestational dating method used in the Philippines, we did not have individual-level data on what method was used in each individual pregnancy; hence we accepted this as an additional source of validation error which would lead to larger MAE/RMSE. This was presented as a limitation in the discussion and we have clarified the description in the methods. It now reads: 'In the Philippines cohort, mothers who delivered in private hospitals generally received gestational dating ultrasounds, while other infants' GAs were generally measured using Ballard Scoring; however, individual-level data identifying which GA measurement method was used were not available.'

Reviewer:
Internal validation of model performance in Ontario, Canada
Q: Was the internal validation performed with the previously developed models including 47 analytes, birth weight and sex, or with the restricted models? This needs clarification and discussion.

Response:
The internal validation of models 1-3 was conducted on an independent test dataset of Ontario infants using all 47 analytes, multiple gestation, birthweight and sex. Since not all of the predictors available for the Ontario dataset were also available for the external datasets (multiple gestation and a small subset of analytes were absent), we tailored the models to include the maximum number of available predictors in each of the external datasets (which we called 'restricted models'). The list of analytes available at each site is presented in Table 1. The tailored models were fit in the Ontario dataset, and validated in the Ontario test set and external datasets. We have added a figure (Figure 1) to clarify the different models tested. The Methods section now states: 'A total of 47 newborn screening analytes, as well as sex, birth weight and multiple birth status, were used in the original Ontario model development. GA at birth (in weeks) determined by first trimester gestational dating ultrasound was the dependent variable. A subset of screening analytes, as well as multiple gestation status were not available in the external cohorts (Table 1). Three main models were developed and evaluated in the Ontario cohort (Table 2). For models 2 and 3, we also developed restricted models including only the covariates available in each of the two external cohorts (Figure 1). Restricted models were trained on the Ontario datasets but deployed in the external cohorts.'

Reviewer:
Q: Restricted model definition. The proper definition of the restricted model is missing. Was a single restricted model built for both Manila and Shanghai, or were separate models made for Manila and Shanghai?

Response:
Restricted models were built separately for application in Manila and Shanghai, based on the availability of screening analytes/predictors in each setting. These models were trained on the Ontario data and then deployed in the external cohorts. We have updated the text, and the Methods now read: 'A total of 47 newborn screening analytes, as well as sex, birth weight and multiple birth status, were used in the original Ontario model development. GA at birth (in weeks) determined by first trimester gestational dating ultrasound was the dependent variable. A subset of screening analytes, as well as multiple gestation status, were not available in the external cohorts (Table 1). Three main models were developed and evaluated in the Ontario cohort (Table 2). Model 1 was developed excluding multiple gestation status, and for models 2 and 3, we also developed restricted models including only the covariates available in each of the two external cohorts (Figure 1). All of the restricted models were trained on the Ontario datasets and deployed in the external cohorts.'
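The train-at-home, deploy-abroad pattern described above can be sketched as follows. This is an illustration of the restricted-model idea only, assuming scikit-learn's ElasticNet; the column indices, sample data and hyperparameters are hypothetical, not the study's actual pipeline:

```python
# Illustrative restricted model: an elastic net trained on Ontario-style data
# using only the predictor columns assumed to be measured at an external site.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
ontario_X = rng.normal(size=(200, 5))                # 5 candidate predictors
ontario_y = 38 + ontario_X[:, 0] + rng.normal(scale=0.5, size=200)  # GA, weeks

# Suppose the external site measures only predictors 0, 1 and 3.
available = [0, 1, 3]
restricted = ElasticNet(alpha=0.1).fit(ontario_X[:, available], ontario_y)

# Deployment in the external cohort uses only the available columns.
external_X = rng.normal(size=(10, 5))
ga_estimates = restricted.predict(external_X[:, available])
```

The key point is that the restricted model never sees the unavailable predictors during training, so its coefficients are directly transportable to a site that cannot measure them.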

Reviewer:
Statistical methods Statement: In the Ontario cohort, all screen-positive results were excluded from analysis, which had the effect of removing a large proportion of extreme outliers and atypical metabolic profiles.
Q: What is the meaning of screen positive? Does this mean that all children who had a metabolic disorder which might have abnormal values for some metabolites were removed? If so, was it done in the other two datasets and will the model then be not applicable to children who show abnormal values for the metabolites.

Response:
Screen positive refers to infants who tested positive for a disorder in the screening panel; we have clarified this statement in the Methods. These infants were excluded from the Ontario population because they tend to have extreme outlying values for some analytes, which negatively impact model development. Additionally, we employed a strategy of winsorizing extreme values that lay more than three IQRs above the third quartile or three IQRs below the first quartile. Winsorizing replaces these extreme outliers with the upper or lower boundary value for the analyte, which preserves the extremeness but reduces the impact of the original value. The same winsorization algorithm was applied in the external cohorts. Screen positive data points were not explicitly removed from the Philippines and China datasets. The reviewer is correct that the model may not be as accurate for children with abnormal values; however, in the external settings where this algorithm is being deployed, it would not be known whether infants had a disorder at birth, so the model needs to be as robust as possible in estimating GA under these conditions. With the approach we took, the impact of abnormal/extreme values was attenuated by our data normalization strategy, which included both log transformation and winsorization of extreme outliers. Extreme outliers, whether from screen positive infants or other causes, were very rare, so they would affect only a small number of infants, and our strategy allowed us to produce a GA estimate in these infants that was robust to extreme values and less likely to be wildly inaccurate. We have clarified these details in the Methods.
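The three-IQR winsorization rule described above can be sketched as follows; the function name and sample values are illustrative, not the study's code:

```python
# Minimal sketch of the winsorization rule: values more than three IQRs above
# the third quartile, or below the first quartile, are replaced by those
# boundary values rather than dropped.
import numpy as np

def winsorize_3iqr(values: np.ndarray) -> np.ndarray:
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 3 * iqr, q3 + 3 * iqr
    return np.clip(values, lower, upper)

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
# Here q1=2, q3=4, IQR=2, so the bounds are [-4, 10] and 100 is clipped to 10.
clipped = winsorize_3iqr(x)
```

Clipping rather than deleting keeps every infant in the dataset with a usable analyte value, which matches the stated goal of still producing a GA estimate for infants with extreme results.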
Competing Interests: No competing interests were disclosed.