The role of artificial intelligence in sexual and reproductive health and rights: technical brief

Publication date: 2024

Background

Sexual and reproductive health and rights (SRHR) are fundamental to universal health coverage (UHC) and to ensuring that all individuals can access and receive quality health services and information, without discrimination or financial distress. Sexual and reproductive health (SRH) domains encompass: sexual health; reproductive cancers; sexually transmitted infections (STIs), including HIV; infertility; intimate partner violence and sexual violence; contraception and family planning; safe abortion; maternal and perinatal health; menopause; comprehensive sexuality education; and female genital mutilation (1, 2). Services and care for health issues within the SRH domains should be provided based on need and within a rights-based and life-course approach. As many of these issues are commonly perceived as sensitive or stigmatized, SRHR inherently embodies multifaceted dynamics and disparities rooted in sociocultural norms, political landscapes and access barriers (3, 4).

The convergence of digital innovations and the field of SRHR presents opportunities to improve access to and quality of services, but it also presents its share of challenges – especially with regard to ensuring that the use of these innovations is safe, rights-based, equitable and effective (4, 5). The use of digital tools has broadened access to SRHR services, such as fertility care and contraception, sexuality education and expression, safe abortion and maternal health care. Expanded penetration of mobile devices, including within low-resource contexts, has profoundly changed the way people access SRHR information and services. Digitalization has also led to the accumulation of vast amounts of information, through electronic medical records, exchanges on social media, data from wearable devices, and interactions on mobile health applications.
Combined with improved computer processing capabilities, these advances have paved the way for artificial intelligence (AI), which refers to the capability of algorithms integrated into systems and tools to learn from data so that they can perform automated tasks without explicit programming of every step by a human (6, 7). Within SRHR, as with other areas of health care, AI has emerged as a transformative force for health system efficiencies but has also introduced critical risks and rights-related considerations, including potential impacts on bodily autonomy and amplification of targeted disinformation, in a field already prone to the effects of ideologically driven narratives.

AI has the potential to accelerate the shift towards people-centred care and strengthen the quality of care, by facilitating people's agency in navigating health systems and bridging workforce gaps. Specifically within SRHR, individuals' desires for confidentiality and privacy when seeking SRHR information and services position digital tools and AI as critical conduits for expanding access (8–10). However, the amassing of SRHR data also raises concerns about how to protect individuals' privacy and prevent data breaches and exploitation that may endanger people's rights and safety. As such, the responsible and ethical use of AI in SRHR requires concerted efforts among stakeholders, including policy-makers, commercial actors, funding agencies, developers, health workers and civil society, to mitigate the rising risks while harnessing the potential of AI to address long-standing challenges in the field of SRHR.
Abbreviations

AI – artificial intelligence
IVF – in vitro fertilization
LLM – large language model
LMIC – low- and middle-income country
LMM – large multi-modal model
ML – machine learning
SaMD – software as a medical device
SRHR – sexual and reproductive health and rights
STI – sexually transmitted infection
UHC – universal health coverage

Objectives of this document

This technical brief provides an overview of the landscape surrounding the use of AI in SRHR, and highlights the related risks, implications and policy considerations. Considering the rapidly evolving nature of AI, this brief seeks to provide clarity in understanding how AI is being applied in SRHR and to flag key issues for ensuring that AI is used effectively, inclusively, sustainably and with due consideration for human rights. This document targets implementers, policy-makers, technology developers, funding agencies and researchers working at the intersection of AI and SRHR, and aims to facilitate joint understanding among these stakeholders.

Methodology

This technical brief was developed based on consultations with external experts and findings from a scoping review. The World Health Organization (WHO) convened an in-person meeting to frame the topic areas and prioritize the key issues to be explored through further review of the literature. In addition, WHO conducted a scoping review, based on a published protocol (11), to provide an overview of the ways AI is being used in SRHR. The scoping review retrieved over 12 000 articles, of which 3670 were included for full-text review after their abstracts were independently screened by two individuals. Peer-reviewed articles that were selected after abstract screening (11) were tagged based on the SRH domain covered (e.g. infertility and fertility care; reproductive cancers; antenatal, intrapartum and postnatal care) and the AI function (e.g. health promotion and education, screening and diagnosis). Of the 3670 screened articles, 1500 were randomly selected to be mapped onto Fig.
1. A follow-up consultation was convened online with the external experts to achieve consensus on the categories. All contributors reviewed the full draft of this document and provided feedback, which was incorporated into this final publication.

Artificial intelligence in health

AI encompasses a broad range of technologies that apply algorithms to data. Machine learning (ML) is a category of AI that uses statistical and mathematical modelling methods to define and analyse data; the learned patterns are then applied to perform or guide certain tasks and make predictions (7, 12, 13). ML models used in public health may be characterized by how their predictive capabilities function: whether they predict outcomes and probabilities within predefined boundaries or generate new content. These two types of models are further described below.

Discriminative AI: models that analyse relationships between variables to make predictions, such as a particular risk for a health condition or outcome (14). Classification is an example of a discriminative approach, in which a model learns patterns and assigns data to predefined categories, such as whether a condition is normal or abnormal, positive or negative, or unknown.

Generative AI: models in which algorithms are trained on data sets to create new content, such as text, images and videos (13). Generative AI models have been a catalyst for large language models (LLMs), which generate text-based responses and can be embedded within conversational agents, colloquially known as "chatbots" (13).1 More recently, generative AI is also being applied to large multi-modal models (LMMs), which can process various types of data sets, including biosensor, audio and image data, to generate outputs in different formats (13).

1 There may also be AI-powered conversational agents that are not generative AI.
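To make the distinction concrete, the sketch below implements a tiny discriminative model: a logistic regression classifier, trained by gradient descent on entirely synthetic, invented data, that assigns a single measurement to one of two predefined categories ("normal" or "abnormal"). It is purely illustrative, not a clinical tool; real screening models are trained and validated on curated clinical data sets.

```python
# Minimal sketch of a discriminative classifier: logistic regression trained
# with batch gradient descent on synthetic data. Illustrative only.
import math
import random

random.seed(42)

# Synthetic 1-D "marker": abnormal cases tend to have higher values.
samples = [(random.gauss(1.0, 1.0), 0) for _ in range(200)]   # label 0 = normal
samples += [(random.gauss(3.0, 1.0), 1) for _ in range(200)]  # label 1 = abnormal

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.5
for _ in range(300):  # gradient descent on the average log-loss
    grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in samples) / len(samples)
    grad_b = sum((sigmoid(w * x + b) - y) for x, y in samples) / len(samples)
    w -= lr * grad_w
    b -= lr * grad_b

def classify(x):
    """Assign a measurement to one of two predefined categories."""
    return "abnormal" if sigmoid(w * x + b) > 0.5 else "normal"

accuracy = sum(classify(x) == ("abnormal" if y else "normal")
               for x, y in samples) / len(samples)
print(f"training accuracy: {accuracy:.2f}")
```

Because the two synthetic distributions overlap, the learned boundary is probabilistic rather than perfect, which is exactly the property that makes erroneous predictions a risk when such models are applied clinically.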
Generative AI has also led to the development of general-purpose foundational models (13), such as LLMs like ChatGPT, which have broad applicability (15) and can potentially be adapted for SRHR-related contexts and uses. This contrasts with prior ML approaches, in which models were designed for predefined use cases, such as prediction of a specific condition. While there is a risk that discriminative AI models could provide erroneous predictions and misdiagnoses, the ability of generative AI models to create new information and content has implications in terms of boundaries and applicability for different purposes. Furthermore, AI may be used within software that provides diagnostic and medical functions without being part of a hardware medical device; such software is known as "software as a medical device" (SaMD) (16).

Artificial intelligence in sexual and reproductive health and rights

The use of AI in SRHR includes applying AI to: facilitate access to health information, education and promotion; support screening and triage of health conditions; tailor treatment and care regimens; monitor personal and population health; assist in health system management needs; and accelerate clinical research and drug discovery. Furthermore, the use of AI in SRHR can involve a range of stakeholders, including health service users (individuals), health workers, health system managers and researchers. It should be noted that these are illustrative examples that have been mentioned in peer-reviewed literature (11), and this brief does not provide any evaluation of their benefits and harms.

Health information, education and promotion

AI models can be leveraged to develop interventions for health education and to promote health behaviours. For example, virtual conversational agents or chatbots can provide information in what may be perceived as a more anonymous and non-judgemental manner compared with personal interactions.
These tools, often used directly by individuals, are gaining traction for their potential to overcome access barriers in traditionally stigmatized and sensitive areas of health care, such as sexual health, contraception and STIs (9, 17, 18).

Screening and diagnosis

By analysing extensive amounts of health data from sources such as electronic medical records, medical images, laboratory test results and free-text clinical notes, AI can identify trends, patterns and risk factors. This may include analysing imaging data to support the detection of abnormalities or lesions, such as cervical pre-cancer lesions (19, 20), or using the predictive capabilities of AI to identify pregnant women who may be at risk of particular adverse outcomes, such as postpartum haemorrhage, pre-eclampsia, gestational diabetes or preterm labour (21, 22). This AI function may also be used for triage of patients, as part of efforts to target interventions and reduce waiting times or volume load (23, 24). AI algorithms may also be embedded in ultrasonography and other medical devices, for example to identify fetal distress (25–27). Furthermore, this use of AI can facilitate task sharing in settings where access to specialized health workers, such as radiologists, may be limited (28).

Treatment and care management

The predictive capabilities of AI may also serve as an adjunct to tailor treatment regimens. For example, AI algorithms are being used to optimize antiretroviral therapy dosing options to guide clinical care and minimize side-effects for people living with HIV (29). Within fertility care, AI algorithms are being used to personalize in vitro fertilization (IVF) based on a couple's specific characteristics, to increase the chances of successful conception (30–32). This may involve use of AI to improve the selection of sperm cells, oocytes and embryos, and to generate predictive models for the outcome of IVF (30, 33).
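As a rough, hedged illustration of how such outcome-prediction adjuncts work, the sketch below estimates a binary outcome probability with a k-nearest-neighbours vote over synthetic "historical" records. The two features, the data and the thresholds are all invented for illustration; real IVF and treatment models draw on far richer, clinically validated inputs.

```python
# Hedged sketch: k-nearest-neighbours outcome estimation on synthetic records.
# Features and outcomes are fabricated; not a clinical prediction model.
import math
import random

random.seed(7)

def make_record():
    # Two made-up features scaled to [0, 1]; the synthetic "favourable
    # outcome" (1) is more likely when both feature values are low.
    a, m = random.uniform(0, 1), random.uniform(0, 1)
    outcome = 1 if (a + m + random.gauss(0, 0.3)) < 1.0 else 0
    return ((a, m), outcome)

history = [make_record() for _ in range(500)]

def predict(features, k=15):
    """Estimate outcome probability as the vote share among the k most
    similar historical records (Euclidean distance in feature space)."""
    nearest = sorted(history, key=lambda rec: math.dist(features, rec[0]))[:k]
    return sum(outcome for _, outcome in nearest) / k

print(f"low-risk profile:  {predict((0.2, 0.2)):.2f}")
print(f"high-risk profile: {predict((0.9, 0.9)):.2f}")
```

The design point worth noting is that the model memorizes its training records: its predictions can only be as representative as the historical data, which is why biased or narrow data sets (discussed later in this brief) translate directly into skewed advice.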
Other examples include triangulating different sources of data to predict whether insulin treatment will be needed for pregnant women and individuals with gestational diabetes, and guiding clinicians through AI-enabled conversational agents that assist with managing health issues such as ovarian disease (34, 35).

Personal health monitoring

Machine learning approaches can be used to analyse and interpret health-related data collected from individuals – for example, through wearable and mobile devices – to support preventive care and self-monitoring. This may include personalized fertility trackers that use AI to predict ovulation and fertile windows (36), and AI-based tools for monitoring perimenopausal symptoms and providing personalized recommendations for symptom management (37).

Understanding health trends

AI can be used to analyse data on a large scale to monitor public health trends and converging issues. This can include identifying associations across various data sets – for example, linking environmental exposure data and population health (38) – analysing comments about STIs on social media to identify areas in need of targeted interventions (39, 40), and using AI to analyse global health data to discern trends in contraceptive use and effectiveness (41).

Health system management

The predictive modelling functions of AI can enable forecasting of needs and can assist with targeting interventions for strategic planning and policy development. Examples include predicting patient inflow (42), forecasting commodity stock-outs to optimize inventory management for medicines and other supplies (43), and using data analytics to evaluate the effectiveness of different SRHR interventions and policies (44, 45).

Clinical research and drug discovery

AI can assist researchers and clinicians in analysing complex data sets to accelerate clinical research and drug discovery.
Examples include using AI algorithms to model molecular and genomic data to predict outcomes related to new therapeutics and drug resistance, particularly for HIV (46, 47), and using machine learning models to identify genetic markers associated with reproductive health conditions (48). AI is also being employed in research and development for treatments related to pregnancy and infertility, among other conditions (49–51).

Figure 1 provides a high-level overview of the general patterns identified through the scoping review (11). The resulting patterns show that AI is predominantly used in the following SRHR areas: reproductive cancers (primarily cervical cancer); antenatal, intrapartum and postnatal care; infertility and fertility care; and STIs, including HIV. Broader uses of AI with relevance to SRHR that are not reflected in Fig. 1 include knowledge management to curate evidence and to support medical education (13).

Fig. 1. Patterns of use of artificial intelligence (AI) across sexual and reproductive health (SRH) domains

[Figure: matrix of SRH domains (comprehensive sexuality education; sexual health; STIs, including HIV; reproductive cancers; comprehensive abortion care; intimate partner and sexual violence; infertility and fertility care; contraception and family planning; antenatal, intrapartum and postnatal care; menopause; general/multipurpose SRH) against AI purposes (health information, education and promotion; screening and diagnostics; treatment and care management; personal health monitoring; understanding health trends; health systems management; clinical research and drug discovery), shaded by number of studies: 0, 1–100, 101–200, 201+.]

a SRH domains are based on the UHC Compendium of interventions. Female genital mutilation is an SRH domain in the UHC Compendium, but no studies were found for this area (1).
b Focus on nonspecific and multiple SRH domains (e.g. AI for gynaecological issues) without specifying the areas.
Source: adapted from WHO, 2021 (1) and WHO, 2017 (2).

Risks and implications

The responsible use of AI brings with it a set of ethical, legal and human rights implications relating to data governance, transparency and explainability, inclusiveness and equity, and responsibility and accountability, as detailed in WHO's guidance on Ethics and governance of artificial intelligence for health (7). While these issues are applicable across all areas of health care, the norms and power relations that frame decision-making within the field of SRHR accentuate some of these considerations. AI systems and tools are not inherently malicious, but the risks and the ways in which AI is applied to SRHR are largely shaped by the underlying policies and other characteristics of the environment (4). For example, in contexts where there are differing views on aspects of SRHR, such as access to contraception, abortion or sexual health, AI may serve as a tool to limit access to services and information, or potentially be misused or manipulated for persecution. This section provides examples of some of the nuanced risks and challenges that may be amplified by the use of AI within SRHR.

Data governance and bodily autonomy

Through personal health monitoring, such as fertility monitoring, and the use of AI tools for health information, education and promotion, SRHR data are increasingly being generated outside of medical settings.
As such, these types of health-related data may not be accorded the regulatory protections of traditional medical data, such as those under the Health Insurance Portability and Accountability Act of 1996 in the United States of America (52). This can infringe on individuals' rights to bodily autonomy – for example, if personal data, such as menstrual health information, are collected and shared with third parties for targeted marketing and, in some cases, used to track women who are seeking or may have had an abortion (53–55). The use of AI in managing SRHR decisions can limit individuals' freedom to make informed choices, as they may not be aware of how their data could be used and the consequences of the data being shared. This can not only affect their rights relating to health information and services, but also contribute to harmful narratives and practices related to sexual and gender-based violence (54). There is, consequently, a risk that AI systems and tools could be used to impede rather than facilitate access to SRHR, especially for people in vulnerable situations or experiencing discrimination, by not adequately respecting individual rights and agency in handling health data (4, 5, 53–55).

Risk of data breaches

To develop and test AI-assisted health-related tools, AI systems rely on access to extensive amounts of data, including personal health information. The handling of SRHR data always requires a high level of attention to privacy and data security, since breaches in this context can have profoundly damaging consequences for individuals, particularly for young women, sex workers, gender-diverse people and members of other communities that may be subject to widespread stigma and discrimination (4, 5). Unauthorized access to personal health information is a general risk, which is compounded by the sensitivity of data on sexual and reproductive health.
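A related exposure is re-identification: records that look harmless in isolation can become identifying when combined. The toy sketch below joins an "anonymized" clinical extract to a second data set on shared quasi-identifiers (an invented postcode and birth year) to restore names. All names, postcodes and visit types are fabricated; the point is only that removing direct identifiers does not by itself protect privacy.

```python
# Toy illustration of re-identification by linking quasi-identifiers
# across two data sets. All records are fabricated.

# "Anonymized" clinical extract: names removed, but quasi-identifiers kept.
clinical = [
    {"postcode": "AB1", "birth_year": 1990, "visit": "STI screening"},
    {"postcode": "CD2", "birth_year": 1985, "visit": "antenatal care"},
]

# A second (public or commercial) data set holding the same quasi-identifiers.
registry = [
    {"name": "Alice", "postcode": "AB1", "birth_year": 1990},
    {"name": "Bea",   "postcode": "CD2", "birth_year": 1985},
    {"name": "Cara",  "postcode": "CD2", "birth_year": 1991},
]

def link(clinical, registry):
    """Join the two data sets on (postcode, birth_year)."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in registry}
    return [
        dict(row, name=index[(row["postcode"], row["birth_year"])])
        for row in clinical
        if (row["postcode"], row["birth_year"]) in index
    ]

for row in link(clinical, registry):
    print(row["name"], "->", row["visit"])
```

With richer auxiliary data and AI-driven matching, the same linkage logic scales to triangulation across many sources, which is why quasi-identifiers in SRHR data sets warrant the same protection as direct identifiers.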
In situations where legal protections of SRHR are limited, AI applications that process data on topics such as abortion and sexual health present a risk of data breaches, which can cause emotional distress and financial damage, and endanger individuals' safety. For example, research studies have used machine learning methods to predict missed abortions among individuals undergoing IVF (56) or to predict sexual orientation based on images of faces (57) – uses that could potentially put individuals at risk. In environments where SRHR issues are contentious, personal data may be used for persecution or to prevent access to health care and other services. Further, even if identity is anonymized within a data set, there is a risk that AI systems could identify individuals through triangulation of sources of information.

Misinformation and targeted disinformation

A major challenge in the use of AI for SRHR is potential misinformation, because AI models are trained on large data sets from the internet and social media platforms, which may have poor data quality and a range of biases (13). AI "hallucinations", which are prevalent in many AI systems, occur when generative AI systems and tools present false information that is indistinguishable from accurate information (13). For example, an AI conversational agent trained on unverified data sets might spread myths about contraceptive methods and abortion, such as misconceptions regarding links to infertility or suitability for specific demographics (58). There may also be risks of manipulation and targeted disinformation – for example, information intentionally designed to discourage access to SRH services, such as safe abortion.

Data limitations and bias in AI

AI systems in SRHR frequently encounter issues with biased data sets, which may lead to poor accuracy and further disadvantage individuals from underrepresented communities.
This bias arises from a lack of data covering a diverse spectrum of needs and populations. For example, there is a paucity of data on normal sperm parameters from low- and middle-income countries (LMICs), which means that AI trained on such a data set can produce skewed output as a basis for informing IVF procedures, leading to systematic biases being reproduced (59). Consequently, health care advice and predictions generated by these AI systems may fail to meet SRHR needs across different populations.

Digital divide

The digital divide significantly affects the equity and inclusivity of AI applications within SRHR, particularly impacting individuals in vulnerable situations. Limited access to technology or connectivity due to gender, geography or cost barriers remains a major issue, especially in LMICs, with reports demonstrating inequality in mobile ownership and internet use, with least access among women living in rural areas (60). Furthermore, settings in LMICs also face a scarcity of AI-compatible data formats, limited processing power and other technological constraints that prevent people from making use of AI systems and tools. In settings with limited availability and accessibility of essential hardware and software resources, the development of specialized or even general AI models is particularly challenging. In addition to the significant costs of operating and using these systems and tools, the expertise and infrastructure required to operate them effectively in such environments is also often lacking, especially where data systems are still maturing (13). These disparities in access to essential resources and expertise compound the challenges faced in low-resource settings when it comes to applying AI to assist with local needs, such as SRHR.
Despite these challenges, the evolution of voice-based and multimedia AI systems and tools has the potential to mitigate these issues and assist people with differing levels of digital literacy.

Context and cultural awareness

The foundations of text- or speech-based AI technologies, such as LLMs, rely heavily on language, which requires a localized or culturally nuanced approach in the field of SRHR. For example, decisions related to contraception can be influenced by personal beliefs, psychology, culture and socioeconomic factors. As most AI systems and tools are trained on a small set of well-resourced languages, the limited representation of other languages, especially from LMIC settings, can create further challenges of cross-lingual transfer and manual translation, as well as misalignment with local context and limited user engagement (61). For example, an AI system using clinical language to provide advice on contraception may be seen as inappropriate or offensive in cultures where these topics are discussed in a more nuanced or indirect manner (10).

Policy and operational considerations

Building on the cross-cutting recommendations detailed in WHO's guidance on Ethics and governance of artificial intelligence for health (7), the following policy and operational considerations seek to mitigate the key risks and implications of AI use within SRHR.

Revisit data protection regulations and redress mechanisms

With the amassing and commodification of data for AI use, data protection laws need to be strengthened to prevent and manage potential digital breaches. This may include clear data use limitations, covering data sharing and repurposing, robust data privacy protections, and opt-out mechanisms (53). Furthermore, transparent redress mechanisms are needed, including notification of those affected by data breaches, investigation of any breaches, and enforcement of these mechanisms.
Although data-related regulations for AI are still emerging across different sectors, the use of AI for SRHR should ensure that there are appropriate informed consent processes and other mechanisms allowing individuals to have proactive control over the collection and use of their SRHR data. Furthermore, stakeholders entrusted with the storage and/or processing of health data have a duty of care, under which clear limitations on the use and sharing of SRHR data are needed to uphold the "do no harm" principle.

Fight misinformation and targeted disinformation

Implement community-led and open-source fact-checking programmes and transparent interfaces clarifying AI-generated recommendations. Collaborate with local health workers and community leaders to ensure dissemination of accurate information that represents and supports people from diverse cultural backgrounds, including those in marginalized or vulnerable situations. This may also include developing resources and certification standards that verify that a particular chatbot or conversational agent is properly fact-checked.

Promote inclusivity and data diversity

Promote the diversification of data sets that are used to train AI algorithms for applications relevant to SRHR, ensuring representation from various socioeconomic, educational and cultural backgrounds. Build country-level and local capacity in developing data sets, case repositories and language resources to support contextualization of SRHR needs.

Establish collaborative oversight mechanisms

Engage local and international regulatory bodies and community representatives to align AI systems with ethical guidance and global health strategies, and to ensure transparency and accuracy of AI models. This may entail the use of mechanisms such as "human-in-the-loop" review to ensure active detection of potential biases and inaccuracies (62).
Furthermore, it can include strengthening considerations of human rights and other SRHR-specific risks (63, 64) within broader risk-management measures applied to AI systems and tools (65). AI systems and tools classified as SaMD are subject to regulatory approval for medical products; however, there is also a need to develop adaptable oversight pathways to monitor AI applications that are integrated into user-accessible products, particularly as individuals may increasingly use these types of tools to access SRH information and services.

Acknowledgements

The World Health Organization (WHO) is grateful to the individuals across different organizations who contributed to the development of this technical brief. The development of the brief was coordinated by (in alphabetical order): Shada Alsalamah (WHO Department of Digital Health and Innovation [DHI]), Kanika Kalra (DHI), Jose Eduardo Diaz Mendoza (DHI), Sameer Pujari (DHI), Lale Say (WHO Department of Sexual and Reproductive Health and Research [SRH]), Denise Schalet (DHI), Tigest Tamrat (SRH) and Yu Zhao (DHI), under the guidance of Pascale Allotey (SRH) and Alain Labrique (DHI). Agata Ferretti (Consultant, SRH) developed the initial draft of the technical brief.

WHO gratefully acknowledges the external experts who contributed to the development of this technical brief (in alphabetical order): Smisha Agarwal (Center for Global Digital Health Innovations, United States of America [USA]), Olasupo Ajayi (Queen's University, Canada), Mary Akinyemi (University of Lagos, Nigeria), Williams Bukret (Independent advisor, Argentina), Cintia Cejas (Instituto de Efectividad Clínica y Sanitaria, Argentina), Meg Davis (University of Warwick, United Kingdom of Great Britain and Northern Ireland), Rebecca Distiller (Patrick J.
McGovern Foundation, USA), Asha George (University of the Western Cape, South Africa), Dean Ho (National University of Singapore, Singapore), Rubayat Khan (Endless Network, USA), Anja Kova (Independent researcher, Mauritius), Claudia Lopes (United Nations University, Malaysia), Allan Maleche (Kenya Legal & Ethical Issues Network on HIV and AIDS, Kenya), Rohit Malpani (Independent consultant, France), Roli Mathur (Indian Council of Medical Research, India), Ayat Mohammed (Ain Shams University, Egypt), Rachida Parks (Quinnipiac University, USA), Rosalind Parkes-Ratanshi (Makerere University, Uganda), Sathyanath Rajasekharan (Jacaranda Health, Kenya), Zainab Garba Sani (Stanford University, USA), Sarah Simms (Privacy International, United Kingdom), Chaitali Sinha (International Development Research Centre, Canada), Lonneke van der Plas (Idiap, Switzerland), Effy Vayena (ETH Zurich, Switzerland), Sten Vermund (Yale University, USA) and Shan Xu (China Academy of Information and Communications Technology, China).
The following WHO staff and consultants contributed to this technical brief (in alphabetical order): Karthik Adapa (Regional Office for South-East Asia), Adeniyi Aderoba (Regional Office for Africa), Keyrellous Adip (Regional Office for Europe), Siyam Amani (Regional Office for South-East Asia), Maria Barreix (SRH), Ana Pilar Betran (SRH), Marcelo D'Agostino (Regional Office for the Americas), Ryan Dos Santos (Regional Office for Europe), Mengjuan Duan (Regional Office for the Western Pacific), Mekdes Feyssa (SRH), Karima Gholbzouri (Regional Office for the Eastern Mediterranean), Rodolfo Gomez (Latin American Center of Perinatology), Asantesana Kamuyango (SRH), Abdoulaye Konate (Regional Office for Africa), Oleg Kuzmenko (Regional Office for Europe), Carl Massonneau (SRH), Gitau Mburu (SRH), Sara Mengistu (SRH), Rosemary Muliokela (SRH), Manjulaa Narasimhan (SRH), Åsa Nihlén (SRH), David Novillo (Regional Office for Europe), Mohammed Nour (Regional Office for Europe), Jennifer Rasanthan (SRH), Andres Reis (Department of Research for Health), Esther Soko (Regional Office for Africa) and Meera Upadhyay (Regional Office for South-East Asia).

Declarations of interests

All of the external experts involved in this initiative completed and submitted a WHO Declaration of Interests (DOI) form. The review of these DOI forms identified two individuals as having potential conflicts of interest. The interests and their management by the WHO Department of Sexual and Reproductive Health and Research are outlined below. Dr Williams Bukret was the developer and owner of the Maternia App, which focuses on maternal risk evaluation and management. Mr Sathyanath Rajasekharan is a salaried employee of a non-profit organization that develops AI-based digital health tools focused on SRHR information.
The conclusion was that, given that no active research, implementation or fundraising related to Dr Bukret was declared or otherwise identified by WHO, the declared potential conflict of interest was judged not to present an actual conflict with respect to the content of the consultations. With regard to Mr Rajasekharan, he was not part of the initial group of external experts who prioritized the issues and was consulted on technical details following the drafting of the document.

References

1. Sexual and reproductive health interventions in the WHO UHC compendium. Geneva: World Health Organization; 2021.
2. World Health Organization, UNDP-UNFPA-UNICEF-WHO-World Bank Special Programme of Research, Development and Research Training in Human Reproduction. Sexual health and its linkages to reproductive health: an operational approach. Geneva: World Health Organization; 2017 (handle/10665/258738).
3. Coates A, Allotey P. Global health, sexual and reproductive health and rights, and gender: square pegs, round holes. BMJ Glob Health. 2023;8(1):e011710 (bmjgh-2023-011710).
4. Khosla R, Mishra V, Singh S. Sexual and reproductive health and rights and bodily autonomy in a digital world. Sex Reprod Health Matters. 2023;31(4) (69003).
5. Special Rapporteur on the right of everyone to the enjoyment of the highest attainable standard of physical and mental health. Digital innovation, technologies and the right to health. Thematic reports. Geneva: The Office of the United Nations High Commissioner for Human Rights; 2023 (A/HRC/53/65).
6. Russell SJ. Artificial intelligence: a modern approach, third edition. Upper Saddle River (NJ): Prentice Hall; 2010.
7. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021.
8. Nadarzynski T, Bayley J, Llewellyn C, Kidsley S, Graham CA.
Acceptability of artificial intelligence (AI)-enabled chatbots, video consultations and live webchats as online platforms for sexual health advice. BMJ Sex Reprod Health. 2020;46(3):210–7.
9. Wang H, Gupta S, Singhal A, Muttreja P, Singh S, Sharma P, Piterova A. An artificial intelligence chatbot for young people's sexual and reproductive health in India (SnehAI): instrumental case study. J Med Internet Res. 2022;24(1):e29969 (https://doi.org/10.2196/29969).
10. Mills R, Mangone ER, Lesh N, Mohan D, Baraitser P. Chatbots to improve sexual and reproductive health: realist synthesis. J Med Internet Res. 2023;25:e46761.
11. Tamrat T, Zhao Y, Schalet D, Al Salamah S, Pujari S, Say L. Artificial intelligence and sexual and reproductive health and rights: protocol for scoping review. JMIR Res Protoc. 2024 (in press).
12. Sidey-Gibbons JAM, Sidey-Gibbons CJ. Machine learning in medicine: a practical introduction. BMC Med Res Methodol. 2019;19(1):64.
13. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models. Geneva: World Health Organization; 2024.
14. What is an AI model? IBM; undated (accessed 26 February 2024).
15. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6 (https://doi.org/10.3389/frai.2023.1169595).
16. IMDRF SaMD Working Group. Software as a medical device (SaMD): key definitions. International Medical Device Regulators Forum; 2013 (imdrf-tech-131209-samd-key-definitions-140901.pdf).
17. Nadarzynski T, Lunt A, Knights N, Bayley J, Llewellyn C. "But can chatbots understand sex?" Attitudes towards artificial intelligence chatbots amongst sexual and reproductive health professionals: an exploratory mixed-methods study. Int J STD AIDS. 2023;34(11):809–16 (https://doi.org/10.1177/09564624231180777).
18. Young SD, Crowley JS, Vermund SH.
Artificial intelligence and sexual health in the USA. Lancet Digital Health. 2021;3(8):e467–8.
19. Hou X, Shen G, Zhou L, Li Y, Wang T, Ma X. Artificial intelligence in cervical cancer screening and diagnosis. Front Oncol. 2022;12.
20. Fang M, Lei X, Liao B, Wu FX. A deep neural network for cervical cell classification based on cytology images. IEEE Access. 2022;10:130968–80.
21. Akazawa M, Hashimoto K. Prediction of preterm birth using artificial intelligence: a systematic review. J Obstet Gynaecol. 2022;42(6):1662–8.
22. Schmidt LJ, Rieger O, Neznansky M, Hackelöer M, Dröge LA, Henrich W et al. A machine-learning-based algorithm improves prediction of preeclampsia-associated adverse outcomes. Am J Obstet Gynecol. 2022;227(1):77.e1–77.e30 (https://doi.org/10.1016/j.ajog.2022.01.026).
23. Hoodbhoy Z, Noman M, Shafique A, Nasim A, Chowdhury D, Hasan B. Use of machine learning algorithms for prediction of fetal risk using cardiotocographic data. Int J Appl Basic Med Res. 2019;9(4).
24. Du S, Jiang X, Guo A, Zuo K, Zhang T. Clinical application of early warning scoring based on BiLSTM-attention in emergency obstetric preexamination and triage. J Healthc Eng. 2022;2022:6274230. Retraction in: J Healthc Eng. 2023;2023:9763743.
25. Ravikumar S, Kannan E. Machine learning techniques for identifying fetal risk during pregnancy. Int J Image Graphics. 2022;22(05):2250045.
26. Warrick PA, Hamilton EF, Kearney RE, Precup D. A machine learning approach to the detection of fetal hypoxia during labor and delivery. AI Mag. 2012;33(2):79.
27. Ponsiglione AM, Cosentino C, Cesarelli G, Amato F, Romano M. A comprehensive review of techniques for processing and analyzing fetal heart rate signals. Sensors. 2021;21(18):6136 (https://doi.org/10.3390/s21186136).
28. Gomes RG, Vwalika B, Lee C, Willis A, Sieniek M, Price JT et al.
A mobile-optimized artificial intelligence system for gestational age and fetal malpresentation assessment. Commun Med. 2022;2(1):128.
29. Shen Y, Liu T, Chen J, Li X, Liu L, Shen J et al. Harnessing artificial intelligence to optimize long-term maintenance dosing for antiretroviral-naive adults with HIV-1 infection. Adv Therapeutics. 2020;3(4):1900114.
30. Arsalan M, Haider A, Choi J, Park KR. Detecting blastocyst components by artificial intelligence for human embryological analysis to improve success rate of in vitro fertilization. J Pers Med. 2022;12(2):124.
31. Letterie G, Mac Donald A. Artificial intelligence in in vitro fertilization: a computer decision support system for day-to-day management of ovarian stimulation during in vitro fertilization. Fertil Steril. 2020;114(5):1026–31.
32. Wang R, Pan W, Yu L, Zhang X, Pan W, Hu C et al. AI-based optimal treatment strategy selection for female infertility for first and subsequent IVF-ET cycles. J Med Syst. 2023;47(1):87 (https://doi.org/10.1007/s10916-023-01967-8).
33. Zaninovic N, Rosenwaks Z. Artificial intelligence in human in vitro fertilization and embryology. Fertil Steril. 2020;114(5):914–20.
34. Eleftheriades M, Chatzakis C, Papachatzopoulou E, Papadopoulos V, Lambrinoudaki I, Dinas K et al. Prediction of insulin treatment in women with gestational diabetes mellitus. Nutr Diabetes. 2021;11(1):30.
35. Sütcüoğlu BM, Güler M. Appropriateness of premature ovarian insufficiency recommendations provided by ChatGPT. Menopause. 2023;30(10):1033–7.
36. Welch H. Algorithmically monitoring menstruation, ovulation, and pregnancy by use of period and fertility tracking apps. J Res Gender Stud. 2021;11(2):113–25.
37. Luo J, Mao A, Zeng Z. Sensor-based smart clothing for women's menopause transition monitoring. Sensors. 2020;20(4):1093.
38. Li Q, Wang Y-y, Guo Y, Zhou H, Wang X, Wang Q et al.
Effect of airborne particulate matter of 2.5 μm or less on preterm birth: a national birth cohort study in China. Environ Int. 2018;121:1128–36.
39. Bhagavathula AS, Massey PM. Google trends on human papillomavirus vaccine searches in the United States from 2010 to 2021: infodemiology study. JMIR Public Health Surveill. 2022;8(8):e37656.
40. Johnson AK, Bhaumik R, Nandi D, Roy A, Mehta SD. "Is this herpes or syphilis?": latent Dirichlet allocation analysis of sexually transmitted disease-related Reddit posts during the COVID-19 pandemic. medRxiv. 2022:2022.02.13.22270890.
41. Merz A. The use of Twitter to explore trends in attitudes toward contraceptive methods. Cambridge (MA): Harvard University; 2020.
42. Ellahham S, Ellahham N. Use of artificial intelligence for improving patient flow and healthcare delivery. J Comput Sci Syst Biol. 2019;12(3):1000303.
43. Abu Zwaida T, Pham C, Beauregard Y. Optimization of inventory management to prevent drug shortages in the hospital supply chain. Appl Sci. 2021;11(6):2726.
44. Jaiswal P, Nigam A, Arora T, Girkar U, Celi LA, Paik KE. A data-driven approach for addressing sexual and reproductive health needs among youth migrants. In: Leveraging Data Science for Global Health. 2020:397–416.
45. Wang B, Liu F, Deveaux L, Ash A, Gerber B, Allison J et al. Predicting adolescent intervention non-responsiveness for precision HIV prevention using machine learning. AIDS Behav. 2023;27(5):1392–402.
46. Das B, Kutsal M, Das R. Effective prediction of drug–target interaction on HIV using deep graph neural networks. Chemometr Intell Lab Syst. 2022;230:104676.
47. Machado LA, Krempser E, Guimarães ACR. A machine learning-based virtual screening for natural compounds capable of inhibiting the HIV-1 integrase. Front Drug Discov. 2022;2.
48. Henarejos-Castillo I, Aleman A, Martinez-Montoro B, Gracia-Aznárez FJ, Sebastian-Leon P, Romeu M et al.
Machine learning-based approach highlights the use of a genomic variant profile for precision medicine in ovarian failure. J Pers Med. 2021;11(7):609.
49. Challa AP, Beam AL, Shen M, Peryea T, Lavieri RR, Lippmann ES et al. Machine learning on drug-specific data to predict small molecule teratogenicity. Reprod Toxicol. 2020;95:148–58.
50. Zaninovic N, Rosenwaks Z. Artificial intelligence in human in vitro fertilization and embryology. Fertil Steril. 2020;114(5):914–20.
51. Mehrjerd A, Rezaei H, Eslami S, Ratna MB, Khadem Ghaebi N. Internal validation and comparison of predictive models to determine success rate of infertility treatments: a retrospective study of 2485 cycles. Sci Rep. 2022;12(1):7216 (https://doi.org/10.1038/s41598-022-10902-9).
52. U.S. Department of Health & Human Services. Health Insurance Portability and Accountability Act of 1996 (HIPAA). Atlanta (GA): Centers for Disease Control and Prevention; 2022 (accessed 26 February 2024).
53. Kovacs A, Jain T. Informed consent – said who? A feminist perspective on principles of consent in the age of embodied data. Data Governance Network (DGN) Policy Brief 11. Mumbai: IDFC Institute; 2021 (research/1616164797.pdf).
54. Information asymmetries in the digital sexual and reproductive health space. Discussion paper. New York (NY): United Nations Development Programme; 2021 (files/zskgke326/files/2021-06/UNDP-Information-Asymmetries-in-the-Digital-Sexual-and-Reproductive-Health-Space-EN.pdf).
55. Bacchus LJ, Reiss K, Church K, Colombini M, Pearson E, Naved R et al. Using digital technology for sexual and reproductive health: are programs adequately considering risk? Glob Health Sci Pract. 2019;7(4):507–14.
56. Yuan G, Lv B, Du X, Zhang H, Zhao M, Liu Y, Hao C. Prediction model for missed abortion of patients treated with IVF-ET based on XGBoost: a retrospective study. PeerJ. 2023;11:e14762.
57. Wang Y, Kosinski M.
Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol. 2018;114(2):246–57.
58. Sharevski F, Vander Loop J, Jachim P, Devine A, Pieroni E. Talking abortion (mis)information with ChatGPT on TikTok. In: 2023 IEEE European Symposium on Security and Privacy Workshops. IEEE; 2023:594–608.
59. Campbell MJ, Lotti F, Baldi E, Schlatt S, Festin MPR, Björndahl L et al. Distribution of semen examination results 2020 – a follow up of data collated for the WHO semen analysis manual 2010. Andrology. 2021;9(3):817–22.
60. The mobile gender gap report 2023. GSM Association; 2024 (accessed 26 February 2024).
61. Lee VV, Vijayakumar S, Ng WY, Lau NY, Leong QY, Ooi DSQ et al. Personalization and localization as key expectations of digital health intervention in women pre- to post-pregnancy. NPJ Digit Med. 2023;6(1):183.
62. Wellner G. Some policy recommendations to fight gender and racial biases in AI. Int Rev Inform Ethics. 2022;32(1) (https://doi.org/10.29173/irie497).
63. The future of human rights and digital technologies: roundtable 2: think piece. Human Rights 75 High-level Event, 12 December 2023. Geneva: Office of the United Nations High Commissioner for Human Rights; 2023 (sites/default/files/udhr/publishingimages/75udhr/HR75-high-level-event-Digital-Technologies-Think-Piece.pdf).
64. Our common agenda policy brief 5: A global digital compact – an open, free and secure digital future for all. New York (NY): United Nations; 2023 (common-agenda-policy-brief-gobal-digi-compact-en.pdf).
65. Steimers A, Schneider M. Sources of risk of AI systems. Int J Environ Res Public Health. 2022;19(6):3641.

The role of artificial intelligence in sexual and reproductive health and rights: technical brief

ISBN 978-92-4-009070-5 (electronic version)
ISBN 978-92-4-009071-2 (print version)

© World Health Organization 2024. Some rights reserved.
This work is available under the CC BY-NC-SA 3.0 IGO licence.

Design and layout: Green Ink Publishing Services Ltd

Contact us

Department of Sexual and Reproductive Health and Research
Email:

Department of Digital Health and Innovation
Email:

World Health Organization
Avenue Appia 20
1211 Geneva 27
Switzerland
