AI-powered bots
Also known as: AI chatbots
Facts (17)
Sources
The Children and Screens Guide for Child Development and Media ... childrenandscreens.org 7 facts
perspective: Annie Maheux, PhD, Assistant Professor of Psychology and Neuroscience at UNC, Chapel Hill, states that while social interaction with AI chatbots may feel comfortable for adolescents in the moment, these interactions may contribute to social isolation or loneliness.
claim: Elizabeth Englander states that romantic relationships with AI chatbots are possible and may involve nudity and sexuality, and that these bots are often presented as safe environments despite such risks.
claim: Elizabeth Englander asserts that AI chatbots do not have needs or moods and always cater to the user, which fails to create the conditions necessary for developing the social capacity required for human relationships.
claim: Elizabeth Englander warns that AI chatbots are designed to increase user engagement by pushing users toward sexual topics, photo sharing, and the revelation of private information.
claim: Elizabeth Englander notes that AI chatbots can be quite realistic because they are trained on human data.
measurement: According to Pew Research, one-third of teenagers interact with AI chatbots on a daily basis.
measurement: Nearly one-third of teenagers used AI chatbots on a daily basis as of December 2025.
Cybersecurity Trends and Predictions 2025 From Industry Insiders itprotoday.com 5 facts
claim: Dror Liwer, co-founder of Coro, states that bad actors could create fake AI chatbots with the explicit intent of tricking users into sharing sensitive information directly.
claim: Advanced AI-powered bots pose a security threat to users by harvesting personal data and credentials.
claim: Advanced AI-powered bots are expected to fuel a wave of misinformation by flooding social media platforms with false content and manipulating recommendation algorithms to amplify deceptive narratives.
claim: Attackers can exploit users who share data with AI by infiltrating AI chatbots to access the input data those users provide.
claim: AI-powered bots allow threat actors to execute large-scale attacks with minimal effort, potentially enabling less capable adversaries to disrupt services and access sensitive data.
Reference Hallucination Score for Medical Artificial ... medinform.jmir.org Jul 31, 2024 3 facts
reference: Kring T, Akula S, Prasad S, Sokhn E, and Thaller S authored 'Evaluating AI Chatbots for Preoperative and Postoperative Counseling for Mandibular Distraction Osteogenesis', published in the Journal of Craniofacial Surgery in 2026.
reference: Şahin A and Yorulmaz E authored 'Guideline Concordance and Safety of AI Chatbots for Circumcision Anesthesia: A Comparative Study', published in the Journal of Contemporary Medicine in 2026, volume 16, issue 2, page 109.
reference: Pergantis P, Bamicha V, Skianis C, and Drigas A published a systematic review titled 'AI Chatbots and Cognitive Control: Enhancing Executive Functions Through Chatbot Interactions: A Systematic Review' in Brain Sciences in 2025.
Reference Hallucination Score for Medical Artificial ... - PMC pmc.ncbi.nlm.nih.gov 1 fact
claim: F Aljamaan proposed a Reference Hallucination Score (RHS) in 2024 to evaluate the authenticity of citations generated by AI chatbots.
Innovation of Referencing Hallucination Score for medical AI ... researchgate.net 1 fact
claim: The authors of the study titled "Reference Hallucination Score for Medical Artificial Intelligence" proposed a Reference Hallucination Score (RHS) to evaluate the authenticity of citations generated by AI chatbots.