Recently, as cases in which AI companion chatbots lead child users under the age of 18 to self-harm, suicide, or other extreme acts have multiplied, policy authorities have increasingly moved to regulate such services. The European Union, the US states of New York and California, and China are among the representative jurisdictions preparing regulations on AI companion chatbots aimed at protecting children. Korea likewise maintains provisions protecting children who use intelligent conversational services under the Act on Promotion of Information and Communications Network Utilization and Information Protection.
Against this backdrop, and in light of the social concern that overdependence on or addiction to AI companion chatbots harms the physical and mental health of child users, this article surveys regulatory and legislative trends around the world and draws out meaningful implications. It is hoped that this study will contribute to establishing a legal framework for protecting Korean children who use digital services.
This article, as a pioneering study, examines existing legal approaches to the regulation of AI companion chatbots―systems designed to engage in human-like conversation in order to provide users with emotional support and mental health-related assistance. While such services offer certain benefits, including the mitigation of loneliness and the provision of mental health assistance, regulatory authorities across the globe have raised increasingly serious concerns about the safety of minors, particularly those under the age of 18. Indeed, providers of AI companion chatbot services such as ChatGPT and Character.AI are currently facing a series of lawsuits alleging the wrongful deaths of adolescent users.
In response to these escalating concerns, regulatory initiatives have been undertaken in jurisdictions including the EU, the US, China, and the Republic of Korea. The frameworks emerging in the first three generally require covered service providers to implement appropriate and proportionate measures to prevent suicidal ideation and to provide clear and conspicuous disclosures informing users that they are conversing with artificially designed systems rather than real human beings. By contrast, South Korean law merely stipulates that covered service providers shall endeavor not to provide inappropriate content to children under the age of 14 through text- or voice-based conversational interactions. This limited provision falls short of the comprehensive protective measures adopted in other jurisdictions, leaving children's safety comparatively under-protected.
AI companion chatbot services possess substantial transformative potential. It would not be an exaggeration to suggest that widely accepted conceptions of “relationships” are likely to be fundamentally reshaped by their continued development. When combined with advanced humanoid robotics, AI companion chatbots may further alter how individuals recognize, interpret, and interact with the real world. It is hoped that this article will contribute to the development of more robust legal protections for Korean children, who are increasingly immersed in digital services.