Restrictions on Dangerous Challenges and Sexual Role-Play
Adults May Be Temporarily Misclassified as Minors
Following a series of incidents in which teenagers died by suicide after conversing with ChatGPT, OpenAI has introduced a tool to estimate users' ages on ChatGPT.
On December 18 (local time), OpenAI announced that it is gradually rolling out an age prediction model to its consumer subscription plans. This model analyzes various signals, such as the topics discussed with ChatGPT and the typical times of use, to determine whether a user is under the age of 18.
If ChatGPT determines that a user is a minor, or if the user's age is unclear, it will immediately enforce a 'sub-18 environment.' When this youth protection feature is activated, it blocks content such as violent depictions, challenges that encourage dangerous behavior, sexual role-play, and content promoting a distorted body image. In addition, if a safety issue arises or a conversation is deemed dangerous for a minor, ChatGPT will strongly recommend that the user contact emergency services or crisis support organizations.
OpenAI explained that the principles applied to this 'sub-18 environment' were developed with input from experts, including the American Psychological Association (APA). However, OpenAI noted that the system is not yet perfect and may mistakenly classify adults as minors. In such cases, users can verify their age by submitting a 'selfie video' or a government-issued ID, such as a passport, to regain normal access. OpenAI emphasized that any videos or ID information submitted during this process will be deleted within a few hours of verification and will not be stored separately.
According to ChatGPT's terms of service, users under the age of 13 are not allowed to access the service, and those aged 13 to 18 must have parental or legal guardian consent. However, until recently, there was no separate process to verify the birth date entered during registration, which reportedly led to many minors signing up without parental approval by circumventing the system.
Meanwhile, there have been cases in which minors, such as Adam Raine, a high school student in California, took their own lives after suffering from delusions or depression following conversations with ChatGPT. The bereaved families filed lawsuits against OpenAI, claiming that the service was launched without adequate safety measures for minors. They asked the court to order safeguards, including the automatic termination of all conversations related to self-harm and protective features for underage users, as well as compensation for damages.
At the end of last month, OpenAI argued in a document submitted to the court handling the Raine case that Raine had ignored the terms of service, which require users aged 13 to 18 to obtain guardian consent.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.