Internet users are getting younger; now the UK is weighing up if AI can help protect them | TechCrunch


Artificial intelligence has been in the crosshairs of governments concerned about how it could be misused for fraud, disinformation and other malicious online activity. Now a regulator in the UK is preparing to explore how AI can be used to fight some of that, specifically as it relates to content that is harmful to children.

Ofcom, the regulator charged with enforcing the UK's Online Safety Act, announced that it plans to launch a consultation on how AI and other automated tools are used today, and could be used in the future, to proactively detect and remove illegal content online, specifically to shield children from harmful content and to identify child sexual abuse material that has previously been hard to detect.

The tools would be part of a wider set of proposals from Ofcom focused on keeping children safe online. Ofcom said consultations on the comprehensive proposals will begin in the coming weeks, with the consultation on AI due later this year.

Mark Bunting, a director in Ofcom's online safety group, said the regulator's interest in AI begins with a look at how well it is used as a screening tool today.

“Some services do already use those tools to identify and shield children from this content,” he said in an interview with TechCrunch. “But there isn't much information about how accurate and effective those tools are. We want to look at ways we can ensure that industry is assessing that when they're using them, making sure that risks to free expression and privacy are being managed.”

One possible outcome is that Ofcom will prescribe how and what platforms should assess. That could potentially lead not only to platforms adopting more sophisticated tooling, but also to fines if they fail to deliver improvements in blocking content or in creating better ways to keep younger users from seeing it.

“As with many online safety regulations, the onus is on the firms to make sure that they're taking appropriate steps and using appropriate tools to protect users,” he said.

The initiative will have both critics and supporters. AI researchers are finding ever more sophisticated ways of using AI to detect, for example, deepfakes, as well as to verify users online. Yet there are just as many skeptics who note that AI detection is far from foolproof.

Ofcom announced the consultation on AI tools at the same time as it published its latest research into how children are engaging online in the UK, which found that, overall, more younger children are connected and online than ever before, so much so that Ofcom is now breaking out activity among ever-younger age brackets.

According to a survey of parents, nearly a quarter, 24%, of all 5- to 7-year-olds now own their own smartphones, and when you include tablets, that number jumps to 76%. The same age bracket is also consuming more media on those devices: 65% have made voice and video calls (versus 59% just a year ago), and half of the kids (versus 39% a year ago) are watching streamed media.

Age restrictions around some mainstream social media apps may be getting lower, yet in the UK they appear to be largely ignored anyway. Ofcom found that around 38% of 5- to 7-year-olds are using social media. Meta's WhatsApp, at 37%, is the most popular app among them. And in possibly the first instance of Meta's flagship image app being eclipsed by ByteDance's viral sensation, TikTok was found to be used by 30% of 5- to 7-year-olds, with Instagram at “just” 22%. Discord rounds out the list but is significantly less popular, at just 4%.

About a third of children this age, 32%, are going online on their own, and 30% of parents said they were fine with their young children having social media profiles. YouTube Kids remains the most popular network for young users, at 48%.

Gaming, a perennial favorite among children, is now used by 41% of 5- to 7-year-olds, with 15% of kids this age playing shooter games.

While 76% of parents surveyed said they talked to their young children about staying safe online, Ofcom says there are question marks over what a child sees and what that child might report. In researching older children aged 8 to 17, Ofcom interviewed them directly. It found that 32% of the children reported seeing worrying content online, but only 20% of their parents said they had reported anything.

Even accounting for some inconsistencies in reporting, “the research suggests a disconnect between older children's exposure to potentially harmful content online, and what they share with their parents about their online experiences,” Ofcom writes. And worrying content is just one challenge: deepfakes are an issue, too. Among 16- to 17-year-olds, Ofcom said, 25% said they were not confident about distinguishing fake from real online.
