AI, a double-edged sword for children – Expert


Oct 9, 2024 - 19:43

Maximising the benefits of AI for children’s education and growth while ensuring their privacy, healthy development and emotional well-being is challenging, warns Anna Collard, Vice President of Content Strategy at KnowBe4 Africa.

Artificial Intelligence (AI) has advanced dramatically in just two years, profoundly transforming daily life, especially for young people. Tools like ChatGPT, Google’s Gemini, and Microsoft’s Copilot, as well as the integration of AI chatbots into platforms such as WhatsApp and Instagram, are now ubiquitous. For Collard, this development is both exciting and worrying for children growing up in this new digital environment.

AI offers unprecedented opportunities in education. According to Collard, these tools allow children to explore their creativity, learn new languages and improve their problem-solving skills through engaging and personalised interactions. Chatbots, capable of providing immediate and tailored responses, are becoming learning companions for curious young minds.

However, this same accessibility raises crucial questions, particularly about privacy, potential psychological effects and the development of children’s critical thinking skills.

Privacy risks
One of the major concerns surrounding children’s use of AI is data collection. “Chatbots may seem harmless, but they often collect personal information without proper consent,” warns Collard.

Risks range from targeted advertising to the creation of detailed profiles of users based on their behaviours and preferences. This can expose young people to malicious manipulation, including misinformation or attempts at online “grooming”.

Generative AI models, often designed for adults, do not always take into account the specific protections needed for minors, raising questions about the safety of interactions for younger people.

Another danger is the over-reliance that children can develop on these tools. “Children may perceive chatbots as human friends,” Collard says, a situation amplified by a phenomenon called the “overconfidence effect.”

This overconfidence can lead to a decrease in critical thinking as children accept AI responses without question, to the detriment of their own judgment.

This reliance on AI could reduce real-world social interactions, a critical factor in young people’s social and emotional development.

Inaccurate and inappropriate information
Despite their sophistication, generative AI tools are not infallible. When they lack accurate information, they can invent answers, a phenomenon known as “hallucination.” For children, this can mean receiving false information about their homework or, worse, an inaccurate health diagnosis.

Furthermore, AI systems can reflect biases present in the data they are trained on, reinforcing misinformation and stereotypes. One of the most feared dangers is exposure to inappropriate content that exploits children’s vulnerabilities, including manipulated images.

The excessive use of AI-based technologies can also have a psychological impact on children. Collard points to side effects already observed with other digital technologies, such as increased anxiety, depression, and reduced meaningful social interactions.

“We see that these new technologies reduce critical thinking in young people, making them less likely to question what they see or read,” she adds.

In the face of these challenges, parents and educators must play a key role. Monitoring children’s use of AI, establishing family rules, and encouraging non-screen activities, such as reading and physical play, are important measures to counteract the negative effects of these technologies, she suggests.

For their part, policymakers are starting to take action. In Europe, for example, the AI Act aims to strengthen the safety of AI systems, although it does not specifically target children.

However, Collard believes that much more needs to be done to effectively protect the rights and safety of minors.

ARD/ac/Sf/fss/jn/APA



