A few days ago, I was helping my 3-year-old nephew with LEGO bricks. After a while, he got really bored and wandered off (probably to find something more exciting). He then turned to the Amazon Echo Dot on his table and commanded:
“Alexa, make a fart!” (Apparently, the 4AFart Alexa skill is a thing among children now.)
Alexa didn’t respond immediately, probably due to technical problems. He kept repeating the command as if he were talking to a person. Once Alexa started responding, he immediately burst out laughing, jumping up and down all over the room.
Although this incident was admittedly hilarious, I also found myself wondering how much influence that device has on the child. It changed his mood from boredom to complete exhilaration within seconds. Even his parents or playmates can’t always connect with him at that level.
When children place that kind of authority on an entity, it changes the way they perceive and react to things in their lives. It can even affect their mental, emotional, and intellectual growth in the long term. And this is only one aspect of the implications of growing up with AI; there are many other risks as well.
In 2019, UNICEF and its partners, the World Economic Forum Centre and the Canadian Institute for Advanced Research, hosted a workshop under the “Generation AI” initiative to steer the conversation around this issue in a more positive and fruitful direction. The workshop addressed many current and potential challenges and opportunities for children in the AI era.
What Are The Challenges?
Security And Privacy
What I learned from the incident with my nephew is that kids place a ton of authority on AI without knowing how it works. They see AI speakers and smart toys as close friends or family members with whom they can interact and share their feelings. By virtue of this connection, they tell these devices all kinds of things, including their most precious secrets.
Although most AI voice assistants allow users to delete this data, that is not entirely the case with smart toys. The data collected by smart toys is stored on the manufacturers’ servers, and children have no control over it. This makes it easier for hackers, and the manufacturers themselves, to spy on children and communicate with them directly.
Exposure To Harmful Content
YouTube and children have become an inseparable duo now. Research by the Pew Research Center shows that 81% of parents let their kids watch YouTube videos regularly. And 61% of those parents found some of the content their kids watch inappropriate and harmful.
Most of the time, inappropriate content arrives through YouTube’s AI-generated recommendations. The AI used in YouTube has no sense of right or wrong, so there is no guarantee that recommended content is good for kids.
“The AI algorithms determine what the kids learn, what they watch, and how they speak and interact with others.”
When YouTube introduced YouTube Kids back in 2015, they thought it would solve this issue. But the problem persists there as well. Even the cartoons kids see contain violence, drugs, sexualization, etc. According to medical experts, such content has many psychological implications for developing brains, such as stress, anxiety, and depression.
So, What Can We Do To Protect Our Children?
Starting With Parents
The recurring theme of all the scenarios mentioned above is that children don’t know how AI systems work. When parents buy these devices, they need to be fully aware of what their kids stand to gain, or lose, from them. They should know the answers to questions that can affect the well-being of their kids.
These questions include:
- What data is collected from them, and how and when?
- What is being done with that data?
- What can companies do with that data?
- How is content created and filtered for children?
Finding answers to these questions helps them make better-informed decisions on how they should let their kids engage with these devices.
Giving Users Control Of Their Data
Young people account for a large share of AI consumers. Still, they don’t have much control over the data that AI systems collect from them. Giving the users ownership of their data will enable them to delete and reset their data when needed. This minimizes potential data theft and surveillance threats that can happen when others have control over users’ data.
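To make “ownership of their data” concrete, here is a minimal sketch of what user-controlled device data could look like. The class, method names, and logged events are all invented for illustration; real assistants and toys expose (or fail to expose) such controls through their own apps and APIs.

```python
class ChildDataStore:
    """Toy sketch: interaction data that the user, not the vendor, controls."""

    def __init__(self):
        self._records = []

    def record(self, event):
        # The device logs an interaction, e.g. a voice command.
        self._records.append(event)

    def view(self):
        # Ownership starts with visibility: see exactly what was collected.
        return list(self._records)

    def delete_all(self):
        # Ownership means deletion actually removes the data.
        self._records.clear()


store = ChildDataStore()
store.record("Alexa, make a fart")
store.record("Alexa, tell me a bedtime story")
print(len(store.view()))   # two interactions collected
store.delete_all()
print(store.view())        # reset: nothing left to steal or surveil
```

The point of the sketch is the `delete_all` path: when data lives only on a vendor’s server, that path simply doesn’t exist for the user.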
Building Child-friendly Algorithms
Most internet content platforms, like YouTube or Google, use AI algorithms to optimize the views and clicks on their content. The recommendations users receive are often the same kind of content that previously generated clicks or views. As a result, users are served one piece of similar content after another, potentially leading them down an internet rabbit hole. That’s not healthy for kids at all, and it can severely constrain the development of children’s interests and abilities.
But the good news is that it is possible to create algorithms that support key aspects of childhood development such as curiosity, exploration, active engagement, interests, and abilities. Creating such child-friendly algorithms demands collaboration among experts from a broad range of disciplines, including AI, computer science, mobile app development, developmental psychology, and the learning sciences.
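The contrast can be sketched in a few lines of code. This is a hypothetical toy example, not YouTube’s actual system: the catalog, click rates, and topic labels are invented. It shows how a purely click-optimized ranker keeps serving the same topic, while a simple diversity constraint keeps exposing a child to new subjects.

```python
# Hypothetical catalog: each video has a topic and an observed click rate.
CATALOG = [
    {"title": "Prank compilation #12", "topic": "pranks",   "click_rate": 0.90},
    {"title": "Prank compilation #13", "topic": "pranks",   "click_rate": 0.88},
    {"title": "How volcanoes work",    "topic": "science",  "click_rate": 0.40},
    {"title": "Counting with animals", "topic": "learning", "click_rate": 0.35},
    {"title": "Drawing for beginners", "topic": "art",      "click_rate": 0.30},
]

def recommend_by_clicks(catalog, n=3):
    """Click-optimized: always surface whatever gets clicked most."""
    return sorted(catalog, key=lambda v: v["click_rate"], reverse=True)[:n]

def recommend_with_diversity(catalog, n=3):
    """Child-friendlier sketch: at most one video per topic, so the
    feed keeps nudging the child toward new subjects."""
    picks, seen_topics = [], set()
    for video in sorted(catalog, key=lambda v: v["click_rate"], reverse=True):
        if video["topic"] not in seen_topics:
            picks.append(video)
            seen_topics.add(video["topic"])
        if len(picks) == n:
            break
    return picks

if __name__ == "__main__":
    print([v["topic"] for v in recommend_by_clicks(CATALOG)])
    # ['pranks', 'pranks', 'science'] -- the rabbit hole in miniature
    print([v["topic"] for v in recommend_with_diversity(CATALOG)])
    # ['pranks', 'science', 'learning'] -- curiosity gets a chance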
The point of this article isn’t that AI is extremely dangerous for kids. On the contrary, when used correctly, AI is hugely beneficial for children in many ways: it helps them with their homework, improves their speech and language development, tells them bedtime stories and animal facts, and much more. The point is that children are vulnerable. If we don’t pay attention to how AI is used around them, it can affect our children’s overall well-being in the long run.