Crafting AI Systems With Children: Experimenting With a Distributed Ecosystem of Actors

Written by Vicky Charisi

As indicated in the Policy Guidance on Artificial Intelligence (AI) for Children published by the UNICEF Office of Global Insight and Policy, a responsible approach towards the emerging opportunities and challenges associated with AI for children requires the creation of an ecosystem of actors able to ensure that AI technology is developed and used in the best interests of each child and that every child's fundamental rights are always respected1.

This note is the author's personal reflection on a project that involved interactions among actors from academia, policy, industry, civil society, and children, and on the ways we, collectively, experimented to pilot and implement the requirements of the Policy Guidance on AI for Children published by the UNICEF Office of Global Insight and Policy. Throughout this project, we identified a need to build bridges among the stakeholders and to create a space that would support the interconnection of the involved, yet distributed, actors. Most importantly, we aimed at developing an integrated methodology for children's inclusion and participation that would guide the design, development, implementation, and monitoring of AI technology for children in a responsible way.

As such, we illustrate our practical experimentation with the formation of such a space: a schema for the development of a robotic prototype for children as a form of embodied AI system. We propose that there are two fundamental preconditions for the design and development of AI for children:

(i) the interconnection of all involved actors within a distributed and multidisciplinary space, and

(ii) the meaningful participation of children and their consideration as a catalytic stakeholder, equal to others.

By supporting the interconnections of the involved actors as well as children's meaningful involvement, we, as a society, can be more powerful and effective in advancing the practice of infusing ethics, responsibility and, consequently, trustworthiness throughout the entire lifecycle of AI development for children.

1. The Starting Bits

Our scientific motivation behind this experimentation lies in our curiosity about the impact of AI systems on children's cognitive and socio-emotional development. We formulate questions such as: What happens when children interact with AI-based systems? What is the impact of AI on their well-being? How do children see their involvement in the design process of AI systems? These are some of the questions the scientific community is currently exploring, and they are in line with the questions currently being explored at the Joint Research Centre (JRC) of the European Commission.

The JRC is the research service of the European Commission that provides scientific evidence to support EU policy-making. In the context of AI systems designed especially for children, we investigate the interaction of children with various kinds of intelligent technologies. Through a series of behavioural studies on the impact of AI on children's behaviour and development2,3,4, our research is connected to the European approach to AI, which seeks to ensure that advances in AI respect people's safety and fundamental rights.

a. The AI Act and the EU Strategy on the Rights of the Child

Our project was held in a changing policy context regarding AI in the European Union (EU). The European approach to AI aims to ensure that, on the one hand, people and businesses can enjoy the benefits of AI, establishing an ecosystem of excellence in AI in Europe. On the other hand, the EU approach to AI envisions that AI should safeguard people's safety and fundamental rights, ensuring an ecosystem of trustworthy AI. In this second aspect, the European Commission published in 2021 the AI Act, a proposal for a regulation on AI. The AI Act5 aims to address fundamental rights and safety risks specific to AI systems and to ensure that AI systems fulfil certain sets of requirements according to the risks they pose, i.e., following a risk-based approach. The proposed text of the AI Act refers to children as a vulnerable population and states that specific consideration shall be given to whether the AI system is likely to be accessed by or have an impact on children.

In addition to AI policies, this work relates to children's rights. In this area, the EU Strategy on the Rights of the Child6 proposes a series of targeted actions across six thematic areas to protect, promote, and fulfil children's rights in today's context. The thematic area called "Digital and information society: an EU where children can safely navigate the digital environment and harness its opportunities" mentions the expected impact of AI on children and their rights and points to the aforementioned AI Act for the protection of fundamental rights, including those of children.

b. UNICEF's Policy Guidance on AI for Children and the Invitation for Its Piloting

In parallel to the European AI developments, in 2020, the UNICEF Office of Global Insight and Policy published the first draft of its Policy Guidance on AI for Children, proposing a set of policy requirements for governments and businesses to consider when developing or supporting the development of AI for children7. UNICEF invited a number of representative organisations, including the team behind this work, to pilot the guidance by developing specific case studies. The case studies would demonstrate the implementation of the guidance in concrete AI applications and provide further feedback for the final version of the Policy Guidance.

c. Establishing a Distributed Multi-Cultural and Multi-Disciplinary Team

The team involved in the development of our case study was coordinated by researchers from two institutions: the JRC and the HONDA Research Institute, Japan (HRI); an overview of the corresponding case study was published by UNICEF8. Through this collaboration, the project was able to combine the JRC's know-how in child-robot interaction and science for policy with HRI's cutting-edge technical knowledge, forming a complementary multi-disciplinary and multi-cultural core collaborative scheme.

HRI is a pioneering institution in the field of AI and robotics that holds that the relationship between intelligent cyber-physical systems, humans, and nature should be mutually beneficial. The Japanese values of a mutually beneficial development of humans and nature can be seen as a means of addressing the need for sustainability. This is in line with the European values of sustainable development, human dignity, and the human-centric approach to AI that inspire JRC research.

This coordination team involved additional researchers from diverse disciplines (e.g., computer science, education) and cultures (Europe, Asia, and Africa), listed in the acknowledgements section, to represent different views and domains of expertise. For the team involved in this work, the prioritisation of children as the catalytic stakeholder for the design of AI technology that directly or indirectly impacts children was set as a prerequisite. However, the methods to include children in such a space, one that would prioritise their fundamental rights in the area of embodied AI, are yet to be explored.

2. The Cultivation of a New Mindset Through Experimentation

In addition to policy recommendations and regulatory frameworks, we believe that the cultivation of a culture that prioritises the best interests of the child requires the interaction of various stakeholders, including children, throughout the whole process of an AI system's design, development, and use, even for applications that are categorised as low risk.

A manifestation of such an approach is illustrated by the activities of our experimentation in the context of embodied AI with the use of Haru, the robot prototype developed by HRI, Japan (Fig. 1). For this project, we focused on the following requirements: (i) prioritise fairness and non-discrimination for children and (ii) provide transparency, explainability, and accountability for children.

Typically, whenever children are taken into consideration, their participation is scattered and only partial, appearing during some of the phases of the design process (e.g., in the evaluation of the impact of robots on children's behaviour or in the identification of their perceptions). This contradicts one of the fundamental principles of the design process, that of constant iteration and understanding of the users. Most importantly, children's rapid and diverse development requires a design process that is flexible, based on multiple smaller iterations with repeated evaluations.

Figure 2 illustrates the child-centred four-step design process we followed, indicating the stakeholders involved in each step. For the purposes of this note, we mainly focus on children's participation; however, it should be noted that this is part of the wider interaction of all involved stakeholders, such as researchers, policymakers, and technology developers from academia, international institutions, and industry, together with educators, at the local, regional, and global levels.

A. Children's needs identification

For the identification of children's needs, children's meaningful participation is fundamental. We first prioritised the requirements identified in UNICEF's Policy Guidance; we then included children from diverse ethnic and cultural backgrounds (in Japan, Uganda, and Greece) in order to explore the diversity of their needs, with a focus on the concepts of fairness and explainability. Finally, we drew on the literature on developmental psychology and existing theories of child development to inform the identification of children's needs in the context of our project. For a detailed description, please see the corresponding publication9.

B. System design

In our work, we followed an approach similar to common participatory design methods with children. Because natural interaction with social robots is not embedded in children's everyday activities, we based our co-design activities on story-telling and prototyping within imaginary scenarios. However, our cross-cultural approach required us to go one step further by combining two angles: zooming in to discern, identify, contrast, and adapt to the cultural characteristics of the participating children at the individual and local community level, while zooming out to consider the systemic implications of our AI application by taking a global perspective and elaborating on the global sustainable development goals.

C. System implementation

For the prototyping sessions, the HONDA Research Institute prepared a low-cost screen-based system with the robot avatar to let the children contribute to the design of the robot's behaviour in practice. The robot developers collaborated with the teachers and, through them, with the children, while aspects of the policy guidance were introduced to the children as a project-based activity. The children were invited to reflect on the system as embedded in their everyday activities. This resulted in the development of a mindset that sees the robot as an embedded socio-technical artefact. The researchers, in collaboration with the educators, analysed and interpreted the children's artefacts. The combination of these findings with the policy requirements identified by UNICEF informed the implementation of the system according to the following principles:

Principle 1: Centering on equity, accessibility, and non-discrimination

Principle 2: Enacting the best interests of the child and community

Principle 3: Educating and developing through implementing age- and developmentally-appropriate applications

Principle 4: Limiting data collection and default settings

This work has contributed to the IEEE Standards Association working group on Children's Data Governance.

D. Testing / Impact on the child  

To evaluate the impact of our system on children's behaviour, we focused on the requirement of the system's explainability in the context of a problem-solving activity. We used as a baseline our previous research in the same context but without robot explanations, and we tested different kinds of explanations with students in Japan to understand their impact on children's problem-solving processes. Post-intervention interviews provided insights into children's perceptions of their experience with the robot, which were combined with the behavioural data for the evaluation of the system.

3. What if We, As a Society, Could Redesign Multi-Actor Interactions?

This project lasted eight months; it was conducted during the challenging period of COVID-19, and we often had to make decisions on the spot according to the current situation of the pandemic while trying to keep a balanced collaboration among all the involved stakeholders. This was possible only because, through this collaboration, we felt the emergence of a shared space for distributed interactions that prioritised children's participation and was based on the common value of the best interests of each child. For our interaction with children, we used participatory action research, in which the teachers became part of the research team, and we conducted online sessions with the children.

Our experience showed that a habit of best practice can be developed through a project-based approach and an inclusive process of interaction within a distributed scheme. With a distributed scheme of multi-actor interaction, we considered local values, the contexts in which the AI technology will be implemented, and local needs, which we combined with global goals. With such an approach, we understood that, in the development of AI for children, a habit of best practice can alleviate dichotomous thinking that frames concepts as conflicting. Instead, different concepts emerged as prerequisites for the development of AI technology in the best interests of children, which I describe below.

First, we acknowledge the complexity of multi-actor interaction, especially among different sectors such as policy institutions, industry, academia, and civil society. For our project, while the initiation was based on a top-down approach, through the adoption of UNICEF's guidance, we were interested in functioning on multiple levels and experimenting with peer-to-peer interactions. In this context, we observed that social transparency and shared responsibility can be transformative for the practical implementation of such a collaboration.

Second, if we include children in the design process of AI as a core stakeholder equal to others, they might become our role models and inspire us in terms of transparency, curiosity, and imagination, with a focus on sustainability and continuity towards the future. During our project, we had to address the complexities of children's individual differences in combination with the complexity of our AI system. Including children from a rural area in Uganda required an ethnography-inspired approach and practices that gave space, respect, and value to local habits, needs, and culture. Since there is limited previous scientific work on the inclusion of children from eastern Africa in robot design, this was possible only by allowing the time, and ensuring the openness necessary, for multiple iterations of discussions. The same was true for the schools in the urban area of Tokyo and in Greece. As such, we tried to bridge ethnographic and design-futures research methods and to look into children's perspectives empathetically by co-creating with the educators and the students, applying participatory action research and child-friendly methods such as story-telling activities, acknowledging that involving people of practice requires us to allocate extra time.

What Do the Children Gain?

At the same time, including children in all the steps of the design cycle allowed all the participating children to contribute practically to the design of the technology they might use in the future, make connections to their local community, develop awareness of the challenges at a global level, take a critical stance towards AI technology while acquiring ownership of the platform, and use their imagination for the development of a sustainable AI for their future. Observing the transformation in their knowledge throughout the duration of the project made us even more confident about the value and the importance of enabling and empowering children to be active and equal actors in a safe space along with researchers, policymakers, and technology developers.

Epilogue

We have referred to the lessons learnt from a pilot multi-actor, multidisciplinary, and multicultural collaboration, with a culture of inclusive dialogue and social transparency, among organisations covering a wide geographical distribution: the Joint Research Centre of the European Commission; the Honda Research Institute, Japan; UNICEF's Office of Global Insight and Policy; and a number of academic institutions with scientists, roboticists, and designers, together with schools with educators and children, for the piloting of the policy guidance on AI for children proposed by UNICEF. Although the different actors involved in this process might have had different, and sometimes conflicting, agendas, the diverse, distributed, and dynamic nature of our collective practice, as well as children's participation as an equal stakeholder in the process, were catalytic. Our practical experimentation indicates that a diverse ecosystem of actors can be the drive towards two mutually reinforcing goals: advancing AI for good and serving the best interests of all children.

Acknowledgements

The work summarised here was carried out by Vicky Charisi (EC, JRC), Randy Gomez (HRI, JP), Steven Vosloo (UNICEF), Luis Merino (University Pablo de Olavide, Spain), Selma Sabanovic (Indiana University, USA), Deborah Szapiro (University of Technology Sydney) and Emilia Gomez (EC, JRC), with the participation of Tomoko Imai (Jiyugaoka Gakuen High School, Tokyo, Japan), Joy Bunabumali (Good Samaritan Primary School, Bududa, Uganda), Tiija Rinta (UCL) and the Arsakeio Lyceum Patras, Greece. It was partially funded by the HUMAINT project, European Commission, Joint Research Centre, and the HONDA Research Institute, Japan. We thank all the participating children.