Email template
Both teachers and students need total clarity on what Generative AI is, how its training data is gathered, and what its ethical, environmental and safety hazards are. The focus must not be solely on using the tool safely; teachers and students must also be given the choice NOT to use it (admittedly, AI features embedded in other tools we use can be difficult to avoid, but I am focusing on ChatGPT, Gemini and other LLMs) if 1) its creation (training on copyrighted works taken without permission), 2) its content (misogynistic, racist, hateful material has been fed into the models during training, intentionally or not), or 3) its environmental impact (water use, energy use, damage to ecosystems from building infrastructure) strongly clashes with their own values.
For younger children, the risks of irresponsible AI use are even higher, so we must pay special attention to how AI tools should be used (or not) in Primary.
It is my opinion that we should NOT allow Primary students to use Generative AI for any tasks! Children aged 5–10 should not aim for "fast learning" in any way, shape or form. This is a developmental stage in which they need to struggle, take a long time to figure out problems, fail, try multiple approaches, and never focus on a perfect product. At our school, Primary students do not even receive grade marks, so the mental pressure of aiming for a perfect grade is removed from our learners' motivation. There is no need to teach them efficiency or shortcuts in reasoning and thinking.
When thinking about our AI policy, we should address the following concerns:
We need to educate both parents and teachers – everyone in charge of what our learners have access to – about the dangers of using ChatGPT at home and at school. We need to make clear its detrimental effects on: 1) physical and mental health (users turning to generative AI as a therapist, suicides, harmful actions towards others); 2) thinking, problem solving and creativity; and 3) the environment and climate change.
Thank you for your attention to these matters, as we have a responsibility to our students and their families to carefully consider the impacts of any new technology before introducing it into our pedagogical methods.
Privacy in the European Union
If this is happening in the EU, it is almost certainly a violation of the General Data Protection Regulation (GDPR): student names are personal data, which may not be disclosed to a third party unless that disclosure is explicitly described in a privacy policy made available to users. Furthermore, the school must have a data processing agreement in place with every third party to which personal data is transmitted.
Unless the school has such an agreement with OpenAI and discloses to parents what personal data is entered into ChatGPT prompts (for example, a child's name, or other personally identifiable information that certain assignments may include, such as parents' names or a home address), teachers' use of ChatGPT violates the GDPR the moment they enter so much as a single name into a prompt.