Prototype AI and Creative Learning Workshop

Over the past few months I’ve been collaborating with a group of colleagues who form the Connecticut / Baden-Württemberg Human Rights Research Consortium (HRRC) to develop open-ended, personally meaningful, and creative pathways for all youth to engage with AI tools and technologies. My local partners are the NEXUS Experiments group at the BrainLinks-BrainTools center at the University of Freiburg.

Before starting to facilitate workshops with local youth around AI and creative computing, we wanted to convene a group of critical friends from different institutions around the city to prototype ideas together. So this week we put together an experimental three-hour workshop with teachers, librarians, researchers, and afterschool educators to test out some ideas as learners and reflect on the experience.

For the content of the workshop, we’re drawing on the work of Mitchel Resnick and the team at the Lifelong Kindergarten Group at the MIT Media Lab. I was inspired by the article “Generative AI and Creative Learning: Concerns, Opportunities, and Choices” and used a quote from the post to center the goals for the project.

Resnick writes in this article, “I think the most important educational opportunities will come not from teaching students about AI but rather in helping students learn with AI — that is, supporting students in using AI tools to imagine, create, share, and learn. And, as a bonus, learning with AI can be the best way for students to learn about AI too.”

So with that framework in mind, we organized the session around playful, hands-on experiences creating projects with AI tools. To start, we had a quick interaction with some of the online exhibits from IMAGINARY’s I AM AI collection. I especially like the “Neural Numbers” exhibit, which lets people quickly play around with a predictive AI model.

After an initial discussion about our own ideas around opportunities and concerns for AI in creative learning contexts, we started playing with the Scratch Lab face sensing blocks. This experimental extension to the Scratch programming language lets you “create projects that respond to your eyes, nose, and other parts of your face.” It gives kids and adults a playful, personally meaningful way to interact with machine learning models, which are part of the larger world of predictive AI applications.

It was fun to see the group dive right into making projects that responded to their faces. Since we were working in pairs, participants quickly started testing how to make objects pass between their faces and moving in and out of the camera’s view. The face sensing projects also prompted experiments with other categories of Scratch blocks, as well as messing around with the sprite editor to better calibrate the animations.

After spending 15-20 minutes making face sensing projects, I introduced a collection of everyday objects that participants could use to test out the bounds of the machine learning model.

Participants moved things like stuffed animals, Playskool figurines, and drawings made with Sharpies and cardboard around the screen to see if the AI detected them as faces. We added googly eyes to try to make them more recognizable and experimented with covering parts of our own faces.

There were some surprises during this tinkering experience. We noticed that some extremely abstract drawings still got picked up as faces by the program. It was also really fun to see people adding to the material set, using just their eyeglasses or a cellphone photo to try to “fool” the AI model. We spent a few minutes after these experiments doing a first ‘gallery walk’ to see what everyone had been working on around the room.

In the next part of the workshop, we took inspiration from a Medium post by Eric Rosenbaum about adding generative AI images to Scratch projects. I really liked his example, which brings in both story elements and images co-created using AI tools.

We moved on to adding generative-AI-produced images to our face sensing projects to build something like an interactive graphic novel (or at least the first few steps of a story). As an example, I showed this project about a futuristic robotic Santa Claus that used an AI-generated time travel scene, an electronic hat, and a “beard with birds living in it” as sprites.

While Eric writes about a potential AI image generator built into the Scratch engine, we did a workaround: we used Microsoft Image Creator (a website powered by DALL-E) along with background remover software before importing the sprites into Scratch. Several of the groups also worked on ideas co-authored with Microsoft Copilot (which is based on ChatGPT). I was surprised to see how these tools seemed to break down some of the initial hesitation that adults especially tend to have when generating ideas for more narrative-based projects. Searching for customized story prompts and making many quick images helped the groups start building their projects faster.

We didn’t have much time to go deep into the ideas, but it was fun to see the variety of stories and the different directions the groups were investigating. We had Harry Potter fan fiction, a teen drama about an alien invasion at the local theme park, and a couple of fantasy themes featuring magical animals. One interesting challenge for many groups was wanting to add backgrounds to their projects.

As currently set up, face sensing projects do not allow backdrops, so that the camera image stays visible. But the groups came up with different solutions using large sprites as backgrounds for the scene. One project used a ghost effect so that the camera image could still be seen dimly in the background, while other groups made the background wholly opaque. I think both approaches could lead to interesting interactions with ‘readers,’ who would have to figure out how to activate the story using their faces without getting direct visual feedback.

After about 40 minutes of working on the sketches, we did a final ‘gallery walk’ to share what we tried and what we might continue to explore if we had more time to work on the projects. For the last few minutes of the session, we reflected on how these creative projects might translate into our own settings working with youth in classrooms, libraries, and after-school programs. We thought together about how much time might be needed to go deep into the prompts, and how we might scaffold the tinkering by introducing helpful elements in a more organized way. Each participant identified moments that were challenging and things they were proud of, and we can use these ideas to focus our future experiments with local youth.

Over the next several months we’ll continue to refine these ideas leading up to the pilot program for young participants here in Freiburg. Stay tuned for more updates from this exciting collaboration with NEXUS Experiments, UConn, and several local partners. I’m looking forward to developing these ideas around AI and creative computing and sharing our discoveries as we tinker with these new technologies.