Tinkering with AI Tools, Scratch Lab and Makey Makey

I’m starting to explore how AI-based tools can support creative and playful tinkering, with connections to real-world materials and chances for collaboration. I’m working with the NEXUS experiments team at Freiburg University (as part of the HRRC) and together we’re hoping to create workshops that build up learners' sense of agency using AI technologies. In the spring I’m leading a three-day workshop with local high school students, educators and researchers to try out some new prototypes across a variety of local settings. I’m working on developing some low-threshold starting points as well as opportunities to go deeper and create with AI tools. I’m not too familiar with generative AI and large language models, so I’ve been looking around for ways to better understand the possibilities for creative learning that exist with these new tools.

Over the years, the Lifelong Kindergarten Group (LLK) at the MIT Media Lab has been an inspiration and close working partner for integrating computation into tinkering prompts. Their approach to developing new ideas and projects supports constructionist learning. Mitchel Resnick, the leader of the team, posted a great article on Medium called “Generative AI and Creative Learning: Concerns, Opportunities, and Choices” which has been a touchstone for me as I think about how to develop these projects further.

The Scratch team has already started integrating some predictive AI blocks for face sensing into Scratch Lab, an experimental website for prototype extensions to the Scratch platform. Eric Rosenbaum also shared a great article about his experiments adding generative AI created sprites and backgrounds into projects. The combination of face sensing blocks and AI-generated images served as the foundation for a prototyping workshop that I led with the project team in November of last year here in Freiburg.

After the workshop, Eric and others in the LLK group invited me to test out and give feedback about an unreleased Scratch experiment with blocks that integrate an AI-powered chatbot directly into the platform. This version is still in an early prototype form and it’s not currently available to the public, but it's a fascinating way to experiment with AI characters and stories. 

There are challenges and risks in thinking about how this could work as part of the normal Scratch environment, but I think that the experiments can suggest new possibilities for playfully interacting with AI tools. The recently released book “The Learner’s Apprentice” by Ken Kahn offers many ideas for how learners can play around and tinker with similar ideas using widely available text-based chatbots and integrated graphics.

I built a couple of example projects with this experimental Scratch program that playfully give information or engage users in unusual “conversations”. For the upcoming workshop with high school students it’s important to add in more physical materials, so I’ve also been trying to incorporate Makey Makey and everyday materials to make things more interactive in the real world.

The first project that I worked on is called the “alien travel agent” and gives travel advice from a slightly skewed perspective. In this platform there’s a block where you set the “personality” of the chatbot at the start or in the middle of running the program. For this scene I gave the prompt of “you are a sarcastic alien travel agent who is misinformed about earth”, which affects the way that the large language model answers the inputs from the users.
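Outside of Scratch, this “personality” block corresponds to the system-prompt pattern that most chat-style LLM APIs use. Here’s a minimal sketch in Python of that idea — the function name, message format and the model call itself are illustrative assumptions, not anything from the Scratch prototype:

```python
def build_messages(personality, history, user_input):
    """Assemble a chat-style message list: the personality acts as a
    system prompt that steers every reply the model generates."""
    messages = [{"role": "system", "content": personality}]
    messages.extend(history)  # earlier user/assistant turns, if any
    messages.append({"role": "user", "content": user_input})
    return messages  # this list would then be sent to an LLM API

# The alien travel agent persona from the project:
persona = "you are a sarcastic alien travel agent who is misinformed about earth"
msgs = build_messages(persona, [], "What should I see in Paris?")
# msgs[0] carries the persona; msgs[-1] carries the user's question.
```

Changing the personality mid-program, like the Scratch block allows, would just mean swapping out the system message before the next call.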

For this project I chose four potential travel destinations (Tokyo, Paris, Rome and Yosemite) and made a quick switch out of aluminum foil and cardboard so that users could choose their favorite. After the Makey Makey powered key press, the program gives a short “recommendation” of what to do and see in the chosen location. I also added in some images generated from Microsoft Image Creator (a DALL-E based AI tool) using the same prompt about how the alien travel agent would interpret each of the places. For the next version it would be interesting to have an undefined choice where the user could input any travel destination they choose into the system.
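Since a Makey Makey just sends ordinary key presses to the computer, the foil switch boils down to a key-to-destination lookup. A rough Python sketch of that logic — the key assignments and the question wording are my own placeholders, not the actual project’s:

```python
# Each foil contact on the Makey Makey arrives as a normal key press.
# (Which key maps to which destination is an assumption for illustration.)
DESTINATIONS = {
    "up": "Tokyo",
    "down": "Paris",
    "left": "Rome",
    "right": "Yosemite",
}

def recommendation_prompt(key):
    """Turn a Makey Makey key press into the question sent to the chatbot."""
    place = DESTINATIONS.get(key)
    if place is None:
        return None  # an unmapped key press is simply ignored
    return f"A traveler wants to visit {place}. What should they do and see there?"
```

The “undefined choice” idea from the last sentence would replace this fixed dictionary with a free-text input passed straight into the prompt.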

Another project that I tried was making a mouse chef that felt overwhelmed by the size of the ingredients in their kitchen. In this project (as well as in the other two), I generated both the image of the character and the background with Microsoft Image Creator. One little trick that made things easier: after getting the image, I ran it through a background remover website (also powered by AI) to get just the character on a transparent background. Then I uploaded the original picture as the background and the new image of just the mouse head as a sprite, adjusting the size so that they overlapped cleanly. This adds some flexibility to the project and allows for better speech bubbles, movement and switching backgrounds.

The last example project that I made is a robot that gives advice with a tone that changes as the user toggles through different “personalities”. I mocked up a quick Makey Makey dial and programmed it so that it could adjust the settings from “you are a really mean jerk who gives harsh advice about daily life” to “you are an extremely accommodating grandma who gives advice about daily life just meant to make you feel good” as the initial prompt. It also changes the background color to give the user a bit of feedback about what to expect from the character.
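The dial mechanic is essentially a list of (prompt, color) presets that each Makey Makey press steps through. A small sketch of that structure in Python — the two prompts are quoted from the project, while the colors and class name are illustrative stand-ins:

```python
# Personality presets paired with a background color for user feedback.
# Prompts come from the project; the colors are assumed for illustration.
PERSONALITIES = [
    ("you are a really mean jerk who gives harsh advice about daily life",
     "red"),
    ("you are an extremely accommodating grandma who gives advice about "
     "daily life just meant to make you feel good",
     "pink"),
]

class AdviceDial:
    """Each Makey Makey press advances the dial to the next personality."""
    def __init__(self):
        self.index = 0  # start on the first preset

    def turn(self):
        """Step to the next preset, wrapping around at the end."""
        self.index = (self.index + 1) % len(PERSONALITIES)
        return PERSONALITIES[self.index]  # (prompt, background color)
```

Adding more presets between the two extremes would just mean appending to the list, which is one way the customization mentioned below could grow.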

This was another fun way to test the capability of the LLM to offer different responses depending on the input of the user. It led me to try to refine the initial prompts to make the advice as nice or as blunt as possible. I think there’s a lot of potential for adding in more customization and different ideas to the program. Maybe the image of the AI character could also subtly change to reflect the current “attitude” selected by the questioner.

These quick and dirty examples will hopefully inspire workshop participants to innovate and create their own ideas with unexpected results. I’m sure that I will learn a lot from testing this out with learners and being open to new possibilities. This is very new for me and I’m curious about what other educators and makers are trying with playful and creative AI projects. Please let me know what ideas these examples spark and how you are experimenting with these new technologies.