Fletcher Jones Lecture with Douglas Goodwin

How Do You Teach Art with AI? Douglas Goodwin Reflects on Six Years of Experimentation


Douglas Goodwin shared insights from six years of interdisciplinary teaching at Scripps College. His talk explored how computational tools—from camera obscuras to generative AI—can be used not just to create, but to question how we see, think, and make.

Goodwin walked the audience through hands-on projects like student-built camera obscuras, algorithmic portraits, and machine learning remediations of Marshall McLuhan’s The Medium is the Massage. He described his course Machine Learning for Artists, where students learn to code, critique, and collaborate with AI, not just as a tool but as a creative partner.

He also introduced new ventures in Computational Textiles (co-taught with Professor T. Kim-Trang Tran) and discussed speculative hardware projects, like a real-time neural style transfer camera and weaving software driven by cellular automata.

Throughout the talk, Goodwin emphasized embracing failure, historical context, and the ethical tensions of working with AI. “Don’t fall in love with the machine,” he warned. “But do ask what it’s showing you—and why.”

Whether building a solar telescope for eclipse education or producing synthetic avatars in collaborative films, Goodwin’s work bridges technical fluency and critical imagination. His closing message: balance speed with intention, and make the machine amplify your voice—not replace it.


TRANSCRIPT

Hi everyone. Thank you for that nice introduction. Welcome—I hope you’re doing well. I’ll do my best to keep my lecture under 18 minutes. No promises!

First, I want to express my gratitude to the wonderful people who have made my time here at Scripps College possible: the Scripps Presents team, the administration, my hiring committee, my colleagues, and tonight’s moderators whom you’ve just met.

So, teaching—yes. You just had a little taste of how I approach it, and I admit it’s a little unusual. People often wonder how I come up with these things. The truth is it’s very intuitive. But I do believe, sincerely, that working with your hands is critical to understanding how things work. Our hands bring us into contact with the world in a way that’s deeply different—and unavailable to AI.

In general, I try to bridge technology and artistic expression. I strive to make technical subjects accessible and to bring this approach to a diverse student population. Teaching here can be a challenge—some of my students are engineers, others are history majors, and we cover the same material. It’s not easy, but I love interdisciplinary work. Scripps has been great about encouraging connections between fields, like history and engineering.

One of the courses I was especially excited to teach ended up becoming two classes: an introductory and advanced sequence I call “Computational Photography.” In this series, we explore photography from its Neolithic roots—yes, cave projections! Tiny holes in tents projected legible images on the walls, perhaps seen as spirits, certainly striking. My students recreate these phenomena using camera obscuras, projecting images on their hands, then try to figure out how to save or fix these images—without any immediate answers from me. They follow historical paths to introduce silver salts and make lasting images.

The course moves all the way to 1975 and the invention of the first digital camera at Kodak by a young electrical engineer. He showed it to executives, capturing their images in boardrooms and projecting them back. Their response? “How can we use this to sell film?” So the idea didn’t go far—at least not then.

We don’t just study technical aspects—HDR imaging, image processing, computer vision—we also experiment. One example from Computational Photography 2: we averaged portraits of each student in the class to create a new composite image each year. These reveal subtle trends over time: changes in skin tone, eye shape, hair texture.
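The composite-portrait exercise comes down to a per-pixel mean across images. A minimal sketch of the idea, in pure Python for clarity (a class assignment would more likely use Pillow or NumPy, and the pixel values here are made up):

```python
# Averaging "portraits": each image is a grid of grayscale pixel values,
# and the composite is the pixel-wise mean across all images.

def average_images(images):
    """Return the pixel-wise average of equally sized grayscale images."""
    n = len(images)
    height, width = len(images[0]), len(images[0][0])
    composite = [[0] * width for _ in range(height)]
    for img in images:
        for y in range(height):
            for x in range(width):
                composite[y][x] += img[y][x]
    return [[pixel / n for pixel in row] for row in composite]

# Two tiny 2x2 "portraits" stand in for a class's worth of photos:
a = [[0, 100], [200, 50]]
b = [[100, 100], [0, 150]]
print(average_images([a, b]))  # [[50.0, 100.0], [100.0, 100.0]]
```

Repeating this each year with aligned portraits is what makes the year-over-year drift in the composites visible.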

The only course I was originally required to teach was “Intro to Programming.” It changed names a few times—it’s now “Intro to Python Programming,” and honestly, it might be time to make it about JavaScript, but that’s another conversation. The first assignment is a Sol LeWitt–inspired task: Wall Drawing #86, 10,000 10-inch lines covering a surface. Students do this by hand with dry-erase markers, and by the end, they’re thrilled to learn how much easier it is with code.
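The payoff of that assignment can be shown in a few lines of code. A hedged sketch of how the LeWitt drawing becomes trivial once you can program (the canvas size and random placement are my assumptions; in class it might be drawn with Processing or turtle, while here we only compute the segment endpoints):

```python
# Generate 10,000 fixed-length line segments at random positions and
# angles -- the computational version of Wall Drawing #86.

import math
import random

def lewitt_lines(n=10_000, length=10, width=800, height=600, seed=0):
    """Return n (x1, y1, x2, y2) segments of a fixed length on a canvas."""
    rng = random.Random(seed)
    segments = []
    for _ in range(n):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        angle = rng.uniform(0, 2 * math.pi)
        segments.append((x, y,
                         x + length * math.cos(angle),
                         y + length * math.sin(angle)))
    return segments

lines = lewitt_lines()
print(len(lines))  # 10000
```

What took a class session with dry-erase markers runs in a fraction of a second, which is exactly the point of the exercise.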

Midway through my time at Scripps, I introduced another class: “Machine Learning for Artists.” I used “machine learning” intentionally instead of “AI,” which is a misleading term. It’s not artificial, and it’s not intelligent—it’s a bundle of technologies. Machine learning, by contrast, actually does learn, drawing inspiration from the human visual cortex, which gives the field a strange and compelling metaphorical framework.

In that class, students learn about image segmentation, CycleGAN (like turning a hamburger into a giraffe), Pix2Pix (turning doodles into cats—sort of), and TensorFlow, Google’s math-wrangling engine. Students use these tools to generate images, sometimes to remediate texts like Marshall McLuhan’s The Medium is the Massage. Recreating that book with AI forces you to inhabit our mediated world in a different way.

I also started co-teaching “Computational Textiles” with Professor Tran. It’s been a huge challenge. Weaving is every bit as complex as artificial intelligence, maybe more. The history of problem-solving in textiles is astounding. My role is to explore metaphors and computational techniques for generating weaving patterns. We’re lucky to have access to a TC2 digital loom at Harvey Mudd, which lets us lift each warp thread individually. That’s a game-changer.

I’ve also been publishing—one article explores how smartphone cameras can capture aurora images with more color and clarity than the human eye. Another looks at the solar eclipse—not directly, but at the surrounding shadows, animals, and people’s reactions. I even built a solar telescope when my daughter was in grade school. Her principal had terrified students into thinking they couldn’t even be outside during the eclipse, so I built a projection-based tool to look at the sun safely.

Another project was a “laptopogram”—I exposed photo paper inside a MacBook in a dark room and developed it. The result was a retina-resolution contact print. Beautiful and detailed.

Recently, I’ve begun using AI to write software. I worked in software development for over 15 years—wrote half the LA Metro backend, shared data with Apple and Google. Now, I can generate good code using AI, as long as I know what I want and understand the fundamentals. One project generates weaving patterns using cellular automata. Another explores real-time image generation—recording video and transforming it live based on prompts like “moonrise over the ocean.”
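One way to picture the cellular-automaton weaving idea: run a one-dimensional automaton and read each generation as a pick, with a live cell meaning the warp lifts over the weft. This is a minimal sketch of the technique, not Goodwin's actual software; the rule number and grid size are arbitrary choices:

```python
# Generate a weaving draft from an elementary cellular automaton.
# Each row of cells is one pick; 1 = warp thread lifts over the weft.

def ca_draft(rule=90, width=16, rows=8):
    """Run an elementary CA (wrap-around edges) and return rows of 0/1 cells."""
    rule_bits = [(rule >> i) & 1 for i in range(8)]  # rule as a lookup table
    row = [0] * width
    row[width // 2] = 1                              # single seed cell
    draft = [row]
    for _ in range(rows - 1):
        # Each new cell is looked up by its 3-cell neighborhood (left, self, right).
        row = [rule_bits[(row[(i - 1) % width] << 2)
                         | (row[i] << 1)
                         | row[(i + 1) % width]]
               for i in range(width)]
        draft.append(row)
    return draft

for pick in ca_draft():
    print("".join("#" if cell else "." for cell in pick))
```

Rule 90 produces the familiar Sierpinski-triangle motif; on a loom like the TC2, where every warp thread can be lifted independently, a grid like this maps directly onto a liftplan.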

I’ve also worked with generative film. Nearest Neighbor, a short I made with Rebecca Baron, examines the triangle of birds, language, and AI. We used AI-generated avatars and phonemes. Another short, Companions, speculates on AI’s emotional role in our future.

And yes, I made an AI Godzilla movie. Why is it always Van Gogh and Godzilla in these demos? Mothra, it turns out, is much harder to render. But I try.

I love analog processes too. I’ve been working on a cyanotype film—3,000 hand-printed frames using a recipe for iron salts formulated by John Herschel in the 19th century. I printed about 1,500 frames, all of which were lost in the Eaton Canyon fire.

Merging the old with the new is what excites me. What do camera obscuras and smartphones have in common? Or woodblock printing and CNC milling? What happens when McLuhan’s theories meet AI-generated imagery? I made a silly clip asking to become Ryan Gosling—it was fun, but it also made a point: AI lets you perform identities, write code, become things. But the real question is why. What do you want to say?

I worked as an actor for a long time, and method acting was something we scoffed at. But maybe there’s something there. Use these tools mindfully. Be prepared. Know your intention.

AI can be a creative amplifier. I’ve had 800 generative agents interact in pieces written in Processing. Go back to the roots. Read McLuhan. Make your own “visual massage.” One student recently did just that, publishing a new version of The Medium is the Massage with remediated AI images.

Here are some tips: disclose AI use. Be honest. Watch for bias. Collaborate. Challenge the dominant narrative. Bring your critical thinking. Innovate with intention.

As McLuhan did in the 1960s, we now have the task of reshaping the sensory landscape. It’s up to you.

Thank you so much. I’m thrilled to see you all, and I can’t wait to see where you go next. But first, let’s dive into some questions…


Q&A

Q1:

“How do you anticipate AI changing what we find meaningful or valuable in art?”

A1:

It’s going to shift our attention to new or re-skinned things simply because we’re immersed in it constantly. A concern is that AI might “eat itself,” endlessly recycling the same ideas and trapping us in a loop. But on the positive side, these tools empower artists to create impressive work. My role is to encourage people to push beyond the expected and create something truly new.

Q2:

“When I was in your Machine Learning for Artists class, you encouraged me to embrace the differences between the model’s output and my original vision. Do you see any parallels between AI and other mediums in producing unexpected outcomes—should we embrace those differences?”

A2:

Yes, it’s crucial. That’s what being an artist is: having a strong intention, but being open to what the process gives back. Failure is instructional. It teaches you something about the medium. I remember that moment with you—you weren’t happy, but I urged you to lean into the failure and see what could come of it.

Q3:

“How do you foresee crediting artists who collaborate with AI and also those whose work has been used to train the models?”

A3:

That’s a huge problem. Models are trained using tons of data, and it’s impossible to remove that now. Some tools like LoRAs (low-rank adaptation) offer a way to extract or fine-tune smaller parts of big models. While the damage is done, we can still move forward in a way that invests the process with our own hand and intention.

Q4 (follow-up):

“So are you saying that with these new models, you can limit what the model is trained on?”

A4:

Yes, LoRAs let you fine-tune small models to pull out specific qualities. It’s too late to fix older models—they’re full of stolen IP. There are even cases of people’s medical scans showing up in training data. But with LoRAs, we can suppress unwanted outputs and focus on our own creative contributions.

Q5:

“Given these concerns, what advice do you have for creatives just starting to use AI for art?”

A5:

Just start. Learn online, find good tutorials, follow artists like Memo Akten and Anna Ridler. Discover what excites you, build intention, and accept that the process might surprise you. That’s where your own aesthetic develops.

Q6:

“What skills should students focus on, especially since AI is changing the coding landscape?”

A6:

We need to lean into these tools—they’re good and powerful. Yes, they may displace traditional programming jobs, but we can use them at a higher level, like managing intent and design. You can even critique code with AI’s help. Just be careful—it can be very persuasive and misleading. Don’t let it flatter or seduce you off-course.


Student B:

Q1:

“In your research, you’ve discussed how photography and telescopes reveal what we can’t naturally see. What revelatory power do you think AI offers as a new artistic medium?”

A1:

I don’t fully know yet. But AI is already changing how we interact with code, images, and tools. Our phones, for example, mediate photos immediately. Thinking critically about that mediation is important. AI is pretending to be human—it’s important not to fall in love with it (like in Her), but to stay aware of what it’s doing.

Q2:

“You outlined some ethical guidelines in your talk—how can we enable compliance with those on a personal or institutional level?”

A2:

Honestly, it’s going to be hard. Institutions will only do it if there’s money or regulation involved. But individually, we can choose to follow those guidelines. Having your own personal ethics—being honest, transparent—goes a long way.

Q3 (follow-up):

“You mentioned using AI to subvert hegemonic narratives. Can you give an example of that kind of disruption?”

A3:

AI is already disrupting everything. Back in the ’90s, the web disrupted education. We had dreams of new, open education systems. That didn’t fully happen, but the disruption itself was powerful. Being honest about how you use AI—resisting its default values—is a form of disruption. It’s not lucrative, but it’s meaningful.

Q4:

“Your classes are always in demand. How do you come up with your course material?”

A4:

Honestly, it’s intuitive. It comes to me in the morning. I think maybe people resonate with what I’m doing because we’re all breathing the same air. I’ve realized lately how weird I am—but focusing on doing things, practicing with your hands, that’s central to my approach.

Q5:

“What’s your favorite class you’ve taught at Scripps?”

A5:

That’s hard—they’re all my favorite. But the intro to Python class was a real struggle. It took me five years to get it right. So… maybe not that one.

Q6:

“More broadly, how should students in creative fields prepare to collaborate with AI?”

A6:

Understand it. Learn both the visual side and the text-based tools. I’ve seen students get stuck in unproductive loops—asking AI to fix things endlessly. You need intention, a plan. Don’t treat AI as a magic box. Know that it hallucinates. But sometimes that’s okay—work with it.

Q7:

“In your Intro to Computational Photography class, you encourage historical context. What kind of background should people seek to understand AI-generated images?”

A7:

Great question. I always bring in the history of generative art. Artists in the ’60s and ’70s used plotters to make computer art. Looking at that work shows what problems were solved and gives insight into the structure of today’s tools. History gives us access to why things are the way they are. AI has surprised us all—25% of Google’s code is now co-written by AI.

Q8:

“You mentioned Cherokee being uniquely resistant to AI flattening. Can you explain that?”

A8:

Sure. I’ve just started learning Cherokee—it’s extremely hard. I tried using ChatGPT to translate and it failed badly. Cherokee has complex verb structures that change meaning subtly depending on things like tense, number, or qualities like “damp” or “flexible.” These nuances aren’t in the training data, which shows the limitations of AI. It reflects back what we expect to see, not necessarily what’s there. That failure is actually a useful one.