Systems Dreaming in Colour: AI Co-Creation & the Redistribution of Thought
Image generated using Midjourney (2025).
TL;DR: The Cuco animation project offers a useful case study in how AI tools can transform a static visual style into a dynamic, spatially aware city. By redistributing cognitive tasks, these tools free artists to focus on storytelling and intent rather than repetition. This reflects the Extended Mind Thesis, in which cognition extends into tools and AI acts as an epistemic partner in a recursive, co-creative process. Creativity theories such as the Four/Six P’s and Wallas’s stages frame AI as a process amplifier, product modulator, and potential realiser, enhancing human creativity through distributed agency. In education, students must learn to think across these cognitive scaffolds, cultivating meta-cognition and co-adaptive learning. AI co-creation redefines authorship as the curation of complexity, revealing new dimensions of human cognition in a symbiotic, systemic interplay.
By Lance Bunt *Thoughts refined and sharpened with the help of ChatGPT
In the Cuco animation project, an unusual thing happened: A visual style, once confined to 60 illustrations, became an expansive city. Motion, once inked frame by frame, was interpolated through generative systems. What began as sketches became dynamic, spatially aware environments through an iterative, hybrid workflow between artists and machines.
This wasn’t automation in the industrial sense. It was cognitive redistribution.
The artists did not disappear from the process — they were repositioned. Instead of exhausting effort on repetition, they redirected their focus toward spatial storytelling, tonal modulation, and semantic intent. The system handled the rest. This shift mirrors a larger theoretical transformation already underway — one that reframes cognition not as contained within the skull, but as extended across tools, symbols, and systems.
The Theoretical Turn: From Mind to Media
At the core of this reorientation is the Extended Mind Thesis (Clark & Chalmers, 1998), which posits that cognition can extend into the environment when tools and systems function as integrated components of a thinking process.
The AI-enhanced animation pipeline functions precisely in this way.
Style transfer models did not merely apply a texture; they served as aesthetic memory, offloading visual coherence across a distributed team.
Generative interpolation did not just "fill in the gaps"; it anticipated intention, reflecting back plausible futures to human input.
2D-to-3D conversion tools did not create final artifacts; they modelled perspective, enabling the artist to focus on thematic depth.
Each tool became an epistemic partner — not sentient, but structured to absorb, respond, and relay.
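To make this division of labour concrete, here is a minimal sketch of such a pipeline in Python. Every name in it is an illustrative placeholder, not the actual Cuco toolchain; a real pipeline would call dedicated style-transfer, interpolation, and depth-estimation models where this sketch uses stubs.

```python
# Hypothetical sketch of the hybrid pipeline: human keyframes in, a
# stylised, spatially aware sequence out. All functions are placeholders.

from dataclasses import dataclass, field


@dataclass
class Frame:
    """A single image in the sequence, kept abstract for the sketch."""
    pixels: list = field(default_factory=list)  # stand-in for image data
    label: str = ""


def apply_style(frame: Frame, reference: Frame) -> Frame:
    # Stand-in for a style-transfer model: re-renders the frame so that
    # visual coherence with the reference illustrations is preserved.
    return Frame(frame.pixels, f"{frame.label}|styled")


def interpolate(a: Frame, b: Frame, steps: int) -> list[Frame]:
    # Stand-in for generative interpolation: proposes plausible in-between
    # frames instead of requiring each one to be inked by hand.
    return [Frame(a.pixels, f"tween{i}") for i in range(steps)]


def estimate_depth(frame: Frame) -> Frame:
    # Stand-in for 2D-to-3D conversion: models perspective so the artist
    # can reason about space rather than construct it manually.
    return Frame(frame.pixels, f"{frame.label}|depth")


def hybrid_pipeline(keyframes: list[Frame], reference: Frame) -> list[Frame]:
    """Artists supply keyframes and intent; the system fills, stylises, spatialises."""
    styled = [apply_style(f, reference) for f in keyframes]
    sequence: list[Frame] = []
    for a, b in zip(styled, styled[1:]):
        sequence.append(a)
        sequence.extend(interpolate(a, b, steps=3))  # machine proposes the tweens
    sequence.append(styled[-1])
    return [estimate_depth(f) for f in sequence]  # add spatial awareness last
```

The point is structural rather than technical: the artist supplies the keyframes, the reference, and the final judgement, while the system absorbs the repetitive in-between work.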
In this light, AI co-creation is not the outsourcing of thought — it is the spatial reconfiguration of it. Thought becomes something distributed across the interface: part human deliberation, part computational suggestion, part iterative dialogue.
Iteration as Ontology: The System as a Cognitive Scaffold
In the Cuco project, success was not determined by linear output but by feedback loops — artists adapted to model responses, models responded to emergent sketches, and the system evolved toward coherence over time.
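A minimal sketch of that loop, with purely illustrative function names and a toy acceptance test standing in for artistic judgement, might look like this:

```python
# Hypothetical co-creative feedback loop: the model proposes, the artist
# critiques, and the next proposal is conditioned on the accumulated notes.

def generate(prompt: str, notes: list[str]) -> str:
    # Placeholder for a generative model call conditioned on prior feedback.
    return f"draft of '{prompt}' incorporating {len(notes)} notes"


def critique(draft: str) -> tuple[bool, str]:
    # Placeholder for the artist's judgement: accept, or return a revision note.
    accepted = "3 notes" in draft  # toy criterion, not a real quality measure
    return accepted, "push the tonal contrast further"


def co_create(prompt: str, max_rounds: int = 5) -> str:
    notes: list[str] = []
    draft = generate(prompt, notes)
    for _ in range(max_rounds):
        accepted, note = critique(draft)
        if accepted:
            break
        notes.append(note)               # the system retains the artist's intent
        draft = generate(prompt, notes)  # and the artist reacts to the new draft
    return draft


print(co_create("a spatially aware city in the Cuco style"))
```

Neither party finishes the work alone; coherence is a property of the loop, not of either node.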
This is a hallmark of recursive cognition: systems that refine outputs based on reflection and re-entry. In the cognitive sciences, such structures underpin consciousness itself. The mind is not a static container of ideas but an ongoing negotiation between input, memory, prediction, and feedback. What this animation process demonstrates is not just technical innovation — it is an ontological insight: creativity is no longer exclusively internal. It emerges through systemic entanglement.
Or as media theorist Marshall McLuhan would argue, “the medium is the message” — but here, the medium is recursive, and the message is co-produced.
Pedagogy for a Post-Symbolic Generation
In our classrooms, we are now witnessing a similar reconfiguration. Students offload low-level cognitive work onto generative tools — but this is not inherently a deficit. As seen in the animation pipeline, offloading allows higher-order cognition to take priority: abstraction, ethical reasoning, transdisciplinary synthesis.
This is the new imperative: To teach students how to think across systems.
That means:
Recognising AI tools as cognitive scaffolds, not just software.
Cultivating meta-cognitive reflexivity: the ability to evaluate when, why, and how to delegate cognition.
Designing for co-adaptive learning, where both student and system evolve in tandem.
In this framing, co-creation with AI becomes a model of distributed agency — a living example of how modern cognition operates: entangled, iterative, non-linear, and increasingly synthetic.
Co-Creation Reframed: AI Through the Lens of Creativity Theory
To understand what it truly means to create with AI — to offload cognition in a generative, rather than reductive, way — we must turn to the theoretical frameworks that have long sought to define creativity. These models offer not only conceptual clarity but a scaffold for redefining the human-machine relationship as synergistic, rather than oppositional.
1. The Four and Six P's of Creativity
Mel Rhodes’s Four P’s model (Person, Process, Product, and Press, the environment often recast as Place) and its expanded Six P’s version (adding Persuasion and Potential) present creativity as a system of interdependent variables. Within this schema, AI becomes:
Process Amplifier: AI reshapes the cognitive steps of creation by compressing search spaces, suggesting combinations, and reducing procedural friction (e.g., style transfer, code autocompletion, text interpolation). The creative flow is not eliminated — it is accelerated and refracted.
Product Modulator: With tools like generative ink and style models, the AI pipeline co-determines the form and finish of outputs. The “product” of creativity is now a co-authored surface, where intention and algorithm meet.
Place Reconstructor: The “environment” of creativity — traditionally a studio, lab, or classroom — is redefined. A creative "place" now includes interface design, model behaviour, prompt ecosystems, and collaborative platforms. The digital becomes ecological.
Potential Realiser: For novice creators, AI functions as a latent potential activator, scaffolding complexity until fluency is reached. It can simulate mastery to foster early confidence, enabling more people to enter the creative domain.
Persuasion Enhancer: AI’s ability to generate refined artefacts, visualisations, or prototypes quickly aids in the social aspect of creativity: convincing others of an idea’s value. It enhances legibility and impact — crucial components of persuasion.
Person as Curator: The creative individual now shifts from maker to meta-designer — the one who frames, refines, and evaluates generative output. Creativity becomes an act of strategic selection, aesthetic calibration, and ethical navigation.
2. Wallas’s Four-Stage Model
In Graham Wallas’s Preparation–Incubation–Illumination–Verification model, AI tools expand and compress each stage:
Preparation: AI accelerates the information-gathering process, summarising large corpora, suggesting relevant patterns, or ideating from sparse inputs.
Incubation: Instead of passive waiting, users now engage in active incubation through generative iteration. The machine simulates permutations, helping ideas mature by generating ambient noise — from which insights emerge.
Illumination: While insight remains a human trait, AI often provides a catalytic spark — through unexpected combinations, errors, or novel outputs. “Happy accidents” are no longer serendipitous; they’re built into the system.
Verification: AI’s ability to simulate outcomes, test hypotheses, or model responses (as in animation timing, UX flow, or narrative coherence) makes refinement immediate. The loop tightens.
3. Computational Creativity Models
Traditional computational creativity attempted to simulate human creativity algorithmically — often with limited scope. But contemporary generative systems shift the paradigm from simulation to symbiosis. The system no longer imitates creativity in isolation; it extends creativity in tandem. This mirrors Andy Clark’s notion of the brain as a prediction machine — a system always co-creating with its environment. AI tools, too, are predictive in structure. Thus, when used as co-creators, they function as cognitive prostheses: systems designed not to replace the artist, but to externalise part of their mental process.
4. Psycholinguistic and Emotional-Cognitive Models
Psycholinguistic theories remind us that language is both generator and carrier of thought. Prompting an AI is not trivial — it is cognitively loaded expression. The act of writing a prompt becomes a form of speculative narration, a hypothesis about what the system might understand and return.
This interaction is inherently creative:
Conceptual combination occurs in both prompt construction and AI interpretation.
Affective resonance emerges when outputs align with — or subvert — emotional tone and thematic intent.
AI systems also reflect back emotional states and creative rhythms — producing tone-matching music, style-consistent imagery, or pathos-rich narrative sequences. They become mirrors of affect as much as generators of form.
Closing the Loop: Creativity as Cognitive Distribution
To co-create with AI is not to compromise authorship — it is to expand the field in which authorship takes place. By applying creativity theory to AI systems, we see not a threat to the creative process, but a transformation of its topology. These are no longer discrete stages or internal functions — they are distributed systems of mind, media, and modelling. This is not a diminishment of humanity — it is a restatement of what it means to be human. To collaborate with systems is not to give up authorship, but to redefine it: not as sole originator, but as curator of complexity, navigator of affordances, architect of recursive frames.
AI is not becoming human. But humans, in interfacing with AI, are learning what their cognition looks like when it is extended, mirrored, and magnified.
So, what are we building? Not just animations. Not just essays. We are building cognitive environments — designed spaces where the synthetic and the sentient intersect.
…And in those spaces, something remarkable happens: Not only do systems dream in colour — They help us see new shades of our own mind.
References
Boden, M.A., 1998. Creativity and artificial intelligence. Artificial Intelligence, 103(1–2), pp.347–356.
Clark, A. and Chalmers, D.J., 1998. The extended mind. Analysis, 58(1), pp.7–19.
Colton, S. and Wiggins, G.A., 2012. Computational creativity: The final frontier? In: Proceedings of the European Conference on Artificial Intelligence (ECAI), pp.21–26.
Fauconnier, G. and Turner, M., 2002. The way we think: Conceptual blending and the mind's hidden complexities. New York: Basic Books.
Kozbelt, A., Beghetto, R.A. and Runco, M.A., 2010. Theories of creativity. In: J.C. Kaufman and R.J. Sternberg, eds. The Cambridge Handbook of Creativity. Cambridge: Cambridge University Press, pp.20–47.
McLuhan, M., 1964. Understanding media: The extensions of man. New York: McGraw-Hill.
Rhodes, M., 1961. An analysis of creativity. The Phi Delta Kappan, 42(7), pp.305–310.
Runco, M.A. and Jaeger, G.J., 2012. The standard definition of creativity. Creativity Research Journal, 24(1), pp.92–96.
Thagard, P. and Shelley, C., 2001. Emotional cognition: Computational mechanisms of emotional coherence. In: Proceedings of the 23rd Annual Meeting of the Cognitive Science Society, pp.1010–1015.
Wallas, G., 1926. The art of thought. London: Jonathan Cape.