{"id":3345,"date":"2022-08-25T20:37:33","date_gmt":"2022-08-25T20:37:33","guid":{"rendered":"http:\/\/mycours.es\/digitalmedia\/?page_id=3345"},"modified":"2025-03-10T13:02:20","modified_gmt":"2025-03-10T13:02:20","slug":"artificial-creativity","status":"publish","type":"page","link":"https:\/\/mycours.es\/digitalmedia\/artificial-creativity\/","title":{"rendered":"Artificial creativity"},"content":{"rendered":"<p>The field of Machine Learning (or Artificial Intelligence, as it is often called by mainstream media) is evolving rapidly; understanding neural networks and the inner workings of current models is beyond the scope of this course.<br \/>\nIn this unit we are looking at some of the most common uses of AI through a series of art projects.<\/p>\n<h2>Computer vision: image detection, tracking and recognition<\/h2>\n<p>Detecting, recognizing, and tracking objects in digital images, in particular bodies and faces, is one of the most common applications of ML. Why is that?<\/p>\n<p>How do these projects approach this problematic technology?<\/p>\n<p><iframe loading=\"lazy\" title=\"Cheese (2003) by Christian Moeller\" width=\"840\" height=\"630\" src=\"https:\/\/www.youtube.com\/embed\/B61CEiPWzGk?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"https:\/\/christianmoeller.com\/Cheese\"> Cheese<\/a> by Christian Moeller (2003)<\/span><\/p>\n<p><iframe loading=\"lazy\" title=\"(In)Security Camera\" src=\"https:\/\/player.vimeo.com\/video\/9293913?dnt=1&amp;app_id=122963\" width=\"640\" height=\"480\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media\"><\/iframe><\/p>\n<p><span style=\"font-weight: 400;\"><a 
href=\"https:\/\/silviaruzanka.com\/portfolio\/insecurity-camera\/\">Insecurity camera<\/a> by Silvia Ruzanka, Ben Chang, Dmitry Strakovsky (2003)<\/span><\/p>\n<p><iframe loading=\"lazy\" title=\"Backlash 2.0\" width=\"840\" height=\"473\" src=\"https:\/\/www.youtube.com\/embed\/3XDHyWyI9eA?start=43&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><a href=\"https:\/\/graycake.com\/en\/backlash\/\">Backlash<\/a> by Grey Cake (2023)<br \/>\nAutomatic anonymization of protesters.<\/p>\n<h2>Data Analysis<\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-3372\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-26-1024x541.png\" alt=\"\" width=\"840\" height=\"444\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-26-1024x541.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-26-800x423.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-26-768x406.png 768w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-26-1536x812.png 1536w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-26-1200x634.png 1200w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-26.png 1599w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/p>\n<p><iframe loading=\"lazy\" title=\"White Collar Crime Risk Zones Sam Lavigne (The New Inquiry)\" width=\"840\" height=\"473\" src=\"https:\/\/www.youtube.com\/embed\/Wj8meXnHSPo?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><a href=\"https:\/\/whitecollar.thenewinquiry.com\/\">White Collar Crime Risk 
Zones<\/a> by Brian Clifton, Sam Lavigne and Francis Tseng (2017)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3352\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-22.png\" alt=\"\" width=\"906\" height=\"571\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-22.png 906w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-22-800x504.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-22-768x484.png 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/p>\n<p><a href=\"https:\/\/experiments.withgoogle.com\/ai\/drum-machine\/view\/\">The Infinite Drum Machine<\/a> by Kyle McDonald, Manny Tan, Yotam Mann, and friends at Google Creative Lab (2016)<br \/>\nA more scientific application of the same system <a href=\"https:\/\/experiments.withgoogle.com\/bird-sounds\">Bird Sounds<\/a><\/p>\n<p>&nbsp;<\/p>\n<h2>Voice and text generation<\/h2>\n<p>Beyond chatbots<\/p>\n<p><iframe loading=\"lazy\" title=\"Conversations with Bina48: Fragments 7, 6, 5, 2\" src=\"https:\/\/player.vimeo.com\/video\/460370903?dnt=1&amp;app_id=122963\" width=\"840\" height=\"473\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media\"><\/iframe><\/p>\n<p><a href=\"https:\/\/www.stephaniedinkins.com\/conversations-with-bina48.html\">Conversations with Bina48<\/a> by Stephanie Dinkins (2014 &#8211; Ongoing)<br \/>\nDinkins recorded her conversations with <a title=\"BINA48\" href=\"https:\/\/en.wikipedia.org\/wiki\/BINA48\">BINA48<\/a>, an early chatbot modeled after a middle-aged black woman. 
Dinkins mirrors Bina48 while they discuss identity and technological singularity.<\/p>\n<p><iframe loading=\"lazy\" title=\"Voice In My Head\" src=\"https:\/\/player.vimeo.com\/video\/860394637?dnt=1&amp;app_id=122963\" width=\"840\" height=\"630\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media\"><\/iframe><\/p>\n<p>Kyle McDonald &amp; Lauren Lee McCarthy (2023)<\/p>\n<p><a href=\"https:\/\/lauren-mccarthy.com\/Voice-In-My-Head\">Voice In My Head<\/a> explores the implications of letting an AI (ChatGPT) listen in on and intervene in your social experience in real time, augmenting your personality. The piece begins with an onboarding session where you place a bud in your ear and the voice asks you to reflect on the inner voice you were born with. What if it could be more caring? Less obsessive? Less judgmental? More helpful? What if you could change your inner monologue?<br \/>\nAs you respond to the onboarding questions, it clones the sound of your voice and uses it to speak to you. Then you go out into the world, as the voice follows along and offers commentary and direction. The resulting performance calls into question how natural vs. synthetic each person\u2019s thoughts actually are. 
Do any of us have our own point of view?<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3356\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-23.png\" alt=\"\" width=\"780\" height=\"379\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-23.png 780w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/Image-23-768x373.png 768w\" sizes=\"auto, (max-width: 780px) 85vw, 780px\" \/><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"https:\/\/shell-song.neocities.org\/\">Shell Song<\/a> by Everest Pipkin (2020)<br \/>\n<\/span><\/p>\n<p>Shell Song is an interactive audio-narrative game which explores deep-fake voice technologies and the datasets that go into their construction. By considering physical and digital bodies and voices, it asks what a voice is worth, who can own a human sound, and how it feels to come face to face with a ghost of your body that may yet come to outlive you. The piece reminds us that data is people, both in the representations built from the data that is collected and in the tools people build to collect it.<\/p>\n<h2>Image Generation<\/h2>\n<p>For an initial contrast, let&#8217;s consider this generative 1974 plotter artwork by computer arts pioneer Vera Moln\u00e1r, below. How did she create this artwork? We might suppose there was something like a double-for-loop to create the main grid; another iterative loop to create the interior squares; and some randomness that determined whether or not to draw these interior squares, and if so, some additional randomness to govern the extent to which the positions of their vertices would be randomized. 
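The supposed procedure can be sketched in a few lines of Python (a minimal illustration under our own assumptions about the loops, probabilities, and parameter values; not Molnár's actual program):

```python
import random

def molnar_grid(cols=8, rows=8, cell=40.0, p_draw=0.7, jitter=6.0, seed=1):
    """Return Molnar-style quadrilaterals, each a list of four (x, y) corners."""
    rng = random.Random(seed)          # fixed seed makes the "random" piece reproducible
    inset = cell * 0.2                 # margin between cell edge and interior square
    polys = []
    for i in range(cols):              # the double for-loop laying out the main grid
        for j in range(rows):
            if rng.random() > p_draw:  # randomness decides whether to draw this cell
                continue
            x0, y0 = i * cell + inset, j * cell + inset
            x1, y1 = (i + 1) * cell - inset, (j + 1) * cell - inset
            square = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
            # additional randomness perturbs each vertex of the interior square
            polys.append([(x + rng.uniform(-jitter, jitter),
                           y + rng.uniform(-jitter, jitter)) for x, y in square])
    return polys
```

Any plotting library can then stroke the returned polygons; changing the seed or the probabilities yields another member of the same family of images.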
We can suppose that there were some parameters, specified by the artist, that controlled the amount of randomness, the dimensions of the grid, the various probabilities, etc.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3346\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/vera-molnar.jpg\" alt=\"\" width=\"899\" height=\"890\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/vera-molnar.jpg 899w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/vera-molnar-606x600.jpg 606w, https:\/\/mycours.es\/digitalmedia\/files\/2022\/08\/vera-molnar-768x760.jpg 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/p>\n<p>As with &#8216;traditional&#8217; generative art (e.g. Vera Moln\u00e1r), artists using machine learning (ML) continue to develop programs that render an infinite variety of forms, and these forms are still characterized (or parameterized) by variables. What\u2019s interesting about the use of ML in the arts is that the values of these variables are no longer specified by the artist. Instead, the variables are now deduced indirectly from the training data that the artist provides. As Kyle McDonald has pointed out, machine learning is programming with examples, not instructions.<\/p>\n<p>The use of ML typically means that the artists&#8217; new variables control perceptually higher-order properties. (The parameter space, or number of possible variables, may also be significantly larger.) 
The artist\u2019s job becomes one of selecting or creating training sets, and deftly controlling the values of the neural networks\u2019 variables.<br \/>\n-from Golan Levin&#8217;s <a href=\"https:\/\/github.com\/golanlevin\/lectures\/tree\/master\/lecture_cnns_and_gans\">lecture<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><iframe loading=\"lazy\" title=\"Xoromancy: Image Creation via Gestural Control of High-Dimensional Spaces\" src=\"https:\/\/player.vimeo.com\/video\/322525369?dnt=1&amp;app_id=122963\" width=\"840\" height=\"473\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media\"><\/iframe><\/p>\n<p>This installation by alumni Aman Tiwari and Gray Crawford uses gestural control to give the user a bodily awareness of this multidimensional visual space.<\/p>\n<p><iframe loading=\"lazy\" title=\"Voice Scroll long uncherrypicked example\" src=\"https:\/\/player.vimeo.com\/video\/836629629?h=2fc95973f6&amp;dnt=1&amp;app_id=122963\" width=\"840\" height=\"473\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media\"><\/iframe><\/p>\n<p>BMO lab (2023)<br \/>\nReal-time Generation of Panoramic scenes from Voice using a custom Stable Diffusion pipeline<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3961\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade.jpg\" alt=\"\" width=\"2334\" height=\"750\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade.jpg 2334w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade-800x257.jpg 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade-1024x329.jpg 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade-768x247.jpg 768w, 
https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade-1536x494.jpg 1536w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade-2048x658.jpg 2048w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/AI-TRANSFORMATION-grenade-1200x386.jpg 1200w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/jamesbettney.com\/emergent-forms\">Emergent Forms<\/a> by James Bettney (2024)<br \/>\nThe process begins with the prompt \u201ca photorealistic pencil drawing of a hand grenade on a pristine white background\u201d, which generates an initial set of AI-produced images. One image is randomly selected as the seed for the next iteration. The AI then regenerates this image, producing new variations. This process repeats, with each cycle drifting further from the original concept.<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3962\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2.png\" alt=\"\" width=\"1600\" height=\"533\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2.png 1600w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2-800x267.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2-1024x341.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2-768x256.png 768w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2-1536x512.png 1536w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Group-143-2-1200x400.png 1200w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/bjoernkarmann.dk\/project\/paragraphica\">Paragraphica <\/a>by Bjoern Karmann (2023)<br \/>\nContext-to-image<\/p>\n<p>See also similar works examining AI image making in relation to 
photography: <a href=\"https:\/\/www.youtube.com\/watch?v=JC6NC_ta0GE&amp;t=131s\">Blind Camera <\/a>(sound-to-image) by Diego Trujillo Pisanty, <a href=\"https:\/\/jaspervanloenen.com\/black-box-camera\/\">Black Box Camera<\/a> (image-to-text-to-image), and <a href=\"https:\/\/www.creativeapplications.net\/objects\/memogram-timetextcapsule-camera\/\">Memogram\u00a0<\/a>(image-to-text)<br \/>\n<a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-180731.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3967\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-180731.png\" alt=\"\" width=\"1089\" height=\"783\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-180731.png 1089w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-180731-800x575.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-180731-1024x736.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-180731-768x552.png 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/www.beatricelartigue.com\/oeuvres\/3:featured\/59\/The-Big-Smoke\">The Big Smoke<\/a> by Beatrice Lartigue (2023)<br \/>\nThe series of &#8220;photographs&#8221; visualize invisible air pollution in Paris by using the color code of the forecast maps to modulate the tint of the smoke clouds (cold\/good &gt; hot\/extremely bad). 
The project mines air-quality data through an overall index that includes ozone, nitrogen dioxide, particulate matter, and fine particles.<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3997\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen.png\" alt=\"\" width=\"2000\" height=\"2000\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen.png 2000w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen-600x600.png 600w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen-1024x1024.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen-150x150.png 150w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen-768x768.png 768w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen-1536x1536.png 1536w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/eldagsen-1200x1200.png 1200w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p>Promptography by Boris Eldagsen<br \/>\n<a href=\"https:\/\/www.eldagsen.com\/introspections\/\">Introspections<\/a><br \/>\n<a href=\"https:\/\/www.eldagsen.com\/blind-looking-for-a-mirror\/\">Blind looking for a mirror<\/a><br \/>\n<a href=\"https:\/\/www.eldagsen.com\/pseudomnesia3\/\">Pseudomnesia<\/a><br \/>\n<a href=\"https:\/\/www.eldagsen.com\/pd\/\">Professional Development<\/a><\/p>\n<p>Eldagsen <a href=\"https:\/\/www.theguardian.com\/artanddesign\/2023\/apr\/18\/ai-threat-boris-eldagsen-fake-photo-duped-sony-judges-hits-back\">caused controversy<\/a> after winning a prestigious photography prize with an AI-generated image.<br \/>\n<a href=\"https:\/\/www.cbsnews.com\/news\/real-photo-ai-competition-flamingone-miles-astray\/\">Another artist<\/a> won an AI photography competition with a real photo.<\/p>\n<p>&nbsp;<\/p>\n<p><a 
href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-181332.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3969\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-181332.png\" alt=\"\" width=\"1123\" height=\"737\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-181332.png 1123w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-181332-800x525.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-181332-1024x672.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-12-181332-768x504.png 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><a href=\"https:\/\/prototypex.info\/\">Igun: prototype X<\/a> by Minne Atairu (2024-)<\/p>\n<p>The 1897 British invasion of Benin had a devastating effect on its rich artistic landscape, resulting in a 17-year (1897-1914) artistic recession &#8211; a period which lacks visual\/archival records. Ig\u00f9n generates models that might have been created in this period.<\/p>\n<h2>Video Generation<\/h2>\n<p><iframe loading=\"lazy\" title=\"Learning to see: Gloomy Sunday\" src=\"https:\/\/player.vimeo.com\/video\/260612034?dnt=1&amp;app_id=122963\" width=\"840\" height=\"473\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media\"><\/iframe><\/p>\n<p><a href=\"https:\/\/www.memo.tv\/works\/learning-to-see\/\">Learning to See<\/a> by Memo Akten (2017)<\/p>\n<p>&#8220;An artificial neural network looks out onto the world, and tries to make sense of what it is seeing. But it can only see through the filter of what it already knows. Just like us. 
Because we too, see things not as they are, but as we are.&#8221;<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2024-09-06-at-7.15.52PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3963\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2024-09-06-at-7.15.52PM.png\" alt=\"\" width=\"902\" height=\"499\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2024-09-06-at-7.15.52PM.png 902w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2024-09-06-at-7.15.52PM-800x443.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2024-09-06-at-7.15.52PM-768x425.png 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/verse.works\/series\/fighting-windmills-by-addie-wagenknecht\">Fighting Windmills<\/a> by Addie Wagenknecht (2023)<br \/>\nThe artist as a deepfake confronts a deepfaked doppelg\u00e4nger in a boxing ring, symbolizing the internal battle against self-sabotage and the weight of imposed standards.<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Trailer-Posthuman-Cinema.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-3965\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Trailer-Posthuman-Cinema.png\" alt=\"\" width=\"1215\" height=\"662\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Trailer-Posthuman-Cinema.png 1215w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Trailer-Posthuman-Cinema-800x436.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Trailer-Posthuman-Cinema-1024x558.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Trailer-Posthuman-Cinema-768x418.png 768w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Trailer-Posthuman-Cinema-1200x654.png 1200w\" sizes=\"auto, (max-width: 709px) 85vw, 
(max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/posthumancinema.com\/\">Posthuman Cinema<\/a> by Mark Amerika, Will Luers &amp; Chad Mossholder (2023)<\/p>\n<h2>Dataset Bias<\/h2>\n<p><iframe loading=\"lazy\" title=\"KATE CRAWFORD | TREVOR PAGLEN: TRAINING HUMANS | Osservatorio Fondazione Prada\" width=\"840\" height=\"473\" src=\"https:\/\/www.youtube.com\/embed\/P4JpD1PWBDI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><a href=\"https:\/\/paglen.studio\/2020\/04\/29\/imagenet-roulette\/\">ImageNet Roulette<\/a><\/p>\n<p>Read <a href=\"https:\/\/excavating.ai\/\">Excavating AI<\/a> by Kate Crawford and Trevor Paglen<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-4004\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118.png\" alt=\"\" width=\"1680\" height=\"608\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118.png 1680w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118-800x290.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118-1024x371.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118-768x278.png 768w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118-1536x556.png 1536w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-155118-1200x434.png 1200w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/amy-alexander.com\/projects\/deep-hysteria\/\">Deep Hysteria<\/a> by 
Amy Alexander<br \/>\nThe people in these artworks are \u201cAI\u201d-generated twins that vary in gender presentation. Another \u201cAI,\u201d trained on human perceptions, identifies the emotion on their faces.<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/image8.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-4011\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/image8.jpg\" alt=\"\" width=\"1200\" height=\"800\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/image8.jpg 1200w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/image8-800x533.jpg 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/image8-1024x683.jpg 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/image8-768x512.jpg 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p>Unbroken Meaning is a Black Corpus project that investigates the ability of current speech-to-text algorithms to understand African Diaspora-derived Creole, Pidgin, and Broken English. It seeks to demonstrate the bias of these algorithms while developing new algorithms that are better suited to recognize phrases and languages of these communities. 
Additionally, it seeks to create suitable text-to-speech methods for proper computer pronunciation of these languages.<\/p>\n<p><a href=\"http:\/\/ayo.io\/corpus.html\">Black Corpus<\/a> by Ayodamola Tanimowo Okunseinde, a series of projects and artworks that attempt to build datasets and tools centered on black culture.<\/p>\n<p>&nbsp;<\/p>\n<p>Various types of racial and gender bias have been affecting AI systems since their inception.<\/p>\n<p><iframe loading=\"lazy\" title=\"How AI Image Generators Make Bias Worse\" width=\"840\" height=\"473\" src=\"https:\/\/www.youtube.com\/embed\/L2sQRrf1Cd8?start=105&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>Also see <a href=\"https:\/\/huggingface.co\/spaces\/society-ethics\/DiffusionBiasExplorer\">Stable diffusion bias explorer<\/a><br \/>\n<a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/homer-simpson-ai.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-4003\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/homer-simpson-ai.png\" alt=\"\" width=\"1200\" height=\"900\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/homer-simpson-ai.png 1200w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/homer-simpson-ai-800x600.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/homer-simpson-ai-1024x768.png 1024w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/homer-simpson-ai-768x576.png 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><a href=\"https:\/\/x.com\/fireh9lly\/status\/1728934106304774289\">Prompt<\/a>: Guy with swords pointed at me meme except they are Homer Simpson.<br \/>\nDALL-E 3 attempts to combat the racial bias in its training data by occasionally randomly inserting 
race words that aren&#8217;t white into a prompt.<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/googlevision.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-4006\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/googlevision.png\" alt=\"\" width=\"958\" height=\"453\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/googlevision.png 958w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/googlevision-800x378.png 800w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/googlevision-768x363.png 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p>Object recognition affected by skin color.<\/p>\n<p><a href=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-162942.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-4009\" src=\"http:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-162942.png\" alt=\"\" width=\"598\" height=\"602\" srcset=\"https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-162942.png 598w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-162942-596x600.png 596w, https:\/\/mycours.es\/digitalmedia\/files\/2025\/01\/Screenshot-2025-01-13-162942-150x150.png 150w\" sizes=\"auto, (max-width: 598px) 85vw, 598px\" \/><\/a><a href=\"https:\/\/www.technologyreview.com\/2022\/12\/12\/1064751\/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent\/\">The viral AI avatar app Lensa undressed me\u2014without my consent<\/a><br \/>\nMy avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.<br \/>\nBy Melissa Heikkil\u00e4<\/p>\n<p>Part of the problem is that these models are trained on predominantly US-centric data, which means they mostly reflect American associations, biases, values, and culture, says Aylin Caliskan, 
an assistant professor at the University of Washington.<br \/>\n&#8211;<a href=\"https:\/\/www.technologyreview.com\/2023\/03\/22\/1070167\/these-news-tool-let-you-see-for-yourself-how-biased-ai-image-models-are\/\">MIT Technology Review<\/a><\/p>\n<p>Beyond media representation, why does this matter?<br \/>\n<a href=\"https:\/\/www.wired.com\/story\/flawed-facial-recognition-system-sent-man-jail\/?ref=dl-staging-website.ghost.io\">Flawed facial recognition system sent a man to jail<\/a><br \/>\n<a href=\"https:\/\/www.technologyreview.com\/2021\/02\/05\/1017560\/predictive-policing-racist-algorithmic-bias-data-crime-predpol\/\">Predictive policing<\/a><br \/>\n<a href=\"https:\/\/www.cbsnews.com\/news\/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials\/\">AI used to deny health insurance<\/a> and to <a href=\"https:\/\/qz.com\/fight-health-insurance-denials-appeals-ai-1851733712\">appeal such denials<\/a><br \/>\n<a href=\"https:\/\/www.technologyreview.com\/2019\/10\/25\/132184\/a-biased-medical-algorithm-favored-white-people-for-healthcare-programs\/\">AI tool used to predict the need for extra medical care<\/a> is biased against Black patients<br \/>\n<a href=\"https:\/\/www.reuters.com\/article\/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G\/\">AI-powered recruiting tool is biased against women<\/a> (luckily scrapped by Amazon)<br \/>\n<a href=\"https:\/\/www.vice.com\/en\/article\/flawed-algorithms-are-grading-millions-of-students-essays\/\">AI tools are used to grade student essays<\/a><br \/>\n<a href=\"https:\/\/www.thenation.com\/article\/society\/artificial-intelligence-chatgpt-college-applications\/\">AI tools are used to review college admissions<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The field of Machine Learning (or Artificial Intelligence, as it is often called by mainstream media) is evolving rapidly; understanding neural networks and the inner workings of current models is beyond 
the scope of this course. In this unit we are looking at some of the most common uses of AI through a series &hellip; <a href=\"https:\/\/mycours.es\/digitalmedia\/artificial-creativity\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Artificial creativity&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":86,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-3345","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/pages\/3345","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/comments?post=3345"}],"version-history":[{"count":21,"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/pages\/3345\/revisions"}],"predecessor-version":[{"id":4013,"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/pages\/3345\/revisions\/4013"}],"wp:attachment":[{"href":"https:\/\/mycours.es\/digitalmedia\/wp-json\/wp\/v2\/media?parent=3345"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}