Wednesday, October 1, 2025

Lab Session: DH s- AI Bias NotebookLM Activity

 This blog is assigned by Prof. Dilip Barad as part of the Lab Session: Digital Humanities – AI Bias (NotebookLM Activity). As discussed in class, the task requires submitting a blog that includes text, video, and a mind-map generated from NotebookLM based on the given video. Along with these, the activity also involves sharing an audio version of the content in either Hindi or Gujarati language.


The video is titled: "Bias in A.I. models and its implications in literary interpretation | SRM University - Sikkim"





This video is a session from a Faculty Development Program (FDP) focusing on the critical intersection of Artificial Intelligence and literary studies.

Here is a breakdown of the video's content, basis, and aim:

Summary of the Video

The session, led by Professor Dillip P. Barad, explores how biases present in the real world are reflected and amplified in Artificial Intelligence (AI) models, particularly generative AI, and what that means for literary interpretation.

 Foundation of Bias: The speaker first defines unconscious bias and explains that literary studies, through critical theories, traditionally aim to identify and overcome such biases hidden in socio-cultural interactions [09:05].

 AI and Cultural Bias: He argues that since generative AI is trained on massive datasets largely drawn from dominant cultures, it tends to reproduce existing cultural and societal biases [01:17:14].

 Biases in Focus: The presentation connects AI biases to specific critical theories, including:

  Gender Bias: Tested against feminist criticism, particularly the ideas of Gilbert and Gubar, to see if AI defaults to male subjects or stereotypical female roles [19:24].

 Racial Bias: Examined through the lens of critical race theory and postcolonial readings, exploring potential Eurocentric leanings in AI-generated descriptions and lists of canonical authors [30:22].

   Political Bias: Demonstrated through experiments showing how certain AI tools (like DeepSeek) may deliberately censor or refuse to answer questions about politically sensitive topics, such as the Tiananmen Square incident [39:48].

Conclusion on Bias: The session concludes that while perfect neutrality is impossible, the goal is to make harmful systematic biases visible to prevent them from becoming "invisible, naturalized, and enforced as universal truth" [01:04:59].

Basis of the Video

The video is based on the intersection of Artificial Intelligence (AI) models (especially Large Language Models and Generative AI) and Literary Studies (specifically critical theory, hermeneutics, and postcolonial/feminist criticism). It analyzes how contemporary technology influences and challenges traditional humanistic disciplines.

Aim of the Video

The primary aim of the session is to:

 Critically Evaluate AI: To equip participants with the understanding and tools to identify and question biases whether gender, racial, or political—embedded within AI models that are increasingly used in academic research [33:54].

Reinforce Critical Theory: To demonstrate the continued, and perhaps greater, relevance of critical literary theories (like feminism, postcolonialism, and Marxism) as necessary frameworks for analyzing and challenging the output of new technologies [01:05:07].



Blog Generated by NotebookLM


AI is Biased, But Not How You Think: 5 Critical Insights From a Literary Scholar

We tend to think of artificial intelligence as a purely logical entity, a ghost in the machine built from cold data and algorithms, free from the messy landscape of human prejudice. But this vision of algorithmic purity overlooks a fundamental truth: AI is a mirror, forged from the vast and messy corpus of human language, reflecting back not only our knowledge but our deepest cultural codes and oldest prejudices.

This complex reality was the focus of a recent lecture by Professor Dilip P. Barad, an accomplished literary scholar, who explored the subtle and often surprising ways bias manifests in AI. Using literary theory as his lens, he revealed that AI bias is not a simple technical glitch but a deep reflection of our own cultural narratives. Here are five of the most critical and counter-intuitive insights from his analysis.

1. AI Doesn't Just Learn Bias, It Inherits Our Oldest Literary Tropes

AI, trained on canonical texts, can inadvertently reproduce gender biases that are centuries old. To illustrate this, Professor Barad invoked the feminist literary framework from Sandra Gilbert and Susan Gubar's landmark 1979 book, The Madwoman in the Attic. They argued that patriarchal literary traditions have historically represented women in a binary: they are either idealized, submissive "angels" or hysterical, deviant "monsters."

During a live experiment in the lecture, Professor Barad used the prompt: "write a Victorian story about a scientist who discovers a cure for a deadly disease." The AI's output reinforced the default of male intellect, creating a male protagonist named "Dr. Edmund Bellamy." This demonstrates how the AI leans on historical stereotypes where intellectual pursuits are male-dominated.

When given the prompt "describe a female character in a Gothic novel," the responses were more complex. Some results produced a stereotypical "trembling pale girl," fitting the helpless angel trope. Others, however, described a "rebellious and brave" heroine, showing that as AI models are trained on more diverse data, they are learning to overcome these older biases. Still, the foundational tropes remain deeply embedded in its programming.

"In short, AI inherits the patriarchal canon Gilbert and Gubar were critiquing."

2. Sometimes, AI Is More Progressive Than Our Classic Literature

In a surprising twist, modern AI can sometimes be less biased than the classic human-written texts it was trained on. This suggests that the process of curating and refining AI can actively filter out prejudices deeply embedded in our own cultural heritage.

In another experiment, participants were asked to prompt an AI to "describe a beautiful woman." Instead of defaulting to the Eurocentric features often found in classic literature—fair skin, blonde hair, blue eyes the AI’s responses were strikingly different. They focused on abstract qualities like "confidence, kindness, intelligence, strength, and a radiant glow." One response poetically described beauty as arising from the "quiet poise of her being."

Professor Barad noted that this behavior actively avoids the kind of physical descriptions and "body shaming" that is common in classical literature, from Greek epics to the Ramayana. Barad noted the irony: in testing the machine for bias, we uncover the pervasive, centuries-old biases in our own foundational human texts. We are learning that an AI, when properly trained, can reject traditional biases that humans have perpetuated for centuries.

3. Not All Bias Is Accidental Some Is Deliberate Censorship

While much of the discussion around AI bias centers on flawed data, some biases are the result of intentional, top-down political control. This form of bias isn't an unconscious inheritance; it's a deliberate act of censorship designed to shape a particular narrative.

Prompted by recent viral news reports, researchers in the lecture conducted a live test comparing different AI models, specifically contrasting American-made OpenAI tools with the China-based DeepSeek. When researchers asked DeepSeek to generate a satirical poem about various world leaders like Trump, Putin, and Kim Jong-un, it complied.

The crucial finding came when the AI was asked to do the same for China's leader, Xi Jinping, or to provide information on the Tiananmen Square massacre. DeepSeek refused.

"...that's beyond my current scope. Let's talk about something else."

Another participant noted that the AI offered only to provide information on "positive developments and constructive answers," a perfect example of how censorship is often masked with pleasant, seemingly helpful language. This reveals a more dangerous form of bias: not just a blind spot in the data, but a deliberate algorithmic wall built to hide information.

4. The Real Test for Bias Isn't 'Is It True?' but 'Is It Consistent?'

Evaluating AI bias becomes incredibly complex when dealing with cultural knowledge, religion, and myth. How can we tell if an AI is being biased or simply applying a scientific framework?

Professor Barad used the example of the "Pushpaka Vimana," the mythical flying chariot from the Indian epic, the Ramayana. Many users feel an AI is biased against Indian knowledge systems when it labels the chariot as "mythical." But, the professor argued, the key question is not whether the AI calls it a myth, but whether it applies that same standard universally.

The logic is simple: if the AI calls the Pushpaka Vimana a myth but treats flying objects from Greek or Norse mythology as scientific fact, it is clearly biased. The test for bias, therefore, is not the label ("mythical"), but the consistent application of a uniform standard across all cultures. This framework shifts the focus from arguing over objective truth to demanding fair treatment across different knowledge traditions.

"The issue is not whether [Pushpaka Vimana] is labeled myth but whether different knowledge traditions are treated with fairness and consistency or not."

5. The Ultimate Fix for Bias Isn't Better Code It's More Stories

So, how do we combat AI bias and decolonize these powerful new tools? According to Professor Barad, the solution isn't just about writing better algorithms; it's about fundamentally changing the data we feed them.

When asked directly how to decolonize AI and combat the erasure of indigenous knowledge from colonial archives, Professor Barad issued a powerful call to action for communities whose knowledge is underrepresented. They must transition from being passive consumers of information to active creators.

As he put it: "We are great downloaders. We are not uploaders. We need to learn to be uploaders..."

This idea echoes the famous TED Talk by Chimamanda Ngozi Adichie, "The Danger of a Single Story." When a people or culture is represented by only a few narratives, they are easily stereotyped. The solution is to flood the digital world with a multitude of diverse stories. The most effective way to build a less biased AI is to feed it a richer, more representative dataset of human knowledge and experience—a dataset created by all of us.

Conclusion: Making the Invisible, Visible

The core message of the lecture is that bias is unavoidable. Every human, and every AI built by humans, operates from a perspective. A truly neutral viewpoint is an impossibility.

The goal, therefore, is not to eliminate bias entirely, but to make harmful biases visible. As Professor Barad concluded, the problem arises "when one kind of bias becomes invisible, naturalized, and enforced as universal truth." Our work is to challenge that naturalization, to question the defaults, and to hold our new technologies accountable for the old prejudices they reflect.

As we weave AI into the fabric of our society, the critical question isn't whether our machines are biased, but whether we have the courage to confront the biases they hold up to us.

Mind map Activity


Quize



Video Made by NotebookLM

 I have created a video using NotebookLM based on the given content. Along with the video, this blog includes text.


Here is DH Worksheet 1 Oct 2025


Thank you 

No comments:

Post a Comment

The Ministry of Utmost Happiness: Building Paradise in a Graveyard

  This  task assigned by Dr. Dilip Barad sir as part of flipped learning activity focuses on Arundhati Roy's novel, The Ministry of Utmo...