Sunday, February 22, 2026

Humans in the Loop: Movie Review

This blog is part of a Sunday reading film-screening activity assigned by Dr. Dilip Barad Sir following the screening of Humans in the Loop. The activity encouraged students to reflect thoughtfully on the film and engage with its ideas beyond surface-level viewing. Through this task, students were expected to observe the film’s portrayal of artificial intelligence, digital labour, and human involvement in technological systems, while also developing critical thinking and a deeper understanding of contemporary digital realities.



Pre-viewing Task:


1. AI Bias & Indigenous Knowledge Systems

  • AI bias refers to systematic errors in machine learning outputs that reflect and reproduce the prejudices, gaps, and assumptions embedded in training data, often favouring dominant cultural, racial, or geographic perspectives over marginalised ones.

  • Because AI models learn from human-labelled datasets, the biases of those who design, fund, and supervise the labelling process are baked directly into the technology's "worldview."

  • Humans in the Loop dramatises this through Nehma's work: she is asked to label ecological objects (plants, insects, animals) using industrial categories like "pest," "weed," or "crop" that conflict with her Oraon Adivasi knowledge system, which understands the same entities relationally and contextually.

  • The film raises the critical point that classifiers like "pest" or "weed" are dependent on function: what is a weed to one party is a herb to another. This prompts the question: "Will this industrial consumption economy dictate our knowledge?"

  • Indigenous ecological knowledge (IEK) is holistic, place-based, and relational; it resists the reductive binary labelling that machine learning requires, thereby exposing how AI's very architecture privileges extractive, capitalist epistemologies.

  • A telling scene in the film shows an image generator producing a white boy atop an alligator when an Adivasi child asks to see himself on a crocodile: a vivid illustration of whose reality gets encoded as "default."

  • This challenges us to ask: who defines ground truth in AI, and whose knowledge systems are rendered invisible or invalid in the process?
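The mechanism these points describe can be made concrete with a toy sketch. The code below is entirely hypothetical (the objects, labels, and the "model" are invented for illustration, not drawn from the film or any real annotation platform): a model that simply memorises the majority label for each object inherits whatever guideline produced its training labels, so the annotators' imposed categories become its only "worldview".

```python
# Toy sketch: a "model" that memorises majority labels from its
# training data, showing how the labellers' categories become the
# system's worldview. All data here is invented for illustration.
from collections import Counter

# Hypothetical labels supplied under an industrial guideline:
# every caterpillar is a "pest", regardless of ecological context.
training_data = [
    ("caterpillar", "pest"),
    ("caterpillar", "pest"),
    ("caterpillar", "pest"),
    ("marigold", "crop"),
    ("wild amaranth", "weed"),  # edible in many local food systems
]

def train(pairs):
    """Majority label per object: the bias in the labels IS the model."""
    votes = {}
    for obj, label in pairs:
        votes.setdefault(obj, Counter())[label] += 1
    return {obj: counts.most_common(1)[0][0] for obj, counts in votes.items()}

model = train(training_data)
print(model["caterpillar"])    # "pest"  (the guideline's verdict, not the annotator's knowledge)
print(model["wild amaranth"])  # "weed"  (an edible plant erased by the category)
```

No amount of additional training on labels produced under the same guideline can recover the relational knowledge the guideline excluded, which is the structural point the film makes.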


2. Labour & Digital Economies

  • Invisible labour in digital economies refers to the unacknowledged, low-paid, and often feminised human work that underpins technologies marketed as automated or artificially intelligent, including data annotation, content moderation, and image labelling.

  • The term "artificial intelligence" itself obscures the vast human infrastructure behind it: over 70,000 Indians  mostly rural women  form AI's invisible workforce, performing the repetitive cognitive tasks that train global AI systems.

  • This labour is "invisible" in multiple senses: it is geographically remote, socially marginalised, algorithmically erased from the final product, and absent from mainstream narratives about technological progress.

  • The film is attentive to the texture of this labour: fluorescent-lit rooms, lagging computers, supervisors chasing targets, and workers trying to make sense of labels written in a different idiom from their own lives.

  • Highlighting such labour matters because it dismantles the myth of AI as disembodied and neutral; it reveals supply chains of exploitation that mirror older colonial and caste-based patterns of resource extraction.

  • It also raises questions of fair compensation, intellectual credit, and whose cognitive labour is commodified without consent or recognition in the building of billion-dollar AI industries.

  • The film highlights the unsung human labour that quietly powers the global artificial intelligence industry, insisting this story be central, not peripheral, to how we narrate technological modernity.


3. Politics of Representation

  • Representation in Humans in the Loop operates on two interlocking levels: how AI systems represent (or fail to represent) Adivasi people, and how the film itself represents both technology and Adivasi culture to mainstream audiences.

  • Critics and publicity materials emphasise the film's unusual vantage point: it examines how technological progress can entrench exclusion while sidelining indigenous knowledge systems, positioning Adivasi experience not as a backdrop but as the analytical lens through which AI is scrutinised.

  • The film's most cited scene, in which an AI image generator produces a stereotyped, Europeanised image when asked to depict a tribal woman, directly indicts the representational politics of training data, showing that AI either does not see Adivasi people at all or sees them through a colonial distortion.

  • Executive producer Kiran Rao framed the film's intervention explicitly around "equitability, representation and data colonialism," signalling that representation is understood here as a political and structural issue, not merely an aesthetic one.

  • On the question of how Adivasi culture is depicted, reviews are divided: some praise the film's grounded, research-based authenticity, while at least one Letterboxd critic accused it of "fetishising" Adivasi life for a liberal festival audience without genuine political commitment.

  • The film's bilingual Hindi-Kurukh language choice is itself a representational act  giving screen presence to an Adivasi language rarely heard in mainstream Indian cinema.

Taken together, the film asks audiences to be conscious of the double bind: AI misrepresents Adivasi communities, and cinema, if not careful, can do the same.

While-watching Task:

1. NARRATIVE & STORYTELLING

How does the film situate Nehma’s personal life within larger algorithmic structures? What narrative turns foreground labour, family, and knowledge systems?
The film Humans in the Loop closely connects Nehma’s personal life with broader algorithmic systems by showing how her everyday experiences are shaped by invisible digital labour networks. Her work as a data annotator is not presented as an abstract technological task but as something embedded within her home, family responsibilities, and socio-economic realities. By situating annotation work inside domestic spaces, the film highlights how global AI infrastructures rely on local, often precarious labour that remains unseen yet essential.

Several narrative turns foreground the intersections of labour, family, and knowledge systems. First, scenes of Nehma working from home while managing household duties reveal the gendered and emotional dimensions of digital labour, where professional and personal boundaries blur. Second, moments in which she struggles to categorise ecological objects using industrial labels bring her indigenous Oraon knowledge into tension with algorithmic classification, exposing how AI systems reshape ways of knowing. These sequences emphasise that annotation is not merely technical work but a process involving interpretation, negotiation, and sometimes epistemic conflict.

Another key narrative turn emerges through conversations with family members and community contexts, which ground her labour within collective life rather than individual employment. The film also uses visual contrasts between her natural surroundings and the digital interfaces she works on to underscore the gap between lived ecological knowledge and standardised machine-readable categories. Through these narrative strategies, the film foregrounds how algorithmic structures penetrate intimate spaces, revealing digital labour as deeply entangled with family life, cultural identity, and local knowledge systems.

When Nehma “teaches” AI, what does this suggest about human-machine learning loops beyond technological jargon?

In Humans in the Loop, Nehma’s act of “teaching” AI through data annotation reveals that machine learning is not an autonomous or purely technical process but a deeply human-dependent cycle. Her work shows that AI systems learn by absorbing human judgement, interpretation, and cultural assumptions. In this sense, the human–machine learning loop becomes a process of knowledge transfer where people quietly shape the intelligence that machines appear to possess.

Beyond technological jargon, Nehma’s teaching suggests that AI learning is also emotional, ethical, and interpretative labour. Each label she assigns reflects decisions about meaning, context, and relevance, demonstrating that machines do not understand the world directly but through human mediation. This reframes AI as a collaborative yet unequal relationship: while humans train and correct the system, their contributions often remain invisible and undervalued.

Moreover, the film highlights that what Nehma teaches AI is influenced by her own worldview, even when constrained by industrial categories. This indicates that human–machine learning loops are also epistemological loops, where certain knowledge systems are translated into machine-readable forms while others are simplified or excluded. Thus, Nehma’s role suggests that AI is not merely learning facts but inheriting human perspectives, biases, and limitations.

Ultimately, the film invites us to see the human–machine loop as a socio-cultural process rather than a purely technical one, raising critical questions about authorship, agency, and recognition: if humans teach AI, whose knowledge is being amplified, and whose remains unheard?

2. REPRESENTATION & CULTURAL CONTEXT

How are Adivasi culture, language, tradition, and ecological knowledge represented?

In Humans in the Loop, Adivasi culture, language, tradition, and ecological knowledge are represented with intimacy and everyday realism rather than through romanticised or exotic portrayals. The film situates these elements within Nehma’s daily life, showing how her identity as an Oraon Adivasi woman shapes the way she understands nature, work, and community.

Culture and tradition are reflected through domestic spaces, family interactions, and community practices that emphasise collective living and continuity with ancestral ways of life. The film presents rituals, food habits, and local customs subtly in the background, allowing viewers to see Adivasi life as lived experience rather than spectacle. This grounded representation highlights the coexistence of tradition and modern digital labour.

Language plays an important role in conveying identity and worldview. Nehma’s use of her local language in conversations contrasts with the English-dominated digital interface of annotation platforms, illustrating a linguistic hierarchy embedded within global technological systems. This tension suggests that while Adivasi language carries cultural memory and meaning, it remains marginal within digital infrastructures.

Ecological knowledge is one of the film’s most powerful representations. Nehma’s understanding of plants, insects, and animals is relational and context-based, shaped by lived interaction with the environment. The film contrasts this holistic perspective with the reductive labels required by AI annotation, such as “weed” or “pest,” thereby revealing how indigenous ecological knowledge challenges industrial and algorithmic framings of nature.

Overall, the film portrays Adivasi identity as dynamic and resilient, showing how cultural memory, language, and ecological wisdom persist even within globalised technological networks. At the same time, it raises critical questions about visibility and recognition, suggesting that while Adivasi knowledge sustains both community life and digital economies, it often remains under-acknowledged in dominant narratives of technology and progress.

Does the film challenge or reinforce dominant media stereotypes about tribal communities and modern technology?

In Humans in the Loop, the portrayal of Nehma and her community largely challenges dominant media stereotypes about tribal (Adivasi) communities and their relationship with modern technology.

Mainstream media often frames tribal communities as either primitive, technologically disconnected, or romantically close to nature but outside modernity. The film disrupts this binary by presenting Nehma as both deeply rooted in her Adivasi culture and actively participating in global digital labour. Her role as a data annotator demonstrates that tribal communities are not excluded from technological systems; instead, they are integral yet often invisible contributors to AI development.

The film also challenges stereotypes by foregrounding agency and intellectual labour. Rather than depicting Nehma as a passive subject of development narratives, it shows her making interpretative decisions, negotiating meaning, and effectively “teaching” AI. This representation counters the idea that technological knowledge belongs only to urban or elite populations.

At the same time, the film does not ignore structural inequalities. By showing the precarious nature of annotation work and the epistemic tensions between indigenous ecological knowledge and industrial AI categories, it reveals how dominant technological systems can marginalise local knowledge. This nuanced portrayal avoids reinforcing stereotypes and instead exposes the power dynamics that shape visibility, labour value, and knowledge recognition.

Ultimately, the film presents Adivasi identity as modern, adaptive, and intellectually engaged, challenging simplistic representations and encouraging viewers to rethink who participates in technological futures. It suggests that the real issue is not technological absence but unequal recognition and representation within digital economies.

3. CINEMATIC STYLE & MEANING




Mise-en-Scène & Cinematography: Humans in the Loop (2025)

Aspect Ratio — The Governing Formal Choice
The film is shot in a 1.55:1 (near-square) aspect ratio — deliberately mimicking the shape of a computer monitor. This framing lends a unique gaze towards a set of people one knows very little about, while creating a storybook-like intimacy on screen. Both the forest and the data centre are held within the same frame, preventing the film from glorifying either space over the other.

The Forest
  • Wide-angle shots embed characters within nature: they are not set against the landscape but part of it.
  • Natural, dappled light: warm, diffuse, non-directional. Nothing is spotlit; everything participates equally.
  • Low, ground-level camera in the porcupine sequence places human and animal at the same horizontal plane, encoding a relational, non-hierarchical world-view.
  • Compositions are deliberately non-geometric: roots, grass, and canopy break the frame into irregular, organic shapes, the visual opposite of the grid.

The Workspace / Data Centre
  • Tight mid-shots and close-ups compress space: walls, screens, and ceilings are always visible, implying no world beyond the frame.
  • Fluorescent artificial light: flat, shadowless, institutional. It erases depth and complexity, visually rhyming with the binary label: everything is either lit or not.
  • Cool blue-grey colour palette contrasts directly with the warm ochres of the forest sequences.
  • The computer screen as light source: it illuminates Nehma's face, not the other way around. The machine directs light onto the human; power flows from screen to body.
  • Workers blocked in rows, facing screens — assembly-line composition that places data labour within the visual grammar of industrial production.

Ritual and Domestic Spaces
  • Medium close-ups that are tender, not confining: the camera holds rather than traps.
  • Tactile texture (rock, bark, weave, soil) is emphasised through close framing: a visual argument that Adivasi knowledge is embodied and material, not digitisable.
  • Horizontal eye-lines between Nehma and her children in knowledge-transmission scenes are equal, not hierarchical, contrasting directly with the top-down blocking of supervisors over workers in the data centre.

The Central Visual Argument

Mid-shots and close-ups dominate the confined data centre, while wide-angle shots showcase the natural surroundings: a deliberate representational choice between confinement and openness, pixel and landscape.
The most precise expression of this is the parallel editing of the AI infant and Guntu: close-ups of muscle-movement data on screen are visually rhymed with close-ups of Nehma's own baby's limbs. The cinematography makes them look alike so that the edit can argue, without dialogue, that the attention Nehma gives the machine is attention taken from her child.
The camera quietly captures the conflict between tradition and modernisation through shifts in colour and composition; crucially, the film uses this contrast sparingly, letting visual atmosphere carry the argument rather than over-signalling it.

How do sound design and editing rhythms contribute to the contrast between analog life and digital labour?



The Sound Team

Sound Design: Kalhan Raina

Score: Saransh "Khwabgah" Sharma

Editing: Swaroop Reghu & Aranya Sahay

These three collaborators divide their responsibilities clearly: Raina governs the diegetic world (what exists within the film's reality), Sharma governs the non-diegetic emotional register (the score we hear as audience), and the editors govern rhythm and time (how quickly or slowly the film breathes).


Sound Design: Diegetic Contrast

The most fundamental sonic opposition in the film is between organic ambient sound and mechanical ambient sound.

In the forest and domestic world:


Birdsong, wind through grass, the sounds of water, animal movement, children's voices, communal singing: all are irregular, unpredictable, and layered simultaneously. No single sound dominates. The soundscape is polyphonic and relational, mirroring the Adivasi ecological world-view the film endorses.


Crucially, these sounds are not cleaned up in post-production. The presence of background noise, ambient texture, and sonic imperfection is a deliberate choice: it signals authenticity, embodiment, and the irreducibility of living sound to pure signal.


In the data centre:


The sonic world contracts to repetitive mechanical sounds: keyboard clicks, mouse clicks, the hum of fluorescent tubes, the lag and boot sounds of slow computers. These sounds are metronomic: they have regularity, uniformity, and no organic variation.

The click of the mouse labelling an image becomes the film's most politically loaded sound. Each click is an act of classification (pest or not pest, weed or crop, human or other). The sound is small, dry, and final. It enacts the violence of binary categorisation in sonic form.


The Score: Sharma's Compositional Strategy

Sharma's music blends organic soundscapes with textured electronic production  drawing from ambient, post-classical, and downtempo influences. Guitar, piano, synthesizers, and field recordings form the basis of his pieces, built on layers that feel intimate, tactile, and reflective. For Humans in the Loop, the aim was to create a subtle bed of sound that would allow the emotion to unfold naturally.

This compositional strategy is not merely aesthetic; it is politically calibrated:


The score uses both organic instruments (guitar, piano) and synthetic textures (synthesizers, electronics) simultaneously. It refuses to assign "natural instruments" only to forest scenes and "electronic sounds" only to data centre scenes. Instead, it holds both registers in one sonic space, the same way Nehma holds both worlds in her body.


Sharma's ambient soundtrack infuses a dream-like quality (The Federal), particularly in the opening porcupine sequence, where the score functions as sonic mise-en-scène, enlarging a small, quiet visual event into something of mythic weight without dramatising it.


The score does not swell or signal. It operates in understatement. This restraint prevents the audience from being told how to feel, placing the interpretive burden back on the viewer: appropriate for a film about who has the right to assign meaning.


The Kurukh Music Problem: Authenticity vs. Accessibility

This is the film's most sophisticated and honest sound decision. Director Sahay explains:

Oraon music is very different from the music that the broader Indian audience listens to. It does not follow the four-by-two meter. Musicians often change the key, sometimes even in the middle of a song, and may also change the rhythm in the middle of a song. It was difficult to decide how to bridge the music: to not alienate the audience completely, but to also retain authenticity. This is where the synth, the violin, and other instruments came in.

This is a representational dilemma made audible. The film's score is itself a negotiation between two knowledge systems: the irregular, non-metrical rhythms of Kurukh Adivasi music and the regularised, metrically predictable structures expected by mainstream Indian and global cinema audiences. The synthetic instruments act as a bridge: familiar enough to hold the audience, unfamiliar enough to signal that they are in a different sonic world. Importantly, the director acknowledges the compromise honestly, which is itself an ethical act.


Editing Rhythms: Pacing as Political Argument

The film is edited by Swaroop Reghu and Sahay himself, a significant choice: the director controlling the edit ensures that pacing decisions are inseparable from thematic ones.


Slow rhythm in analog sequences:


Forest and domestic scenes are given longer takes and fewer cuts. Time is allowed to accumulate. A scene of Nehma and the porcupine, or of Nehma pointing out living things to her children, breathes; the edit does not hurry it. This slow rhythm encodes a different relationship with time: non-industrial, non-target-driven, attentive.


Tighter rhythm in labour sequences:


Inside the data centre, cuts are more frequent. The editing rhythm begins to mirror the click-rate of the labelling work: image after image, classification after classification. The edit trains you, as a viewer, to process images quickly, just as Nehma is trained to. This is a formally self-aware gesture: the editing makes you complicit in the very cognitive rhythm it is critiquing.


The central parallel edit  AI infant / Guntu:


The film's most powerful editorial sequence cuts between Nehma labelling infant muscle-movement data at her workstation and close-ups of her own son Guntu's limbs at home. The editing holds both in the same rhythm: the cuts between them are neither accelerated nor slowed. By keeping the temporal rhythm identical, the edit argues that these two events are not merely analogous but materially linked: the same unit of time, the same unit of attention, the same body, displaced from one to the other. No score is needed here. The argument is made entirely through the cut.


Restraint and silence:


Sahay exercises admirable restraint, with a focus on silences and moments of introspective quiet, allowing the film's themes to breathe without descending into didacticism (The Federal). Silence in this film is not absence; it is where the film places its most serious claims. When Nehma refuses to label the caterpillar as a pest and is reprimanded, the scene is followed by a held silence rather than a dramatic score cue. The silence asks: who fills this space? Who has the right to speak after this? The audience is left in the gap.

4. ETHICAL & POLITICAL QUESTIONS

What ethical dilemmas are depicted when training AI with culturally specific data?

1. Whose Categories Govern? The Epistemological Dilemma

Nehma gets in trouble for failing to label a creature as a pest because, based on her community's ecological knowledge, she knows it is not harmful to crops. The film uses this scenario to highlight how AI relies on universal categories, often ignoring valuable local and indigenous knowledge; when the world is forced into a handful of machine-readable boxes, nuance is the first casualty.

The ethical dilemma here is foundational: who has the authority to define "correct"? The AI system's categories are designed for an extractive agricultural economy by clients in the Global North. Nehma's knowledge, built from generational, place-based observation, is more accurate but inadmissible. The system treats superior knowledge as error.

2. Extraction Without Compensation: The Data Colonialism Dilemma

Representative datasets, containing images of indigenous people, their languages, culture and knowledge systems, are a product of the labour of the masses. The AI image generator is arguably better off with Nehma's additions, but what does she or her community get out of it? If foreign AI companies depend on the value generated by India's tribal communities, trained on their images and utilising their knowledge of nature, what is their return?

Nehma's cultural knowledge and her community's images are incorporated into a commercial AI system she will never own, profit from, or have any control over. This is data colonialism: the extraction of epistemic and representational value from a marginalised community without consent, credit, or compensation.

3. Bias Reproduction: The Inheritance Dilemma

In a world where machines learn by absorbing human biases, Nehma comes to understand that the technology she tends, like her own children, inherits the discrimination of its labellers; she cannot help but question whether the system she helps create will only perpetuate the fate she has suffered.

The AI does not generate neutral outputs. It inherits the biases of whoever designed its classification framework. AI rides on the backs of struggling low-income women labourers, yet their input, and their lives, are rarely acknowledged; a biased AI leads to misrepresentation and further isolates already marginalised communities. The ethical dilemma: Nehma is both the victim of existing bias and, through her constrained labour, the unwilling reproducer of new bias.

4. Representational Violence: The Image Generation Dilemma

When Nehma feeds her own image and those of her community into the AI generator to correct its misrepresentations, the film raises a fresh dilemma: culturally representative datasets are something AI companies are increasingly hungry for, and fresh concerns arise that the film does not address. These representative datasets are a product of the labour of the masses, but the communities supplying them receive nothing in return.

Correcting the AI's blindness to Adivasi faces requires Nehma to submit her own image to the same system that previously erased her. To be seen, she must surrender her image to an economy that does not recognise her humanity. This is the film's most precise ethical trap.

5. The Hierarchy of Accountability: The Structural Dilemma

The film shows how dataset guidelines, client briefs, and workplace hierarchies encode dominant assumptions into the material Nehma is asked to produce, tracing how AI's "neutrality" is undermined by who gets to name things, whose faces are over-represented, and whose stories never enter the frame.

The ethical chain runs: American tech client → Indian centre manager → Adivasi data worker. Responsibility is diffused across this chain so that no single actor appears culpable. Director Sahay asks: "When are we going to take responsibility as humanity for the kind of algorithms we're building?" The film's answer is that the architecture of outsourcing is designed to prevent that accountability from settling anywhere.

Core Ethical Argument

All five dilemmas converge on a single point: the ethics of AI cannot be separated from questions of caste, gender, and geography. The film insists that training data is never culturally neutral; it always encodes the power relations of those who design, fund, and supervise its collection. When that data is sourced from communities with the least power in the global economy, the ethical debt compounds at every layer of the system.


How does the film’s human-in-the-loop metaphor operate beyond the technical term—politically, socially, and culturally?

The "Human-in-the-Loop" Metaphor Beyond the Technical Term


The Technical Term, Briefly

In machine learning, human-in-the-loop (HITL) refers to a system design where a human overseer corrects algorithmic errors, validates outputs, and improves the model's accuracy over iterative cycles. The human is positioned as a check on the machine. The term implies agency, authority, and mutual benefit.
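That iterative cycle can be sketched in a few lines. The code below is an illustrative toy, not the workflow of any real annotation platform; the function names, the annotator, and the labels are all invented for this example. It shows the bare shape of HITL: the model predicts, the human overseer corrects, and the corrections are folded back into the model for the next cycle.

```python
# Minimal human-in-the-loop sketch (illustrative only).
# The "model" is just a lookup table; real systems would retrain
# a statistical model on the corrected labels instead.

def predict(model, item, default="unknown"):
    """The machine's guess for an item it may never have seen."""
    return model.get(item, default)

def hitl_cycle(model, items, human_correct):
    """One iteration: predict, let the human veto, fold corrections back."""
    corrections = {}
    for item in items:
        guess = predict(model, item)
        verdict = human_correct(item, guess)  # the "human in the loop"
        if verdict != guess:
            corrections[item] = verdict
    model.update(corrections)                 # the loop closes here
    return model, corrections

# A hypothetical annotator who knows a caterpillar is not always a pest:
def annotator(item, guess):
    return "not pest" if item == "caterpillar" else guess

model = {"locust": "pest"}
model, fixes = hitl_cycle(model, ["locust", "caterpillar"], annotator)
print(fixes)  # {'caterpillar': 'not pest'}
```

Note that in this sketch the human's verdict only enters the system through the categories `predict` can emit; a correction outside the label set simply has nowhere to go, which is exactly the constraint the film dramatises.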

The film accepts this definition as its starting point, then systematically dismantles every assumption embedded in it.


Politically: The Loop as Colonial Circuit

The title alludes to the closed-loop relationship between humans and technology  one "programs" the other and vice versa, forever. But the film makes visible that this loop is not symmetrical. It has a direction: value flows upward from Jharkhand to Silicon Valley; constraint flows downward in return.

The political loop the film exposes is data colonialism: a 21st-century structure that mirrors 19th-century colonial extraction. Raw material (Nehma's cognitive labour and cultural knowledge) is extracted from the periphery, processed at the centre, and returned as a finished product (AI) in which the original producer has no ownership, profit, or control. The film engages one of the most urgent global conversations of our time: how to prevent social inequities from making their way into AI, and how to use it instead to amplify underrepresented voices.

Crucially, the political loop is self-reinforcing. The AI trained on biased data produces biased outputs. Those biased outputs further marginalise the communities whose knowledge was extracted to build the system. Nehma corrects the machine, but the machine's economic architecture is correcting her in return, telling her which knowledge is valid, which categories are acceptable, which version of the world is real. She is in the loop, but the loop is not in her interest.


Socially: The Loop as Caste and Gender Structure

The film uses Nehma's work at the AI data lab as a site for exploring themes of caste, assimilation, and the desire to belong to a world that seems more "legitimate" in the eyes of dominant society.

Nehma enters the film already caught in multiple social loops she did not design:

  • The loop of gender: as a woman, her labour, domestic and digital, is treated as naturally available, endlessly renewable, and minimally compensable. There may even be a hidden metaphor here for the way our world operates: characterised by a lack of more feminine and nurturing values in favour of more masculine ambitions of dominance and control.

  • The loop of caste: a profession that thrives on binary labels is outsourced to people whose plurality transcends labels. Humans are instructed to think like machines in order to instruct machines to act like humans. A marginalised Indian's social conditioning is at odds with a job that formalises societal bias.

  • The loop of class: Dhaanu, Nehma's daughter, prefers her urban, upper-caste father  gravitating toward a world that appears more "legitimate." This is the social loop working through the next generation: assimilation reproducing the very hierarchy that excludes the assimilated.

The loop metaphor here means that social exclusion is not a one-time event but a recursive system, one that reproduces itself across generations, across institutions, and now across algorithms.


Culturally: The Loop as Epistemological Erasure

This is the film's deepest and most original extension of the metaphor. In a training session at the AI centre, Nehma's supervisor tells her that artificial intelligence is like a child: it must be taught how to see the world. This metaphor becomes a central ideological battleground. If AI is indeed a child, who becomes the teacher? What values and assumptions are encoded in the data it consumes?

The cultural loop operates as follows: Nehma possesses a sophisticated, relational, ecologically precise knowledge system. She is hired to transmit this, and other, knowledge to an AI. But the transmission is filtered through categories she did not design. A caterpillar must be either a pest or not a pest. A plant must be a weed or a crop. The cultural loop does not absorb Nehma's knowledge; it translates it into a form that erases what makes it valuable, then feeds it back into the world as "objective data." The AI learns from her but cannot see her.

This is the cultural loop in its most precise form: unlike the thousands of images and videos she goes through every day, Nehma isn't even afforded the dignity of being a data point. She teaches the system to see, but the system cannot return the gaze.


The Reciprocal Loop: Who Is Training Whom?

The film's sharpest political-cultural insight is that the loop runs in both directions, but not equally. One "programs" the other and vice versa, forever. The machine trains on Nehma's knowledge. But Nehma is simultaneously being trained by the machine: told to suppress her ecological intuition, to accept industrial categories, to think in binaries. To programme the robot, she must think robotically too.

The cultural violence of the loop is therefore not just extraction; it is substitution. The AI economy does not merely take Nehma's knowledge; it replaces it with a degraded, commodified version of itself and asks her to use that replacement as her new cognitive standard.


A Dissenting Voice: What the Loop May Not Resolve

It is worth noting that not all critics accept the film's political architecture as fully realised. On this reading, the film claims to speak about marginalisation while marginalising the very community it depicts; it critiques data bias but exhibits cultural bias; it condemns invisibility but erases Adivasi lived experience. This critique, voiced from within the Adivasi community, points to an uncomfortable meta-irony: the film itself may be operating in a loop analogous to the one it critiques, extracting Adivasi cultural material for a global festival-circuit audience without sufficient return to the community depicted.

This does not invalidate the metaphor. It deepens it, suggesting that the human-in-the-loop problem is not confined to the data centre but extends to the film industry, the festival circuit, and the very act of representation itself.

POST-VIEWING REFLECTIVE ESSAY TASKS

Task 1 — AI, Bias, & Epistemic Representation

Critical Reflection: Humans in the Loop (2025) — Technology, Knowledge, and the Politics of the Algorithm

Introduction
In the dominant cultural imagination, artificial intelligence is presented as a self-generating, self-correcting, and therefore politically neutral technology. Humans in the Loop (2025), directed by Aranya Sahay, refuses this mythology from its opening frame. Set in the Adivasi communities of Jharkhand, the film follows Nehma, an Oraon woman who takes work as a data labeller at an AI training centre, and uses her experience to expose a foundational contradiction: that a technology marketed as objective is, in fact, built on the classified, extracted, and systematically devalued knowledge of the world's most marginalised people. This essay argues that Humans in the Loop represents algorithmic bias not as a technical error awaiting correction but as a culturally situated and ideologically enforced condition, rooted in epistemic hierarchies that determine whose knowledge counts, whose categories govern, and whose existence is permitted to be "seen" by the machine. Drawing on Louis Althusser's concept of ideological state apparatuses, Jean-Louis Baudry's apparatus theory, and Stuart Hall's theory of representation and encoding/decoding, this essay reads the film as both a narrative about AI labour and a formal intervention into the politics of knowledge itself.

Algorithmic Bias as Cultural Situatedness
The central epistemological confrontation of the film is staged through a single, deceptively simple object: a caterpillar. Nehma, instructed to label it as a "pest" for an AI-powered agricultural system, refuses. Drawing on her Oraon ecological knowledge, built from generational, place-based observation, she understands that the caterpillar consumes only rotting plant matter, and is therefore not destructive but regenerative. Her classification is more accurate. Her supervisor Alka, squeezed between the demands of an American tech client and the productivity targets of the centre, overrules her. Nehma is told, explicitly, not to use her brain.
This scene operationalises what scholars of science and technology studies call situated knowledge: the argument, developed by Donna Haraway, that all knowledge is produced from a specific position, and that claims to universal objectivity are themselves expressions of power. The AI's categorical framework (pest/non-pest, weed/crop) is not neutral. It is designed within and for an extractive, monocultural, industrial agricultural economy. It encodes the priorities of its designers: maximise yield, eliminate deviation, streamline classification. Nehma's knowledge system does not fit these categories because it operates through a different ontology altogether, one that understands ecological relationships as contextual, reciprocal, and irreducible to binary opposition.
Critically, the film does not present this as a cultural misunderstanding to be resolved through better training data. The supervisor's instruction to "not use your brain" reveals that the suppression of Nehma's knowledge is structural, not accidental. The AI system does not merely fail to accommodate Adivasi ecological knowledge; it is architecturally incompatible with it, because to accommodate it would be to challenge the industrial categories that make the system commercially valuable to its clients. Algorithmic bias, in this framing, is not a bug in the system. It is the system functioning exactly as designed by those with the power to design it.

Apparatus Theory and the Ideological Screen
Jean-Louis Baudry's apparatus theory, developed in his landmark essay "Ideological Effects of the Basic Cinematographic Apparatus" (1974), argues that the cinema apparatus is not a neutral recording device but an ideological machine. The positioning of the camera, the organisation of spectatorial vision, and the very form of projected cinema naturalise a particular, centred, sovereign subject as the norm. Applied beyond cinema to technological apparatus in general, Baudry's framework allows us to ask: what does the AI system, as an apparatus, make visible, and what does it systematically render invisible?
Humans in the Loop stages this question through one of its most formally precise scenes. When Nehma and members of her community interact with an AI image generator and prompt it to produce an image of a tribal woman, the system generates a pale, light-haired, Europeanised figure. The AI literally cannot see Nehma. It has been trained on a dataset in which Adivasi faces, bodies, and aesthetic norms are either absent or subordinated to a hegemonic visual norm, and it reproduces that norm as "universal." This is the apparatus operating ideologically: it presents the output not as the outcome of biased training data but as simply "what a woman looks like." Louis Althusser's concept of ideological state apparatuses extends this analysis. For Althusser, ideology does not merely distort reality; it constitutes subjects, interpellating them into positions within the social order. The data-labelling centre functions in the film as precisely such an apparatus: it does not merely exploit Nehma's labour, it attempts to reconstitute her as a cognitive subject compatible with its requirements. She is trained to suppress her own knowledge, to accept industrial categories, to think in binaries. To programme the machine, she must first allow the machine's logic to programme her. The human-in-the-loop, in this reading, is not the overseer of the machine but its ideological product.

Epistemic Hierarchy: Whose Knowledge Counts?
Stuart Hall's theory of representation, the argument that meaning is not inherent in objects but produced through systems of representation that are always organised by power, provides the framework for reading the film's deepest claim. The film stages a direct confrontation between two systems of representation: Adivasi ecological knowledge, which is relational, contextual, and place-based, and the AI classification system, which is universal, binary, and designed for scalability.
The epistemic hierarchy the film exposes is not simply that one system is ranked above the other. It is that the dominant system denies the subordinate system the status of knowledge at all. Nehma's understanding of the caterpillar is treated not as a rival classification but as an error, a failure of correct labelling. This is what philosophers of knowledge call epistemic injustice: Miranda Fricker's term for the harm done to a subject specifically in their capacity as a knower. Nehma is not merely underpaid; she is epistemically dismissed. Her testimony about the living world is inadmissible within the system she has been hired to improve.
The film reinforces this hierarchy through the chain of authority it depicts. The American tech client, never physically present but audible on a Zoom call, defines what "correct" labelling means. The Indian centre manager transmits and enforces this definition. Nehma's expertise is positioned at the bottom of this chain: most proximate to the data, most distant from the definition of accuracy. Dataset guidelines, client briefs, and workplace hierarchies encode dominant assumptions into the material Nehma is asked to produce, tracing how AI's "neutrality" is undermined by who gets to name things, whose faces are over-represented, and whose stories never enter the frame.

Cinematic Form as Critical Argument
What elevates the film beyond documentary critique is the way its formal choices actively participate in its argument. The 1.55:1 near-square aspect ratio, mimicking the shape of a computer monitor, positions the audience inside the machine's frame of vision. We watch Nehma through the same screen she watches images on. This is apparatus theory made literal: the film places us in the position of the classifier, and asks us to notice what we cannot see within the frame.
The cinematography is sensitive and tactile, filled with the texture of rock, the swaying of grass, the expressions on faces, painting a robust picture of lives lived in communion with the natural world. The forest sequences use natural light, organic composition, and low camera angles that place human and animal at the same horizontal level, encoding a non-hierarchical, relational world-view. The data centre, by contrast, is characterised by fluorescent flatness, tight framing, and the blue-white glow of the computer screen casting light onto Nehma's face. In this visual economy, the machine illuminates the human; power flows from screen to body, not the reverse. The cinematography does not merely illustrate the political argument; it enacts it.
The film's most formally precise political statement is the parallel editing sequence in which Nehma labels infant muscle-movement data to train an AI walking model, while, in cross-cut, her own son Guntu takes his first steps unseen. The editor holds both sequences in identical rhythm. The argument made through the cut is not sentimental but structural: the same unit of cognitive attention, the same quality of care, is displaced from child to machine. The labour that builds artificial intelligence is purchased with the currency of lived human experience.

Conclusion
Humans in the Loop makes a precise and urgent argument: algorithmic bias is not a technical problem awaiting a technical solution. It is the expression of epistemic hierarchies that predate the algorithm and will survive its correction unless the underlying politics of knowledge are confronted. Through Nehma's experience, the film demonstrates that the categories embedded in AI systems are culturally situated, designed within and for dominant economic interests, and that the communities whose knowledge and labour are extracted to build these systems are systematically denied the authority to define what counts as correct.
Apparatus theory illuminates how the AI system, like cinema itself, naturalises its ideological positioning as neutral vision. Hall's theory of representation shows how meaning is always organised by power, and that the AI's failure to see Adivasi faces is not an oversight but a structural outcome of whose visual norms dominate the training pipeline. Althusser's framework reveals the data centre as an ideological apparatus that does not merely exploit but reconstitutes its workers as subjects compatible with the system's requirements. The film asks a question that no algorithm can answer: when are we going to take responsibility, as humanity, for the kind of systems we are building? That question belongs not to the data centre but to the political economy that designed it, and to the audiences, critics, and scholars equipped to name what the machine cannot see.

References:

Alonso, D. V. Imagining AI Futures in Mainstream Cinema: Socio-Technical Narratives and Social Imaginaries. AI & Society, 2026.

Anjum, N. “Aranya Sahay’s Humans in the Loop and the Politics of AI Data Labelling.” The Federal, 2026.

Barad, Dilip. “Humans in the Loop: Exploring AI, Labour and Digital Culture.” Blog post, Jan. 2026.

Bazin, André. What Is Cinema? Vol. 1, University of California Press, 1967.

Bordwell, David, and Kristin Thompson. Film Art: An Introduction. 12th ed., McGraw-Hill Education, 2019.

Cave, Stephen, et al. “Shuri in the Sea of Dudes: The Cultural Construction of the AI Engineer in Popular Film, 1920–2020.” Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, Oxford University Press, 2023.

Deleuze, Gilles. Cinema 1: The Movement Image. Translated by Hugh Tomlinson and Barbara Habberjam, University of Minnesota Press, 1983.

Frías, C. L. “The Paradox of Artificial Intelligence in Cinema.” Cultura Digital, vol. 2, no. 1, 2024, pp. 5–25.

Göker, D. “Human-like Artificial Intelligence in Indian Cinema: Cultural Narratives, Ethical Dimensions, and Posthuman Perspectives.” International Journal of Cultural and Social Studies, vol. 11, no. 2, 2025, pp. 1–10.

Haris, M. J., et al. “Identifying Gender Bias in Blockbuster Movies through the Lens of Machine Learning.” Humanities and Social Sciences Communications, vol. 10, 2023.

Humans in the Loop (film). Wikipedia entry, retrieved Feb. 2026.

Indian Express Editorial. “Humans in the Loop: Technology, AI and Digital Lives.” The Indian Express, 2026.

McDonald, Kevin. Film Theory: The Basics. 2nd ed., Routledge, 2023.

Sahay, Aranya, director. Humans in the Loop. India, 2024.

Shepherdson, Charles, et al., editors. Film Theory: Critical Concepts in Media and Cultural Studies. Vols. 1–4, Routledge, 2004.

Sui, Z., and S. Wang. “Dogme 25: Media Primitivism and New Auteurism in the Age of Artificial Intelligence.” Frontiers in Communication, vol. 10, 2025.

Vighi, Fabio. Critical Theory and Film: Rethinking Ideology through Film Noir. Bloomsbury Academic India, 2019.

Yu, Y. “The Reel Deal? An Experimental Analysis of Perception Bias and AI Film Pitches.” Journal of Cultural Economics, vol. 49, 2025, pp. 281–300.

