The "Human-in-the-Loop" Metaphor Beyond the Technical Term
The Technical Term, Briefly
In machine learning, human-in-the-loop (HITL) refers to a system design where a human overseer corrects algorithmic errors, validates outputs, and improves the model's accuracy over iterative cycles. The human is positioned as a check on the machine. The term implies agency, authority, and mutual benefit.
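To make the technical sense concrete before the film complicates it, here is a minimal sketch of an HITL labelling cycle. Every name, signature, and threshold below is hypothetical, invented for illustration rather than drawn from the film or any real annotation platform.

```python
# Minimal human-in-the-loop (HITL) sketch. All names and numbers are
# hypothetical; this illustrates only the textbook definition.

def model_predict(item):
    # Stand-in for a trained classifier: returns (label, confidence).
    return ("pest", 0.62)

def human_review(item, suggested):
    # Stand-in for an annotation interface where a human confirms
    # or corrects the machine's suggestion.
    return suggested

def retrain(corrections):
    # Placeholder: fine-tune the model on the human-verified labels.
    pass

def hitl_cycle(items, confidence_threshold=0.9):
    corrections = []
    for item in items:
        label, confidence = model_predict(item)
        if confidence < confidence_threshold:
            # Low-confidence outputs are routed to a human overseer,
            # whose judgement overrides the machine's.
            label = human_review(item, suggested=label)
            corrections.append((item, label))
    # The corrected labels become new training data, closing the loop.
    retrain(corrections)

hitl_cycle(["caterpillar_photo.jpg"])
```

In this textbook picture the human sits above the loop, correcting it; the film's argument is precisely that the diagram can be read the other way round.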
The film accepts this definition as its starting point, then systematically dismantles every assumption embedded in it.
Politically: The Loop as Colonial Circuit
The title alludes to the closed-loop relationship between humans and technology: one "programs" the other and vice versa, forever. But the film makes visible that this loop is not symmetrical. It has a direction: value flows upward from Jharkhand to Silicon Valley; constraint flows downward in return.
The political loop the film exposes is data colonialism: a 21st-century structure that mirrors 19th-century colonial extraction. Raw material (Nehma's cognitive labour and cultural knowledge) is extracted from the periphery, processed at the centre, and returned as a finished product (AI) in which the original producer has no ownership, profit, or control. The film thereby engages one of the most urgent global conversations of our time: how to prevent social inequities from being built into AI, and how to use it instead to amplify underrepresented voices.
Crucially, the political loop is self-reinforcing. The AI trained on biased data produces biased outputs. Those biased outputs further marginalise the communities whose knowledge was extracted to build the system. Nehma corrects the machine, but the machine's economic architecture is correcting her in return, telling her which knowledge is valid, which categories are acceptable, which version of the world is real. She is in the loop, but the loop is not in her interest.
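The drift described here can be shown in a toy simulation. The figures below are invented solely to display the direction of the feedback, not to model the film's data centre or any real pipeline:

```python
# Toy simulation of a self-reinforcing data loop; all numbers invented.
# A model trained on skewed data admits new examples in proportion to
# what it already recognises, so the skew compounds each generation.

dominant_share = 0.80  # initial share of data from the dominant group

for generation in range(5):
    admitted_dominant = dominant_share * 1.05        # slightly over-admitted
    admitted_marginal = (1 - dominant_share) * 0.95  # slightly filtered out
    total = admitted_dominant + admitted_marginal
    dominant_share = admitted_dominant / total
    print(f"generation {generation}: dominant share = {dominant_share:.3f}")

# The marginalised share shrinks every cycle without anyone deciding to
# exclude it: the loop itself performs the erasure.
```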
Socially: The Loop as Caste and Gender Structure
The film uses Nehma's work at the AI data lab as a site for exploring themes of caste, assimilation, and the desire to belong to a world that seems more "legitimate" in the eyes of dominant society.
Nehma enters the film already caught in multiple social loops she did not design:
The loop of gender: as a woman, her labour, domestic and digital alike, is treated as naturally available, endlessly renewable, and minimally compensated. There may even be a hidden metaphor here for the way our world operates: a neglect of feminine, nurturing values in favour of masculine ambitions of dominance and control.

The loop of caste: a profession that thrives on binary labels is outsourced to people whose plurality transcends labels. Humans are instructed to think like machines in order to instruct machines to act like humans. A marginalised Indian's social conditioning is at odds with a job that formalises societal bias.
The loop of class: Dhaanu, Nehma's daughter, prefers her urban, upper-caste father, gravitating toward a world that appears more "legitimate." This is the social loop working through the next generation: assimilation reproducing the very hierarchy that excludes the assimilated.
The loop metaphor here means that social exclusion is not a one-time event but a recursive system: one that reproduces itself across generations, across institutions, and now across algorithms.
Culturally: The Loop as Epistemological Erasure
This is the film's deepest and most original extension of the metaphor. In a training session at the AI centre, Nehma's supervisor tells her that artificial intelligence is like a child: it must be taught how to see the world. This metaphor becomes a central ideological battleground. If AI is indeed a child, who becomes the teacher? What values and assumptions are encoded in the data it consumes?

The cultural loop operates as follows: Nehma possesses a sophisticated, relational, ecologically precise knowledge system. She is hired to transmit this and other knowledge to an AI. But the transmission is filtered through categories she did not design. A caterpillar must be either a pest or not a pest. A plant must be a weed or a crop. The cultural loop does not absorb Nehma's knowledge; it translates it into a form that erases what makes it valuable, then feeds it back into the world as "objective data." The AI learns from her but cannot see her.
This is the cultural loop in its most precise form: unlike the thousands of images and videos she goes through every day, Nehma isn't even afforded the dignity of being a data point. She teaches the system to see, but the system cannot return the gaze.
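The shape of the erasure is visible in the data structure itself. Here is a hedged sketch, with field names and values invented for illustration, of what Nehma knows against what the schema can hold:

```python
# What the annotator knows vs. what the label schema can record.
# All field names and values are hypothetical illustrations.

nehmas_observation = {
    "creature": "caterpillar",
    "eats": "rotting plant matter",
    "ecological_role": "regenerative: returns nutrients to the soil",
    "relation_to_crop": "does not feed on living plants",
}

def to_training_label(observation):
    # The client's schema admits exactly one binary field. Everything
    # relational and contextual is discarded at this step; only the
    # binary survives into the dataset.
    return {"pest": True}  # the classification the supervisor enforces

print(to_training_label(nehmas_observation))  # -> {'pest': True}
```

Whatever the richness of the input, the output type is a single boolean; the loss is not an accident of this sketch but the point of the schema.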
The Reciprocal Loop: Who Is Training Whom?
The film's sharpest political-cultural insight is that the loop runs in both directions, but not equally. One "programs" the other and vice versa, forever. The machine trains on Nehma's knowledge. But Nehma is simultaneously being trained by the machine: told to suppress her ecological intuition, to accept industrial categories, to think in binaries. To programme the robot, she must think robotically too.
The cultural violence of the loop is therefore not just extraction but substitution: the AI economy does not merely take Nehma's knowledge; it replaces it with a degraded, commodified version of it and asks her to use that replacement as her new cognitive standard.
A Dissenting Voice: What the Loop May Not Resolve
It is worth noting that not all critics accept the film's political architecture as fully realised. Voices from within the Adivasi community have charged that the film speaks about marginalisation while marginalising the very community it depicts: it critiques data bias but exhibits cultural bias of its own; it condemns invisibility but erases Adivasi lived experience. This critique points to an uncomfortable meta-irony: the film itself may be operating in a loop analogous to the one it critiques, extracting Adivasi cultural material for a global festival-circuit audience without sufficient return to the community depicted.
This does not invalidate the metaphor. It deepens it, suggesting that the human-in-the-loop problem is not confined to the data centre but extends to the film industry, the festival circuit, and the very act of representation itself.
POST-VIEWING REFLECTIVE ESSAY TASKS
TASK 1 — AI, BIAS, & EPISTEMIC REPRESENTATION
Critical Reflection: Humans in the Loop (2025)
Technology, Knowledge, and the Politics of the Algorithm
Introduction
In the dominant cultural imagination, artificial intelligence is presented as a self-generating, self-correcting, and therefore politically neutral technology. Humans in the Loop (2025), directed by Aranya Sahay, refuses this mythology from its opening frame. Set in the Adivasi communities of Jharkhand, the film follows Nehma, an Oraon woman who takes work as a data labeller at an AI training centre, and uses her experience to expose a foundational contradiction: that a technology marketed as objective is, in fact, built on the classified, extracted, and systematically devalued knowledge of the world's most marginalised people. This essay argues that Humans in the Loop represents algorithmic bias not as a technical error awaiting correction but as a culturally situated and ideologically enforced condition, rooted in epistemic hierarchies that determine whose knowledge counts, whose categories govern, and whose existence is permitted to be "seen" by the machine. Drawing on Louis Althusser's concept of ideological state apparatuses, Jean-Louis Baudry's apparatus theory, and Stuart Hall's theory of representation and encoding/decoding, this essay reads the film as both a narrative about AI labour and a formal intervention into the politics of knowledge itself.
Algorithmic Bias as Cultural Situatedness
The central epistemological confrontation of the film is staged through a single, deceptively simple object: a caterpillar. Nehma, instructed to label it as a "pest" for an AI-powered agricultural system, refuses. Drawing on her Oraon ecological knowledge, built from generational, place-based observation, she understands that the caterpillar consumes only rotting plant matter and is therefore not destructive but regenerative. Her classification is more accurate. Her supervisor, Alka, squeezed between the demands of an American tech client and the productivity targets of the centre, overrules her. Nehma is told, explicitly, not to use her brain.
This scene operationalises what scholars of science and technology studies call situated knowledge: the argument, developed by Donna Haraway, that all knowledge is produced from a specific position, and that claims to universal objectivity are themselves expressions of power. The AI's categorical framework (pest/non-pest, weed/crop) is not neutral. It is designed within and for an extractive, monocultural, industrial agricultural economy. It encodes the priorities of its designers: maximise yield, eliminate deviation, streamline classification. Nehma's knowledge system does not fit these categories because it operates through a different ontology altogether, one that understands ecological relationships as contextual, reciprocal, and irreducible to binary opposition.
Critically, the film does not present this as a cultural misunderstanding to be resolved through better training data. The supervisor's instruction to "not use your brain" reveals that the suppression of Nehma's knowledge is structural, not accidental. The AI system does not merely fail to accommodate Adivasi ecological knowledge; it is architecturally incompatible with it, because to accommodate it would be to challenge the industrial categories that make the system commercially valuable to its clients. Algorithmic bias, in this framing, is not a bug in the system. It is the system functioning exactly as designed by those with the power to design it.
Apparatus Theory and the Ideological Screen
Jean-Louis Baudry's apparatus theory, developed in his landmark essay "Ideological Effects of the Basic Cinematographic Apparatus" (1974), argues that the cinematic apparatus is not a neutral recording device but an ideological machine. The positioning of the camera, the organisation of spectatorial vision, and the very form of projected cinema naturalise a particular, centred, sovereign subject as the norm. Applied beyond cinema to the technological apparatus in general, Baudry's framework allows us to ask: what does the AI system, as an apparatus, make visible, and what does it systematically render invisible?
Humans in the Loop stages this question through the film's most formally precise scene. When Nehma and members of her community interact with an AI image generator and prompt it to produce an image of a tribal woman, the system generates a pale, light-haired, Europeanised figure. The AI literally cannot see Nehma. It has been trained on a dataset in which Adivasi faces, bodies, and aesthetic norms are either absent or subordinated to a hegemonic visual norm, and it reproduces that norm as "universal." This is the apparatus operating ideologically: it presents the output not as the outcome of biased training data but as simply "what a woman looks like."
Louis Althusser's concept of ideological state apparatuses extends this analysis. For Althusser, ideology does not merely distort reality; it constitutes subjects, interpellating them into positions within the social order. The data-labelling centre functions in the film as precisely such an apparatus: it does not merely exploit Nehma's labour, it attempts to reconstitute her as a cognitive subject compatible with its requirements. She is trained to suppress her own knowledge, to accept industrial categories, to think in binaries. To programme the machine, she must first allow the machine's logic to programme her. The human-in-the-loop, in this reading, is not the overseer of the machine but its ideological product.
Epistemic Hierarchy: Whose Knowledge Counts?
Stuart Hall's theory of representation, the argument that meaning is not inherent in objects but produced through systems of representation that are always organised by power, provides the framework for reading the film's deepest claim. The film stages a direct confrontation between two systems of representation: Adivasi ecological knowledge, which is relational, contextual, and place-based, and the AI classification system, which is universal, binary, and designed for scalability.
The epistemic hierarchy the film exposes is not simply that one system is ranked above the other. It is that the dominant system denies the subordinate system the status of knowledge at all. Nehma's understanding of the caterpillar is treated not as a rival classification but as an error, a failure of correct labelling. This is what philosophers of knowledge call epistemic injustice: Miranda Fricker's term for the harm done to a subject specifically in their capacity as a knower. Nehma is not merely underpaid; she is epistemically dismissed. Her testimony about the living world is inadmissible within the system she has been hired to improve.
The film reinforces this hierarchy through the chain of authority it depicts. The American tech client, never physically present but audible on a Zoom call, defines what "correct" labelling means. The Indian centre manager transmits and enforces this definition. Nehma's expertise is positioned at the bottom of this chain: most proximate to the data, most distant from the definition of accuracy. Dataset guidelines, client briefs, and workplace hierarchies encode dominant assumptions into the material Nehma is asked to produce, and the film traces how AI's "neutrality" is undermined by who gets to name things, whose faces are over-represented, and whose stories never enter the frame.
Cinematic Form as Critical Argument
What elevates the film beyond documentary critique is the way its formal choices actively participate in its argument. The 1.55:1 near-square aspect ratio, mimicking the shape of a computer monitor, positions the audience inside the machine's frame of vision. We watch Nehma through the same screen she watches images on. This is apparatus theory made literal: the film places us in the position of the classifier, and asks us to notice what we cannot see within the frame.
The cinematography is sensitive and tactile, filled with the texture of rock, the swaying of grass, and the expressions on faces, painting a robust picture of lives lived in communion with the natural world. The forest sequences use natural light, organic composition, and low camera angles that place human and animal at the same horizontal level, encoding a non-hierarchical, relational world-view. The data centre, by contrast, is characterised by fluorescent flatness, tight framing, and the blue-white glow of the computer screen casting light onto Nehma's face. In this visual economy, the machine illuminates the human; power flows from screen to body, not the reverse. The cinematography does not merely illustrate the political argument; it enacts it.
The film's most formally precise political statement is the parallel editing sequence in which Nehma labels infant muscle-movement data to train an AI walking model, while, in cross-cut, her own son Guntu takes his first steps unseen. The editor holds both sequences in identical rhythm. The argument made through the cut is not sentimental but structural: the same unit of cognitive attention, the same quality of care, is displaced from child to machine. The labour that builds artificial intelligence is purchased with the currency of lived human experience.
Conclusion
Humans in the Loop makes a precise and urgent argument: algorithmic bias is not a technical problem awaiting a technical solution. It is the expression of epistemic hierarchies that predate the algorithm and will survive its correction unless the underlying politics of knowledge are confronted. Through Nehma's experience, the film demonstrates that the categories embedded in AI systems are culturally situated, designed within and for dominant economic interests, and that the communities whose knowledge and labour are extracted to build these systems are systematically denied the authority to define what counts as correct.
Apparatus theory illuminates how the AI system, like cinema itself, naturalises its ideological positioning as neutral vision. Hall's theory of representation shows that meaning is always organised by power, and that the AI's failure to see Adivasi faces is not an oversight but a structural outcome of whose visual norms dominate the training pipeline. Althusser's framework reveals the data centre as an ideological apparatus that does not merely exploit but reconstitutes its workers as subjects compatible with the system's requirements.
The film asks a question that no algorithm can answer: when are we going to take responsibility, as humanity, for the kind of systems we are building? That question belongs not to the data centre but to the political economy that designed it and to the audiences, critics, and scholars equipped to name what the machine cannot see.
References:
Alonso, D. V. “Imagining AI Futures in Mainstream Cinema: Socio-Technical Narratives and Social Imaginaries.” AI & Society, 2026.
Anjum, N. “Aranya Sahay’s Humans in the Loop and the Politics of AI Data Labelling.” The Federal, 2026.
Barad, Dilip. “Humans in the Loop: Exploring AI, Labour and Digital Culture.” Blog post, Jan. 2026.
Bazin, André. What Is Cinema? Vol. 1, University of California Press, 1967.
Bordwell, David, and Kristin Thompson. Film Art: An Introduction. 12th ed., McGraw-Hill Education, 2019.
Cave, Stephen, et al. “Shuri in the Sea of Dudes: The Cultural Construction of the AI Engineer in Popular Film, 1920–2020.” Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, Oxford University Press, 2023.
Deleuze, Gilles. Cinema 1: The Movement-Image. Translated by Hugh Tomlinson and Barbara Habberjam, University of Minnesota Press, 1986.
Frías, C. L. “The Paradox of Artificial Intelligence in Cinema.” Cultura Digital, vol. 2, no. 1, 2024, pp. 5–25.
Göker, D. “Human-like Artificial Intelligence in Indian Cinema: Cultural Narratives, Ethical Dimensions, and Posthuman Perspectives.” International Journal of Cultural and Social Studies, vol. 11, no. 2, 2025, pp. 1–10.
Haris, M. J., et al. “Identifying Gender Bias in Blockbuster Movies through the Lens of Machine Learning.” Humanities and Social Sciences Communications, vol. 10, 2023.
“Humans in the Loop (film).” Wikipedia, retrieved Feb. 2026.
Indian Express Editorial. “Humans in the Loop: Technology, AI and Digital Lives.” The Indian Express, 2026.
McDonald, Kevin. Film Theory: The Basics. 2nd ed., Routledge, 2023.
Sahay, Aranya, director. Humans in the Loop. India, 2024.
Shepherdson, Charles, et al., editors. Film Theory: Critical Concepts in Media and Cultural Studies. Vols. 1–4, Routledge, 2004.
Sui, Z., and S. Wang. “Dogme 25: Media Primitivism and New Auteurism in the Age of Artificial Intelligence.” Frontiers in Communication, vol. 10, 2025.
Vighi, Fabio. Critical Theory and Film: Rethinking Ideology through Film Noir. Bloomsbury Academic India, 2019.
Yu, Y. “The Reel Deal? An Experimental Analysis of Perception Bias and AI Film Pitches.” Journal of Cultural Economics, vol. 49, 2025, pp. 281–300.