
Call for book chapters


    Posted 5 hours ago
    * Apologies for cross-posting *

    Artificial Intelligence, Ethics, and Digital Colonialism

    Handbook editors

    Payal Kumar, ISH, India

    Pawan Budhwar, Aston University, UK

    Elham Malik, NIT Andhra Pradesh, India

    Publisher: World Scientific Publishers

    Executive Editor: Gideon Markman, USA

    Introduction: Rethinking Ethics in the Age of Artificial Intelligence

    Artificial intelligence (AI) is transforming organizations, governance, and the lives of people worldwide. The handbook grounds itself in digital colonialism as an analytical paradigm for interpreting AI and global inequality. We conceptualize digital colonialism as a process that concentrates power, value, and knowledge in the hands of a few actors, producing technological dependency. In particular, we focus on three interdependent mechanisms, namely data extraction, infrastructure control, and value asymmetry, that shape how AI systems generate and allocate benefits and burdens. This lens allows us to move beyond generalized discussions of AI ethics and to foreground the structural forces that reproduce global inequalities through AI.

    AI is increasingly embedded in the systems that organize social, institutional, and economic life, including algorithmic decision-making in organizational systems and generative AI systems that facilitate knowledge building. Although AI systems are commonly portrayed as unbiased, objective, and efficiency-enhancing, they are inevitably linked to political economies, institutional logics, and historical injustices that shape how data, knowledge, and value are produced and distributed. The rapid advancement of AI raises fundamental ethical concerns about power, responsibility, agency, equity, and ethical decision-making (Hiebert & Kumar, in press). Some pertinent questions are: Who owns AI systems, what knowledge is coded into them, who profits, and who is sidelined? These are not merely technical questions but normative concerns that require multi-disciplinary engagement with ethics, organizational theory, sociology, and political economy.

    AI Ethics, Digital Colonialism, and Global Inequality

    Fairness, accountability, transparency, and privacy are among the principles highlighted in mainstream AI ethics literature (Jobin et al., 2019; Mittelstadt, 2019). These principles have been invaluable in identifying algorithmic harms. However, they tend to treat ethics as a technical issue rather than as inherently connected to the power structures that shape AI systems. This handbook is divided into four themes:

    i. AI Ethics and Digital Colonialism

    Digital colonialism offers an understanding of how AI systems replicate historical processes of extraction and dependence. Central to this regime is digital extraction: the ongoing mining and refining of information into economic and political wealth. Such extraction deepens economic dependency and inequality, promotes cultural hegemony, and undermines political independence. Viewing AI within this system clarifies the limits of principle-based ethics: asymmetries of data, infrastructure, and value are not technical accidents but organizational consequences of global power relations (Khan, 2025).

    ii. Data Extraction and Surveillance

    In a digital colonial context, data mining is both a source of economic activity and an instrument of control. This dynamic is best exemplified by surveillance capitalism, in which human experience is commodified through big-data harvesting and behavioral prediction (Zuboff, 2019). AI-based surveillance is spreading across security, healthcare, and city management, often without sufficient transparency or regulation (Blease, 2024; Saheb, 2023), posing challenges to privacy, autonomy, and democratic accountability. Algorithms can discriminate against marginalized groups, and AI systems can reproduce inequalities of race, gender, and social class (Buolamwini & Gebru, 2018; Benjamin, 2019; Noble, 2018). Recent literature emphasizes that fairness should account for intersecting identities and socio-technical systems (Gohar & Cheng, 2023). Generative AI presents novel epistemic dangers, including hallucinations, misinformation, and unreliable outputs, that may be harmful in high-stakes settings such as healthcare, education, and research (Ashwin et al., 2025). These developments highlight the necessity of strong governance, human oversight, and participatory design to address data extraction and its social impacts.

    iii. Infrastructure Control and Dependency

    Infrastructure control describes the concentration of AI platforms, cloud services, and foundational models in a few corporations and states. Such concentration fosters a reliance that undercuts data sovereignty and constrains the ability of other regions to innovate. The prevailing internet regime, driven by logics of accumulation, surveillance, and expansion, centralizes power and represses alternatives. According to Khan (2025), the Global South's reliance on proprietary infrastructure replicates colonial dependencies. Complementary analyses demonstrate that algorithmic tools are embedded in geopolitical and economic relations and are often used to serve the interests of dominant actors rather than to support local communities (Saheb, 2023). Generative AI can deepen platform dependency through the control of access to training data, computational resources, and deployment models (Ashwin et al., 2025). Such observations have elicited calls to establish AI sovereignty and to invest in public infrastructure and open-source models.

    iv. Value Asymmetry and Labor in AI

    Value asymmetry explains how the benefits of AI are unequally distributed: those who control data and infrastructure capture innovation rents, while data creators receive little. Research in the human resources field shows that the use of AI in recruitment, appraisal, and monitoring can increase workplace inequality and reduce employee agency (Budhwar et al., 2022, 2023; Chowdhury et al., 2022, 2024). To address these imbalances, organizational scholars promote human values and reflexivity in algorithmic management (Budhwar et al., 2022, 2023). Maheshwari et al. (2025) suggest applying non-Western ethical frameworks, such as the Dharma Matrix, to support context-specific decision-making. Meanwhile, work on leadership reflexivity and pluralistic ethics emphasizes inclusivity and empowerment within organizations (Mascarenhas et al., 2023; Marques et al., 2024).

    Positioning this Handbook: Ethics Beyond Technical Fixes

    While the field of artificial intelligence (AI) ethics is expanding rapidly, much of the literature remains fragmented across analytical levels and disciplinary traditions. A substantial body of work continues to focus on principle-based frameworks such as fairness, accountability, transparency, and privacy, alongside technical remedies including bias mitigation, explainability, and compliance-oriented governance. Although these approaches have made important contributions, they often pay insufficient attention to the broader organizational, societal, and geopolitical contexts in which AI systems are designed, deployed, and governed. As a result, questions of power, inequality, labor, institutional accountability, and human agency remain underexplored in mainstream AI ethics debates.

    More recent scholarship has begun to address these gaps by examining AI as a socio-technical phenomenon shaped by organizational practices, economic structures, and global asymmetries. However, these perspectives remain dispersed across disciplines. This handbook responds to that fragmentation by bringing together technical, organizational, and societal approaches to AI ethics within a single interdisciplinary volume. It treats ethics not as an external constraint but as an integral part of the design, development, deployment, and use of AI systems in context.

    Although digital colonialism is a central theme, especially for understanding disparities in data, infrastructure, and technological power, the volume situates this concern within a broader ethical landscape that also includes privacy and surveillance, algorithmic bias and intersectional inequality, AI hallucinations and epistemic risk, and organizational transformation. By foregrounding organizational settings and engaging pluralistic, context-sensitive, and non-Western perspectives, the handbook aims to advance more responsible, equitable, and human-centered approaches to AI futures.

    Like the other handbooks in the Set on Business Ethics and Values in a Globalized World (editors: Payal Kumar & Peter Bamberger), this handbook will consist of cutting-edge research that will serve as an indispensable guide for scholars. While the other handbooks centre on the themes of (i) human resource ethics, (ii) leadership and power dynamics, and (iii) value implementation and ethical strategies, this handbook stands apart in its focus on rethinking ethics in the fast-changing world of artificial intelligence, which gives rise not only to greater efficiencies and breathtaking innovations but also to skewed global power dynamics. Following Baumeister's lead in encouraging scholars to break the hegemony of research in which good actions lead to good outcomes and bad actions to bad, this handbook aims to produce more nuanced research that reflects organizational complexities (Baumeister et al., 2001).

    Indicative Themes and Topics

    We invite theoretical papers, empirical studies (qualitative, quantitative, or mixed-methods), comparative analyses, and policy-oriented research on the following themes. Contributions should exhibit strong theoretical grounding, rigorous methodology, and practical relevance.

     

    1. AI Ethics, Governance, and Human Agency

    Examine the normative, institutional, and societal questions raised by AI, including governance, accountability, bias, epistemic risk, and human agency across organizational and public contexts.

     

    Indicative chapters:

          Rethinking AI Ethics Beyond Technical Compliance

          Governance and Accountability in AI Systems

          Algorithmic Bias, Fairness, and Social Justice

          Intersectionality and Structural Discrimination in AI

          AI Hallucinations, Misinformation, and Epistemic Risk

          Privacy, Autonomy, and the Ethics of AI Surveillance

          Human Agency in AI-Augmented Decision-Making

          Ethics of AI in Healthcare, Education, and Public Services

          Leadership and Reflexivity in AI-Driven Organizations

          Pluralistic and Non-Western Frameworks for AI Ethics

          Responsible AI Regulation and Institutional Design

          Designing Human-Centered AI Systems

     

    2. Data Extraction and the Unequal Geographies of AI

    Analyze how AI depends on the large-scale extraction, aggregation, and commodification of data, producing uneven geographies of knowledge, consent, and control.

     

    Indicative chapters:

          Data Extraction and the Dynamics of Digital Colonialism

          The Political Economy of AI Training Data

          Data Supply Chains and Global Knowledge Appropriation

          Cross-Linguistic Inequality and the Politics of AI Datasets

          Consent, Ownership, and Collective Rights over Data

          Datafication of Everyday Life in the Global South

          Indigenous Knowledge, Data Governance, and AI

          Case Studies in Data Appropriation and Unequal Benefit

          Regulating Data Extraction and Building Equitable Data Regimes

          Knowledge Extraction and Uneven Data Infrastructures

     

    3. Infrastructure Control and AI Dependency

    Investigate how control over compute, cloud systems, platforms, models, and technical standards creates dependency and constrains autonomy in AI development and deployment.

     

    Indicative chapters:

          Infrastructure Control and the Dynamics of Digital Colonialism

          Cloud Concentration and the Centralization of AI Capacity

          Compute Inequality and Barriers to AI Development

          The Politics of API Access and Platform Control

          Foundation Models and the Power to Set Technical Standards

          AI Dependency and the Limits of Technological Sovereignty

          The Geopolitics of Chips, Cloud, and AI Supply Chains

          Open-Source AI as a Pathway to Infrastructural Autonomy

          Public and Cooperative Alternatives to Proprietary AI Infrastructure

          Community-Led AI Infrastructures and Local Capacity Building

     

    4. Value Asymmetry, Labor, and Unequal AI Futures

    Explore how AI-generated value is distributed unevenly, with profits and strategic gains concentrated among a few actors while labor burdens, precarity, and social costs are displaced elsewhere.

     

    Indicative chapters:

          Value Extraction and Asymmetry in Digital Colonialism

          Who Captures Value in the AI Economy?

          Innovation Rents, Intellectual Property, and Global Inequality

          Human-in-the-Loop Work and AI Value Chains

          Content Moderation and the Hidden Workforce of AI

          Algorithmic Management and Worker Control

          AI, Employment Restructuring, and Regional Labor Divides

          AI in HRM and the Reproduction of Workplace Inequality

          Redistribution, Compensation, and Fairer AI Economies

          Cooperative, Commons-Based, and Worker-Centered AI Futures

     

    Timeline

            31 May 2026: Abstract submission (800–1,000 words)

    (Send to elhammalik77@gmail.com)

            15 June 2026: Decision notification

            30 September 2026: Full chapter submission

            15 December 2026: Review feedback

            15 February 2027: Revised submission

            15 April 2027: Final submission

            1 May 2027: Final manuscript to publisher

     

    Submission Guidelines

    All manuscripts should follow APA 7th edition style (with no DOIs). Use British English with 'z' spellings. Each chapter should be between 8,000 and 8,500 words, inclusive of references, tables, and figures. If AI is employed in preparing the chapter, its purpose and application must be clearly outlined in the document.

    References

    Ashwin, M., Jha, S., Prasad, G., & Kumar, S. (2025). Fake it till you make it? AI hallucinations and ethical dilemmas in anesthesia research and practice. Journal of Anaesthesiology Clinical Pharmacology.
    https://doi.org/10.4103/joacp.joacp_56_25

    Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370.

    Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity Press.

    Blease, C. (2024). Open AI meets open notes: Surveillance capitalism, patient privacy, and online record access. Journal of Medical Ethics, 50(2), 84–89.
    https://doi.org/10.1136/jme-2023-109574

    Budhwar, P., Chowdhury, S., Wood, G., Aguinis, H., Bamber, G. J., Beltran, J. R., Boselie, P., Cooke, F. L., Decker, S., DeNisi, A., Dey, P. K., Guest, D., Knoblich, A. J., Malik, A., Paauwe, J., Papagiannidis, S., Patel, C., Pereira, V., Ren, S., & Varma, A. (2023). Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT. Human Resource Management Journal, 33(3), 606–659.
    https://doi.org/10.1111/1748-8583.12524

    Budhwar, P., Malik, A., De Silva, M. T. T., & Thevisuthan, P. (2022). Artificial intelligence: Challenges and opportunities for international HRM: A review and research agenda. The International Journal of Human Resource Management, 33(6), 1065–1097.
    https://doi.org/10.1080/09585192.2022.2035161 

    Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77–91.
    https://proceedings.mlr.press/v81/buolamwini18a.html

    Chowdhury, S., Budhwar, P., & Wood, G. (2024). Generative artificial intelligence in business: Towards a strategic human resource management framework. British Journal of Management, 35(4), 1680–1691.
    https://doi.org/10.1111/1467-8551.12824

    Chowdhury, S., Budhwar, P., Dey, P. K., Joel-Edgar, S., & Abadie, A. (2022). AI–employee collaboration and business performance: Integrating knowledge-based view, socio-technical systems, and organizational socialization framework. Journal of Business Research, 144, 31–49.
    https://doi.org/10.1016/j.jbusres.2022.01.069

    Gohar, U., & Cheng, L. (2023). A survey on intersectional fairness in machine learning: Notions, mitigation, and challenges. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).
    https://doi.org/10.24963/ijcai.2023/742

    Hiebert, B., & Kumar, P. (in press). Generative AI and ethical decision making for leaders: Rethinking human agency. Indian Journal of Industrial Relations.

    Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
    https://doi.org/10.1038/s42256-019-0088-2

    Khan, R. (2025). From AI colonialism to co-creation: Bridging the global AI divide. Media@LSE.
    https://blogs.lse.ac.uk/medialse/2025/07/14/from-ai-colonialism-to-co-creation-bridging-the-global-ai-divide/

    Maheshwari, A. K., Nandram, S. S., & Kumar, P. (2025). Dharma matrix: An open architecture for ethical decision-making in the age of AI. In A. K. Maheshwari (Ed.), AI and consciousness in organizations and society (pp. 39–62). Springer.
    https://doi.org/10.1007/978-3-031-91470-6_3

    Marques, J., Kumar, P., & Culham, T. (2024). Drawing on Eastern spiritual traditions of diversity, equity, and inclusion as guideposts in an increasingly unpredictable world. Journal of Business Ethics, 192(3), 611–626.
    https://doi.org/10.1007/s10551-023-05524-8 

    Mascarenhas, O. A. J., Thakur, M., & Kumar, P. (2023). A primer on critical thinking and business ethics. Emerald Publishing.
    https://doi.org/10.1108/9781837533466 

    Mittelstadt, B. D. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.
    https://doi.org/10.1038/s42256-019-0114-4

    Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

    OBHDP Editorial. (2026). Responsible collaboration with artificial intelligence in organizational scholarship: Governance framework for authors and reviewers. Organizational Behavior and Human Decision Processes.

    Saheb, T. (2023). Ethically contentious aspects of artificial intelligence surveillance: A social science perspective. AI and Ethics, 3(2), 369–379.
    https://doi.org/10.1007/s43681-022-00196-y

    Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.



    With regards,

    Prof. Payal Kumar 

    Principal Academic Advisor, ISH; Ecole Ducasse India |

    Academy of Management MSR Program Chair |

    AOM MED Global Ambassador Co-Chair Elect |

    Special Issues Editor, JMSR  | 


    FT 50 paper: Eastern spirituality * Recent paper: bhakti & SDGs * My books