
CFP: HICSS-59 AI AND DIGITAL DISCRIMINATION MINITRACK


    Posted 05-15-2025 02:49

    This minitrack attracts and presents research on understanding and addressing discrimination problems arising in the design, development, and use of artificial intelligence systems. A technology is biased if it unfairly or systematically discriminates against certain individuals by denying them an opportunity or assigning them a different and undesirable outcome. As we delegate more and more decision-making tasks to autonomous computer systems and algorithms, such as using artificial intelligence for employee hiring and loan approval, digital discrimination is becoming a serious problem. In her New York Times best-selling book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy," Cathy O'Neil refers to such math-powered applications as "Weapons of Math Destruction" and provides examples showing how these mathematical models encode human prejudice, misunderstanding, and bias into the software systems that increasingly manage, and can harm, our lives.

    According to Cambridge Dictionaries Online, discrimination is defined as treating a person or particular group of people differently, especially in a worse way than other people are treated, because of their race, gender, sexuality, etc. Digital discrimination refers to discrimination between individuals or social groups arising from a lack of access to Internet-based resources, from biased practices in data mining, or from inherited prejudices in a decision-making context. It is a form of discrimination in which users are treated unfairly, unethically, or simply differently based on personal data such as income, education, gender, age, ethnicity, religion, or even political affiliation during automated decision making. Digital discrimination in AI refers to the systematic disadvantages that algorithms impose on certain groups due to biases emerging throughout the algorithm's development lifecycle.

    Artificial Intelligence (AI) decision making can cause discriminatory harm to many vulnerable groups. In a decision-making context, digital discrimination can emerge from the inherited prejudices of prior decision makers, designers, and engineers, or it can reflect widespread societal biases. One approach to addressing digital discrimination is to increase the transparency of AI systems. However, we need to be mindful of the user populations for whom transparency is being implemented. In this regard, research has called for collaborations with disadvantaged groups whose viewpoints may lead to new insights into fairness and discrimination.

    Another approach to mitigating digital discrimination in AI is algorithmic justice, which seeks to ensure fairness, equity, and accountability in AI-driven decision-making. Machine learning models often inherit biases from historical data, leading to unfair outcomes that disproportionately impact marginalized groups. Despite AI's perceived neutrality, research has shown that it can reinforce and even amplify systemic biases, underscoring the need for governance frameworks that promote fairness, transparency, and accountability in AI deployment.
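
    To make the idea of inherited bias more concrete, the minimal sketch below (purely illustrative; the data, group labels, and column names are hypothetical, not drawn from any study cited here) computes a simple demographic parity gap, i.e., the difference in approval rates across groups, for a toy set of automated loan decisions:

        # Illustrative sketch only: a minimal demographic-parity check on
        # hypothetical loan-approval decisions. Data and column names are invented.
        import pandas as pd

        # Hypothetical historical decisions: 'group' is a protected attribute,
        # 'approved' is the model's binary decision.
        decisions = pd.DataFrame({
            "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
            "approved": [1,   1,   1,   0,   1,   0,   0,   0],
        })

        # Approval rate per group.
        rates = decisions.groupby("group")["approved"].mean()

        # Demographic parity difference: gap between the highest and lowest
        # approval rates. A large gap flags potential disparate impact and
        # warrants closer review.
        parity_gap = rates.max() - rates.min()

        print(rates)
        print(f"Demographic parity difference: {parity_gap:.2f}")

    A gap of 0.50 in this toy example would signal that the decision process treats the two groups very differently; real fairness audits use richer metrics and domain context, but the basic check follows this pattern.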

    Potential ethical concerns also arise in the use of generative AI built on Large Language Models (LLMs), such as ChatGPT, the AI chatbot released in November 2022 by the startup OpenAI that reached 100 million monthly active users just two months after its launch. Professor Christian Terwiesch at Wharton found that ChatGPT would pass a final exam in a typical Wharton MBA core curriculum class, which sparked a national conversation about the ethical implications of using AI in education. While some educators and academics have sounded the alarm over the potential abuse of ChatGPT for cheating and plagiarism, industry practitioners from the legal industry to the travel industry are experimenting with ChatGPT and debating the impact of AI on business and the future of work. In essence, a Large Language Model is a deep learning model trained on large volumes of text. Bias inherited from that data can lead to emerging instances of digital discrimination, especially as various LLM-based models are trained on data from different modalities (e.g., images, videos). Furthermore, the lack of oversight and regulation can also prove problematic. Given the rapid development and penetration of AI chatbots, it is important to investigate the boundaries between ethical and unethical use of AI, as well as potential digital discrimination in the design, development, and use of LLM applications.

    Addressing the problem of digital discrimination in AI requires a cross-disciplinary effort. For example, researchers have outlined social, organizational, legal, and ethical perspectives on digital discrimination in AI. In particular, prior research has called attention to three key aspects: how discrimination arises in AI systems; how the design of AI systems can mitigate such discrimination; and whether our existing laws are adequate to address discrimination in AI.

    This minitrack welcomes papers in all formats, including empirical studies, design research, theoretical frameworks, case studies, etc., from scholars across disciplines such as information systems, computer science, library science, sociology, and law. Potential topics include, but are not limited to:

    1. AI-based Assistants: Opportunities and Threats

    2. AI Explainability and Digital Discrimination

    3. Algorithmic justice

    4. AI Systems Design and Digital Discrimination

    5. AI Use Experience of Disadvantaged / Marginalized Groups

    6. Biases in AI Development and Use

    7. Digital Discrimination in Online Marketplaces

    8. Digital Discrimination and the Sharing Economy

    9. Digital Discrimination with Various AI Systems (LLM-based AI, AI assistants, etc.)

    10. Effects of Digital Discrimination in AI Contexts

    11. Ethical Use / Challenges / Considerations and Applications of AI Systems

    12. Erosion of Human Agency and Generative AI Dependency

    13. Generative AI (e.g., ChatGPT) Use and Ethical Implications

    14. Organizational Perspective of Digital Discrimination

    15. Power Dynamics in Human-AI Collaboration

    16. Responsible AI Practices to Minimize Digital Discrimination

    17. Responsible AI Use Guidelines and Policy

    18. Societal Values and Needs in AI Development and Use

    19. Sensitive Data and AI Algorithms

    20. Social Perspective of Digital Discrimination

    21. Trusted AI Applications and Digital Discrimination

    22. User Experience and Digital Discrimination

    Minitrack Co-Chairs:

    Sara Moussawi (Primary Contact)
    Carnegie Mellon University
    sara7@cmu.edu

    Jason Kuruzovich
    Rensselaer Polytechnic Institute
    kuruzj@rpi.edu

    Minoo Modaresnezhad
    University of North Carolina Wilmington
    modaresm@uncw.edu

     

    Conference Website: 

    https://hicss.hawaii.edu/tracks-59/information-technology-social-justice-and-marginalized-contexts/#ai-and-digital-discrimination-minitrack



    ------------------------------
    Jason Kuruzovich
    Associate Professor of Business Analytics
    Rensselaer Polytechnic Institute
    Troy NY
    ------------------------------