Arun Rai is Regents' Professor of the University System of Georgia and holds the Howard S. Starks Distinguished Chair at the Robinson College of Business at Georgia State University.
Director and Co-founder, Robinson College of Business Center for Digital Innovation (CDIN), an interdisciplinary research center that focuses on digital innovation and promotes industry-university partnerships.
Appointed Regents' Professor in 2006 by the Board of Regents of the University System of Georgia for outstanding contributions in research, teaching, and service, and has received Robinson College of Business Faculty Recognition Awards for distinguished contributions in research, teaching, and service.
Fellow of the Association for Information Systems (AIS) (2010) and Distinguished Fellow of the INFORMS Information Systems Society (ISS) (2014). Received Impacts Awards from the AIS (2022) and INFORMS ISS (2022), the INFORMS ISS Inaugural President’s Service Award (2021) and the LEO Award from the Association for Information Systems for Lifetime Exceptional Contributions to the Information Systems discipline (2019). Honored as a 2024 AACSB Influential Leader.
Served as Editor-in-Chief of MIS Quarterly from 2016 to 2020. Has also served as Senior Editor for Information Systems Research, MIS Quarterly, and the Journal of Strategic Information Systems, and as Associate Editor for journals such as Management Science.
Has served on the Board of Directors of Indraprastha Apollo Hospitals and Apollo Health & Lifestyle Limited. Collaborated on research projects with major corporations across sectors (e.g., Axim Collaborative, Apollo Hospitals, China Mobile, Daimler-Chrysler, Emory Healthcare, Gartner, Georgia-Pacific, Grady Hospital, IBM, Intel, Laureate Inc., SAP, SunTrust, United Parcel Service). Served on the Developing and Deploying at Scale Disruptive Technologies Working Group of the US National Commission on Innovation and Competitiveness Frontiers and serves on the AI Advisory Council for the State of Georgia.
April 16, 2026 - Anxiety is growing as headlines attribute layoffs to AI-driven cost cutting. Nonetheless, in an interview with 11Alive - the National Broadcasting Company (NBC) affiliate in Atlanta - Rai pointed to a deeper story: organizations need to learn to work with AI with humans at the helm, leading a transition that augments the future of work.
This vision is already being put into operation at Georgia State University. By forging strategic partnerships with educational institutions, Georgia companies, and government agencies, we are proactively preparing students for entry-level roles in the AI economy. This proactive approach ensures that the next generation of the workforce isn't just reacting to AI but leading the transition. For Rai, the deeper story isn't the jobs lost - it's the potential unlocked.
April 03, 2026 - The Massachusetts Institute of Technology (MIT), together with Georgia State University and an expanding network of partners, has announced the expansion of PATH (Pathways for AI Training and Hiring) - a multi-year initiative that aims to scale affordable, industry-aligned AI training for both entry-level and existing workers, with a particular emphasis on turning community colleges into vital engines for the nation’s AI-enabled workforce.
"We are very excited by the significant early momentum," Rai said, noting that over 1,000 students are already enrolled. As the co-PI of PATH, Rai and his team co-designed a rigorous curriculum - covering everything from data science to agentic AI - now being shared with partner institutions to build a collaborative ecosystem. By merging academic rigor with industry partnerships, PATH is ensuring students develop tangible, job-ready skills.
March 3, 2026 - As algorithms increasingly drive life-changing decisions in healthcare, lending, and hiring, Rai and his co-authors have introduced a groundbreaking theory to help ensure these systems remain equitable: the FAIR framework, which challenges the industry's tendency to treat bias as a one-time fix.
“Most organizations discover a problem, patch it, and move on,” Rai said. He argued that because fairness is a sociotechnical paradox with shifting social and legal expectations, it must be managed continuously - much like safety or quality standards.
The FAIR theory proposes a proactive, two-tier solution: structured collaboration between human and AI agents at the system level, and a federated governance structure at the organizational level. Rai acknowledges that while building this institutional capacity requires significant investment, the cost of inaction is far higher. The goal of FAIR is to move organizations from a defensive posture to one where fairness is built into the very fabric of AI design and governance.
Find the latest open-access research on the FAIR (Fairness Adaptation through AI-augmented Responsiveness) theory, published in MIS Quarterly (January 2026).
Read the latest TechXplore media feature on the FAIR theory and its implications for AI governance.
November 14, 2025 - As AI integration accelerates in the workplace, Arun Rai addressed the critical necessity of conscientious implementation during the University of Georgia’s annual Ethics Week Lecture. Rai explored the paradoxes of AI, such as the tension between economic efficiency and human elevation, while arguing that responsible AI is not a static goal but a continuous practice of managing these complexities.
“What does it mean to establish a sensible system and work architecture... so you mitigate the risks while harnessing the advantages?” Rai asked.
To scale AI beyond experimental stages, Rai identified three vital roles for the future workforce: Architects, who design the rules for human-AI systems; Strategists, who interpret outputs and make decisions amid ambiguity; and Guardians, who provide ethical oversight and empathy to ensure fairness. Rai concluded by calling on universities to fulfill their ethical obligation to train students to lead as the architects and guardians of the evolving technological landscape.
Authors: Rai, A., Tian, J., and Xue, L.
Abstract: Artificial intelligence (AI)-automated decision systems encounter persistent, interdependent, and dynamic fairness tensions that traditional one-off interventions cannot resolve. Because these tensions persist due to interdependence and dynamic interaction, organizations require both a theory of the problem to explain their persistence and a theory of the solution to prescribe how they can be managed. Our design theory, FAIR (Fairness Adaptation through AI-augmented Responsiveness), provides a theory of the problem by reframing AI fairness as a sociotechnical paradox constituted within AI artifacts that automate decision tasks, through interdependent organizational, technical, and governance choices and their interaction with regulatory mandates and societal norms. Synthesizing four fairness perspectives (Ethics, Organizational Justice, Economic Fairness, and Rawlsian Justice), we identify three metatheoretical dimensions (principles, goals, foci) and show that the interdependence within and among these dimensions is the root, endogenous source that constitutes paradoxical fairness tensions. Building on this diagnosis, FAIR provides a theory of the solution by specifying an organizational capability grounded in three design foundations. First, the paradox lens motivates iterative adaptive cycles (Surfacing and Resolving) to continually surface and resolve AI fairness tensions. Second, design science in information systems and computer science distinguishes AI artifacts (the “what”) from the actors (the “who”) responsible for adapting them, establishing the basis for complementary human–AI agent collaboration in the adaptive cycles: AI agents execute monitoring to surface and refinement to resolve tensions, whereas human agents specify objectives, adjudicate trade-offs, and exercise contextual judgment and oversight. Third, the managing-with-AI literature informs how this human–AI agent collaboration should be governed. 
These foundations yield two reinforcing mechanisms: (i) artifact-level adaptation, achieved through structured human–AI agent collaboration, within and across the layers of the AI decision pipeline—Representation (data), Learning (model), and Calibration (decision); and (ii) portfolio-level, risk-tiered federated governance that structures how human–AI agent collaboration scales across tasks and artifacts, balancing process standardization with configuration choices and human control with AI autonomy based on task risk. Enabled by organizational “fairness complements”—namely, human skills to work with AI agents and structured stakeholder feedback—this sociotechnical design provides organizations with a sustained capability to harmonize global coherence and local flexibility in the responsive adaptation of AI fairness.
Authors: Chen, L., Rai, A., Wei, W., and Guo, X.
Abstract: Match formation is challenging in online matching platforms where suppliers are subject to dynamic capacity constraints. We provide a theoretical foundation for understanding how online matching platforms support the transmission and triangulation of multisource information for consumers to infer provider service quality and dynamic capacity states, and achieve desirable matching outcomes. Situating this study in the context of an online health consultation community (OHCC) and drawing upon signaling theory, we theorize how physicians’ owned and earned signals influence physicians’ voluntary online consultations with new patients they have not consulted with previously. Importantly, we articulate how these signaling effects are contingent upon physicians’ dynamic capacity in OHCC. We collected longitudinal data from a large OHCC in China and used a hidden Markov model (HMM) to characterize the dynamic physician capacity in the OHCC and test the hypotheses. Our findings reveal that service professionals’ owned and earned signals work together interactively to balance supply and demand dynamically, thereby facilitating matchmaking. In OHCCs, where physicians provide voluntary service beyond their primary jobs at hospitals, we find that owned and earned signals increase patient consultations in different patterns contingent upon physicians’ capacity states. In addition, we discover that the complementary and substitutive relationships between owned signals and earned signals change when physicians are in different capacity states. The findings have significant implications for our understanding of online match formation under dynamic capacity constraints and the design of OHCCs.
Authors: Pye, J., Rai, A., and Dong, J.
Abstract: Hospitals have implemented health information technology (HIT) for clinical care to address rising operating costs in recent years. We integrate behavioral and institutional perspectives to explain how hospitals differentiate technological search relative to industry peers (i.e., search differentiation) for HIT portfolios. In the context of the U.S. healthcare industry, we theorize that hospitals’ search differentiation for HIT results jointly from idiosyncratic learning in response to cost-based performance shortfalls and isomorphic pressures in relation to changing policy uncertainty as the Health Information Technology for Economic and Clinical Health (HITECH) Act has unfolded. Based on a panel data set from 3,319 hospitals in 2007–2014, we demonstrate that when costs increase relative to aspiration level, a hospital differentiates its search for HIT by exploring more novel technologies for clinical care relative to peers. As policy uncertainty declines from the conceptualization phase to the enactment phase of the HITECH Act, a hospital’s search differentiation for HIT increases to a greater extent in response to cost-based performance shortfalls as lower uncertainty reduces the need to imitate peers’ search. As policy uncertainty further declines from the enactment phase to the enforcement phase of the HITECH Act and reaches its lowest level, however, the hospital’s search differentiation for HIT increases to a smaller extent in response to cost-based performance shortfalls because of policy incentives and professional norms to promote implementation of common technologies. Overall, we provide a more holistic picture of how uncertainty in a dynamic regulatory context intertwines with organizational learning from performance feedback in shaping search differentiation.
Authors: Mindel, V., Aaltonen, A., Rai, A., Mathiassen, L., and Jabr, W.
Abstract: Although online peer-production systems have proven to be effective in producing high-quality content, their open call for participation makes them susceptible to ongoing quality problems. A key concern is that the problems should be addressed quickly to prevent low-quality content from remaining in place for extended periods. We examine the impacts of two control mechanisms, bots and policy citations, and the number of contributors, with and without prior experience in editing an article, on the cleanup time of 4,473 quality problem events in Wikipedia. We define cleanup time as the time it takes to resolve a quality problem once it has been detected in an article. Using an accelerated failure time model, we find that the number of bots editing an article during a quality problem event has no effect on cleanup time; that citing policies to justify edits during the event is associated with a longer cleanup time; and that more contributors, with or without prior experience in editing the article, are associated with a shorter cleanup time. We also find important interactions between each of the two control mechanisms and the number of different types of contributors. There is a marginal increase in cleanup time that is larger when an increase in the number of contributors is accompanied by fewer bots editing the article during a quality problem event. This interaction effect is more pronounced when increasing the number of contributors without prior experience in editing the article. Further, there is a marginal decrease in cleanup time that is larger when an increase in the number of contributors, with or without prior experience in editing the article, is accompanied by fewer policy citations. Taken together, our results show that the use of bots and policy citations as control mechanisms must be considered in conjunction with the number of contributors with and without prior experience in editing an article. 
Accordingly, the number of contributors and their experience alone may not explain important outcomes in peer production; it is also important to find an appropriate mix of different control mechanisms and types of contributors to address quality problems quickly.
Authors: Tian, H., and Rai, A.
Abstract: As digital platforms evolve, app developers do more than passively participate; they actively reshape the competitive landscape through boundary-shifting moves (BSMs). By modularizing boundary resources at different levels of the technology stack, developers can provoke, delay, or neutralize competitive responses in hypercompetitive app markets. Drawing on an empirical analysis of mobile apps launched by leading Chinese internet firms, we uncover a striking asymmetry in how rival developers react to these moves. Specifically, higher-level app modules—those tailored to particular application domains—tend to delay rival responses, while lower-level technical modules—broadly applicable across domains—trigger faster competitive reactions. This effect is heightened among firms with a history of direct competition, as they respond more aggressively to changes at the lower levels of modularization. By foregrounding the strategic role of technology stack levels in shaping competitive interactions, our study advances a differential resourcing perspective and offers new insights into the dynamics of competition within digital platforms. These findings challenge the dominant host platform-centric and cooperative views on boundary resources, illuminating how app developers actively reshape platform ecosystems to gain temporary advantage.
Authors: Rai, A., Chen, Y., and Lin, Y.
Abstract: Gig platforms seek to create income opportunities, particularly for socially and economically marginalized people who find it challenging to engage in regular employment. Alongside this empowerment, safety concerns over unregulated drivers for transportation network company (TNC) platforms such as Uber and Lyft have led to discourse among policymakers on the necessity of background check laws (BCLs) of varying stringency to exclude individuals from TNC jobs. Drawing on theories of labeling, routine activity, and rational choice, we conceptualize a trilogy of guardians—the government, TNC platforms, and the community—to safeguard ridesharing while mitigating the social costs of excluding marginalized citizens from TNC jobs. Empirically, we document the shifting of crimes into the property domain as an unintended consequence of the exclusion solution (i.e., BCLs by the government). Our findings indicate that digital safety technologies deployed by TNC platforms to deter crimes (i.e., in-app safety features) can serve as an alternative to BCLs. Moreover, we show that resources provided by the community can inhibit the negative impacts of exclusion by stringent BCLs (i.e., through alternative income sources) and enable the effectiveness of deterrence by in-app safety features (i.e., through policing). Our study surfaces a holistic social justice assessment that involves examining the risks of excluding marginalized individuals from gig work and showing that digital technologies expand the solution space to achieve the public safety of citizens and inclusivity in gig employment by enabling the role of each guardian as well as their interdependence.
Click here for the complete list of articles.