Arun’s research in this area examines how AI interacts with other machine and human agents, unintended AI behaviors and their consequences, and how the risks of AI can be mitigated while its benefits are realized.

Selected Publications

The objective of this whitepaper is to identify opportunities, issues, and challenges facing equitable education pathways for careers in computing, and the particular role that generative artificial intelligence (AI) could play in supporting postsecondary education at minority …

This paper examines the design and evaluation of Large Language Model (LLM) tutors for Python programming, focusing on personalization that accommodates diverse student backgrounds. It highlights the challenges faced by socioeconomically disadvantaged students in computing courses and proposes LLM tutors as a way to provide inclusive educational support. The study explores two LLM tutors, Khanmigo and CS50.ai, assessing their ability to offer personalized learning experiences. Employing a focus group methodology at a public minority-serving institution, the research evaluates how these tutors meet varied educational goals and adapt to students’ diverse needs. The findings underscore the importance of advanced personalization techniques that tailor interactions and integrate programming tools according to students’ progress. This research contributes to the understanding of educational technologies in computing education and provides insights into the design and implementation of LLM tutors that effectively support equitable student success.
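
Khanmigo and CS50.ai are proprietary, so the sketch below is only a hypothetical illustration of the kind of system-prompt personalization such tutors rely on: a generic LLM tutor conditioned on a student profile. The `openai` client call is real; the model name, profile fields, and prompt wording are assumptions.

```python
# Hypothetical sketch: a minimal LLM Python tutor that personalizes its
# responses to a student profile, in the spirit of the tutors studied.
# Profile fields, prompt wording, and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tutor_reply(profile: dict, question: str) -> str:
    # Encode the student's background and current topic into the system
    # prompt so the model adapts tone, pacing, and scaffolding.
    system = (
        "You are a patient Python tutor. "
        f"The student's prior experience: {profile['experience']}. "
        f"Current course topic: {profile['topic']}. "
        "Guide with hints and questions; do not hand over full solutions."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(tutor_reply(
    {"experience": "first programming course", "topic": "for loops"},
    "Why does my loop print the last item twice?",
))
```

Conditioning on a static profile is the simplest personalization lever; the paper's findings suggest richer techniques, such as tracking progress and integrating programming tools, are needed in practice.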

Artificial intelligence (AI) has become an important driver of economic growth and innovation. With rapid advances in AI, firms have a strategic opportunity to re-envision the cognitive reapportionment of tasks between humans, AI, and non-AI technologies. In doing so, firms can transform and dramatically elevate value creation through their business models, processes, and market offerings. We focus on a key risk, patent infringement litigation (PIL), that can adversely impact a firm’s value creation with AI. We posit and demonstrate that firms facing AI-PILs have more negative short-term abnormal returns than firms facing non-AI PILs. We further show that these abnormal returns are moderated by the type of plaintiff and the type of AI patent: they are more negative when the plaintiffs are non-practicing entities (versus practicing entities) and when the AI patents in suit concern expertise-driven (versus data-driven) AI. Exploring the moderators jointly, we find evidence that for data-driven AI patents the negative abnormal returns are stronger when the plaintiffs are practicing entities, whereas for expertise-driven AI patents they are stronger when the plaintiffs are non-practicing entities.
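
The abstract does not spell out the event-study mechanics, so the following is a minimal market-model sketch of how short-term abnormal returns around a litigation filing are conventionally computed; the window lengths, simulated returns, and function names are assumptions, not the paper's specification.

```python
# Illustrative market-model event study: estimate alpha/beta over a
# pre-event window, then compute the cumulative abnormal return (CAR)
# around the litigation filing date. Windows and data are assumptions.
import numpy as np

def car(firm_ret, mkt_ret, event_idx, est_len=120, win=(-1, 1)):
    # Estimation window: `est_len` trading days ending before the event.
    est = slice(event_idx - est_len, event_idx)
    beta, alpha = np.polyfit(mkt_ret[est], firm_ret[est], 1)
    # Event window: abnormal return = actual - expected (market model).
    ev = slice(event_idx + win[0], event_idx + win[1] + 1)
    abnormal = firm_ret[ev] - (alpha + beta * mkt_ret[ev])
    return abnormal.sum()

rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.01, 250)
firm = 0.0002 + 1.2 * mkt + rng.normal(0, 0.01, 250)
firm[200] -= 0.04  # stylized negative market reaction on the filing date
print(f"CAR(-1,+1) = {car(firm, mkt, 200):.4f}")
```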

Artificial intelligence (AI) is transforming the nature of work and reshaping labor markets. Viewing labor as a bundle of skills, recent research has analyzed AI skills and offered important insights about the impacts of AI on labor markets. We add to this ongoing discourse and argue that a dynamic, skill-based approach to measurement is critical: just as the development of AI is emergent and ever-evolving, so are AI skills. Taking stock of the literature, we show that existing studies tend to take a static approach to measuring AI skills, which fails to fully reflect the dynamic phenomenon of AI skills and can introduce measurement error. We propose a dynamic co-occurrence method and demonstrate that it outperforms the extant static methods, which can cause severe Type I and Type II errors, omit emerging AI skills, and temporally over- and under-estimate the demand for AI skills and jobs.
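
The paper's exact procedure is not reproduced here; the sketch below conveys the general idea of a dynamic co-occurrence measure, in which the set of AI skills is re-estimated each period from skills that frequently co-occur with the current set in job postings. The seed skills, threshold, and toy data are assumptions.

```python
# Illustrative dynamic co-occurrence measurement: in each time period, a
# skill joins the AI-skill set if it co-occurs with known AI skills in a
# sufficient share of postings. Seed set and threshold are assumptions.
from collections import Counter

def update_ai_skills(postings, ai_skills, threshold=0.3):
    cooc, total = Counter(), Counter()
    for skills in postings:  # each posting is a set of skill tokens
        has_ai = bool(skills & ai_skills)
        for s in skills - ai_skills:
            total[s] += 1
            if has_ai:
                cooc[s] += 1
    # Promote skills whose co-occurrence rate with AI skills is high.
    return ai_skills | {s for s in total if cooc[s] / total[s] >= threshold}

ai = {"machine learning", "tensorflow"}
periods = [
    [{"machine learning", "pytorch"}, {"sql", "excel"},
     {"tensorflow", "pytorch"}],
    [{"pytorch", "prompt engineering"},
     {"prompt engineering", "machine learning"}],
]
for t, postings in enumerate(periods):
    ai = update_ai_skills(postings, ai)
    print(f"period {t}: {sorted(ai)}")
```

Because the skill set is re-estimated period by period, a term such as "prompt engineering" can enter the measure as soon as it begins co-occurring with established AI skills, which is exactly the kind of emerging skill a static dictionary would miss.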

Intelligent systems (IntelSys) are transforming the nature of work as humans and machines collectively perform tasks in novel ways. Although intelligent systems empower employees with algorithm-generated knowledge, they require employees to adapt how they work to enhance their job performance. We draw on the coping-adaptation framework as the overarching theoretical lens to explain how employees’ perceptions of IntelSys knowledge as an empowering external coping resource affect the mechanisms through which they adapt to IntelSys-induced changes to their work, as well as how their internal coping resources regulate their adaptation. Our coping-adaptation explanation of intelligence augmentation integrates (i) the empowering role of external coping resources, specifically IntelSys knowledge, captured as intelligent system knowledge empowerment (ISK-Emp); (ii) the benefit-maximizing adaptation mechanism (through infusion use enhancement) and the disturbance-minimizing adaptation mechanism (through role conflict reduction) that channel the impact of ISK-Emp on job performance; and (iii) the regulating role of internal resources, specifically, employees’ work experience, in influencing the importance of the adaptation mechanisms for the employee. We conduct studies in three distinct settings in which different intelligent systems were implemented to support employees’ knowledge work. Our findings show that ISK-Emp increases job performance through each of the two adaptation mechanisms. The benefit-maximization mechanism (via enhanced infusion use) plays a more important role for novice employees than for experienced employees, whereas the disturbance-minimization mechanism (via reduced role conflict) has higher importance for experienced employees than for novice employees. Our work provides insights into the critical role of adaptation mechanisms in linking ISK-Emp with performance outcomes and into the relative importance of the adaptation mechanisms through which job performance payoffs are realized by novice and experienced employees.
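
The studies' estimation details are not given in the abstract; as a stylized illustration of how two parallel adaptation mechanisms like these can be quantified, the sketch below computes product-of-coefficients indirect effects with OLS on simulated data. The variable names, effect sizes, and data-generating process are assumptions, not the authors' model.

```python
# Stylized parallel-mediation sketch: ISK-Emp -> infusion use / role
# conflict -> job performance, via product-of-coefficients estimates.
# Simulated data; names and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
isk = rng.normal(size=n)                       # ISK-Emp
infusion = 0.5 * isk + rng.normal(size=n)      # benefit-maximizing path
conflict = -0.4 * isk + rng.normal(size=n)     # disturbance-minimizing path
perf = 0.6 * infusion - 0.5 * conflict + rng.normal(size=n)
df = pd.DataFrame(dict(isk=isk, infusion=infusion,
                       conflict=conflict, perf=perf))

a1 = smf.ols("infusion ~ isk", df).fit().params["isk"]
a2 = smf.ols("conflict ~ isk", df).fit().params["isk"]
b = smf.ols("perf ~ infusion + conflict + isk", df).fit().params
print(f"indirect effect via infusion use:  {a1 * b['infusion']:.3f}")
print(f"indirect effect via role conflict: {a2 * b['conflict']:.3f}")
print(f"direct effect of ISK-Emp:          {b['isk']:.3f}")
```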

Deep learning methods used to develop artificial intelligence (AI) systems produce black-box models that achieve high levels of prediction accuracy but are inscrutable to human users. Explainable AI (XAI) techniques offer the potential to achieve both prediction accuracy and explainability objectives with AI applications by converting black-box models into glass-box models that can be interpreted. Understanding how to effectively use XAI in marketing opens up exciting research avenues relating to the role of different classes of XAI in redefining the tradeoff between prediction accuracy and explainability, creating trustworthy AI applications, achieving AI fairness, and modifying the privacy calculus of consumers. It also raises important questions about the levels of explainability and transparency appropriate for different users, and how explainability can create value for the different stakeholders involved in developing and deploying marketing AI applications.
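
As one concrete instance of the black-box-to-glass-box conversion the abstract describes, the sketch below fits a global surrogate, a shallow and human-readable decision tree, to a black-box model's predictions. The data and marketing-flavored feature names are assumptions, and surrogate modeling is just one of the several classes of XAI techniques the paper considers.

```python
# Illustrative global-surrogate XAI: approximate a black-box model with a
# shallow, human-readable decision tree trained on its predictions.
# Data and feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))                   # e.g., recency, frequency, spend
y = ((X[:, 0] + 0.5 * X[:, 2]) > 0).astype(int)  # e.g., churn label

black_box = RandomForestClassifier(n_estimators=200).fit(X, y)

# The glass-box surrogate mimics the black box, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["recency", "frequency", "spend"]))
```

The printed fidelity score quantifies the accuracy-explainability tradeoff directly: it tells us how much of the black box's behavior the interpretable model actually captures.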

We illustrate the emergent spectrum of human-AI hybrids in digital platforms and discuss implications for IS research using one class of digital platforms: digital labor platforms. Recognizing the service orientation and the expanding role of AI in digital platforms, we define digital labor platforms as online environments where digital services are sourced and delivered in exchange for compensation, with the constituent tasks for those services determined, executed, and coordinated by human and AI agents. Work done on these platforms is, by definition, digital and can thus be modularized into tasks that require a range of cognitive skills for execution and coordination, providing a rich context in which to illustrate human-AI hybrids and key issues for next-generation digital platforms.
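
To make the definition concrete, here is a minimal, hypothetical data-structure sketch of a digital service decomposed into modular tasks, each executed by a human or an AI agent; all names and fields are illustrative and not drawn from the paper.

```python
# Hypothetical sketch of the definition above: a service decomposed into
# modular tasks, each executed by a human or AI agent. All names and
# fields are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class AgentKind(Enum):
    HUMAN = "human"
    AI = "ai"

@dataclass
class Task:
    description: str
    skills: set[str]           # cognitive skills the task requires
    executor: AgentKind

@dataclass
class DigitalService:
    name: str
    compensation: float        # payment for the delivered service
    tasks: list[Task] = field(default_factory=list)

    def hybrid_mix(self) -> dict[str, int]:
        # Summarize how the work divides across human and AI agents.
        mix = {k.value: 0 for k in AgentKind}
        for t in self.tasks:
            mix[t.executor.value] += 1
        return mix

svc = DigitalService("logo design", compensation=150.0, tasks=[
    Task("draft candidate logos", {"generative design"}, AgentKind.AI),
    Task("refine client brief", {"communication"}, AgentKind.HUMAN),
    Task("final quality review", {"aesthetic judgment"}, AgentKind.HUMAN),
])
print(svc.hybrid_mix())  # {'human': 2, 'ai': 1}
```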