Date of Award
2025
Degree Type
Dissertation
Degree Name
Doctor of Business Administration (DBA)
Specialization
Marketing
Department
Business Administration
First Advisor
Stephen A. Atlas
Abstract
As organizations increasingly rely on AI for critical decision-making, from autonomous vehicles making split-second judgments to financial algorithms managing billion-dollar transactions, AI failures pose significant risks. Biased hiring systems, flawed credit risk assessments, and unpredictable AI-driven investments have demonstrated the dangers of assuming algorithmic neutrality. Understanding how AI evaluates time and risk is therefore crucial for preventing costly and ethically problematic outcomes. As AI diagnoses diseases, recommends treatments, and manages financial risk, its decision-making must be scrutinized rather than assumed neutral. To address this gap, this dissertation draws on foundational behavioral economics principles established by Thaler (1981) and Kahneman and Tversky (1984) to systematically investigate how AI systems approach two fundamental decision domains: temporal discounting and risk assessment.
The research employs a novel "AI-as-participant" methodology, presenting classic behavioral experiments originally designed for human participants to four leading AI systems: GPT-4, GPT-3.5, Gemini Pro, and Gemini Ultra. In controlled experiments, AI responses are compared against human benchmarks, revealing patterns that both mirror and diverge from human decision-making.
The dissertation reveals systematic time and risk preferences that the AI systems themselves are unaware of possessing. The findings demonstrate that AI systems exhibit greater patience in intertemporal choices and higher risk aversion than their human counterparts. Specifically, GPT-4's continuously compounded discount rates (CDRs) for delayed rewards are significantly lower than human rates across multiple time horizons (e.g., GPT-4's median CDR for a 3-month delay was 115.07%, versus a human median of 277%; Z = -4.052, p < .001), indicating a stronger willingness to wait for future benefits. A Kruskal-Wallis test reveals significant variability in time preferences across AI systems (H(3) = 38.056, p < .001 for the $250 scenario over 3 months), highlighting distinct decision-making frameworks among the models. In risk scenarios, while 78% of humans become risk-seeking when problems are negatively framed, GPT-4 maintains a consistent risk-averse stance in 97.5% of cases (χ²(1, N = 320) = 192.391, p < .001, Cramer's V = .784), demonstrating immunity to the framing effects that typically sway human decisions. The analysis also reveals substantial variation across models: Gemini Pro shows risk-seeking behavior in positive frames (choosing the risky option in 83% of trials), contrasting sharply with the other systems' risk aversion.
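For readers unfamiliar with the metric, the sketch below shows how a continuously compounded discount rate is conventionally derived from an indifference point in the Thaler (1981) paradigm. The $333.33 indifference amount is an illustrative value chosen only because it reproduces the reported 115.07% median for the 3-month horizon; it is not a figure taken from the dissertation's data.

```python
import math

def continuously_compounded_discount_rate(present_value, future_value, years):
    """Solve future_value = present_value * exp(r * years) for the rate r."""
    return math.log(future_value / present_value) / years

# Hypothetical example: a respondent indifferent between $250 now and
# $333.33 in 3 months (0.25 years) implies a CDR of roughly 115.07%.
r = continuously_compounded_discount_rate(250, 333.33, 0.25)
print(f"CDR = {r:.2%}")
```

A larger stated indifference amount for the same delay implies a higher discount rate, i.e., greater impatience; GPT-4's lower median CDR thus corresponds to accepting smaller future amounts in exchange for waiting.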
These results extend decision theory beyond human cognition, offering insight into which decision elements are uniquely human and which emerge from algorithmic processes. By revealing that AI systems, despite being purely data-driven, develop systematic preferences reminiscent of human biases yet distinct in magnitude and consistency, the research carries significant implications for AI development, deployment, and governance. The findings suggest that AI systems can serve as complementary decision aids in domains where human biases lead to suboptimal outcomes, such as financial planning and strategic decision-making. Additionally, the dissertation advocates behavioral approaches to understanding AI decision-making, complementing existing algorithmic and technological approaches, which alone provide an incomplete picture of AI cognition.
This dissertation advances decision science in three ways: (1) theoretically, it challenges the myth of algorithmic neutrality and extends behavioral economics to non-human agents; (2) methodologically, it pioneers the "AI-as-participant" framework, setting a new standard for AI behavioral research; and (3) practically, it provides a roadmap for AI governance, helping businesses, policymakers, and regulators design AI systems that align with human values while mitigating decision biases. By clarifying how AI evaluates time, risk, and uncertainty, this research offers a blueprint for AI decision systems in finance, healthcare, and policymaking that are not only efficient but also accountable, transparent, ethical, and aligned with human values.
Recommended Citation
Suh, Wangsuk, "HEURISTICS, BIASES, AND PREFERENCES OF AI: TIME AND RISK PREFERENCES AMONG GENERATIVE AI SYSTEMS" (2025). Open Access Dissertations. Paper 4474.
https://digitalcommons.uri.edu/oa_diss/4474