Informed i’s Weekly Business Insights
Extractive summaries and key takeaways from the articles carefully curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since 2017 | Week 375, November 15-21, 2024
GenAI Tools and Decision-Making: Beware a New Control Trap
By J. Mauricio Galli Geleilate and Beth K. Humberd | MIT Sloan Management Review | November 18, 2024
3 key takeaways from the article
- In their latest research, the authors suggest that when managers interact with GenAI tools to help make decisions, the tools may inadvertently nudge them toward a more rigid and mechanistic approach. Specifically, their study reveals that when managers used ChatGPT to assist with solving a problem related to employee behavior and working conditions, they were more likely to propose control-oriented rather than people-oriented solutions.
- For decades, researchers and managerial practitioners have detailed the benefits of human-centric management approaches. Research findings caution that without proper consideration, the use of generative AI tools may risk an unintended return to a more mechanistic and control-based management style. That’s a problem, since research has established that the old command-and-control style of management doesn’t breed employee engagement or trust.
- Consider these key managerial takeaways from the authors’ work: beware the priming effect of generative AI tools; understand that generative AI tools can breed moral disengagement; and when using generative AI tools, overemphasize transparency.
(Copyright lies with the publisher)
Topics: Leadership, Decision-making, Artificial Intelligence, Technology
Extractive Summary of the Article
As artificial intelligence technologies develop, managers are striving to reap the benefits. Today’s generative AI tools can aid managers in strategic decision-making and assist with problem-solving in a variety of contexts, ranging from product development to employee conflicts. ChatGPT — a common GenAI tool — is even being used as a debating partner for managerial decision-making processes.
At the same time, interacting with technology as part of a decision-making or problem-solving process is fundamentally different from consulting with humans. AI systems, by design, are focused on efficiency, predictability, and data-driven solutions. This emphasis is where leaders can get into unintended trouble.
In their latest research, the authors suggest that when managers interact with GenAI tools to help make decisions, the tools may inadvertently nudge them toward a more rigid and mechanistic approach. Specifically, their study reveals that when managers used ChatGPT to assist with solving a problem related to employee behavior and working conditions, they were more likely to propose control-oriented rather than people-oriented solutions.
For decades, researchers and managerial practitioners have detailed the benefits of human-centric management approaches. Research findings caution that without proper consideration, the use of generative AI tools may risk an unintended return to a more mechanistic and control-based management style. That’s a problem, since research has established that the old command-and-control style of management doesn’t breed employee engagement or trust.
Consider these key managerial takeaways from the authors’ work:
- Beware the priming effect of generative AI tools. Consulting with the GenAI tool shifted managers’ attention away from direct experience and human-centric thinking and primed them to use a more detached and analytical mode of thinking. Leaders should actively acknowledge the priming that could occur when enlisting generative AI as a problem-solving “partner” in human-centric challenges. When using GenAI for decision support in people management, they should adopt a decision framework that actively incorporates employee welfare and humanistic factors.
- Understand that generative AI tools can breed moral disengagement. The authors’ results indicate that GenAI’s data-driven nature may lead managers to view employees more as data points than as individuals with unique needs and circumstances. Keep the humans in the loop: managers should regularly engage with employees and seek their input in order to balance GenAI-driven suggestions with insights from employees’ real-world experiences.
- When using generative AI tools, overemphasize transparency. When organizations leave employees in the dark about how AI systems judge their performance, determine their schedules, or adjust their working conditions, leaders may foster an environment of mistrust and thus experience unintended consequences. Leaders should implement clear communication and disclosure policies regarding when and how GenAI is being used and what data is being considered in managerial decisions.