The researchers say their study illustrates the importance of testing how AI influences human behavior as a step toward ensuring its responsible deployment. They also warn that people with malicious intent could harness AI to corrupt others.
“AI could be a force for good if it manages to convince people to act more ethically. Yet our results reveal that AI advice fails to increase honesty. AI advisors can serve as scapegoats to which one can deflect (some of the) moral blame of dishonesty. Moreover … in the context of advice taking, transparency about algorithmic presence does not suffice to alleviate its potential harm,” the researchers wrote. “When AI-generated advice aligns with individuals’ preferences to lie for profit, they gladly follow it, even when they know the source of the advice is an AI. It appears there is a discrepancy between stated preferences and actual behavior, highlighting the necessity to study human behavior in interaction with actual algorithmic outputs.”
Read the full article on VentureBeat