Who Is Responsible For A.I. Doing Damage?
(right now) Social media companies: "Hey, we just provide the platform. We're not responsible for what people put on it or how they use it."
(very soon) AI companies: "Hey, we just provide the intelligence. We're not responsible for what people put into it or how they use it."
Maybe some of that "very soon" with AI companies...is happening now? How many of the largest such companies have already had that discussion in some conference room or boardroom?
A.I. ethics is a growing field, yet it remains only a small part of the overall conversation surrounding the world's fast-paced development of AIs.
It seems LLMs and similar tools are a powerful means to diffuse (or absolve entirely) responsibility in ethics, law, and morality.