AI and Ethics: Paranoia and Trust

In the third installment on AI and Ethics, we ask whether we're being too paranoid about AI and whether we'll ever be able to trust it.

Q: Are we being too paranoid about AI?
No.
But we should be less reactive, and more reflective.
The question isn’t “Will AI destroy us?” It’s “What are we letting it do now that we’ll wish we could take back later?”

Q: How do you build AI that people can trust?
You don’t start with a model. You start with values.
At AiSensum, every agent we build is ring-fenced. That means we define what it can do, what it should do, and, just as important, what it must not do.
Sometimes that means pulling back. Sometimes it means waiting.
Because trust isn’t built when AI performs well… it’s built when AI knows when not to act.
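The "ring-fencing" idea above can be sketched in code: an agent is only permitted actions on an explicit allow list, anything on a deny list is refused outright, and everything else is deferred rather than executed. This is a minimal illustrative sketch, not AiSensum's actual implementation; the action names and function are assumptions.

```python
# Hypothetical ring-fence: allow list defines what the agent CAN do,
# deny list defines what it MUST NOT do, and anything unlisted is
# deferred ("waiting") rather than acted on. Names are illustrative.

ALLOWED_ACTIONS = {"summarize", "classify", "recommend"}
FORBIDDEN_ACTIONS = {"delete_data", "contact_customer", "make_payment"}

def ring_fenced_dispatch(action: str) -> str:
    """Deny list wins; unknown actions are deferred, not executed."""
    if action in FORBIDDEN_ACTIONS:
        return f"refused: '{action}' is explicitly forbidden"
    if action not in ALLOWED_ACTIONS:
        return f"deferred: '{action}' needs human review"
    return f"executed: {action}"
```

The key design choice is the default: an action the fence does not recognize is deferred to a person, which is exactly the "knows when not to act" behavior described above.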

The real issue with trusting AI is building something we can trust.

Self-driving cars are a perfect example: can we trust them right away? We probably shouldn't. But we can teach the cars to be better, and always keep a human-in-the-loop so that trust can be maintained.
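A human-in-the-loop gate can be sketched as a simple rule: the system acts autonomously only when its confidence clears a threshold, and escalates everything else to a person. The threshold, action names, and `human_approve` callback here are assumptions for illustration, not a real driving stack.

```python
# Illustrative human-in-the-loop gate: low-confidence proposals are
# routed to a person instead of being executed automatically.
# The 0.95 threshold is an arbitrary assumption for this sketch.

CONFIDENCE_THRESHOLD = 0.95

def decide(action: str, confidence: float, human_approve) -> bool:
    """Return True if the action should proceed."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return True  # confident enough to act autonomously
    return human_approve(action)  # escalate to a human reviewer

# Usage: a lane change proposed at 0.7 confidence is escalated,
# and proceeds only if the human approves it.
approved = decide("change_lane", 0.7, human_approve=lambda a: False)
```

The point of the sketch is that the human is not a bystander but a required decision-maker whenever the machine is unsure.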

Do you trust AI? Do you keep the human-in-the-loop?

#AISensumInsights #AIethics #HumanInTheLoop #ResponsibleAI #PrivacyMatters
