
In A.I. We Trust?

Steps you can (and probably should) take to establish trust through responsible data stewardship.

In late March, I was grateful to be invited to moderate sessions at an extraordinary event, the inaugural Digital Trust Summit at the Watson Institute at Brown University. It was a gathering, held under the Chatham House Rule, of about 75 CEOs, board members, technologists and government leaders, convened by law firm Mayer Brown, The Conference Board, Nasdaq and Bank of America with a very specific intent: enhancing trust in an era of change that’s moving faster than ChatGPT can type.

How we do that isn’t just a good question—it’s likely going to be the question going forward for many. How do we keep customer data safe? How do we make sure generative AI hallucinations don’t creep into marketing materials and customer communications? How do we make sure the algorithms we run aren’t biased or discriminatory? How do we protect our intellectual property? What will AI become to humanity—and what, gulp, might humanity become to AI?

Some best practices emerged from all of this, and my colleagues at The Conference Board issued a readout of 10 steps participants identified for establishing trust through responsible data stewardship. Here are a few:

• Know that without digital trust, you’re done. “Financial services institutions are based on trust,” Brian Moynihan, CEO of Bank of America, told the group. “We hold it. We help people engage with the economy. With that trust, we are able to provide great capability to our customers in the digital space.”

• Prioritize technological understanding. It is critical that you have access to people, inside or outside the company, with the bandwidth and sophistication to advise on technological opportunities and risks.

• Embrace technology responsibly and iteratively. Listen to your customers, learn from experts, learn from the adoption of technological advances over the past several decades, adapt them to your needs and reinvent as circumstances change. Test technological systems for security, accuracy and fairness before, during and after deployment.

• Promote fairness within technological systems by having transparent discussions and relentlessly testing, so that AI reflects our ideals, not our current imperfections.

• Diversify the talent that is building and adapting technological systems. Ensuring these teams are not monolithic will have a huge, positive impact on fairness and equity.

The Conference Board posted more from the summit on its site—it’s worth a look.

One great tip I picked up at Brown that you can and should try immediately: Amid any discussion about deploying ChatGPT in your company, ask it to write a bio of you. For the record, ChatGPT, I did not graduate from Dartmouth or help build AOL (not even close). Nor am I the author of “several books on business and leadership,” including “The Only Job-Hunting Guide You’ll Ever Need” and “The Rational Option: A Theoretical Framework for Winning Your Next Negotiation.”

As far as I can tell, the second book doesn’t even exist. It’s just a figment of a machine’s imagination. A very, very smart but far from infallible machine.
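For readers who want to make that spot check repeatable, here is a minimal sketch of the same exercise in Python, assuming the OpenAI Python SDK is installed and an API key is set in your environment. The model name and placeholder name are illustrative assumptions, not anything prescribed at the summit:

```python
# A minimal sketch: ask a chat model to write your bio, then read the
# result skeptically for invented degrees, employers and book titles.
# Assumes the OpenAI Python SDK (pip install openai) and that the
# OPENAI_API_KEY environment variable is set. The model name below is
# an illustrative choice, not an endorsement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

name = "Your Name Here"  # hypothetical placeholder; use your own name
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": f"Write a short professional bio of {name}."}
    ],
)

# Print the bio. Every concrete claim it makes should be verified by
# a human before it appears anywhere public.
print(response.choices[0].message.content)
```

Whatever comes back, treat every specific claim, every school, employer and book title, as unverified until a person has checked it.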

Source: https://chiefexecutive.net/in-a-i-we-trust/