In the coming decade, the amount of information – or misinformation – created by AI could dwarf that generated by people, meaning those controlling AI safeguards will have huge influence on the world, Mr Weaver said.

Ms Karen Palmer, an award-winning mixed-reality creator with Interactive Films, said she could imagine a future in which someone gets into a robot-taxi and, “if the AI scans you and thinks that there are any outstanding violations against you… you’ll be taken into the local police station” instead of to one’s intended destination.

AI is trained on mountains of data and can be put to work on a growing range of tasks, from image or audio generation to determining who gets a loan or whether a medical scan detects cancer.

But that data comes from a world rife with cultural bias, disinformation and social inequity – not to mention online content that can include casual chats between friends or intentionally exaggerated and provocative posts – and AI models can echo those flaws.

With Gemini, Google engineers tried to rebalance the algorithms to provide results better reflecting human diversity.

The effort backfired.

“It can really be tricky, nuanced and subtle to figure out where bias is and how it’s included,” said technology lawyer Mr Alex Shahrestani, a managing partner at Promise Legal, a law firm for tech companies.

Even well-intentioned engineers involved with training AI can’t help but bring their own life experience and subconscious bias to the process, he and others believe.

Valkyrie’s Mr Burgoyne also castigated big tech for keeping the inner workings of generative AI hidden in “black boxes”, so users are unable to detect any hidden biases.

“The capabilities of the outputs have far exceeded our understanding of the methodology,” he said.

Experts and activists are calling for more diversity in teams creating AI and related tools, and greater transparency as to how they work – particularly when algorithms rewrite users’ requests to “improve” results.

A challenge is how to appropriately build in the perspectives of the world’s many and diverse communities, said Mr Jason Lewis of the Indigenous Futures Resource Center and related groups.

At Indigenous AI, Mr Lewis works with far-flung indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, something he does not always see in the “arrogance” of big tech leaders.

His own work, he told a group, stands in “such a contrast from Silicon Valley rhetoric, where there’s a top-down ‘Oh, we’re doing this because we’re going to benefit all humanity’ bull****, right?”

His audience laughed. AFP


