CEO and Founder of Redwerk and QAwerk, delivering robust SaaS solutions and ensuring software quality since 2005.

In a world where technology outpaces our grasp on ethics, Asimov’s Runaround doesn’t seem like sci-fi anymore. As machines become more sentient, how do we ensure they won’t go rogue? Apparently, we need more than the Three Laws of Robotics to benefit from AI and build a safe future.

Having analyzed the EU AI Act draft and the responsible AI policies of leading LLM developers like Google, OpenAI and Meta, my team and I have singled out 10 foundational principles for developing an ethical generative AI strategy.

1. Be Transparent About AI’s Inner Workings

Modern deep learning models are basically a black box. It’s often challenging to understand how they arrived at a particular conclusion and whether we can trust the output. No wonder researchers talk about the need for explainable AI. We need neural networks that provide reasoning behind their predictions or unveil how their algorithms work.

Transparency is crucial in industries where an AI error could have a detrimental impact on human life. AI has enormous potential in medical diagnostics, but who’s to blame if it makes a wrong diagnosis? For health practitioners to adopt AI, they need to trust it and be able to interpret its answers to validate their accuracy.

Being transparent benefits not only the users but also the AI companies. The greater control and visibility a company has over its AI model, the easier it can adjust to ever-changing regulations.

2. Be Accountable

The people designing and deploying AI products should be held accountable for their decisions. It’s up to them how they implement this, but there should be an internal self-regulation mechanism helping to minimize harm, learn and iterate to improve those systems.

3. Ensure Human Oversight

AI is meant to complement human decision-making, not overpower us. Ensure that your AI product leaves room for human direction and control. AI systems should not be fully autonomous because it creates risks of violating human rights and causing harm. Human oversight is also necessary to monitor the quality of the output and take action in a timely manner.

4. Implement Anti-Abuse Features

AI is a double-edged sword; it is actively used to automate content moderation, but the algorithm itself is not immune to system abuse. You, too, have probably seen funny posts on LinkedIn about how you can trick ChatGPT into providing unfiltered responses or downright harmful advice. AI systems need rigorous testing to ensure they don’t act as people pleasers and change their output unreasonably.

5. Aggregate Data Responsibly

Big AI players like OpenAI, Stability and Midjourney have faced lawsuits from authors and artists whose copyrighted work was used without permission or compensation for training those companies’ AI models. The fact that you can physically scrape the data off the internet doesn’t validate it as a lawful act.

AI companies must carefully evaluate their data sources to prevent copyright infringement. The current version of the AI Act suggests disclosing all copyrighted materials used during training.

Another huge concern is the unlawful use of personally identifiable information that can be retrieved with a clever prompt. Companies developing LLMs should obtain consent from individuals whose data is used or anonymize their personal information.

6. Train AI On Diverse And Representative Data

Because AI models require such enormous data for training, part of those materials inevitably reflect societal biases. If no measures are taken, AI will continue regurgitating stereotypes through its responses. That’s why it’s important to train AI systems on data from various demographics and backgrounds.

Humans build AI systems, and humans are the primary source of bias. The team involved in developing and fine-tuning the algorithm should also be diverse and educated on how to detect discriminatory outputs.

7. Minimize Carbon Footprint

Developing AI models requires vast computational resources, resulting in a huge carbon footprint. Consider AI’s impact on the environment at the early design stage. For example, MIT has proved it’s possible to build more efficient neural network architectures and significantly reduce the carbon footprint.

Another way to develop eco AI models is to use data centers powered by renewable energy. Training a model could be scheduled during off-peak hours or when renewable energy production is high to use the resources better.

8. Factor In Impact On Users’ Mental Health

People are social creatures, and prolonged exposure to AI systems and isolating work may contribute to anxiety or loneliness.

AI companies could incorporate digital well-being tools into their products such as a reminder to take a break and move your body, a screen time limit or bedtime mode. Also, the AI algorithm must not be designed to cause addiction. Yes, you’ll likely see a smaller profit margin, but you’re less likely to jeopardize the health of millions of people.

9. Invest In Cybersecurity

AI apps are vulnerable to cyberattacks. A prompt injection is probably the most familiar one—a hacker uses a tricky input that jail breaks an LLM and makes it fulfill malicious instructions. However, prompt injections are not the only menace. Bad actors may steal the model with imitation attacks, poison the training data or extract sensitive information.

Cybersecurity organizations like OWASP and ENISA emphasize that AI systems must be secure by design. Adopting time-proven software engineering practices like versioning, documentation, unit testing, integration testing, performance testing and code review are the foundation for a quality AI system. Compliance with general security standards like ISO 27001 and SAMM, regular security audits and penetration testing can also improve security posture.

10. Monitor And Iterate

Continued monitoring is essential to keep your AI product safe for commercial use. Over time, the performance of your system may degrade or new security risks may emerge. Your product roadmap should include enough time for upgrades and security patches.

Developing an ethical generative AI is not about improving Asimov’s laws but about capturing the spirit of human-centered design. We must ensure we embed guardrails that protect us from dire consequences without forcing AI systems into an endless loop.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

Source link

By admin

Leave a Reply

Your email address will not be published. Required fields are marked *