How to implement AI responsibly
Every business and industry will need to set standards and expectations around responsible AI use. They will also need to assess their specific risks and define usage protocols. But some general principles will apply to most businesses, whether they are using these tools or just adapting to the AI environment:
Focus on realistic threats. At this stage, there are many claims made about AI’s capabilities; some are exaggerated. Correspondingly, some concerns about AI’s downsides, while realistic in theory, do not apply to its current stage of development. Companies should not lose sight of immediate concerns (e.g., how an AI model is using their data) by focusing too much on a possible future (e.g., sentient or malicious AI).
Invest in AI talent. Generating value from AI tools depends on responsible, cyber-secure use. Businesses should recognize the value of acquiring, nurturing and encouraging employees who have an aptitude for using AI tools and understanding their potential risks.
Approach AI tool adoption with caution. Rapid scaling of AI tools may not be a wise course for every organization. Legal ramifications of data sharing, penalties for issuing misinformation and exposure of company assets are risks that may motivate a business to put tighter restrictions on the use of AI models until more robust security controls are in place.
Recognize that every AI tool expands the risk of cyber events. AI models store vast amounts of information, and information is always a lure for cyber criminals. The models could be stolen or corrupted by external actors or insiders, leading to data loss, privacy breaches or reputational damage.
Take extra steps to verify identity. Scammers and bad actors will attempt to create more convincing deepfakes based on publicly available or stolen information and personal identifiers, such as voice recordings and video. Eventually, a text message or even a video conferencing feed from an account holder may actually be a simulation. To adjust to this reality, organizations can train their employees about deepfake threats and develop protocols for verifying a requester's identity through an independent channel whenever an unusual request arrives.
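The verification protocol described above can be sketched in code. This is a minimal, hypothetical illustration, not a real library or a definitive implementation: the names (Request, is_unusual, handle), the list of risky actions and the dollar threshold are all assumptions chosen for the example. The core idea is that requests matching an "unusual" profile are held until the requester is verified on a channel the requester did not choose, such as a phone number already on file.

```python
# Hypothetical sketch of an out-of-band identity-verification step for
# unusual requests. All names, actions and thresholds here are
# illustrative assumptions, not part of any real system or library.
from dataclasses import dataclass

# Example set of request types that warrant extra verification.
UNUSUAL_ACTIONS = {"wire_transfer", "credential_reset", "vendor_bank_change"}

@dataclass
class Request:
    requester: str
    channel: str        # channel the request arrived on, e.g. "email", "video_call", "sms"
    action: str
    amount: float = 0.0

def is_unusual(req: Request, threshold: float = 10_000.0) -> bool:
    """Flag requests that should trigger independent identity verification."""
    return req.action in UNUSUAL_ACTIONS or req.amount >= threshold

def handle(req: Request, verified_out_of_band: bool) -> str:
    """Approve unusual requests only after verification on an independent,
    pre-established channel (e.g. a phone number already on file)."""
    if not is_unusual(req):
        return "processed"
    if verified_out_of_band:
        return "processed"
    return "held: verify requester via a known, independent channel"
```

The key design choice is that the verification channel is chosen by the organization from records it already holds, never taken from the incoming message itself, since a deepfaked video call or spoofed text can supply its own callback details.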
Continue to educate employees on the essentials of cyber hygiene. AI models help people accomplish tasks more quickly and efficiently. This applies to scammers and other bad actors as well as legitimate employees and partner organizations. AI models will not eliminate phishing, credential theft, account takeovers or other established cyber-crime tactics. Rather, they will often make these crimes harder to spot.