Cybersecurity for SMEs: Stay Ahead in the Flux of Change!
Cybersecurity has become a necessity no matter your business size.
It’s less than a year since OpenAI released its generative artificial intelligence (AI) app, ChatGPT. Since then, it and its alternatives have transformed how we do business.
Such tech is known as generative AI, a form of machine learning. Trained on vast datasets of natural language, it can create content such as text, code, audio, images, simulations, and video. Think chatbots such as Google Bard and Snapchat’s My AI, image generators DALL-E 2 and Midjourney, and Microsoft’s voice generator VALL-E.
Generative AI may already be part of your business strategy. According to the Australian Securities and Investments Commission (ASIC), AI can drive efficiencies in operations – price prediction, hedging, and task automation – as well as risk management for fraud detection.
Generative AI has been dubbed industry’s next big disruptor, as significant as the internet and the smartphone before it. Microsoft’s Bill Gates has forecast a future where we each have an AI personal assistant, and scholarly researchers across several industries recently released a crystal-ball-gazing opinion paper on the same theme.
The Australian eSafety Commissioner has issued a position statement on generative AI, detailing the pros and cons and framing the opportunities through the lens of online safety.
KPMG has listed AI use cases and potential opportunities. Industries expected to be early adopters include health, banking and finance, education, the creative industries, and engineering, says Australia’s Chief Scientist, Dr Cathy Foley. She adds that it’s almost impossible to accurately forecast the opportunities generative AI will open up over the next decade.
However, the risks are also clear. Here are three broad categories:
Ethical and copyright issues include businesses claiming AI-generated content as their own. If AI-produced code or information became part of a deliverable or product, that could breach copyright or intellectual property (IP) rights, damaging your brand’s reputation, says KPMG.
The eSafety Commissioner cautions that, while many companies are quickly developing and deploying their own generative AI technologies, those organisations need to attend to risks, protection, and transparency for regulators, researchers, and the public.
The Federal Government is reviewing the Privacy Act 1988 to ensure it’s fit for purpose in the AI era. The eSafety Commissioner has also flagged concerns about generative AI regarding data ownership, national security, law enforcement, the environment, and the labour market, so further regulation may be needed.
There’s currently a paucity of regulatory and legal frameworks to protect businesses against the risks of AI. That means if you and your staff are using generative AI without internal or external safeguards, your business could be inadvertently exposed.
For example, uploading text or documents to a generative AI tool can feed its training dataset. In ChatGPT you can turn off chat history, in which case OpenAI says new conversations won’t be used to train its models and are retained for only 30 days for abuse monitoring. Even then, your data sits on an external server for that period, and anything submitted with history switched on may end up in the training set. Be sure your staff know there’s no safe way to upload confidential information to ChatGPT; one practical internal safeguard is to scrub sensitive strings before anything leaves your network, as sketched below. It is for this reason that larger companies are opting for ‘internal’ AI systems that don’t expose their data to people outside their organisations.
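To make that “internal safeguard” idea concrete, here is a minimal, illustrative Python sketch of a pre-upload redaction step. The redact_confidential helper and its patterns are hypothetical examples written for this article, not a feature of ChatGPT or any product, and a real deployment would need far more robust detection backed by a clear staff policy.

```python
import re

# Illustrative patterns only -- extend them to cover the kinds of
# confidential data your business actually handles.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "account": re.compile(r"\b\d{6,}\b"),
}

def redact_confidential(text: str) -> str:
    """Mask likely-confidential strings before text leaves your network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    draft = "Contact Jo at jo@example.com or +61 400 123 456 about account 12345678."
    print(redact_confidential(draft))
    # -> Contact Jo at [REDACTED EMAIL] or [REDACTED PHONE] about account [REDACTED ACCOUNT].
```

Crude pattern matching like this won’t catch everything, which is why it complements, rather than replaces, clear guidelines on what staff may share with external tools.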
You may be using another platform to automate some of your processes. Technically, the third-party organisation owns that platform and could use the data you upload for its own purposes. Video-conferencing service Zoom recently had to clarify terms and conditions that had appeared to let it use any audio, video, chat, screen sharing, attachments, and the like to train its own AI models.
The General Data Protection Regulation (GDPR) applies if your business targets or collects data relating to people in the European Union. Known as the world’s toughest privacy and security law, it can be daunting for SMEs to comply with. Check out the official EU website for guidance on compliance and on managing the privacy risks of emailing EU countries.
So, how can your business develop responsible AI usage practices? KPMG has published suggested steps, and a number of other resources can help.
A recent government discussion paper on safe and responsible AI use details how organisations around the world are tackling policy, training, and monitoring. The global AI Standards Hub offers useful insights and eLearning modules, and Salesforce has tips for reviewing your approach over short-, medium-, and long-term time frames.
For guidance on AI ethics principles, look to the Federal Department of Industry, Science and Resources. Its voluntary framework of eight AI ethics principles can help you design, develop, and use AI safely and responsibly.
If you provide a service that includes advice generated through AI, ensure your business insurance, such as professional indemnity insurance, is adequate to manage your risk.
Article Supplied by OneAffiniti