
“Trust” and “Security” are two key factors that will determine the future of Generative AI.

The neck-and-neck race in generative AI innovation has heightened security concerns and risks. Some lawmakers have proposed new rules and regulations to govern AI tools, and some business and technology leaders have suggested temporarily suspending AI training while its security is assessed.

Generative AI is all around us.

The development of Generative AI will not stop here. Organizations need to act now to establish a comprehensive strategy for AI Trust, Risk, and Security Management (AI TRiSM). There is an urgent need to learn and understand AI TRiSM tools, which manage the data and processes flowing between users and the companies that own the underlying Generative AI models.

There are currently no ready-made tools on the market that systematically protect user privacy or effectively filter content when these models are deployed, for example by separating inaccurate information from facts, or by flagging images of things that do not exist, copyrighted material, or confidential information.

It is therefore urgent for AI developers to work with policymakers, including emerging regulatory bodies, to formulate policies and guidelines for AI governance and risk management.

Major risks affecting organizations

Generative AI poses several new risks. The first is hallucination and fabrication: output that diverges from the facts. This is the most common issue with Generative AI chatbot solutions, and it often stems from the training data, producing responses that are biased, unfounded, or simply inaccurate. Such cases are hard to detect, especially once a solution has earned users’ trust and its user base keeps growing.

The next risk is deepfakes, in which generative AI is used to create content with malicious intent. Fake images, fake videos, and fake audio recordings are all major risks. They are often used to attack celebrities and politicians, to create and spread misleading information, or even to set up fake accounts or take over and hack existing legitimate accounts.

A recent example is an AI-generated photo of Pope Francis in a fashionable jacket that went viral on social media. Although it seems harmless, it offers a glimpse of a future in which deepfakes are used to spoof, imitate, and deceive public figures, posing political and reputational risks to individuals, corporations, and governments.

Data privacy is also a major concern, especially when organizations make it easy for employees to disclose sensitive and proprietary data while using Generative AI chatbot solutions. These applications may store user input, or even use that input to train other AI models, and such data could end up in the wrong hands in the event of a security breach.

Next come copyright issues. Training data may include copyrighted material, so some outputs may infringe copyright or intellectual property (IP) protections, particularly when there is no attribution or transparency about how the output was generated. For now, the main way to mitigate this risk is for users to check outputs themselves to ensure they are not infringing copyright or intellectual property rights.

Finally, there are cybersecurity concerns. Beyond more sophisticated social engineering and phishing threats, attackers can use these tools to generate malicious code more easily.

Vendors offering Generative AI base models must assure their customers that the models have been trained and tested to refuse malicious cybersecurity requests. However, vendors do not give organizations the full set of tools needed to audit their security controls. They also lean heavily on preventive measures such as “red teaming,” a method of testing systems by posing as a digital or physical intruder, which leaves organizations having to place considerable trust in the vendors’ ability to manage these security goals.

Responding to and managing AI risks

There are two general approaches for leveraging the potential of ChatGPT and similar applications. The first is the out-of-the-box model, which uses the existing service as-is; the second is prompt engineering, which uses tools to create, tune, and evaluate prompt inputs and outputs.

When using the out-of-the-box model, organizations must manually verify all model output, checking for results that are invalid, inaccurate, or biased. They should also establish a governance and compliance framework for adopting these solutions, with a clear policy prohibiting employees from asking questions that would expose sensitive corporate or personal data.
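As a minimal sketch of how such a policy might be enforced in software, the snippet below screens prompts against a few hypothetical sensitive-data patterns before they are allowed to reach an external chatbot. The pattern list and the internal email domain are illustrative assumptions, not a complete data-loss-prevention rule set.

```python
import re

# Hypothetical patterns an organization might treat as sensitive; a real
# deployment would tune these to its own data-classification policy.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                       # ID-number-like strings
    re.compile(r"\b(?:internal|confidential)\b", re.IGNORECASE),
    re.compile(r"\b[\w.+-]+@corp\.example\.com\b"),             # assumed internal email domain
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is safe to send to an external chatbot."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(screen_prompt("Summarize this confidential Q3 revenue report."))  # False: blocked
print(screen_prompt("Explain what AI TRiSM stands for."))               # True: allowed
```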

Organizations should also monitor for unauthorized use of ChatGPT and similar solutions, using existing security controls and dashboards to detect usage-policy violations: a firewall that can block user access to ChatGPT from within the organization, a security information and event management (SIEM) system that can review breach records, and a secure web gateway that can block unauthorized API calls.
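As a toy illustration of the gateway idea, the sketch below checks outbound requests against a small blocklist of generative AI endpoints and emits a log line a SIEM could ingest. The blocklist contents and log format are assumptions for the example; a real secure web gateway would enforce this at the network layer rather than in application code.

```python
from urllib.parse import urlparse

# Illustrative blocklist; a real secure web gateway would manage this
# centrally rather than hard-coding it in application code.
BLOCKED_HOSTS = {"chat.openai.com", "api.openai.com"}

def is_request_allowed(url: str) -> bool:
    """Return False for outbound requests to blocked generative AI hosts."""
    host = urlparse(url).hostname or ""
    return host not in BLOCKED_HOSTS

def audit(url: str, user: str) -> None:
    """Emit a policy-violation record that a SIEM could pick up for review."""
    if not is_request_allowed(url):
        print(f"POLICY VIOLATION: {user} attempted to reach {url}")

audit("https://api.openai.com/v1/chat/completions", "jdoe")  # logged as a violation
audit("https://intranet.example.com/docs", "jdoe")           # allowed, no log entry
```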

When using prompt engineering, all of the above risk-mitigation measures still apply. In addition, organizations should take steps to protect the internal and other sensitive data used in their communication with AI on third-party infrastructure, by creating and storing engineered prompts as fixed assets.

These assets represent vetted communications with AI that have been validated as safe to use. They also form a repository of finely tuned, optimized prompts that can be reused, shared, or even resold.
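A minimal sketch of what treating prompts as fixed assets might look like, assuming a simple in-memory store and hypothetical field names; a production system would add persistence, versioning, and access control.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PromptAsset:
    """A reviewed, reusable engineered prompt treated as a fixed asset."""
    name: str
    template: str                       # engineered prompt with placeholders
    validated: bool = False             # set True once review confirms safe output
    owner: str = ""
    reviewed_on: Optional[date] = None

class PromptRepository:
    """In-memory asset store for validated prompts."""
    def __init__(self) -> None:
        self._assets: dict[str, PromptAsset] = {}

    def register(self, asset: PromptAsset) -> None:
        self._assets[asset.name] = asset

    def get_validated(self, name: str) -> str:
        """Return a prompt template only if it has passed review."""
        asset = self._assets[name]
        if not asset.validated:
            raise ValueError(f"Prompt {name!r} has not passed review yet")
        return asset.template

repo = PromptRepository()
repo.register(PromptAsset(
    name="summarize-policy",
    template="Summarize the following policy text for a general audience:\n{text}",
    validated=True,
    owner="risk-team",
    reviewed_on=date(2024, 5, 1),
))
print(repo.get_validated("summarize-policy").format(text="(document text here)"))
```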
