ChatGPT reached 100 million users in January 2023, only two months after its release. That’s a record-breaking pace for an app. Numbers at that scale indicate that generative AI — AI that creates new content such as text, images, audio and video — has arrived. But with it come new security and intellectual property (IP) issues for businesses to address.
ChatGPT is being used — and misused — by businesses and criminal enterprises alike. This has security implications for your business, employees and the intellectual property you create, own and protect.
How Is ChatGPT Being Used?
With over 100 million users, the applications for ChatGPT are legion, but some real-world business uses stand out. IT companies are applying the app to software development, debugging, chatbots, data analysis and more. Service companies are streamlining sales, improving customer service and automating routine tasks. Government and public service sectors see benefits in drafting language for laws and bills and creating content in multiple languages. And countless individuals are using the app as a personal productivity tool.
Of course, as with all innovations, thieves discover uses as well. Generative AI tools are being used in phishing attempts, making them faster to execute, harder to detect and easier to fall for. ChatGPT imitates real human conversation. That means the typos, odd phrasing and poor grammar that often alert users to phishing foul play may soon disappear. Fortunately, while generative AI can be used by criminals to create problems, cybersecurity pros can use ChatGPT to counter them.
Pitfalls of ChatGPT and Its Intellectual Property Implications
OpenAI, the developer of ChatGPT, notes the hazards of its generative AI app. The company states that “…outputs may be inaccurate, untruthful and otherwise misleading at times” and that the tool will, in its words, “hallucinate,” or simply invent outputs. Generative AI models improve as they learn from ever-larger language data sets, but inaccuracy remains common. Any output the app generates requires human fact-checking and quality control before use or distribution.
These inaccuracies can complicate your company’s IP rights. IP rights fall into four main categories: patents, trademarks, copyrights and trade secrets. If you claim IP rights to something even partially AI-generated, you need to ensure its accuracy first. To make matters muddier, one big question remains unresolved about AI-generated IP: ownership.
Who Owns ChatGPT Output? It’s Complicated.
Many issues revolve around the intersection of AI and intellectual property. A few have been decided, while others have not yet been litigated and remain unresolved. Thaler v. Vidal settled the question of patents in the U.S. In April 2023, the U.S. Supreme Court declined to hear the case, leaving in place the Federal Circuit’s ruling that an AI system cannot be an inventor and that patents can only be obtained by humans. However, Congress is now considering the issue and seeking guidance on how AI inventorship should be treated.
In March 2023, the U.S. Copyright Office issued guidance on registering copyright for works containing AI-generated material. Applicants must disclose whether a work contains AI-generated content and explain the human author’s contributions; copyright protection extends only to the portions of the work with sufficient human authorship.
What About User Input? That’s Complicated Too.
AI language models improve by training on new data, and ChatGPT can capture your chat history to help train its model. In other words, your inputs may become training data. If you input confidential or proprietary information, that could put your company’s intellectual property at risk of theft or dissemination. Samsung discovered this the hard way when its engineers accidentally leaked internal source code by uploading it to ChatGPT. In response, the company temporarily banned staff from using generative AI tools on company-owned devices.
Samsung isn’t alone. One data security service discovered and blocked requests to input confidential data into ChatGPT from 4.2% of 1.6 million workers at its client companies. The inputs included client data, source code and other proprietary and confidential information. One executive pasted corporate strategy into the app and requested the creation of a PowerPoint deck. In another incident, a doctor input a patient’s name and condition into the model to help write a letter to an insurance company. The fear is that this confidential data could resurface as output in response to the right query.
What Can Security Teams Do to Safeguard IP?
Generative AI is a fast-moving target. Keeping your employees and confidential information secure takes vigilance. Review and update your security posture regularly. For now, here are some simple things you can do to safeguard your IP.
- Provide employee training. Tell staff how these models work and that their inputs could become public, harming the company, partners, customers, patients or other employees. Also, teach staff how generative AI improves phishing and vishing schemes to increase their vigilance for those types of attacks.
- Follow relevant IP legal proceedings. Globally, there will be more laws and rulings about IP and its intersection with generative AI. Corporate legal teams need to follow court proceedings and keep security teams informed of how they might affect security guidelines and adherence to the law.
- Use the least privilege principle. Give employees the least access and authorizations required to perform their jobs. This might help cut down on unauthorized access to information that can be shared with external AI tools.
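The training and least-privilege measures above can be reinforced with a technical guardrail: screening prompts for likely confidential content before they ever reach an external AI tool. Below is a minimal sketch of that idea; the pattern names, regexes and function names are illustrative assumptions, not a complete data loss prevention solution.

```python
import re

# Illustrative patterns only -- a real DLP policy would cover far more
# (client names, source code signatures, patient data, etc.).
CONFIDENTIAL_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "confidentiality marker": re.compile(
        r"\b(confidential|proprietary|internal only)\b", re.IGNORECASE
    ),
}

def flag_confidential(prompt: str) -> list[str]:
    """Return the names of all patterns matched in a prompt
    bound for an external generative AI tool."""
    return [
        name
        for name, pattern in CONFIDENTIAL_PATTERNS.items()
        if pattern.search(prompt)
    ]

def safe_to_submit(prompt: str) -> bool:
    """Block submission when any confidential pattern matches."""
    return not flag_confidential(prompt)
```

A hook like this can sit in a browser extension, proxy or internal chat wrapper, logging or blocking flagged prompts so that an employee pasting a strategy memo or source code gets stopped before the data leaves the company.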
The easy proliferation of generative AI has democratized and accelerated its adoption. This tech-led trend will drive disruption, and questions about intellectual property protection will come with it. Learn more about how IBM helps you embrace the opportunities of generative AI while also protecting against the risks.