Innovation

How founders can avoid becoming Hansel and Gretel in the AI forest

Jessica Furr · October 8, 2024
[Image: two figures at the edge of a foggy forest path, gazing at an abandoned, vine-covered house.]

As a tech lawyer advising startups and evaluating them for venture capital firms, I've seen AI's incredible potential along with its hidden risks. Within a short period, AI has become an essential tool for founders and a priority for investors evaluating startups. However, entrepreneurs must approach AI with both enthusiasm and caution, leveraging its benefits while safeguarding against its risks.

I like to think of AI through the lens of Hansel and Gretel. Two children are lost in the woods, feeling hopeless and hungry, until they stumble upon a house made of cookies, cake and candy.

What they don't realize is there’s an evil witch who built this scrumptious house to capture and eat naïve, unsuspecting children. Similarly, while AI chatbots may seem like an incredible tool to accelerate startup growth, founders should be cautious of what lies in the AI black box.

This is the first in a series of articles that will help you navigate the legal complexities of AI. Note that laws vary across the world, so I will only focus on issues from a U.S. perspective.

[Image: two hikers on a sunlit forest path beside floating monitors of glowing code. This image was generated using Midjourney.]

AI is the new intern

If you've ever coached an intern, you know the best ones are eager and smart but don't have real-world experience and sometimes make mistakes (like getting the Starbucks order wrong).

AI systems are like that bright-eyed, bushy-tailed intern: keen to please, ready to do what you ask of them, and liable to miss the mark now and then. They have a basic level of competence, but you need to check their work. They'll build a good foundation, but you may need to polish and finish the work yourself.

If you approach AI with the same attitude as managing an intern, it can provide meaningful and useful support for your business operations.

Using AI chatbots for branding and marketing

Intellectual property (IP) rights protect you by preventing others from using your IP without your permission. Two forms you might be familiar with are copyrights and trademarks. For original works of authorship like books, music, software, website content, etc., you can seek a copyright. For brand assets like logos, phrases and names, you can apply for a trademark.

Whether AI-generated IP can be formally registered is a complicated question. In the United States, copyright protection is granted only to works of human authorship (just as patents require a human inventor). You generally cannot copyright AI-generated content because there must be human creative control over the work, and entering a prompt isn't considered enough control. For works that mix human and AI material, typically only the human-made aspects are copyrightable.

Trademarks may be easier to obtain. You could trademark an AI-generated logo as long as it is unique, distinguishes and identifies your goods or services, and is used in commerce.

The big caveat is that you must check that you aren’t accidentally infringing on someone else’s trademark. AI models scrape data and may reproduce it in some form or another, which could be a trademark violation.

One thing you can do is run a trademark search to confirm that your AI-generated brand assets don't conflict with existing marks before you try to register them.

How to avoid being a thief or victim of IP theft

Generative AI models consume anything and everything. They are trained on vast amounts of data scraped and crawled from the internet, often without the consent of the owner or creator. This includes websites, catalogs, books, movies, music and anything else they can ingest.

Because companies that build and use AI chatbots often pull data without permission, be careful both not to misappropriate content and not to become a victim of misappropriation. Make sure the content you generate doesn't plagiarize or use someone else's work without consent.

Likewise, protect your own content from being scraped by AI companies without your consent. Include language in your website's terms of use that prohibits data scraping, which gives you a contractual basis to pursue legal action against violators. Helpful technical measures include declaring scraping opt-outs in your robots.txt file and installing web application firewalls that filter out AI bots, among other methods.

If you're using an outside service provider to create content or brand assets, include rules about how AI may be used in your contract. Monitor their output to ensure it doesn't infringe others' rights, and use tools that check for content infringement, such as Grammarly or GPTRadar.
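As one concrete illustration of opting out, many (though not all) AI crawlers honor directives in a site's robots.txt file. The sketch below blocks a few widely documented AI crawler user agents; the exact names change over time, compliance is voluntary on the crawler's part, and this is not a substitute for terms-of-use language or a firewall.

```
# robots.txt - a minimal sketch of an AI-scraping opt-out.
# User-agent names are published by their operators but change often;
# verify the current list before relying on it.

User-agent: GPTBot            # OpenAI's training-data crawler
Disallow: /

User-agent: CCBot             # Common Crawl, a frequent AI training source
Disallow: /

User-agent: Google-Extended   # opt-out token for Google's AI training
Disallow: /

User-agent: anthropic-ai      # Anthropic
Disallow: /

# Everyone else (search engines, regular visitors) is unaffected.
User-agent: *
Allow: /
```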

AI-generated content offers limited ownership rights

Your right to own AI-generated content is complicated. Check the terms of use for each AI chatbot or program to see whether they assign you ownership and usage rights. Some promise to give you the copyright and/or exclusive ownership, but as discussed above, you can’t copyright AI-generated content.

This contradiction may be due to several reasons: a lack of court precedents given the newness of AI, a potential marketing ploy, or even a misunderstanding of copyright law.

While AI companies may promise that you own the generated image, what they are likely granting you is the right to use the output, and anyone else could use it too!

AI-generated content is generally treated as part of the public domain, so if exclusive use of the output matters to you, consider alternatives to AI-generated images. Some stock photo companies allow you to purchase the sole use of an image, or you can pay someone to create the images. You'll need to weigh the cost against exclusivity.

Avoid using someone’s likeness without permission

AI isn’t an excuse to break the law. Many U.S. states have laws on the books that prohibit using someone’s likeness without their permission. For example, New York protects a deceased person’s likeness for 40 years after their death!

If you generate an AI image of a celebrity endorsing your product, you could be sued or threatened with a lawsuit. Just ask ChatGPT about ScarJo.

AI-driven customer profiling can be biased

Humans built AI, so it has incorporated our biases into its decisions. Those biases can trickle down into what the AI program creates for you. Be careful when using AI to research customer profiles for marketing campaigns, and keep an eye out for such prejudice. You may want to re-prompt the model to purge anything offensive.

Other ways to mitigate these biases include using diverse, representative data sets, having humans review AI-generated output, conducting periodic audits, using bias detection tools and conducting user testing.
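To make "periodic audits" a little more concrete, here is a minimal sketch (not any particular vendor's tool) of one common check: comparing how often an AI-assisted profiling step selects customers from different groups, using the four-fifths rule of thumb. The decision data and the group names below are hypothetical placeholders for your own logs.

```python
# A minimal bias-audit sketch: compare selection rates across groups
# for an AI-assisted targeting decision. The data below is hypothetical.
from collections import defaultdict

# (group, was_selected_by_model) pairs you would pull from your own records
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, chosen in decisions:
    totals[group] += 1
    selected[group] += chosen

rates = {g: selected[g] / totals[g] for g in totals}
baseline = max(rates.values())

for group, rate in rates.items():
    ratio = rate / baseline
    # The "four-fifths rule" flags ratios below 0.8 as worth investigating.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio vs. best group {ratio:.2f} ({flag})")
```

A failed check doesn't prove discrimination on its own, but it tells you where a human should look closer before a campaign goes out.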

Additionally, many states are becoming more proactive about covering AI in their privacy laws. Several have put limits on profiling, that is, processing personal information to assess an individual's characteristics, under rules governing "automated decision-making" or "profiling" technologies.

Some states require notice of such personal data processing, the ability to opt out, and consumer access to AI-driven decision-making processes. AI chatbots used for customer service can collect personal information and behavioral data without a customer's knowledge or permission, a risk you may not even be aware of, depending on the law where you do business.

Look out for hallucinations leading to deceptive advertising

Besides being vigilant about not using someone else’s IP without permission, make sure the output is accurate and truthful. AI models can sometimes “hallucinate,” or make up content, due to biased or inaccurate training data.

Making incorrect or misleading claims to consumers may lead to liability for false or deceptive advertising.

Always double-check the information before you publish it. It’s also good practice to label AI-generated images as synthetic using both visible and invisible markers, for example in the image itself, the file name, the caption and embedded metadata. If you’re unsure whether a service provider created an image with AI, tools like Nuanced can detect whether an image is real.
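As an illustration of visible and invisible markers, the sketch below uses the Pillow imaging library to stamp a disclosure onto an image and embed the same text in the PNG's metadata and file name. The file paths and label wording are placeholders, and industry standards such as C2PA content credentials go well beyond this simple approach.

```python
# A minimal sketch of labeling an AI-generated image with Pillow.
# File paths and label text are hypothetical placeholders.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

LABEL = "AI-generated image"

img = Image.open("hero.png").convert("RGB")  # hypothetical input file

# Visible marker: draw the disclosure in the bottom-left corner.
draw = ImageDraw.Draw(img)
draw.text((10, img.height - 20), LABEL, fill="white")

# Invisible marker: store the disclosure in the PNG's metadata.
metadata = PngInfo()
metadata.add_text("Disclosure", LABEL)

# Reflect the disclosure in the file name as well.
img.save("hero_ai-generated.png", pnginfo=metadata)
```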

[Image: a whimsical candy house glowing in a dark forest, lined with candy canes, lollipops and gingerbread figures. This image was generated using ChatGPT.]

When it comes to AI, be like Gretel: thoughtful, proactive and strategic

Gretel eventually outsmarts the witch (death by oven), saving her brother and emerging from the forest with newfound wealth. While a bit of a Grimm tale, pun intended, there is much you can learn from Gretel. AI can be a powerful tool, but it’s not a magic solution. Just as Gretel used her wits to navigate dangers, entrepreneurs must apply their human intuition and strategic thinking to reap the benefits of AI while safeguarding against the risks.

Disclaimer: This article is for general information purposes only. It does not constitute legal advice. This article reflects the current opinions of the author. The opinions reflected herein are subject to change without being updated.

Jessica Furr

Currently serving as Associate General Counsel at crypto venture capital firm Dragonfly, Jessica brings a wealth of knowledge on the legal landscape of tech-related investments. Prior to Dragonfly, she was an Associate at Gunderson Dettmer, where she represented a wide range of emerging tech companies, VCs and entrepreneurs, handling everything from commercial contracts to mergers and acquisitions. Jessica holds a law degree from New York University School of Law and an MBA from Northwestern University - Kellogg School of Management. Her expertise encompasses a broad spectrum of legal and corporate issues, making her a valuable voice in any discussion about tech law.