Journal 2
Peter Fodor, July 29
As a non-native English speaker, I started using AI to check my grammar when I moved to an international work environment. Initially, I was so nervous that I paid for a ChatGPT account to double-check every sentence. Curiosity eventually got the best of me and I began using GPT more extensively, making it my everyday tool.
As a Creative Strategist, I currently use AI to search for information in languages other than English, write summaries, generate reference images, role-play with personas of the target audience, and so much more.
But when that tab stays open for your entire workday, you inevitably run into the walls that stop you from getting the results you want.
Here are three of my main learnings about AI’s capabilities and limitations.
1. Using AI for structured content can be a challenge.
Time and time again, I’m reminded that you cannot assign AI the task of using the structure of one text and filling it with the content of another.
Let’s say you have a well-organized presentation about flowers and want to create a similar one about animals using the flower presentation as a template and a Wikipedia article for content:
Title: Types of Flowers
Intro: Flowers are a crucial part of our ecosystem.
Section 1: Characteristics of Roses
Section 2: Characteristics of Tulips
Conclusion: Flowers are essential for biodiversity.
If you use this structure to create a presentation about animals without removing the flower content, you might end up with nonsensical sentences like “Characteristics of Roses: Lions are known for their majestic appearance.”
If you want AI to follow a specific structure, remove any existing content from it first. The model cannot treat meaningful sentences as a neutral template for other sentences unless they are on the same topic.
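If you ever script this through the API instead of the chat window, the same rule applies. Here is a minimal Python sketch, assuming the OpenAI chat API; the placeholder labels and example source text are hypothetical, and the key point is that the template sent to the model carries structure only, with no flower content left in it:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A content-free template: the structure of the flower deck, with every
# flower sentence replaced by a placeholder.
template = """Title: Types of <topic>
Intro: <one-sentence introduction to the topic>
Section 1: Characteristics of <first example>
Section 2: Characteristics of <second example>
Conclusion: <one-sentence takeaway about why the topic matters>"""

source_article = "Lions are large carnivorous cats native to Africa..."  # e.g. Wikipedia text

response = client.chat.completions.create(
    model="gpt-4o",  # any recent chat model
    messages=[
        {"role": "system", "content": "Fill in the template using only the source text."},
        {"role": "user", "content": f"Template:\n{template}\n\nSource:\n{source_article}"},
    ],
)
print(response.choices[0].message.content)
```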
2. Don’t underestimate the importance of task-specific chats.
When using an LLM, each type of task should have its own chat: one for briefs, one for captions, one for creative content, etc.
Once the AI learns a format, it will continue to use it. You cannot interact with it like a person, asking questions along the way; instead, treat it like an automation tool with each chat dedicated to a single task.
For example, if you start a chat for writing captions and then switch to tasking the AI with creating a creative brief, it will likely continue generating captions because it remembers the format from earlier in the chat. Separate chats for each task keep the output clear and accurate.
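If you automate any of this through the API, you can enforce the same separation in code by keeping one independent message history per task. A minimal sketch, assuming the OpenAI chat API and hypothetical task names:

```python
from openai import OpenAI

client = OpenAI()

# One independent message history per task, so a caption format never
# bleeds into a creative brief and vice versa.
chats = {
    "captions": [{"role": "system", "content": "You write short ad captions."}],
    "briefs": [{"role": "system", "content": "You write structured creative briefs."}],
}

def ask(task: str, prompt: str) -> str:
    """Send a prompt inside the chat dedicated to a single task."""
    history = chats[task]
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("captions", "Caption for a puzzle game launch video."))
print(ask("briefs", "Brief for a summer campaign aimed at casual players."))
```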
3. You’ll probably run into inconsistent guidelines and copyright issues.
For sensitive or personal topics, the AI tool may decline to assist due to certain guidelines. This includes anything related to intimacy or harm, even in the context of ads. While there are ways to work around this, it can be frustrating.
For example, if you ask AI to create an ad script that includes scenarios of someone getting injured (even in a safe context like a safety demonstration), the AI might refuse, saying it’s against the guidelines.
AI’s stance on copyright can also be inconsistent. For example, although “Alice in Wonderland” is in the public domain, DALL-E in ChatGPT won’t generate images related to it, citing potential copyright issues. Conversely, DALL-E on Bing will create such images, but with lower quality and a limit of five messages per chat.
To add to the confusion, if you ask DALL-E in ChatGPT for an image of Sherlock Holmes, it might decline and prompt you to describe the character instead, so you need a workaround like asking for “a detective in a hat with a pipe.” In contrast, DALL-E on Bing will create an image of Sherlock Holmes directly and sometimes even depict Benedict Cumberbatch without any problems.
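For what it’s worth, the same descriptive workaround can be scripted. Here is a minimal sketch using the OpenAI images API, where the prompt describes the character instead of naming him; the exact wording is just an illustrative assumption:

```python
from openai import OpenAI

client = OpenAI()

# Naming the character directly may be refused, so describe the scene instead.
prompt = (
    "A Victorian detective in a deerstalker hat smoking a pipe, "
    "standing on a foggy London street, cinematic illustration"
)

result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
print(result.data[0].url)  # link to the generated image
```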
These examples illustrate the limitations and inconsistencies in using AI for structured tasks, sensitive topics, and content creation involving copyrighted materials.
In our recent webinar, A Reality Check on AI, Brian Bowman and I asked the audience to share their experiences where AI has fallen short. Here are a few of the responses we received:
“Image and animation-wise, it is really easy to recognize at this point. It all looks the same.”
“3D modeling for use in Maya or Max. It’s always the same treasure chest, no matter the prompt; just the scale and straps change.”
“Just like with the first fake ads, the general public quickly learned to read them and understand that they are looking at a generated image, and I see a lot of hate towards them.”
“ChatGPT is increasingly sounding robotic, and it takes a lot of work to make it sound natural or human.”