Adobe's new AI aims to become the invisible helping hand for marketing teams and creatives

Bob O'Donnell

Forward-looking: Using generative AI to create large chunks of text is certainly interesting, but there's something particularly compelling about entirely new images generated from a simple text prompt. It's not surprising then that GenAI graphics tools have been a source of fascination for the last year or so. What's interesting now is that just as the early experiments with text-based GenAI tools are quickly evolving from interesting exercises to essential real-world applications, so too is the world of generative graphics.

At this year's Adobe Summit, the graphics and imaging giant announced the evolution of its Firefly image creation tool into a suite of business-focused applications. From content production to custom models and integrated AI assistants, the new offerings are intended to give businesses different ways to integrate generated graphics into their environments.

A new application called GenStudio pulls together two sets of capabilities that Adobe has been developing over the years, its widely recognized image editing tools and its ad campaign management and reporting, into a single application. GenStudio can leverage the image creation and editing capabilities of an enhanced version of Firefly and then track how effectively those assets perform against the metrics an organization wants to measure.

On the content creation side, Adobe is adding the ability for companies to train the generation tool with as few as 20 images. These custom models will then let companies create new content and marketing material featuring their signature elements and unique graphic style, which could be a huge time saver for existing staff.

As part of the new Firefly Services, Adobe has created about 20 APIs that organizations can tap into to build these capabilities into their own tools and workflows.
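
Adobe didn't detail every endpoint, but conceptually these services work like other REST-style image-generation APIs: authenticate, send a prompt (and optionally a custom model identifier), and get back generated assets. The Python sketch below illustrates that general pattern only; the endpoint URL, header names, and JSON fields are placeholders and assumptions, not Adobe's documented Firefly Services API.

# Minimal sketch of calling a hypothetical Firefly-style text-to-image REST API.
# The endpoint URL, header names, and JSON fields below are illustrative
# placeholders, not Adobe's documented Firefly Services API.
import requests

API_URL = "https://example.adobe.io/v1/images/generate"  # placeholder endpoint
ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"  # obtained through your organization's auth flow
CLIENT_ID = "YOUR_CLIENT_ID"

payload = {
    "prompt": "Summer campaign hero image in our brand's illustration style",
    "numVariations": 2,                 # hypothetical: ask for two candidate images
    "customModelId": "brand-style-v1",  # hypothetical: a custom-trained brand model
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "x-api-key": CLIENT_ID,
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()

# A typical response would reference the generated assets, e.g. by URL.
for image in response.json().get("outputs", []):
    print(image.get("url"))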

These custom models also help overcome a potential issue that has kept many organizations from using GenAI-powered imaging tools: copyright. In fact, a study by TECHnalysis Research found that copyright-related concerns were the number one deterrent, cited by 70% of the respondents who hadn't done much with GenAI.

Many of the other web-based GenAI tools essentially ignore brand copyrights and potentially make it easy to illegally use copyrighted logos, images, and other materials, a problem that recent tests of most of the other major GenAI image tools, such as Midjourney, DALL-E, and Stable Diffusion, have clearly shown. Adobe, on the other hand, has been a strong advocate for honoring copyrighted material within Firefly and also helped start the Content Authenticity Initiative (CAI) to ensure that copyrighted materials weren't being used to train image-based GenAI models.

Another new capability is Adobe Experience Platform AI Assistants. Instead of having to figure out exactly how to do something in a tool like Photoshop, or how to speed up the creation process in something like Adobe Express, you can tell these assistants what you want done and they will do it. The proof will be in real-world testing of these capabilities.

On the content management and tracking side, GenStudio will incorporate new tools to see how well any generated content is performing in its targeted markets. This is a critical capability: it's great to make content creation easier, but if that content doesn't work effectively in the real world, the effort is all for naught. GenStudio also integrates asset management features, workflow and project management capabilities, reporting tools, and more.

Finally, Firefly has added an intriguing new capability called Structure Reference. Previously, Firefly could use a reference image to learn and "inherit" a style to apply when generating an image. The new feature instead lets you generate an image that is laid out similarly to an example image you provide.

So, for example, if you have an image with an object lying on its side or a person in silhouette, you can make sure the generated graphic will have those same basic structures. It's a classic case of an image being worth a thousand words because using only text-based prompts to get a similarly structured and laid out image has typically been little more than an exercise in frustration.
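
If structure reference is also exposed through the Firefly Services APIs, supplying it would presumably amount to one more field alongside the text prompt. The fragment below extends the earlier sketch to show that idea; the field names and values are hypothetical illustrations, not Adobe's documented parameters.

# Hypothetical extension of the earlier request payload: alongside the text
# prompt, pass a reference image whose basic layout the output should follow.
# The "structureReference" field and its sub-fields are illustrative
# assumptions, not Adobe's documented API.
payload = {
    "prompt": "Product bottle lying on its side on a wooden table at sunset",
    "structureReference": {
        "imageId": "uploaded-reference-asset-id",  # e.g. from a prior upload call
        "strength": 80,  # hypothetical: how closely to match the layout (0-100)
    },
}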

As we've seen from other tech innovators, 2024 is proving to be the year when GenAI capabilities move from fantasy to real-world production. Adobe's new Firefly offerings and the new GenStudio application are another great example of this phenomenon, highlighting that the world of generated graphics is also entering this more practical, productive phase.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter.
