Early Adopters & AI: An Unexpected Tale

by Melissa Goldman

Melissa Goldman is a User Researcher at Lightricks


In 2022, generative AI made its way into mainstream media. Starting with the release of DALL-E 2 in April, the world was introduced to realistic AI-generated images. Perhaps best known as a text-to-image feature, the technology behind DALL-E 2 allows anyone to describe what they’d like to see and, with the press of a button, produce an entirely new and unique image.

Not long after DALL-E 2, ChatGPT came along and stole the show in November 2022. Like DALL-E 2, ChatGPT is a generative AI technology, one that works as an all-in-one chatbot and search engine of sorts. What makes ChatGPT so remarkable is its ability to remember previous prompts, allowing you to continue a conversation without re-establishing context every time. ChatGPT can be used in a variety of ways, including writing articles, poetry, code, and music.

Thanks to these innovative breakthroughs, AI seems to be everywhere. Of course, this means that Lightricks couldn’t pass up the chance to prove ourselves as an industry leader and incorporate generative AI into our suite of photo and video editing apps. We started with Text to Image, our take on the DALL-E 2 technology. This feature allows users to describe anything, in as little or as much detail as they wish, and generate an image based on their words. It’s such a cool feature because it challenges people to be as creative as they want and enables those who may not consider themselves ‘artists’ to generate a one-of-a-kind piece of art. After Text to Image came Sketch to Image, which is exactly what it sounds like: users can either do a rough sketch within the app using a finger or a stylus, or upload a picture of something they’ve previously sketched and go from there! And of course, the fan favorite, AI Avatars, which lets anyone upload 10-20 selfies and generate a bundle of AI-generated images that look just like them. Recently, we’ve also launched AI Avatars for pets (just cats and dogs at the moment – sorry, hamster people), and we’re working on a few more fun use cases that should be coming out soon.
Because we’ve been in a mad dash to release these features and capitalize on the AI trend, we haven’t really taken the time to explore potential use cases for our AI technology. That’s where my team, User Research, comes into play. We reached out to 21 creators who have already started using generative AI in their creative process to better understand how we can further develop and implement generative AI features to bring even more value to all of our creators, not just those inclined to test out AI.

The creators we interviewed are the early adopters of generative AI. More generally, early adopters are the people who rush to get the latest iPhone, try out the newest development in augmented reality, or, in our case, explore the various use cases of tech’s newest buzzworthy advancement. As the name suggests, they are the subset of people who immediately start using a new technology or product. Alongside the early adopters, the classic adoption curve also includes the innovators, early majority, late majority, and laggards. By talking to the early adopters of generative AI, we’re hoping to improve the experience for the groups who may be more hesitant to try out new technologies.

While I personally can’t speak to the patience of early adopters as a whole, we can assume that those we’ve identified as early adopters generally have a higher tolerance for risk and expect that risk to work in their favor. Now, none of this is super interesting or revolutionary, so what’s the big deal?

After completing our qualitative research, it seems that when it comes to AI, users have more patience for less-than-ideal results than with other technologies we’ve seen early adopters run toward in the past. Basically, we’ve seen that creators will use, and continue to use, generative AI even if the results don’t match their expectations. In other words, they’re taking a risk that isn’t exactly paying off, yet somehow they’re still okay with it.
Think about it. You invest in a new computer, let’s say the world’s first laptop, and expect to have the world at your fingertips. If you don’t see a massive improvement over your desktop computer, chances are your patience will wear out pretty quickly. Interestingly enough, that’s not the case for generative AI. We’ve found that users have a lot more patience for trial and error and poor results because they’re operating on the understanding that this technology will get better over time. Is this phenomenon specific to AI, or could this patience be learned from past lessons, where new technologies were over-hyped?

Regardless, one thing is certainly clear: creators of all levels are using generative AI to enhance their creative process. What’s most remarkable is that these conclusions were drawn from AI’s most basic, initial iterations. Despite its limitations and poor usability, creators still believe that AI adds value to their process.
This is great initial feedback, and it leads us to our next challenge: how can we improve AI and make it more accessible for everyone, especially those who have less faith in, or patience for, new trends?
