
When AI Makes Art, Humans Supply the Creative Spark

New products often come with disclaimers, but in April the artificial intelligence company OpenAI issued an unusual warning when it announced a new service called DALL-E 2. The system can generate vivid and realistic photos, paintings, and illustrations in response to a line of text or an uploaded image. One part of OpenAI’s release notes cautioned that “the model may increase the efficiency of performing some tasks like photo editing or production of stock photography, which could displace jobs of designers, photographers, models, editors, and artists.”

So far, that hasn’t come to pass. People who have been granted early access to DALL-E have found that it elevates human creativity rather than making it obsolete. Benjamin Von Wong, an artist who creates installations and sculptures, says it has, in fact, increased his productivity. “DALL-E is a wonderful tool for someone like me who cannot draw,” says Von Wong, who uses the tool to explore ideas that could later be built into physical works of art. “Rather than needing to sketch out concepts, I can simply generate them through different prompt phrases.”

DALL-E is one of a raft of new AI tools for generating images. Aza Raskin, an artist and designer, used open source software to generate a music video for the musician Zia Cora that was shown at the TED conference in April. The project helped convince him that image-generating AI will lead to an explosion of creativity that permanently changes humanity’s visual environment. “Anything that can have a visual will have one,” he says, potentially upending people’s intuition for judging how much time or effort was expended on a project. “Suddenly we have this tool that makes what was hard to imagine and visualize easy to make exist.”

It’s too early to know how such a transformative technology will ultimately affect illustrators, photographers, and other creatives. But at this point, the idea that artistic AI tools will displace workers from creative jobs—in the way that people sometimes describe robots replacing factory workers—appears to be an oversimplification. Even for industrial robots, which perform relatively simple, repetitive tasks, the evidence is mixed. Some economic studies suggest that the adoption of robots by companies results in lower employment and lower wages overall, but there is also evidence that in certain settings robots increase job opportunities.

“There’s way too much doom and gloom in the art community,” where some people too readily assume machines can replace human creative work, says Noah Bradley, a digital artist who posts YouTube tutorials on using AI tools. Bradley believes the impact of software like DALL-E will be similar to the effect of smartphones on photography—making visual creativity more accessible without replacing professionals. Creating powerful, usable images still requires a lot of careful tweaking after something is first generated, he says. “There’s a lot of complexity to creating art that machines are not ready for yet.”

The first version of DALL-E, announced in January 2021, was a landmark for computer-generated art. It showed that machine-learning algorithms fed many thousands of images as training data could reproduce and recombine features from those existing images in novel, coherent, and aesthetically pleasing ways.

A year later, DALL-E 2 markedly improved the quality of images that can be produced. It can also reliably adopt different artistic styles, and can produce images that are more photorealistic. Want a studio-quality photograph of a Shiba Inu dog wearing a beret and black turtleneck? Just type that in and wait. A steampunk illustration of a castle in the clouds? No problem. Or a 19th-century-style painting of a group of women signing the Declaration of Independence? Great idea!

Many people experimenting with DALL-E and similar AI tools describe them less as a replacement than as a new kind of artistic assistant or muse. “It’s like talking to an alien entity,” says David R Munson, a photographer, writer, and English teacher in Japan who has been using DALL-E for the past two weeks. “It is trying to understand a text prompt and communicate back to us what it sees, and it just kind of squirms in this amazing way and produces things that you really don’t expect.”


