
Louis Bouchard (@whatsai)

I explain Artificial Intelligence terms and news to non-experts.

Say goodbye to complex GAN and transformer architectures for image generation. This new method by Chenlin Meng et al. from Stanford University and Carnegie Mellon University can generate new images from any user-based input.

Even people like me with zero artistic skills can now generate beautiful images or modifications from quick sketches. It may sound strange at first, but simply adding noise to the input smooths out undesirable artifacts, such as rough user edits, while preserving the overall structure of the image.

The image then looks like almost complete noise, yet the rough shapes of the image, the strokes, and the dominant colors are still visible. This noisy input is then sent to the model, which reverses the process and generates a new version of the image that follows this overall structure.

In other words, the result follows the overall shapes and colors of the input, but loosely enough that the model can create new features, such as turning a rough sketch into a realistic-looking beard. Learn more in the video and watch the amazing results!
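To make the perturb-then-denoise idea concrete, here is a minimal toy sketch in NumPy. It is not the authors' implementation: the real SDEdit reverses a stochastic differential equation with a learned score-based model, while here a simple local-averaging "denoiser" stands in for that model, and the noise level `t0` and step count are illustrative choices only.

```python
import numpy as np

def denoise_step(x):
    # Placeholder "denoiser": 5-point local averaging.
    # A real SDEdit step would use a learned score model instead.
    padded = np.pad(x, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] + x) / 5.0

def sdedit_sketch(guide, t0=0.5, n_steps=10, seed=0):
    """Toy illustration of the SDEdit idea: perturb a guide image
    (e.g., a user sketch) with Gaussian noise up to an intermediate
    level t0, then progressively denoise it back toward a clean image
    that keeps the guide's overall structure."""
    rng = np.random.default_rng(seed)
    # Forward step: blend the guide with noise (t0=0: no noise, t0=1: pure noise).
    x = np.sqrt(1 - t0) * guide + np.sqrt(t0) * rng.standard_normal(guide.shape)
    # Reverse steps: stand-in for integrating the reverse SDE with a score model.
    for _ in range(n_steps):
        x = denoise_step(x)
    return x

# A crude 8x8 "sketch": a bright square on a dark background.
guide = np.zeros((8, 8))
guide[2:6, 2:6] = 1.0
result = sdedit_sketch(guide)
```

The key design point this mirrors is the choice of `t0`: enough noise to wash out the unrealistic edits, but not so much that the overall shapes and colors of the guide are lost.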

Watch the video

References:

►Read the full article: https://www.louisbouchard.ai/image-synthesis-from-sketches/
►My Newsletter (A new AI application explained weekly to your emails!): https://www.louisbouchard.ai/newsletter/
►SDEdit, Chenlin Meng et al., 2021, https://arxiv.org/pdf/2108.01073.pdf
►Project link: https://chenlin9.github.io/SDEdit/
►Code: https://github.com/ermongroup/SDEdit
►Demo: https://colab.research.google.com/drive/1KkLS53PndXKQpPlS1iK-k1nRQYmlb4aO?usp=sharing

Video Transcript

00:00 say goodbye to complex GAN and transformer architectures for image generation
00:04 this new method by Chenlin Meng et al. from Stanford University and Carnegie Mellon University can generate new images…

Continue reading: https://hackernoon.com/sdedit-helps-regular-people-do-complex-graphic-design-tasks-61aq3738?source=rss

Source: hackernoon.com