The AI community has a new obsession. It’s called ‘generative artificial intelligence’, and it refers to the idea of having computers take over creative tasks such as writing, filmmaking, and graphic design. 
2022 will definitely be remembered as the year when the renaissance of AI art began. AI art generators are paving a new path towards the freedom of artistic expression. In an extremely short period, they’ve allowed everybody with internet access and a keyboard to generate incredible art from simple text prompts.
Considering the current state of things, it’s too early to tell whether this new wave of apps will end up costing artists and illustrators their jobs. What seems clear though is that these tools are already being used in creative industries.
Let’s look at how AI reshaped the art landscape in 2022.
In May, Google showcased how ‘Imagen’, a text-to-image diffusion model, can create images from written descriptions of a scene.
Imagen still needs to improve at generating art that depicts people, since it mostly produces stereotypical results. For example, Google states that the model has an overall bias towards generating images of people with lighter skin tones.
One of the notable names in the AI community, Midjourney, opened for beta testing in July this year. Testers initially received 25 free prompts to try out the new algorithm. Under this plan, the images created are not uploaded to a public gallery; however, Midjourney plans to make this feature available to private users soon.
In August, Emad Mostaque, founder of Stability AI (the company behind Stable Diffusion), announced that the code for Stable Diffusion would be open-sourced. With this announcement, what had been speculated to be just another text-to-image generator quickly earned an outstanding reputation as a game changer.
Meta challenged the monotony of text-to-image generation with ‘Make-A-Scene’, which accepts freehand sketches in addition to text prompts to create its visuals.
With this model, Meta altered the dynamics of AI text-to-image generation. However, it remains to be seen whether Meta’s improved model can hold its own against other text-to-image models.
Google’s ‘DreamBooth’ takes a moderately different approach than other text-to-image tools by providing more control over the subject image and guiding the diffusion model with text-based inputs.
The key idea behind the model is to let users create photorealistic renditions of a desired subject and bind it with the text-to-image diffusion model. As a result, the tool proves effective at synthesising that subject in different contexts.
Denis Shilo, CEO of Facel, developed ‘Phraser’, the world’s first-ever application that employs machine learning to write prompts for neural networks. 
The main idea behind the tool is to promote smart search. Phraser’s main features are simple steps like choosing a style, selecting the content type, picking colours, and adjusting the camera settings.
In April 2022, OpenAI caused an uproar with the launch of its latest model, ‘DALL·E 2’. Later, in September, the company announced that the waiting list had been removed.
Open access seems fair: artists can now experiment with this new resource, test novel creative ideas, and potentially speed up their workflows.
Jason Allen, a game designer in Colorado, spent over 80 hours working on his piece for the Colorado State Fair’s digital arts competition, and won the $300 first prize.
Allen started receiving backlash online when he revealed that he’d created his art using Midjourney. Even though he made that clear to officials while submitting his artwork, called Théâtre D’opéra Spatial, his blue ribbon has sparked a fiery debate about what constitutes ‘real art’.
We all knew it was only a matter of time before text-to-3D technology arrived.
Released in September, Google’s ‘DreamFusion’ uses a pretrained 2D diffusion model to generate diverse 3D models, extending advances in text-to-image synthesis into 3D.
Currently, only some models are downloadable. These samples aim to push the boundaries of AI creations, and it is fascinating to see how the system turns a text prompt into a finished 3D model.
Recently, researchers at Meta AI took a leap in generating art through prompts by announcing ‘Make-A-Video’, a new AI system that turns text prompts into brief, soundless video clips.
Apart from text-to-video generation, the tool can add motion to static images and fill in the content between two images. Furthermore, one can also present a video and ‘Make-A-Video’ will generate different variations. Head to Make-A-Video’s web page to see more of what it can do.
© Analytics India Magazine Pvt Ltd 2022