Midjourney, an AI image generator, is ending its free trials, citing “extraordinary demand and trial abuse.” CEO David Holz said on Discord that new safeguards haven’t been “sufficient” to prevent misuse during trial periods. Using the tool will now cost at least $10 per month. The decision comes after people created high-profile deepfakes with the tool, and it is part of a broader effort to keep the technology from being misused.
Midjourney has recently been in the spotlight after users created fake images of Donald Trump and Pope Francis. Although the images were quickly identified as fakes, they raised concerns that bad actors could use Midjourney and similar generators to deceive people and spread misinformation. That prospect puts pressure on the companies behind tools like Midjourney and OpenAI’s DALL-E to prevent malicious use of their technology.
Midjourney has struggled to set content policies as AI-generated images grow more realistic. In 2022, founder Holz justified a ban on images of Chinese leader Xi Jinping by citing a desire to “minimize drama” and maintain access in China. During a Wednesday chat with users, Holz said Midjourney is working to improve AI moderation to better screen for abuse. His comments underscore how difficult it is to set content policies in an increasingly AI-driven world, and how platforms must balance user freedom against protection from abuse.
OpenAI and Stability AI have taken different approaches to preventing misuse of their products. OpenAI enforces strict rules that forbid images of ongoing political events, conspiracy theories, politicians, hate, sexuality, and violence. Stability AI, by contrast, has relatively loose guidelines that mainly forbid users from copying artists’ styles or making not-safe-for-work pictures, leaving users otherwise free to create what they want.
AI image generation has become increasingly popular, but it has raised concerns, chiefly around misleading content and stolen images. Some companies are embracing AI art in their products, while others hold back over potential legal issues. The technology can be a useful tool for businesses, but adopters should weigh those risks and take steps to ensure the images they use are neither stolen nor misleading.