Getty has prohibited AI-generated images as privacy and ownership concerns grow



Getty Images has banned the upload and sale of AI-generated images, a move intended to shield the company from legal trouble in what is essentially the Wild West of art production today.





Getty Images CEO Craig Peters says, "There are serious concerns about the copyright of the outputs of these models, as well as unresolved rights issues with the images, the metadata of the picture, and the people in the photos."



With the emergence of AI art tools like DALL-E, Stable Diffusion, and Midjourney, there has been a surge in the number of AI-generated images on the web.
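For a sense of how low the barrier to creating these images has become, here is a minimal sketch of text-to-image generation using the open-source Hugging Face diffusers library. The model ID and prompt are illustrative choices of my own, not anything drawn from the article, and running it requires the torch and diffusers packages plus a capable GPU.

```python
# Minimal text-to-image sketch with Stable Diffusion via the open-source
# `diffusers` library. The model ID and prompt below are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Download pretrained weights (several gigabytes on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# One plain-English prompt in, one synthesized image out.
image = pipe("a stock photo of a sunlit office meeting").images[0]
image.save("generated.png")
```

A handful of lines like these can produce an image that, at a glance, passes for stock photography, which is precisely the kind of content Getty now refuses to host.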


We've mostly seen these images come and go as amusing gaffes on Twitter and other social media sites. But as these AI models become more sophisticated and convincing, their output will be put to far wider use.


Getty, one of the largest suppliers of curated image libraries, wants to stay out of that market.



Getty's CEO declined to say whether the firm has faced legal challenges over AI-generated images, though he did note that the company had "very little" AI-generated content in its database.


Every AI image generator has to be trained, and training one effectively requires a vast collection of images. According to The Verge, Stable Diffusion is trained on images scraped from the internet, drawn from a dataset built by the German nonprofit LAION.


The Stable Diffusion website says this dataset was compiled in accordance with German law, but also notes that copyright in images made with the software "will vary from jurisdiction to jurisdiction."
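To make that concrete, the sketch below shows what inspecting LAION-style training metadata can look like. LAION publishes its datasets as parquet files of image URLs and alt-text captions rather than the images themselves; the file name here is hypothetical, and the "URL" and "TEXT" column names are assumptions based on the published LAION-400M metadata layout.

```python
# A hedged sketch of inspecting LAION-style training metadata with pandas.
# The shard file name is hypothetical; "URL" and "TEXT" column names are
# assumed from the published LAION-400M metadata layout.
import pandas as pd

# Load one metadata shard (hypothetical local file).
shard = pd.read_parquet("laion400m-part-00000.parquet")

# Each row points at an image somewhere on the public web, paired with
# the alt text that was scraped alongside it.
print(shard[["URL", "TEXT"]].head())

# Crude example: count captions in this shard that mention a given name.
matches = shard[shard["TEXT"].str.contains("Getty", case=False, na=False)]
print(f"{len(matches)} captions mention 'Getty' in this shard")
```

Every row is a pointer to someone's image on the public web, which is exactly where the consent questions that follow come from.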


With copyright varying across jurisdictions, it will become increasingly difficult to tell whether a piece of AI art is based on a protected image.

Other concerns have been raised about image datasets and scraping techniques after a California-based artist discovered confidential medical photographs, taken by their doctor, within the LAION-5B image collection.


The artist, known as Lapine, discovered their photographs had been scraped by using a website called "Have I Been Trained?".


Ars Technica authenticated the photographs in a conversation with Lapine, who has withheld their real name out of privacy concerns.


Evidently, the supposedly confidential medical records held by the artist's doctor stopped being kept private after the doctor's death in 2018, and it is deeply concerning to consider how they ended up in a highly public dataset without consent.


Ars reports that while searching for Lapine's photographs, it uncovered further images that may have been obtained through similar techniques.



When asked about the dataset, the CEO of Stability AI, the company behind Stable Diffusion, said he couldn't speak for LAION. He noted that it might be possible to "un-train" Stable Diffusion so that specific images are removed from the model, and that, as it stands today, the model's output is not an exact copy of any image in its training set.


Privacy and legal questions about the production and sharing of AI-generated images are only going to multiply in the months and years ahead.


What was once an entertaining, and occasionally even useful, tool is very likely to become a thorny issue for legislators, rights holders, and ordinary individuals.


In the meantime, I don't blame traditional picture libraries for taking a step back from the technology.
