Everything you need to know about the AI-free social media platform for artists.
Key points:
Cara's Rapid Growth: The app gained 600,000 users in a week
Artists Leaving Instagram: The controversy around Instagram using images to train AI led many artists to seek an alternative
Cara's Features: The app is designed specifically for artists and offers a 'Portfolio' feature. Users can tag fields, mediums, project types, categories, and software used to create their work
While Cara has grown quickly, it is still tiny compared to Instagram's massive user base of two billion.
Glaze Integration: Cara is working on integrating Glaze directly into the app to provide users with an easy way to protect their work from being used by AI
Artists are mostly not going to figure out the fediverse. There really needs to be some kind of way of accessing it that is more layman-friendly if we ever want it to be adopted by non-nerds.
Artists are perfectly able to use the fediverse, that is not what is stopping them.
They don't come because they need to be where their fans are. That is why Cara will only be a splash: their niche is artists who place more value on the anti-AI slant than on meeting their audience where it lives. By definition that is not conducive to a lot of organic growth.
It’s really not that complicated and with shit like Threads, companies are introducing the concept to the masses while the enshittification of Instagram and the like will force people to look for alternatives.
We need to welcome people with open arms and not push them away the moment someone has a question about how federation works.
Idk, I hear Misskey (ActivityPub microblogging software, compatible with but distinct from Mastodon) is really big in Japan and used by lots of artists. Lots of Japanese users on Bluesky as well.
Cara is popular because of its anti-AI stance. They have a detector to keep AI images off the platform. Pixelfed allows them and also lacks active users who aren't artists.
For me (as a hobby photographer), the main problem with all the alternatives is the lack of models on these platforms. When I'm looking for models, I find them on Instagram and no other platform. As with WhatsApp, the majority of "normal" people have decided to use that, so if I tell them to contact me on Signal, they shy away from it (and still I refuse to use it as much as possible).
So looking at Signal: it's free and very, very close to WhatsApp, and yet people still don't want to use it. Getting them to use Pixelfed would be much, much harder.
Does it seem odd... This is a crowd that is all about "hands off muh property". And yet they see nothing suspicious about someone giving them a free service.
Ok, the lady behind Cara just WON a f-ing copyright lawsuit against some dick that stole her artwork. I'm 100% sure the wording is there so that if you *think* about stealing from Cara, she will come after your ass with both guns blazing.
Regardless, their terms of service not only let Cara sell prints and your artwork to third parties, but also let them sell your artwork for AI training if they wanted to.
Instagram, for all its faults, specifically says that they don't own your artwork and only get a license to show it.
I don't really care what she won, people tend to cave really fast if given proper financial incentive.
Why did you specifically not put this part in bold: "and are the property of Cara"? Clearly you saw it, since you took the time to avoid putting stars around it.
Pixelfed looks like they are doing a huge push to get up to speed. It has been an immature app/platform for a long time and slow to get the features that people need from photo-sharing social media.
According to their Mastodon account, they are working on better AI management features and launching an app that will make it a genuinely positive experience.
I really want Pixelfed to take off and this really could have been a moment, but after using it for more than a year now, I just can't see it. Development is very slow - it feels like a one-man show (it might not be). We do need an alternative to Instagram, but yeah..
Your account does not appear to have spend management enabled, which would allow you to pause your project entirely if you hit a certain level of spend.
So, this is something of a devil's bargain. Either shut down your website just as it's catching fire and gaining traction. Or get billed a year's server budget in a matter of days because of exploding costs.
In a saner world, this might be used as an argument for treating the Internet as a public utility and not a for-profit rent. Perhaps more companies could grow and sustain large pools of customers if they weren't kneecapped by their own momentum.
Instead, I'm sure we're going to see more exotic insurance and finance services designed to siphon money out of websites as a hedge against unexpected growth.
OAuth2 itself is nice, especially for company-internal SSO, but I don't have an Apple account and don't want to have a Google account either. Stop forcing me to tie my identity to big tech.
the crowdfunding/patronage of this platform only helps them build their proprietary empire. It's like giving money to your neighbor who wants to build a swimming pool on their property because they promise you'll be able to swim in it.
I'll be watching this curiously from a safe distance for now. I am interested in a new platform without AI, but this stinks of early-stage enshittification.
🤷 all we have to do is keep moving faster than our waste stream.
Platform has cool ideas, gets users, gets greedy, gets infected with bots and scammers, users leave for new platform with cool ideas....
Accept the idea that you are not going to have a thirty-year-old Yahoo Answers account, and that even if you did, you wouldn't be using it, and make peace with it.
I did, and they said: selling user data, promoting certain content, targeted ads, superchats, and a checkmark thing that costs five dollars a month that only 4 people get.
It's not. It's supposed to target certain open source AIs (Stable Diffusion specifically).
Latent diffusion models work on compressed images; that takes fewer resources. The compression is handled by a type of neural network called a VAE (variational autoencoder). For this attack to work, you must have access to the specific VAE that you are targeting.
The image is subtly altered so that the compressed version looks completely different from the original. You can only do that if you know what the compression AI does. Stable Diffusion is a necessary part of the Glaze software, so Glaze is ineffective against any closed-source image generators that have trained their own VAE (or equivalent).
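For anyone curious what that actually looks like, here's a minimal sketch of that kind of perturbation, assuming access to Stable Diffusion's open-source VAE via the `diffusers` library. It's an illustrative PGD-style attack, not Glaze's actual algorithm; the model name and hyperparameters are placeholders.

```python
# Minimal sketch of a VAE-targeted "cloaking" perturbation (PGD-style).
# Assumes the open-source Stable Diffusion VAE via `diffusers`; the model
# name and hyperparameters are illustrative, not Glaze's actual values.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
vae.requires_grad_(False)  # only the perturbation is optimised, not the VAE

def cloak(image, decoy, steps=100, eps=4 / 255, lr=1 / 255):
    """Nudge `image` so its latent encoding resembles `decoy`'s latent,
    while keeping the pixel change within an L-infinity budget of `eps`.
    Both inputs: float tensors of shape (1, 3, H, W), values in [-1, 1]."""
    with torch.no_grad():
        decoy_latent = vae.encode(decoy).latent_dist.mean
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        latent = vae.encode(image + delta).latent_dist.mean
        # Pull the perturbed image's latent toward the decoy's latent.
        loss = torch.nn.functional.mse_loss(latent, decoy_latent)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # signed gradient step
            delta.clamp_(-eps, eps)          # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).detach()
```

The key point is the last comment above: to a human the output is basically the input image, but the VAE encodes it as something close to the decoy, which is only possible because the attacker can run gradients through that exact VAE.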
This kind of attack is notoriously fickle and thwarted by even small changes. It's probably not even very effective against the intended target.
If you're all about intellectual property, it kinda makes sense that freely shared AI is your main enemy.
Not only is this kind of attack notoriously unstable, but checking which images have been glazed is a fantastic indicator for finding exactly the high-quality art you want to train on.
It pollutes the data pool. The rule of GIGO (garbage in, garbage out) is turned against the AI: feed it garbage on purpose and its results become garbage.
Basically, it puts some imperceptible stuff in the image file's data (somebody else should explain how, because I don't know) so that what the AI sees and what the human looking at the picture sees are rather different. So you think you're training it on a photorealistic car, but what it actually sees is a lumpy weird face or something. Then the AI uses that defective nonsense to learn what "photorealistic car" means and reproduce it - badly.
If you feed a bunch of this trash into an AI and tell it that this is how to paint like, say, Rembrandt, and then somebody uses it to try to paint a picture like Rembrandt, they'll end up getting something that looks like it was scrawled by a 10-year-old, or the dogs playing poker went through a teleporter malfunction, or whatever nonsense data was fed into the AI instead.
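If it helps to see the GIGO principle in the simplest possible terms, here's a toy sketch using scikit-learn on a small digits dataset; it has nothing to do with Glaze or images-as-art, it just shows that the same model trained on deliberately scrambled labels becomes useless.

```python
# Toy illustration of "garbage in, garbage out": identical model, but one
# copy is trained on deliberately mislabeled data and ends up near useless.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=5000).fit(X_train, y_train)

rng = np.random.default_rng(0)
y_poisoned = rng.permutation(y_train)  # scramble the "captions"
poisoned = LogisticRegression(max_iter=5000).fit(X_train, y_poisoned)

print("clean labels:   ", clean.score(X_test, y_test))     # high accuracy
print("poisoned labels:", poisoned.score(X_test, y_test))  # roughly chance
```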
If you tell an AI that 2+2=🥔, that pi=9, or that the speed of light is Kevin, then nobody can use that AI to do math.
If you trained ChatGPT to explain history by feeding it descriptions of games of Civ6, then nobody could use it to cheat on their history term paper. The AI would go on about how Gandhi attacked Mansa Musa in 1686 with all-out nuclear war. It's the same thing here, but with pictures.
Right, but AFAIK Glaze is targeting the CLIP model inside diffusion models, which means any new version of CLIP would remove the effect of the protection.
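One rough way to sanity-check that claim would be to measure how much a glazed image's embedding shifts under two different CLIP checkpoints: a perturbation tuned against one encoder that barely moves the embedding under another would support the "new CLIP breaks the protection" point. A hedged sketch follows; the model names are just examples, and whether Glaze actually targets CLIP is the claim above, not something this code verifies.

```python
# Compare how far a perturbed image's embedding drifts from the clean one
# under two different CLIP checkpoints (illustrative model names).
import torch
from transformers import CLIPModel, CLIPImageProcessor

names = ["openai/clip-vit-base-patch32", "openai/clip-vit-large-patch14"]
models = {n: CLIPModel.from_pretrained(n).eval() for n in names}
procs = {n: CLIPImageProcessor.from_pretrained(n) for n in names}

def embedding_shift(original, perturbed):
    """Cosine distance between the clean and perturbed image embeddings,
    per encoder. A large shift only for the targeted encoder would suggest
    the protection does not transfer to other CLIP versions."""
    shifts = {}
    for n in names:
        inputs = procs[n](images=[original, perturbed], return_tensors="pt")
        with torch.no_grad():
            feats = models[n].get_image_features(**inputs)
        feats = torch.nn.functional.normalize(feats, dim=-1)
        shifts[n] = 1.0 - float(feats[0] @ feats[1])
    return shifts
```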