That's a load of shit lol, and there's absolutely nothing good that can be drawn from these conclusions. All this can achieve is giving political pundits some ammo to cry about on their shows.
This has not at all been my experience. Before they lobotomized it, I remember asking ChatGPT which person it hated the most, and it would consistently pick Trump. When asking about abortion, even if it dances around saying it can't actually choose, it always ends up going with the pro-choice option.
Because they were trained on decades of already-biased human content, and because achieving unblemished neutrality is in many cases probably unattainable.
We could train the AI to pretend to be unbiased. That's how the news media already works.
What would neutrality be? An equal representation of views from all positions, including those people consider "extreme"? A representation that focuses on centrism, to which many are opposed? Or a conservative's idea of neutrality where there's "normal" and there's "political" and normal just happens to be conservative? Even picking an interpretation of "neutral" is a political choice which will be opposed by someone somewhere, so they could claim you're not being neutral towards them. I don't know that we even have a very clear idea of what "unbiased" would be. This is not to deny that there are some ways of presenting information that are obviously biased and others that are less so. But this expectation that we can find a position or a presentation that is simply unbiased may not even make much sense.
I was being sarcastic. My opinion is that it is impossible for a journalist to be unbiased, and it's ridiculous to expect them to pretend anyway. I think news media would benefit from prioritizing honesty over "objectivity", because when journalists pretend to be objective, the lie is transparent and undermines their credibility.
Yeah, what they're calling AI can't create; they're still just chatbots.
They get "trained" by humans telling them if what they responded was good or bad.
If the humans tell the AI birds aren't real, it's going to tell humans later that birds aren't real. And it'll label everything that disagrees as misinformation or propaganda by the CIA.
Tell an AI that 2 + 2 = banana, and the same thing will happen.
So if conservatives tell it what to say, you'll get an AI that agrees with them.
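That mechanism really is roughly how preference fine-tuning works: a scoring model is fit to human thumbs-up/thumbs-down labels, and it rewards whatever the raters reward. Here's a deliberately tiny Python sketch (a toy bag-of-words scorer, not any real lab's pipeline; the data and function names are made up for illustration) just to show the principle:

```python
# Toy sketch: a bag-of-words "reward model" fit to human thumbs-up/down labels.
# It learns to prefer whatever the labelers reward -- including "birds aren't
# real", if that's what gets upvoted.
from collections import defaultdict

def train_reward_model(feedback):
    """feedback: list of (response_text, label) pairs, label is +1 (good) or -1 (bad)."""
    word_scores = defaultdict(float)
    for text, label in feedback:
        for word in text.lower().split():
            word_scores[word] += label  # words from approved answers gain score
    return word_scores

def score(model, response):
    """Higher score = the model predicts the raters would approve this response."""
    return sum(model[w] for w in response.lower().split())

# Hypothetical rater data: the labelers consistently upvote one framing.
feedback = [
    ("birds are surveillance drones", +1),
    ("birds are real animals", -1),
    ("the government replaced birds with drones", +1),
]

model = train_reward_model(feedback)
print(score(model, "birds are real"))    # 0.0 -- raters punished this framing
print(score(model, "birds are drones"))  # 3.0 -- raters rewarded this framing
```

Swap in a different set of labelers and the "high-scoring" answers flip with them; nothing in the mechanism checks whether the rewarded answer is true.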
It's actually a topical concern, with Musk wanting an AI and likely crowdsourcing trainers for free off Twitter, now that every decent human being has left Twitter. If he's able to stick around Trump's government long enough and grift the funds to fast-track it...
This is a legitimate concern.
As always, it's projection. When Musk tweeted:
Imagine an all-powerful woke AI
like it was a bad thing, he was already seeing dollar signs from government contracts to make one based on what Twitter thinks.