“On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” - Charles Babbage
The business people adopting AI: "Who cares what it's trained on? It's intelligent, right? It'll just sort through the garbage and magically come up with the right answers to everything."
I believe Robustness was the term I learned years ago: the ability of a system to gracefully handle user error, make it easy to recover from or fix, clearly communicate what went wrong, etc.
Of course, nothing is ever perfect and humans are very creative at fucking up, and a lot of companies don't seem to take UX too seriously. Particularly when the devs get tunnel vision and forget about user error being a thing....
Model degeneration is an already well-known phenomenon. The article already explains well what's going on so I won't go into details, but note how this happens because the model does not understand what it is outputting - it's looking for patterns, not for the meaning conveyed by said patterns.
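If you want a feel for the mechanism, here's a toy sketch, assuming nothing fancier than counting word frequencies (no real architecture involved): each generation is trained only on the previous generation's output, and a rare word that misses a single generation is gone for good.

```python
import numpy as np

# Toy model collapse: estimate a word distribution from a corpus,
# generate a new corpus from the estimate, re-estimate, repeat.
# A rare word that happens to miss one generation gets probability
# zero and can never come back.
rng = np.random.default_rng(0)
vocab = np.arange(100)
true_p = rng.dirichlet(np.full(100, 0.3))        # skewed "human" word frequencies
corpus = rng.choice(vocab, size=2000, p=true_p)  # the original training data

for generation in range(20):
    counts = np.bincount(corpus, minlength=100)
    p = counts / counts.sum()                    # the "model": empirical frequencies
    print(f"gen {generation:2d}: distinct words = {np.count_nonzero(p)}")
    corpus = rng.choice(vocab, size=2000, p=p)   # the next model trains on this output
```

The count of distinct words only ever goes down, which is the pattern-without-meaning point in miniature.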
Frankly at this rate might as well go with a neuro-symbolic approach.
I'm autistic and sometimes I feel like an ai bot spewing out garbage in social situations. If I do what people normally do and make it sound believable, maybe no one will notice.
Well, you've got a timestamped copy at archive.org of much of the Web as it existed before latent-diffusion models came along. That may not give you access to newer information, but it's a pretty whopping big chunk of data to work with.
Hopefully archive.org have measures in place to stop people from yanking all their data too quickly. At least not without a hefty donation or something. As a user it can chug a bit, and I'm hoping that's the rate-limiting I'm talking about and not that they're swamped.
That would go against the principle of the archive imo, but regardless, if you take away all means of acquiring data freely, you are just giving companies like OpenAI and Google, who already have copies of it, an insane advantage.
AI isn't going away, and we need to make sure we have free access to it, so as not to hand our whole economy to a handful of companies.
I'd be very wary of extrapolating too much from this paper.
Past research along these lines found that a mix of synthetic and organic data was better than organic alone. A caveat for all the research to date is that it uses shitty cheap models, where there's significant performance degradation in the synthetic data compared to SotA models; other research has found notable improvements to smaller models trained on synthetic data from the SotA.
Basically this is only really saying that models of multiple types, at the capability levels of a year or two ago, recursively trained with no additional organic data, will collapse.
It's not representative of real world or emerging conditions.
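To make the "no additional organic data" part concrete, here's a variation on the counting toy upthread where an arbitrary 30% of the original human corpus is kept in every generation's training mix; the vocabulary loss largely stops. The ratio and corpus are made up, it only illustrates the direction of the effect.

```python
import numpy as np

# Same counting toy, but each generation trains on a blend of fresh
# organic data and the previous model's output. The 0.3 mix ratio is
# arbitrary; the point is that organic data keeps reintroducing the
# rare words that pure self-training loses.
rng = np.random.default_rng(0)
vocab = np.arange(100)
true_p = rng.dirichlet(np.full(100, 0.3))
human = rng.choice(vocab, size=2000, p=true_p)   # fixed pool of "human" text
corpus = human.copy()
mix = 0.3                                        # fraction of organic data per generation

for generation in range(20):
    counts = np.bincount(corpus, minlength=100)
    p = counts / counts.sum()
    print(f"gen {generation:2d}: distinct words = {np.count_nonzero(p)}")
    synthetic = rng.choice(vocab, size=int(2000 * (1 - mix)), p=p)
    organic = rng.choice(human, size=int(2000 * mix))
    corpus = np.concatenate([synthetic, organic])
```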
provenance requires some way to filter the internet into human-generated and AI-generated content, which hasn’t been cracked yet
It doesn't need to be filtered into human / AI content. It needs to be filtered into good (true) / bad (false) content. Or a "truth score" for each.
We don't teach children to read by just handing them random tweets. We give them books that are made specifically for children. Our filtering mechanism for good / bad content is very robust for humans. Why can't AI just read every piece of "classic literature", famous speeches, popular books, good TV and movie scripts, textbooks, etc?
It doesn’t need to be filtered into human / AI content. It needs to be filtered into good (true) / bad (false) content. Or a “truth score” for each.
That isn't enough because the model isn't able to reason.
I'll give you an example. Suppose that you feed the model with both sentences:
Cats have fur.
Birds have feathers.
Both sentences are true. And based on vocabulary of both, the model can output the following sentences:
Cats have feathers.
Birds have fur.
Both are false but the model doesn't "know" it. All that it knows is that "have" is allowed to go after both "cats" and "birds", and that both "feathers" and "fur" are allowed to go after "have".
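Concretely, that picture corresponds to something like a plain bigram chain; a sketch (not any real product) trained on just those two sentences will happily produce the false ones:

```python
from collections import defaultdict
import random

# A bigram chain: the next word depends only on the single previous word.
corpus = ["cats have fur", "birds have feathers"]
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

random.seed(1)
word = "cats"
out = [word]
while word in follows:
    word = random.choice(follows[word])  # after "have": "fur" or "feathers", 50/50
    out.append(word)
print(" ".join(out))                     # may well print "cats have feathers"
```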
It's not just a predictive text program. That's been around for decades. That's a common misconception.
As I understand it, it uses statistics from the whole text to create new text. It would be very rare to output "cats have feathers" because that phrase doesn't ever appear in the training data. Both words "have feathers" never follow "cats".
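To make the context point concrete: even a crude widening of the window to the two previous words already rules out "cats have feathers" here, because that continuation never occurs in the training text. Real models learn the conditioning over far longer contexts rather than counting, so treat this only as a sketch of why context matters.

```python
from collections import defaultdict

# Condition on the two previous words instead of one: "feathers" is no
# longer a possible continuation of "cats have", because that trigram
# never appears in the training text.
corpus = ["cats have fur", "birds have feathers"]
follows = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        follows[(a, b)].add(c)

print(follows[("cats", "have")])   # {'fur'}
print(follows[("birds", "have")])  # {'feathers'}
```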
Both sentences are true. And based on vocabulary of both, the model can output the following sentences:
Cats have feathers.
Birds have fur
This is not how the models are trained or work.
Both are false but the model doesn't "know" it. All that it knows is that "have" is allowed to go after both "cats" and "birds", and that both "feathers" and "fur" are allowed to go after "have".
Demonstrably false. This isn't how LLMs are trained or built.
Just considering the contextual relationships between word embeddings that are created during training is evidence enough. Those relationships in the embedding space are an emergent property that doesn't exist explicitly anywhere in the dataset.
If you want a better understanding of what I just said, take a look at this Computerphile video from four years ago. And this came out before the LLM hype and before GPT-3, which was the big leap in LLMs.
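If you want to poke at those relationships yourself, gensim ships pretrained GloVe vectors through its downloader (static word vectors rather than LLM embeddings, but the emergent-relationship point is the same; the model name below is the one gensim-data hosts, fetched on first run):

```python
import gensim.downloader as api

# Load pretrained 50-dimensional GloVe word vectors (downloaded on first use).
wv = api.load("glove-wiki-gigaword-50")

# Nearest neighbours come purely from co-occurrence statistics in the corpus:
print(wv.most_similar("cat", topn=5))

# The classic analogy: vector('king') - vector('man') + vector('woman')
# typically lands near 'queen', a relationship nobody wrote into the dataset.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```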
That's what smaller models do, but it doesn't yield great performance because there's only so much stuff available. To get to gpt4 levels you need a lot more data, and to break the next glass ceiling you'll need even more.
Then these models are stupid. Humans don't start as a blank slate. They have an inherent aptitude for language and communication. These models should start out with basics of language, so they don't have to learn it from the ground up. That's the next step. Right now they're just well read idiots.
And they're overlooking that radionuclide contamination of steel actually isn't much of a problem any more, since the surge in background radionuclides caused by nuclear testing peaked in 1963 and has since gone down almost back to the original background level again.
I guess it's still a good analogy, though. People bring up Low Background Steel because they think radionuclide contamination is an unsolved problem (despite it having been basically solved), and they bring up "model collapse" because they think it's an unsolved problem (despite it having been basically solved). It's like newspaper stories, everyone sees the big scary front page headline but nobody pays attention to the little block of text retracting it on page 8.