If open-weights LLMs take off, and business users realize they can just fine-tune tiny specialized models for their needs, OpenAI is toast, and so are all of Big Tech's bets. That's why they keep fanning the "AGI" lie, why they're pushing so hard for regulation, and why they're shoving LLMs where they just don't fit while harping on safety.
OK, but who is making those "open weight" models? Individuals don't really have the resources to run these huge scraping operations, so they're often still corporate releases with fake open-source branding.
Thing is, once they’re out there, they’re free utilities, and they can’t be taken back.
Also, they don't really need to aggressively scrape the internet. There are many good public datasets now, and the Chinese labs are already making excellent use of synthetic dataset generation on (relative) shoestring budgets. And several nations and other large organizations are already funding open model efforts; they just haven't caught up yet.
That’s pretty much what local ML is.
Corporate, for now.
They come from corporations, but you can at least run them without any analytics or censorship, and fine-tune them on consumer hardware.
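The reason fine-tuning fits on consumer hardware is mostly low-rank adaptation (LoRA): the big pretrained weight matrices stay frozen, and only two tiny matrices per layer are trained. A minimal sketch of just the arithmetic, with purely illustrative names and sizes (no real training loop or library involved):

```python
# LoRA idea in miniature: instead of updating a full weight matrix W
# (d x d), train two small matrices B (d x r) and A (r x d) with
# r << d, so the effective weight is W + B @ A. All names and numbers
# here are illustrative, not taken from any specific library.

def matmul(X, Y):
    """Plain list-of-lists matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, r = 4, 1  # full dimension vs. low rank

# Frozen pretrained weight (identity here, just for the demo).
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

B = [[0.5] for _ in range(d)]   # trainable, d x r
A = [[0.1, 0.2, 0.3, 0.4]]      # trainable, r x d

W_eff = add(W, matmul(B, A))    # W + B @ A, used at inference time

# Only 2*d*r values are trained instead of d*d; the gap grows
# quadratically at real model sizes, which is why this fits in
# consumer VRAM.
trainable_params = 2 * d * r    # 8
full_params = d * d             # 16
```

At real scale (d in the thousands, r around 8–64) the trainable parameter count drops by orders of magnitude, which is the whole trick.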
Consumers aren’t in the best position right now though, especially with the price hikes.
There are huge public datasets that are often used for pretraining. Common Crawl and C4 are probably the most prominent, but there are others.
There are also big public datasets available for fine-tuning and instruction tuning.
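Many of those public instruction-tuning sets use an Alpaca-style record shape (instruction / optional input / output), which gets flattened into one training string per record. A small sketch, with the template itself being illustrative since every project uses its own:

```python
# Illustrative Alpaca-style instruction-tuning record and a hypothetical
# flattening step; the "### Instruction:" template below is one common
# convention, not a standard.
import json

record = json.loads("""
{"instruction": "Summarize the text.",
 "input": "Open-weight models can be fine-tuned locally.",
 "output": "Local fine-tuning is possible with open weights."}
""")

def to_training_text(rec):
    """Flatten one record into a single prompt + response string."""
    prompt = f"### Instruction:\n{rec['instruction']}\n"
    if rec.get("input"):  # the input field is optional in these sets
        prompt += f"### Input:\n{rec['input']}\n"
    return prompt + f"### Response:\n{rec['output']}"

text = to_training_text(record)
```

The point is that the data is just JSON lines in a simple shape, so building or filtering your own fine-tuning set is well within individual reach.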
The open weight models are getting pretty powerful, thanks to some Chinese labs.