

It would be really cool if they didn’t do that this time.


There’s also an option to bring your own LLM, with fields for model name, endpoint, and API token once the manual option is enabled. However, the page itself warns that local models may not work correctly.
It looks like there’s an option to self-host too, so you won’t have to send your history to someone else’s computer.
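
For anyone curious what that manual setup usually amounts to, here’s a minimal sketch, assuming the feature speaks the common OpenAI-compatible chat API (the endpoint, model name, and token below are placeholders, not values from the actual page):

```python
import requests

# All three values are hypothetical; swap in whatever you enter in the
# model name / endpoint / API token fields. A local Ollama server exposes
# an OpenAI-compatible API at /v1/chat/completions, hence this example.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
MODEL = "llama3.1:8b"
TOKEN = "not-a-real-token"

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"model": MODEL, "messages": [{"role": "user", "content": "ping"}]},
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```

If it accepts an arbitrary endpoint like that, any local server with a compatible API should work, which would explain the “may not work correctly” warning: nobody is testing every backend.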


I hope you do finish it. It sounds really cool.


Calling it a “Distillation Attack” is wild. Get fucked Anthropic.
Asagao to Kase-san, a one-episode OVA, 00:05:25 - 00:05:57.


VAEs are used in image generation too: at the end of generation, the VAE decoder converts the latent image back into pixel space.
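
Here’s a rough sketch of that final decode step using the diffusers library, assuming a Stable Diffusion style latent (the random latent is a stand-in for whatever the denoiser actually produced):

```python
import torch
from diffusers import AutoencoderKL

# Stable Diffusion's VAE; a 512x512 image lives in a 4x64x64 latent.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
latents = torch.randn(1, 4, 64, 64)  # placeholder for the denoised latent

with torch.no_grad():
    # Undo the scaling applied during encoding, then decode to pixel space.
    image = vae.decode(latents / vae.config.scaling_factor).sample

print(image.shape)  # torch.Size([1, 3, 512, 512])
```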


With this feature’s character creator we’ll finally get to see if you’re allowed to be black in Genshin Impact.


Record them anyway. There’ll be more ways to de-anonymize them in the future.


This is definitely the type that grants wishes.


But the people making money off of all of that are mad now, hence this article.


You can’t copyright a style or be sued over one. Studio Ponoc is made up of ex-Ghibli staff, and they have been releasing movies for a while. Stop spreading misinformation.
https://www.imdb.com/title/tt16369708/
https://www.imdb.com/title/tt15054592/


The dream is dead.


This doesn’t mean you can misrepresent facts like this though. The line I quoted is misinformation, and you don’t know what you’re talking about. I’m not trying to sound so aggressive, but it’s the only way I can phrase it.


“Generating an AI voice to speak the lines increases that energy cost exponentially.”
TTS models are tiny in comparison to LLMs, so how does this track? The biggest I could find was Orpheus-TTS, which comes in 3B/1B/400M/150M parameter sizes. And they are not using a 600-billion-parameter LLM to generate the text for Vader’s responses; that would likely be way too big. After generating the text, the speech step isn’t even a drop in the bucket.
You need to include parameter counts in your calculations. A lot of these assumptions are so wrong it borders on misinformation.
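
Back-of-the-envelope, using the usual ~2 × parameters × tokens rule of thumb for decoder inference compute (the model sizes and token counts below are my own assumptions for illustration, not anything from the article):

```python
def inference_flops(params, tokens):
    # Rough rule of thumb for a dense decoder: ~2 FLOPs per parameter per token.
    return 2 * params * tokens

llm = inference_flops(70e9, 200)    # hypothetical 70B chat model, ~200 text tokens
tts = inference_flops(150e6, 2000)  # Orpheus-style 150M TTS, ~2000 audio tokens (guess)

print(f"LLM: {llm:.1e} FLOPs, TTS: {tts:.1e} FLOPs, ratio ~{llm / tts:.0f}x")
```

Even with a generous audio token count, the TTS pass comes out well over an order of magnitude cheaper than the text pass, which is the opposite of “increases that energy cost exponentially.”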


30B means the model has 30 billion parameters, basically how big it is. MoE means it’s a Mixture of Experts: instead of one dense network, each layer holds a set of smaller “expert” sub-networks, and a router picks just a few of them to run for each token. For an MoE model, the 30B is the total across all of those experts. For example, Qwen3-30B-A3B splits its 30B total across 128 experts per layer and activates 8 of them per token, so only about 3B parameters actually run for each token, which is what the “A3B” stands for.
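
If it helps, here’s a toy sketch of the top-k routing idea (not Qwen’s actual implementation, just the mechanism): a router scores the experts for each token, and only the chosen few ever run.

```python
import torch

n_experts, k, d = 8, 2, 16
experts = torch.nn.ModuleList(torch.nn.Linear(d, d) for _ in range(n_experts))
router = torch.nn.Linear(d, n_experts)

x = torch.randn(4, d)                   # 4 tokens
scores = router(x).softmax(dim=-1)      # per-token routing probabilities
topw, topi = scores.topk(k, dim=-1)     # keep the k best experts per token

out = torch.zeros_like(x)
for t in range(x.size(0)):
    for w, i in zip(topw[t], topi[t]):
        out[t] += w * experts[i](x[t])  # only k of the 8 experts run for this token
```

That’s why the active parameter count is so much lower than the total: the other experts’ weights sit idle for that token.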


So you don’t interact with AI stuff outside of that? Have you seen any cool research papers or messed with any local models recently? Getting a bit of experience with the stuff can help you better inform people and see through the more bogus headlines.


It definitely seems that way depending on what media you choose to consume. You should try to balance the doomer scroll with actual research and open source news.
(●‘ω’●)