• 3 Posts
  • 828 Comments
Joined 10 months ago
Cake day: June 25th, 2025






  • That’s a good point, but this technique isn’t suddenly going to replace people’s work, jobs, or IP.

    I’ve criticized NVIDIA for being the abusive monopoly that it is in nearly every other comment, but what you’re describing isn’t even limited to NVIDIA, and frankly, I’m not really going to defend major IP holders, just the small creators. There’s a lot of generative AI that’s endangering small creators, but in my opinion this isn’t part of it.

    My biggest problem with AI training ignoring intellectual property is the hypocrisy of blindly rushing out technology that’s being used to try to replace a lot of real-world jobs, not any desire to protect excessively long IP laws, copyright trolls, and hoarders. People should download their free cars if they can.



  • It’s worse, I made plenty of criticism to go along with those comments. They aren’t just downvoting anything that breaks the circlejerk. They’ve been given free rein to be as toxic as possible, to the point of getting away with spamming multiple communities with claims that I’m a bot, and I can’t even have my usual fun trolling back in the sea of downvotes, because I wrongly dismissed that this was generative AI, something Jensen himself had referred to it as 😭

    It would be funny if my account were lost over an issue so temporary. I’ve already begun to see the circlejerk come around to what this actually is, switching from claiming it is AI slop to saying it just makes games look like it, which is actually more valid criticism. Still, it’s led to the pretty funny circumstance of people implying I’m an NVIDIA shill, which would also imply that NVIDIA shills now have to get by telling people they should not support NVIDIA and its abusive monopoly feeding the AI bubble through global cartels.





  • That’s ok, I can paste what you were trying to compare here:

    I’m not seeing the relevance of your new video. This filter manipulates brightness and material at a pixel level, which my video shows at several points. At the level of zoom you are trying to show, there are still material differences being applied, like how light bounces off the skin, eyes, and lips, and the filter is working over detail that, as I already warned you, the only directly comparable frames are lacking.

    My video already shows it applying well enough, but if you zoom down to the pixels in an image that doesn’t have the quality to show what it’s starting from, and ignore what’s happening at the quality that can be made out, it can certainly be argued into a different story.

    I think my example already does a decent job of showing that this isn’t just the typical image-generation AI, so I’m afraid we’ll have to disagree from here on out, as I don’t think either of us can make our example any clearer to the other. Regardless, if you are as interested in this as I am, it will be something true experts go over and point out when it gets released.





  • You are working with different frames, and you are also flickering between them instead of using the opacity slider, which makes it difficult to see how the brightness and material effects are being altered between the two. All you need to do is gradually shift the opacity of the top layer once you’ve aligned them. You are actually working with the source images while I just snipped mine down and dirty; I’m gonna try getting the source image of the side-by-side comparison from the same frame and see if the higher definition makes a difference. I would post it to Streamable, but I have no experience doing that.


    Yeah, just tried it out. The ones actually from the same frame are pretty low res in comparison, but the high res ones you are choosing are from different frames, so even if you align them using the pupil as a reference, zooming out shows just how uneven they are due to minor shifts in position. Unfortunately, that means having to resort to the lower resolution alternative.
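    The opacity-slider comparison described above is just a per-pixel linear cross-fade between the two aligned frames. A minimal pure-Python sketch of that math (GIMP and image libraries do the same thing per channel; the pixel lists here are hypothetical stand-ins for two aligned screenshots):

```python
def blend_pixels(base, filtered, alpha):
    """Cross-fade two aligned images, given as flat lists of (R, G, B)
    tuples, the way a layer-opacity slider does: alpha=0.0 shows only
    `base`, alpha=1.0 shows only `filtered`."""
    assert len(base) == len(filtered), "align and crop the frames first"
    return [
        tuple(round(b * (1 - alpha) + f * alpha) for b, f in zip(p_base, p_filt))
        for p_base, p_filt in zip(base, filtered)
    ]

# Sweeping alpha reproduces the gradual opacity shift; any sudden jump in
# a feature's position between steps means the frames were not aligned.
off = [(0, 0, 0), (40, 60, 80)]       # hypothetical filter-off pixels
on = [(200, 100, 50), (60, 80, 100)]  # hypothetical filter-on pixels
halfway = blend_pixels(off, on, 0.5)
```

    The point of the gradual sweep, as opposed to flickering, is that brightness and material shifts fade smoothly while any geometric change shows up as ghosting at intermediate alphas.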


  • Not only have I done that, I overlaid one image on top of the other in GIMP to test it with the opacity slider. Her eyes are not bigger, and the corners have not been moved up. The overlay is perfect, and it transitions perfectly. I think what you are referring to is the optical illusion of the eyes appearing to get “bigger” when they get brighter, but if you, say, place a fixed reference next to them, it is clear they remain the same size.

    Regarding the football player, if you look at the entire scene, there’s a dark tone applied to everything, including the soccer ball. It seems to make dark scenes brighter and outdoor scenes darker. Having said that, I agree the filter does exaggerate the skin color of the football player, but that’s exactly what it alters: the lighting and material properties. There’s even a point where you can place the slider such that the transition is seamless enough to appear to be the same shot of the face. To test whether this was the case, I put it into GIMP and, using just the brightness slider, tried to see whether I could make the colors match from changing the brightness alone - and I could.
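    The brightness-matching test above can also be done numerically: if one patch maps onto the other with a single uniform offset and almost no residual, the filter changed lighting rather than structure. A small sketch, under the assumption that the brightness slider acts as a uniform additive offset (the pixel values are hypothetical):

```python
def brightness_fit(src, dst):
    """Find the single additive brightness offset that best maps `src`
    onto `dst` (least-squares over all channels), plus the mean absolute
    residual left over. `src` and `dst` are flat lists of (R, G, B)
    tuples from two aligned patches. A near-zero residual means the
    patches differ only in brightness."""
    s = [c for px in src for c in px]
    d = [c for px in dst for c in px]
    offset = sum(b - a for a, b in zip(s, d)) / len(s)
    residual = sum(abs(b - (a + offset)) for a, b in zip(s, d)) / len(s)
    return offset, residual

# A patch that is uniformly 20 levels brighter fits with zero residual:
dark = [(10, 20, 30), (40, 50, 60)]
bright = [(30, 40, 50), (60, 70, 80)]
offset, residual = brightness_fit(dark, bright)
```

    For the pair above the best offset is 20 with zero residual; a patch whose colors had been restructured rather than brightened would leave a large residual no matter the offset.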

    What I actually found more interesting is that in every other example, even the clothing folds remain the same - this is the only example where the folds in the clothing seem to change. Looking at the background, there’s also some evidence it’s not the same frame. I doubt it’s from a material change; they are most likely just one frame apart.

    Without using GIMP, you can also take any one of the football players and zoom in close. Make a note of every feature in their face, because each is preserved, if exaggerated.


  • https://nvidianews.nvidia.com/news/nvidia-dlss-5-delivers-ai-powered-breakthrough-in-visual-fidelity-for-games

    DLSS 5 introduces a real-time neural rendering model that infuses pixels with photoreal lighting and materials. Bridging the divide between rendering and reality, DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.

    DLSS 5 takes a game’s color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame. DLSS 5 runs in real time at up to 4K resolution for smooth, interactive gameplay.

    At most it’s texture generation, but if Jensen has already identified it as generative AI, I’m not sure why they would then go and lie about it only affecting lighting and material at the pixel level.

    It’s image generation just like the prototypes that converted your drawings into “realistic” images are.

    This it is not, and if Jensen hadn’t referred to it as generative AI, I would still think it’s just an evolution of what DLSS had been doing, expanded to lighting and materials. Jensen obviously has no problem referring to this as generative AI, so I’m not sure why he would hide and lie about it doing what you claim.