Yes :)
BB84@mander.xyz to Technology@beehaw.org
Does training AI/ML models on AI-generated content cause collapse in the quality of the output? (English)
1 · 22 hours ago
No one feeds random LLM output straight back, though. The whole idea of reinforcement learning is that you take some model output, check whether it is good, and push the model in that direction if it is.
As long as you believe that e.g. it’s easier to verify a mathematical result than to come up with one, then RL should work.
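The verify-then-reinforce loop described above can be sketched with a toy example. Everything here is made up for illustration: the "model" is a random guesser over candidate offsets, `verify` is a cheap arithmetic check standing in for a proof checker, and `reinforce` just keeps the offsets that produced verified answers.

```python
import random

random.seed(0)

def propose(problem, weights):
    # Toy "model": answers a + b plus a learned offset.
    a, b = problem
    return a + b + random.choice(weights)

def verify(problem, answer):
    # Verification is cheap: just check the arithmetic.
    a, b = problem
    return answer == a + b

def reinforce(weights, good_offsets):
    # Push the model toward behaviour that produced verified answers.
    return good_offsets if good_offsets else weights

problems = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(50)]
weights = [-1, 0, 1]  # candidate offsets; 0 is the "correct" behaviour

for _ in range(5):
    good = []
    for p in problems:
        ans = propose(p, weights)
        if verify(p, ans):  # only verified outputs feed back into training
            good.append(ans - sum(p))
    weights = reinforce(weights, sorted(set(good)))

# After a few rounds only the verified offset (0) survives.
```

The point is that the feedback signal comes from the verifier, not from raw model output, so bad generations are filtered out before they influence the model.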
BB84@mander.xyz to Technology@beehaw.org
Does training AI/ML models on AI-generated content cause collapse in the quality of the output? (English)
133 · 2 days ago
You took those quotes wildly out of context. Of course there is a hard limit on how much information can be extracted from data, and clever processing won't break that limit. But only in basic cases do we have proofs that particular statistical inference methods make optimal use of the data. In complicated systems like neural nets, such optimality is essentially impossible to prove; in fact the models are almost certainly not using the data optimally. Processing can help. A lot.
How does the crosspost listing on Lemmy work?
The original post (made just 10 minutes earlier) is here, but it is not shown in the list of crossposts for this post. Why?
On a separate note, I hope the author of this post and his colleagues are safe. His university got bombed by America and Israel earlier this month.
This is great! The value of having your computational pipeline nicely debuggable and visualizable cannot be overstated.
The float32 limitation sounds a bit arbitrary. It would be cool if Blender allowed float64 in geometry nodes in the future.
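The practical difference is easy to demonstrate: float32 has a 24-bit significand, so values just past 2^24 can no longer represent unit steps, while float64 handles them fine. A minimal sketch using Python's `struct` to round-trip through single precision:

```python
import struct

def to_float32(x):
    # Round-trip a Python float (64-bit) through IEEE 754 single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

coord = 16_777_217.0           # 2**24 + 1: just past float32's exact-integer range
print(to_float32(coord))       # the +1 is lost in single precision
print(coord)                   # float64 keeps it
```

For geometry this means large scenes lose fine positional detail unless coordinates stay near the origin.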
Check this comment https://mander.xyz/comment/26729021
@[email protected] do you happen to know what the less dense ring in the outer part is?
My not very confident guess is that it’s just to label what kind of galaxies are observed by the instrument at that range. Really not sure about the less dense ring in the outer part though.
what. there definitely are differences between the universe today and the universe billions of years ago.
the latter. the map looks different further away from the center of the circle because further away = earlier time. if they attempted to compensate for how things far away have changed since the light was emitted, the map would look uniform.
hmm brb imma go invent a refrigeration cycle that runs 15C <-> 250C
startup idea: fridge with warm little nook for dog
i want that too. lots of houses already have hot and cold water lines so it shouldn’t be too hard. problem is getting appliances to adopt a standard on how to connect to this network
that’s the joke. i tried to imply it in the title, but i didn’t realize that in english it’s called the 2nd law of thermodynamics rather than the 2nd rule
a heat pump oven sounds like an actually cool idea. why is it not a thing yet?
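Some back-of-envelope Carnot numbers (ideal thermodynamic limits, not real-device figures) hint at why it's marginal: pumping heat into a 250 °C oven from a 20 °C kitchen caps the COP at roughly 2.3, versus exactly 1 for a plain resistive element, and the gap shrinks further once real compressor losses at that temperature span are counted.

```python
def carnot_heating_cop(t_hot_c, t_cold_c):
    # Ideal (Carnot) COP for a heat pump delivering heat at t_hot,
    # drawing from a reservoir at t_cold. Temperatures in Celsius.
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

oven = carnot_heating_cop(250, 20)   # best case for a heat pump oven
fridge = carnot_heating_cop(20, 4)   # small span: why fridges are cheap to run
print(round(oven, 2), round(fridge, 2))
```

The small temperature span of a fridge is what makes its ideal COP so high; the large span of an oven is what makes a heat pump version only modestly better than a resistor, in the ideal limit.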
if you’re already heating your home, then what does it hurt to have the fridge do a bit more of that?
in fact, the fridge is a tiny heat pump using your food as the reservoir. so unless your house is heat pump equipped, it is beneficial energy wise to keep the fridge inside.
if your house is heat pump equipped, then it depends on how the efficiencies compare. if you put lots of hot food into your fridge then you should probably keep it inside.
if you’re cooking on the stove but the fridge is next to you and pumping out lots of heat, that heat may inadvertently overcook your food.
the startup entrepreneurs have thought this through. give them some credit.
Reinforcement learning makes the model better over time, so why should there be fewer and fewer good results?
If you’re talking about the rate of improvement going down, then yes, of course. That’s bound to happen (unless you have an actual intelligence explosion, but in that case you won’t know what “good results” even means anyway).