Dude's obviously magic tho, summoning a strong enough concentrated beam of light indoors with a futuristic lens design of unknown origin for the time period!
The threat stemming from AGI doesn't rely on linearly scaling intelligence. Just imagine one human who can think faster than you by the same margin that a computer outpaces you at pretty much anything, and it's directly connected to the internet. As long as one grants the assumptions of speed and superior programming skills, I believe one must acknowledge that a certain level of risk arises.
Also, you don't need to ascribe it any anthropomorphic intentions or even consciousness. The argument stands: if what it is going to do (be it through an errant instruction, a misunderstanding, or mischief) is not what a morally sane person would consider to be in our best interests, how are we going to stop it from performing said undesirable task if it's as smart as or smarter than us, and faster and better at programming and networking?
My brother in Christ, have you heard about our lord and saviour the Scientific Method and the proliferation of cross-domain ideas? How do you imagine li-ion batteries came about as the go-to energy storage solution? Incremental improvements of ideas would be my guess. Ideas have to start somewhere, and of course they're going to be hyperbolic, since researchers are both excited and need to draw attention to their work.
I sympathise with your point, but the alternative is little to no research into different battery technologies, because close to nothing will ever emerge as a competitive day-one drop-in replacement. Some ideas may still prove exciting to others who understand their value, and they might push the ball further towards realistic alternatives.
Lost me in the center there
(Click saving) this guy (and myself): move the potatoes to another ceramic dish, add celery to the carrot-onion mix, chop that, and leave it under the roast
I’d go so far as to venture a guess and say this particular individual wouldn’t like any skin colour darker than Marine Le Pen’s bleached asshole…
Absolutely, but that's the easy case. Computerphile had an interesting video discussing a proof-of-concept exploration which showed that indirectly including material in the training/accessible data could also lead to such behaviours. Take it with a grain of salt, since it's obviously a bit alarmist, but very interesting nonetheless!
The what! That’s so cool, thanks for sharing!