Looking for something unique on Google to blog about, I mindlessly clicked a link to an article titled "We're Underestimating the Risk of Human Extinction." That title sums up the central idea, but one would never guess what scenarios are actually considered risks to the existence of the human species. In this article, Nick Bostrom, a philosopher and professor at Oxford University, answers a few interview questions about whether humanity should direct its resources toward existing problems or toward preventing future risks. His analysis will immediately alter one's ideas of what could actually be the downfall of Homo sapiens.
The strange thing about evolution is that it is discussed so frequently, yet rarely evaluated for how it will affect our future. Specifically, there is almost no discussion of how technology will alter the future of our species. Bostrom argues that humans may not evolve at all, because advances in technology will develop an entirely new "species" - artificial intelligence. Think I, Robot, but with mathematical and model-based support for its development. The "Singularity" idea holds that if even just ONE machine develops intelligence, or true cognitive capability, that machine could rapidly outpace humanity in reasoning and computing and replicate itself. Bostrom argues that technology need not be a bad thing, but humanity is extremely careless, and a single hostile machine could spark an entirely new kind of war. Such machines would be immune to disease, would not need sleep, could be built to be nearly indestructible, could shut down our technological systems, and could replicate at full capacity - absolutely no military of any size or skill could trump those advantages. Your iPhone's grandchildren could be a robot Hitler, with all of humanity as the target. That second part is not at all humorous or exaggerated: one hostile machine could, overnight, carry out a human genocide.
The second interesting concept Bostrom raises is why an existential risk is more important to prepare for, even if the scenario lies deep in the future. Preventing a massive asteroid from hitting the Earth, an outright extinction scenario, is far preferable to pursue than preventing a famine from happening this winter. Even if a fourth of the population died in that famine, the rest of the Earth would repopulate and continue to prosper. When an asteroid hits, there is not a person left breathing, and the nearly countless generations those people would have produced will never get to breathe at all. This is somewhat apocalyptic, but not impossible, my friends. The interviewee offers insight to us all as global citizens about what we should do with our skills on Earth. First, we must resist our genetic predisposition to deal only with immediate issues, and consider what kinds of ludicrous situations we might get ourselves into.
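To see why Bostrom weighs these two scenarios so differently, here is a minimal back-of-the-envelope sketch in Python. The population figure, the number of future generations, and the size of each generation are my own hypothetical assumptions for illustration, not numbers from the interview; the point is only the shape of the comparison, not the exact values.

```python
# Back-of-the-envelope comparison of a partial catastrophe vs. extinction.
# All numbers below are illustrative assumptions, not data from the article.

current_population = 8_000_000_000          # rough present-day headcount (assumption)
famine_deaths = current_population // 4     # the hypothetical "quarter of humanity" famine

generations_foreclosed = 10_000             # assume humanity would otherwise persist this long
people_per_generation = current_population  # assume each future generation is about this size
extinction_loss = current_population + generations_foreclosed * people_per_generation

print(f"Famine loss:     {famine_deaths:.2e} people")
print(f"Extinction loss: {extinction_loss:.2e} people (everyone alive plus all who would follow)")
print(f"Ratio:           {extinction_loss / famine_deaths:,.0f}x")
```

Whatever numbers you plug in, the extinction column dwarfs the famine column, because extinction forecloses every generation that would have come after us. That is the asymmetry Bostrom is pointing at.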
Read this!! http://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/#