Transhumanist Agenda and the Ethical Ramifications Therein

 
 
Reply Fri 8 Aug, 2008 10:35 pm
As we approach new horizons in genetic science and nanotechnology, mankind must consider the possibility of self-improvement. It has long been conjectured that superhuman mental and physical abilities will arise from specific scientific developments. Particularly promising are advances in nanotechnology with possible applications to health. There is also the developing interface between neurons and artificial devices, which may lead to mental improvements in memory, computation, etc.

Parallel to this line of development, we have A.I., the technological singularity, and the general fear of becoming obsolete, giving way to our superior creations. Could the development of A.I., formal ontologies, and information science force us to improve fundamentally or perish?

What should be our approach as a people?

 
paulhanke
 
Reply Sat 9 Aug, 2008 09:41 am
@Zetetic11235,
... all excellent questions that together circumscribe an ethical void that philosophers must continue to work hard (and fast!) to fill ... human well-being is at stake here!

The world is already at the point where competition between individual humans is no longer a simple matter of pitting genetic capability against genetic capability - the moneyed have an incredible advantage with their unlimited access to technology and education that is beyond the means of the masses; and when being perpetually disadvantaged breaks a mind, low-tech "equalizing" prostheses are often brought to bear (e.g., guns and bombs) ... so what are the ethics of the (equitable?) distribution of such technology and education?

And as for humans becoming obsolete, that's certainly a concern - but a more immediate concern may be that of terrestrial life itself becoming obsolete ... we're still a long way from understanding the human mind (the implication being that we're still a long way from developing an artificial human intelligence); but at the same time we're already proficient at creating lifeless machines that in concert "make the world go 'round" ... at the point at which we make these lifeless machines so robust, so fault tolerant, and so interconnected as to form a self-perpetuating whole, have we created a new form of life to compete with terrestrial life? ... and would terrestrial life survive the competition? ... here, a scientific understanding of terrestrial life and the human mind can be a double-edged sword - such an understanding could help us to find ways to build and apply machines that complement (and do not threaten) terrestrial life and the human mind; on the other hand, this same understanding could be a global disaster in the hands of a broken mind ... the scientific and philosophic communities are already at work forging this sword - what are the ethics of wielding it?
Zetetic11235
 
Reply Sat 9 Aug, 2008 11:44 pm
@paulhanke,
We have yet to create a machine with the essential components for any form of self-will to occur. There is no mechanical analogue of the limbic system, and we may never have to worry about machines taking on personal motives unless something of that sort is developed and implemented.

On the other hand, hybrids of mind and machine have been explored, and perhaps a cyborg-type fusion might prompt the development of life and will within the machine? This is very hypothetical, but still potentially possible, if only by quite a stretch.

I am personally in favor of the transhumanist agenda, provided that no one is privileged to more improvement than anyone else. That will probably not happen, and it is a very frightening proposition to imagine beings far beyond our abilities with vast stores of wealth at their disposal. What is to stop them from ruling us?
Holiday20310401
 
Reply Sun 10 Aug, 2008 11:15 am
@Zetetic11235,
Zetetic11235 wrote:
As we approach new horizons in genetic science and nanotechnology, mankind must consider the possibility of self-improvement.


Yeah, have you read about the nanobullets?! And then I thought about the literal possibility of military nanobullets.

Zetetic11235 wrote:
There is also the developing interface between neurons and artificial devices, which may lead to mental improvements in memory, computation, etc.


We must be careful with this technology. I don't think humanity should keep the same view of virtue; "the market has spoken," as has become clear. Obviously this follows from the idea of the virtue of society, but what we want is the virtue of humanity; the word itself means something. I don't want humanity to become the Borg. In the future people may buy into the idea of extra RAM for their brains! Laughing, but serious.

paulhanke wrote:
... all excellent questions that together circumscribe an ethical void that philosophers must continue to work hard (and fast!) to fill ... human well-being is at stake here!


Yeah, I mean, why are we all here anyway? I enjoy philosophy too much to be spoiled by pessimism about the future.

paulhanke wrote:

the scientific and philosophic communities are already at work forging this sword - what are the ethics of wielding it?


Unfortunately the system is incompatible with such aptitude.

Zetetic11235 wrote:
We have yet to create a machine with the essential components for any form of self-will to occur. There is no mechanical analogue of the limbic system, and we may never have to worry about machines taking on personal motives unless something of that sort is developed and implemented.


I doubt that. I think machine consciousness is possible, and that quantum research will give us an answer.
paulhanke
 
Reply Sun 10 Aug, 2008 07:27 pm
@Zetetic11235,
Zetetic11235 wrote:
We have yet to create a machine with the essential components for any form of self-will to occur. There is no mechanical analogue of the limbic system, and we may never have to worry about machines taking on personal motives unless something of that sort is developed and implemented.


... does the obsolescence of terrestrial life require a machine with a sense of self? ... consider a scenario where a self-reproducing nanobot has been designed to destroy cancer cells ... a stray cosmic ray knocks a bit out of place in its programming and the result is a self-reproducing nanobot that destroys all biological cells ... would we humans be able to counter such a creature before it covered the Earth, destroying all terrestrial life in its wake? ... hopefully, we'd be ethical enough not to create machines with life-like capacities (self-reproduction; self-adaptation; etc.) in the first place - but then again, we created the atom bomb ... ... ...
Zetetic11235
 
Reply Sun 10 Aug, 2008 07:38 pm
@paulhanke,
What, like a grey goo scenario? That seems unlikely; a cosmic ray? Come on. Then what of a self-reproducing nanomachine designed to destroy the other? You could nip it in the bud with a fail-safe of that kind.
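The exchange above, a rogue self-replicator versus a fail-safe counter-replicator, can be sketched as a toy population model. This is purely illustrative; the names and growth rates are assumptions invented for the sketch, not drawn from any real nanotechnology research. It only shows the arithmetic of the argument: the later the fail-safe is deployed, the harder it is for it to catch up with an exponentially growing rogue population.

```python
# Toy model of the grey-goo discussion: a rogue self-replicator
# doubles each step until a counter-replicator (the proposed
# fail-safe) is released and begins destroying it. All parameters
# are hypothetical assumptions for illustration only.

def simulate(rogue_rate=2.0, counter_rate=3.0, deploy_step=5, steps=12):
    rogue, counter = 1.0, 0.0
    history = []
    for t in range(steps):
        if t == deploy_step:
            counter = 1.0  # fail-safe released into the environment
        rogue = max(rogue * rogue_rate - counter, 0.0)  # replication minus destruction
        counter *= counter_rate  # the counter-agent also self-reproduces
        history.append((t, rogue, counter))
    return history

early = simulate(deploy_step=2)   # fail-safe deployed quickly
late = simulate(deploy_step=8)    # fail-safe deployed after long delay
# With early deployment the rogue population is driven to zero;
# with late deployment it is still growing at the end of the run.
```

The design choice of a faster-growing counter-agent mirrors the "nip it in the bud" intuition: a fail-safe can win, but only if it is deployed before the rogue population's head start becomes insurmountable.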
FatalMuse
 
Reply Sun 10 Aug, 2008 07:45 pm
@paulhanke,
Without going into too many details for privacy reasons, I have a relative who completed his PhD in nuclear physics but left the university afterwards because the ethical issues raised by the research they were doing (nano-computing combined with human cells, cyborgs, etc.) didn't sit well with him. He was uncomfortable with where the research funds were coming from and what was expected to be researched. He never said so outright, for legal reasons I guess, but he strongly hinted that a lot of their research was being funded by the military for weaponry.
paulhanke
 
Reply Sun 10 Aug, 2008 08:14 pm
@Zetetic11235,
Zetetic11235 wrote:
You could nip it in the bud with a fail-safe of that kind.


... certainly you could try (what's our track record on "fail-safes"?) ... but doesn't the fact that you have to foresee the need for engineering such "fail-safes" into mindless machines imply that it doesn't require a sense of self for mindless machines to render terrestrial life obsolete? ...
Holiday20310401
 
Reply Sun 10 Aug, 2008 08:44 pm
@paulhanke,
Yes, it seems that nanoweaponry is going to become America's next big thing. I can see it getting a lot of funding.

I was always interested, though, in the potential of nanotechnology, and I don't know why someone would quit their job in such an abstract field. The research is unfortunately inevitable.
 
