@failures art,
Like Eorl, I'm confident that there is no job robots will never be able to do.
The question is whether humanity will choose to ban them from performing any jobs, and whether it has the ability to enforce that choice.
Personally, I believe that artificial intelligence is inevitable. If you subscribe to the happy predictions of folks like Ray Kurzweil, most of us may be alive to see its arrival. I'm afraid I don't, and unless his predictions about extending life spans come true, I won't be around to see it.
In any case, once that milestone is achieved, what machines can do will be limited only by the pace of their own engineering skills. With machines more intelligent than humans employing their dumb robotic cousins, a thoroughly convincing human facsimile is also inevitable.
It's quite possible that we will see a gradual merging of man and machine to the point that drawing distinctions between the two becomes meaningless. Of course, this can only happen if, early on, we keep hold of the reins.
Undoubtedly there will, at some point in time, be a tremendous backlash to the evolution of machines, one that will probably reshuffle the ideological separations to which we are currently accustomed. It's not difficult to imagine (as numerous sci-fi writers have) a human culture that fears and steadfastly restricts the development of machine intelligence. I think, though, that based on human history such a backlash will not occur until the genie is already out of the bottle, and in the case of intelligent machines, it's highly unlikely we will find a way to shove them back in. No Captain Kirk tricks of logic will rule the day. (In fact, I would love to be present when the first AI is shown one of those old episodes of Star Trek, where the machine starts smoking because Kirk has masterfully befuddled it. It could be the first indication that intelligent machines have a sense of humor.)
A lot will depend upon how machine intelligence evolves. There is a general assumption that a machine that is intelligent will, in effect, be a human machine. The Turing Test is predicated on this premise. However, there is reason to believe that an intelligence that is essentially born overnight, and not subject to hundreds of thousands of years of painful, progressive cultural programming, will not be as similar to human intelligence as we might expect and hope.
After all, an intelligent machine that can procreate and exist totally independently of humans is hardly going to care whether it has passed the Turing Test or whether humans consider it intelligent.
At the same time, machine intelligence is likely to have the ability to evolve incredibly fast by human standards and outpace us in no time. Unless humans are able to apply the brakes to some extent, the machines may not afford us the opportunity to merge with them. By this I don't mean that machines will refuse to let us merge or, worse, decide to eliminate us (those things are possible, of course), but rather that the evolution of machine intelligence may be so rapid that it becomes as different from human intelligence as human intelligence is from insect intelligence.
It's anyone's guess what a race of super-intelligent machines might or might not do.
Having sprinted past us, will they allow us to retard the development of the next wave of machine intelligence so that we can continue to use machines as slaves? And would they even permit us to slow things down enough that humans and machines can operate at roughly the same level...the merge scenario?
If, as I believe, machine intelligence is inevitable (barring a zombie apocalypse), then the question of what jobs machines can do for us, or what jobs we will allow them to do, is probably not one we will have the opportunity to decide.