Starglider wrote: Human uploading, extremely likely if we build AGI and don't kill ourselves in the process.
Gradual upgrading for immortality is more tolerant of imperfection and more reasonable than attempting flash uploading.
MRI has a resolution of a fraction of a millimeter when imaging an object that large. Field strengths went from 0.3 tesla to 3 tesla to increase resolution. Some units reach 14 tesla, but fields that high are unsafe for humans.
This is true only for single-coil machines that rely on the magnetic gradient for localisation. Modern phased-array machines have already exceeded the resolution possible with the magnetic gradient alone; I am certainly not an expert on MRI, but I have debated the software side of brain simulation with people who are, and most of them seem quite confident that massively parallel phased arrays will eventually deliver the resolution required. Of course this still assumes that synapse properties can be inferred from structure and a predictive model; fMRI could potentially help with this, but neuronal-scale fMRI would be very hard.
Can you reference a publication for the assumptions and math? "Brain simulation" is a bit nebulous here, since what is useful for medical or AI research is very different from what could arguably constitute perfect uploading and immortality.
I'd be curious to see it, but so far it looks like they may just be people who incorrectly assume Moore's Law applies everywhere.
If you have a billion coils instead of the handful in current phased-array coil equipment, do you get a corresponding increase, like a billion times the data per unit volume? Or does it scale less than linearly, subject to diminishing returns or running into physical limits long before?
Nanometer resolution has long been possible from nanometers away, but nanometer resolution from centimeters (many millions of nanometers) away on a 1E15 cubic micron object?
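To put that volume in numbers, here is a minimal back-of-envelope sketch in Python (the brain volume and voxel size are assumed round figures, not measurements):

```python
# Back-of-envelope voxel count for imaging a whole brain at nanometer
# resolution. Brain volume and voxel size are assumed round numbers.
brain_volume_um3 = 1.4e15      # ~1.4 liters, expressed in cubic microns
voxel_edge_nm = 1.0            # assumed isotropic voxel edge
nm3_per_um3 = 1e9              # (1000 nm)^3 in one cubic micron
voxels = brain_volume_um3 * nm3_per_um3 / voxel_edge_nm ** 3
print(f"{voxels:.1e} voxels")  # ~1.4e24 voxels to acquire, store, process
```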
It is not necessary to scan the entire brain volume in one go as long as you are not concerned about preserving the human copy. The standard approach (typically discussed by transhumanists as the 'first generation' of the technique) is to perfuse the brain with vitrification (and optionally staining) agents, then cool it below the glass transition temperature. You can then scan the brain in a large number of flat layers using mechanical or laser ablation. This is rather time-consuming even with a massively parallel scanner array, but that's ok; there's no particular rush.
Much more likely to be possible.
However, even aside from the terrible economics and astronomical resource requirements, the person would suffer a clear time of death in the process, with disruption of the original electrochemical potentials and activity.
Afterward, depending upon the amount of imperfection in the scan, either that person is restored to life or, arguably, an AI imperfectly resembling him, with partially similar personality and memories, is created.
Vitrification (which only somewhat reduces freezing damage, as seen in cryopreservation), ablation, and the rest of the process are all going to introduce imperfections at the micro scale in the very inhomogeneous frozen brain.
Actually the limiting factors are simply cost and time. We already have technology sufficient to image the entire brain at a molecular level, but a scan with current AFM technology would take an impossibly long time (I have not done the maths, but I suspect billions of years). Fortunately nearly all of the relevant technology is subject to Moore's-law-type scaling, which we have already observed in the rapid progress in imaging over the last two decades or so.
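A rough sketch of that maths, assuming an optimistic per-tip AFM throughput of about a million pixels per second (an assumption, not a measured figure), does land in that range:

```python
# Rough AFM whole-brain scan-time estimate. Per-tip throughput is an
# assumption; fast AFMs manage somewhere around 1e6 pixels per second.
SECONDS_PER_YEAR = 3.15e7

voxels = 1.4e24                # whole brain at 1 nm voxels (see above)
pixels_per_second = 1e6        # assumed, optimistic per-tip rate

single_tip_years = voxels / pixels_per_second / SECONDS_PER_YEAR
print(f"one tip: {single_tip_years:.1e} years")   # ~4.4e10 years

# In this idealised model parallelism scales linearly (ignoring stage
# motion, tip wear, and data handling):
for tips in (1e6, 1e9):
    print(f"{tips:.0e} tips: {single_tip_years / tips:.1e} years")
# 1e6 tips -> ~44,000 years; 1e9 tips -> ~44 years
```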
Moore's Law? Not quite so applicable.
Even for CPUs, over recent decades, a common PC processor made for hundreds of dollars has remained roughly a one-square-centimeter die with a functional layer microns to sub-micron thick. The transistor count in that area increased through finer lithography, but it is as impossible now as it was 30 years ago to cheaply produce a huge volume of anything manufactured to submicron complexity.
Expressed in terms of volume, the cost of that tiny, thin patterned area of a CPU has remained proportionally on the order of billions of dollars per liter.
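A quick sanity check of that figure, with assumed die dimensions and price:

```python
# Cost per liter of lithographically patterned silicon. Die area, layer
# thickness, and price are assumed illustrative values.
die_area_cm2 = 1.0             # ~1 cm^2 die
layer_thickness_um = 1.0       # active layers are microns thick or less
price_usd = 200.0              # a mid-range consumer CPU

volume_cm3 = die_area_cm2 * layer_thickness_um * 1e-4   # um -> cm
volume_liters = volume_cm3 / 1000.0                     # 1 L = 1000 cm^3
print(f"${price_usd / volume_liters:.1e} per liter")    # ~$2e9 per liter
```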
More directly relevant here: 50 years ago, electron microscopes already existed that could image regions of a few cubic microns at nanometer scale. Electron microscopes now aren't thousands of times cheaper; fancy ones still cost millions of dollars each. They are not subject to the same Moore's Law scaling.
Ordinarily, to image an entire cross-section of a frozen brain at once with AFMs that each cover something like a 100 micron by 100 micron region at a time, you'd need around a million of them, or else spend eons moving a smaller number around. And that's just the tip of the iceberg when going through so many thousands of layers.
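The million-instrument figure follows from simple tiling arithmetic; a sketch with assumed cross-section area, field of view, and layer thickness:

```python
# Tiling arithmetic for imaging a full frozen-brain cross-section at
# once, plus the number of ablation layers. All inputs are assumptions.
cross_section_cm2 = 150.0      # assumed largest brain cross-section
fov_um = 100.0                 # per-instrument field of view (100x100 um)

tiles = cross_section_cm2 * 1e8 / fov_um ** 2   # 1 cm^2 = 1e8 um^2
print(f"{tiles:.1e} instruments")               # ~1.5e6, i.e. ~a million

brain_depth_cm = 15.0          # assumed scan depth
layer_nm = 50.0                # assumed ablation layer thickness
layers = brain_depth_cm * 1e7 / layer_nm        # 1 cm = 1e7 nm
print(f"{layers:.1e} layers")                   # ~3e6 layers to ablate
```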
A quadrillion cubic micron object is about as unaffordable to image to sufficient resolution now as it was several decades ago.
You need technology so advanced that the slightest resemblance to economic and practical limits as known today can be thrown out the window. That's maybe possible someday, but other methods of life extension are easier.
In active mode you can replace neurons (with simulated equivalents) on a progressive basis, e.g. for 'gradual uploads' for the squeamish, or implement any of the interesting hypothetical splits discussed here.
Yes, once the tech existed.
Before then, it could be possible to counter the decline of aging biologically. The flexibility of the brain and its ability to restructure is enormous. In a famous case, one man lost most of his original brain volume but survived, with it reduced to a layer a centimeter or two thick by extreme hydrocephalus. Moderately advanced biotechnology might allow adding neurons to the brain, or even to its exterior after removing or rearranging some of the skull.
Eventually, going electronic would offer obvious advantages, although sufficiently capable nanorobotic technology is probably a far harder goal for near-term timeframes than using biological cells, which already self-replicate. Nanorobots would have to be self-replicating; otherwise there's no damn way you could economically afford to make billions of them per person, since such complex machines can't otherwise cost even 1 cent each.
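A sketch of why the per-unit cost constraint bites, taking the "billions per person" figure at face value and trying a few assumed unit costs:

```python
# Why per-unit cost dominates: "billions of them per person" is only
# affordable at well under a cent apiece. Counts and prices assumed.
robots_per_person = 1e9        # billions per person, per the post

for unit_cost_usd in (1.0, 0.01, 1e-6):
    total = robots_per_person * unit_cost_usd
    print(f"${unit_cost_usd:g} each -> ${total:,.0f} per person")
# $1 each     -> $1,000,000,000 per person
# $0.01 each  -> $10,000,000 per person, still far too much
# $1e-06 each -> $1,000 per person, plausible only via self-replication
```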
Killing one neuron doesn't kill a person. A few neurons out of 1E11 die every week. Killing them all at once, vaporizing the whole brain, would be totally different.
In that case please answer my question of 'exactly how much of your brain can I flash upload in one go without killing you'. Of course you may not subscribe to the ludicrous binary notion of selfhood that certain other individuals in this thread hold to.
A non-binary view may be appropriate.
A perfect copy is doubtful magic; for anything operating in the real world with messy 3D biological structures and chemical distributions, there are far more limits at larger scales than the Heisenberg uncertainty principle alone. So, if one assumes imperfections, the goal should be to minimize the degree of imperfection, error, and damage.
Analogy and example: how great a brain injury can somebody take and stay the same person? That's nearly impossible to define precisely. People have survived a lot in recorded medical cases, but are they the same person, especially if their personality changed drastically in the more extreme cases?
Yet I know enough for pragmatic decision-making. I know I want to minimize the degree of a brain injury I take.
I would feel better about anything that killed 0.1% of my neurons annually over many years than about suddenly losing 10% in an accident. Likewise, I would prefer an imperfect replacement of a small fraction of neurons per year over trying to (imperfectly) upload them all at once.
There might be some personality changes and memory loss even in the former case, but at least then it would be so gradual that one could try to track it, keep journals, etc. Either one never died, or one died so slowly, spread out over the years, that it would be hard to call it death at all, with never any particular day to fear.
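For what it's worth, the arithmetic behind that preference, assuming a 50-year horizon (the horizon is an assumption; the rates are from the comparison above):

```python
# Gradual attrition versus one-off loss. The 0.1%/year and 10% figures
# are from the comparison above; the 50-year horizon is an assumption.
annual_loss = 0.001            # 0.1% of neurons lost/replaced per year
years = 50

surviving = (1 - annual_loss) ** years
print(f"lost after {years} years: {(1 - surviving) * 100:.1f}%")
# ~4.9% cumulative loss over half a century, each year's change tiny,
# versus 10% gone in a single instant in the accident scenario.
```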