Imperial528 wrote:
Simon_Jester wrote:
Do you seriously think that a plate two thirds of a millimeter thick is going to fully absorb the X-ray pulse of a nuclear weapon? What's the tenth-value (or half-value, or whatever) thickness for X-rays in tungsten?
All X-ray emissions in an Orion pulse unit are absorbed by the beryllium filler, vaporizing the filler; the resulting plasma then vaporizes the propellant material, launching it outward in a conical plasma jet that strikes the pusher plate.
My apologies for the fuzzy thinking,
BUT...
What matterbeam wants to do is replace the full-sized pusher plate and spacecraft with a plate of the minimum thickness that wouldn't be completely vaporized by ablation, and use that as a projectile.
I don't know enough about the physics myself to say for sure, but I suspect that without the larger backing mass of a full pusher plate, matterbeam's plate will undergo significant warping or buckling during the pulse wherever the ablation is uneven, throwing it off aim at best or fracturing/disintegrating it at worst.
As you note, there is still a problem.
[The following is NOT me lecturing you directly, it is simply me talking about what you have said, to the general audience]
Basically, if you take a four-thousand-ton system and try to replace its heaviest component with a one-ton object, removing the entire rest of the system... The resulting system is not going to respond in a simple or linear fashion. One cannot simply say "this accelerates four thousand tons by X meters per second, so it should accelerate one ton by 4000X meters per second."
A large, abrupt force that is sufficient to cause a large delta-V in a four-thousand-ton object is very likely to destroy a one-ton object, just as a force large enough to accelerate a two-ton pickup truck will probably destroy a one-pound object that is placed so as to get run over or crushed by the truck.
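To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. The pulse impulse, the push duration, and the masses are my own assumptions chosen purely for illustration (they are not taken from the thread or from the Orion studies); the only point is how the figures scale when the mass changes.

[code]
# Back-of-the-envelope sketch (all numbers here are illustrative assumptions,
# not figures from the thread or from the Orion studies).
# The same impulse per pulse, delivered to two very different masses.

pulse_impulse = 4.0e6 * 10.0   # N*s: assume one pulse gives a 4,000 t
                               # (4.0e6 kg) vehicle roughly 10 m/s
pulse_duration = 1.0e-3        # s: assumed effective duration of the push

for mass_kg, label in [(4.0e6, "full 4,000 t Orion"), (1.0e3, "bare 1 t plate")]:
    delta_v = pulse_impulse / mass_kg    # velocity change from one pulse
    accel = delta_v / pulse_duration     # average acceleration during the push
    print(f"{label}: dv = {delta_v:,.0f} m/s, a = {accel / 9.81:,.0f} g")
[/code]

The absolute values are not to be trusted, but the ratio is the whole story: cutting the mass by a factor of four thousand raises the per-pulse delta-V and the acceleration by that same factor, and it is the acceleration that decides whether the plate survives.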
We had long-running, recurring problems of this nature in matterbeam's orbital tether concept thread, for instance, because he kept designing systems without paying attention to how much force they would be subjected to. This resulted in a huge array of variations on the original design concept, all of which had the same fundamental flaw: at some point, some part of the system had to withstand tremendous forces, forces great enough to accelerate a large payload from (relative) rest to 7 km/s in a short amount of time. While the exact result ("which part of the system gets torn apart") varied from version to version, the broad outlines of the problem with the plan were fairly consistent.
And the fundamental insight that kept getting dodged by repeated attempts to design around the problem was, well... "kicking something from zero to seven kilometers per second in a matter of a couple of minutes is hard, and most things will break if you try to kick them that forcefully."
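For a sense of scale, the same kind of arithmetic applies to the tether case. The two-minute acceleration window and the ten-ton payload below are my assumptions, used only to show what "a couple of minutes" implies; this is not a model of any particular version of the design.

[code]
# Rough scale check for the tether scenario (assumed numbers, for scale only).

delta_v = 7_000.0        # m/s: the zero-to-7-km/s kick from the thread
accel_time = 120.0       # s: "a couple of minutes" (my assumption)
payload_mass = 10_000.0  # kg: assumed 10 t payload, purely illustrative

accel = delta_v / accel_time    # required average acceleration
force = payload_mass * accel    # average force the system must transmit
print(f"a = {accel:.0f} m/s^2 (~{accel / 9.81:.1f} g), "
      f"F = {force / 1e3:.0f} kN sustained for {accel_time:.0f} s")
[/code]

Call it roughly six gees sustained for two minutes, with hundreds of kilonewtons transmitted through whatever connects payload to tether; some part of every version of the design had to carry that loading.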
This is not an especially difficult insight, but it's easy to miss if one is getting bogged down in the weeds of trying to model and re-model and re-re-model the same system over and over with freshman physics equations that don't really apply to it very well.
The same thing seems to be an issue here.
Linear extrapolation can be a good model for engineering systems when you are scaling the system up or down by 10%, or 20%, or even by a factor of two or so. It is almost never a good model when you scale things up or down by a factor of a thousand.