We introduce an optical method based on white light interferometry in order to solve the well-known NP–complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non–polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to N^N for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal–to–noise ratio. The proposed method is meant purely as a gedankenexperiment.
I'm really shaky at this kind of stuff, but as I read it, our encryption schemes are still safe - time may now be linear, but energy is exponential as a result, making it completely infeasible to implement in any form. Is that right?
Bernie would have won
*
Nuclear Navy Warwolf
*
In all things I have sought rest, and nowhere have I found it except in a corner with a book
*
Knowledge itself is power
From what little I understood of the thought experiment, it seems like what is happening here is that a photon is, via physics magic, traversing a path and becoming marked with the path it took. Take an arbitrarily large number of photons, send them all at once, find the one that represents the shortest distance and bingo, you've got your solution. The trick is getting all those photons.
I'm a CS major with relatively little physics education (intro level only), so here's my take:
it looks to me like all that's going on is a (physics-magic-enabled) variant of a parallelized solution to TSP. If you take your O(N!) TSP problem, get yourself N! computers (as an extreme example), and assign each computer a different path whose length it computes, you've turned the 'time complexity' of the problem into that of sorting (O(n log n), which is itself parallelizable) instead of TSP proper (O(N!)). The thing is, we haven't really changed the 'computational complexity' of the problem: we still need to do N! distance calculations. We're just doing them all at once instead of one at a time. The trick, as it seems to be for the optical thought experiment, is getting all those computers.
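For what it's worth, here's a rough software sketch of that parallel brute force in Python, with made-up city coordinates and a process pool standing in for the N! computers:

import itertools
import math
from multiprocessing import Pool

# Hypothetical city coordinates; any small set will do for the illustration.
CITIES = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 6)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Closed tour: start at city 0, visit the cities in 'order', return to 0.
    path = (0,) + order + (0,)
    return sum(dist(CITIES[path[i]], CITIES[path[i + 1]]) for i in range(len(path) - 1))

if __name__ == "__main__":
    tours = list(itertools.permutations(range(1, len(CITIES))))  # (N-1)! candidate tours
    with Pool() as pool:                    # one worker per tour, conceptually
        lengths = pool.map(tour_length, tours)
    best = min(range(len(tours)), key=lengths.__getitem__)
    print("best tour:", (0,) + tours[best] + (0,), "length:", round(lengths[best], 3))

The min() at the end is the "sort" step; all the distance work still happens, it's just spread across workers.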
On a side note: are O(N!) and O(N^N) the same complexity? N! expanded out is N*(N-1)*(N-2)*...*(N-(N-1)) which, if you expand out, becomes (N^N-1 + ... ). Since we customarily drop the non-largest exponent, we get O(N^N).
The Monarch: "Anyone wanna explain to me why my coccoon is charred?"
24: "Because you told us to blow it up"
The Monarch: "And why it is sideways?"
21: "We were following orders! You can't yell at us for following orders."
24: "Or kill us for following orders."
It will turn out that this number of photons is proportional to N^N for a traveling salesman problem with N cities
If you arrange a set of mirrors appropriately, and fire off N^N uniquely identifiable photons such that they will all complete the path, you can simply time the best result.
They pretty much admit to replacing operations with photons in the article.
Similar (but not quite as slick) tricks are done using DNA computing and quantum computing.
EnsGabe wrote:On a side note: are O(N!) and O(N^N) the same complexity? N! expanded out is N*(N-1)*(N-2)*...*(N-(N-1)) which, if you expand out, becomes (N^N-1 + ... ). Since we customarily drop the non-largest exponent, we get O(N^N).
N^N, N!, and X^N are sometimes grouped in the same class of difficulty. Asymptotically, though, X^N is better than N!, which in turn is always better than N^N, and the gaps are not just constant factors.
Xeriar wrote:
If you arrange a set of mirrors appropriately, and fire off N^N uniquely identifiable photons such that they will all complete the path, you can simply time the best result.
Is that what they are doing, or is that yet another analogy to what the experiment is like?
Xeriar wrote:
N^N, N!, and X^N are sometimes grouped in the same class of difficulty. Asymptotically, though, X^N is better than N!, which in turn is always better than N^N, and the gaps are not just constant factors.
Hmm. Thinking about it, N^N becomes 2^( N log N), so I wouldn't strictly put it in the same category as 2^N, unless the class of difficulty is actually "very."
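Written out, that identity is just

N^N = 2^{\log_2(N^N)} = 2^{N \log_2 N}

so the exponent grows like N log N rather than the fixed N you get with 2^N.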
EnsGabe wrote:Is that what they are doing, or is that yet another analogy to what the experiment is like?
They find out what the optimal path-length is, then go through and figure out what path the winning photon took.
Hmm. Thinking about it, N^N becomes 2^( N log N), so I wouldn't strictly put it in the same category as 2^N, unless the class of difficulty is actually "very."
Just NP-Hard, as opposed to EXP-Hard problems a la 2^2^N
But the point remains the same - N! sits between 2^N and N^N in terms of growth rate, and you can't convert between them just by dropping a constant factor.
Correct. Polynomial time is defined as m(n) = O(n^k), where k is a constant.
On a side note: are O(N!) and O(N^N) the same complexity? N! expanded out is N*(N-1)*(N-2)*...*(N-(N-1)) which, if you expand out, becomes (N^N-1 + ... ). Since we customarily drop the non-largest exponent, we get O(N^N).
No, they are not the same.
N! = N * (N - 1) * (N - 2) * ... * (N - (N - 1))
N^N = N * N * N * ... * N
In the factorial case, the most significant exponent is N^(N - 1). Obviously, this will be less than N^N for every positive integer except the trivial case N = 1. N! will never approach N^N.
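One quick way to see that, dividing factor by factor:

\frac{N!}{N^N} = \frac{N}{N} \cdot \frac{N-1}{N} \cdots \frac{2}{N} \cdot \frac{1}{N} \le \frac{1}{N} \to 0

so the ratio doesn't just stay below 1, it goes to zero.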
Damien Sorresso
"Ever see what them computa bitchez do to numbas? It ain't natural. Numbas ain't supposed to be code, they supposed to quantify shit."
- The Onion
Ender wrote:I'm really shaky at this kind of stuff, but as I read it, our encryption schemes are still safe - time may now be linear, but energy is exponential as a result, making it completely infeasible to implement in any form. Is that right?
I'd say the time it takes to build the setup might end up being the sticking point. You need each node to be able to reflect photons differently depending on their tag, and this needs to be built in somehow.
I'm not seeing a way of doing that systematically and spanning the solution space, short of enumerating the paths, which entails doing enough work to solve the problem before you even get started.
Durandal wrote:No, they are not the same. N! = N * (N - 1) * (N - 2) * ... * (N - (N - 1)); N^N = N * N * N * ... * N
In the factorial case, the most significant exponent is N^(N - 1). Obviously, this will be less than N^N for every positive integer except the trivial case N = 1. N! will never approach N^N.
It's even worse than it might appear at first glance: O(n^n) = O(n![e^n/sqrt(n)]), so the factor between them is not even polynomial, but almost exponential.
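That factor is just Stirling's approximation:

n! \sim \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^n \quad\Longrightarrow\quad \frac{n^n}{n!} \sim \frac{e^n}{\sqrt{2\pi n}}

so the gap between n! and n^n itself grows roughly like e^n.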
Oh and as to this solution's implications on computing, as others have said, it's just a different way of doing the computation. If we did a physical experiment to calculate the shortest path, each photon would basically be one of the possible paths in the solution space, and the entire system would act as a giant parallel computing grid and become a non-deterministic machine that could solve the problem in polynomial time. We'd essentially be using the laws of physics as a simulation engine.
I figured out a way around my objection.
You shoot a different frequency laser at each of the N nodes, and see which completes first. That way you get the first node. Then start the lasers at that point and continue. The winner becomes the next starting node. Repeat until you've reduced the problem to triviality.
You can use a polychromatic light source and a prism in the nodes to do the frequency separation.
As I count it, time complexity has been reduced to N^2 (N cycles, with each cycle taking N time to execute), though in practice because light is a hell of a lot faster than your ability to reconfigure the system, it'd be done in N time.
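In code the procedure is basically the nearest-neighbour heuristic; a minimal sketch, assuming a symmetric distance matrix D (the one below is made up):

def nearest_neighbour_tour(D, start=0):
    # Repeatedly hop to the closest unvisited node: the software analogue of
    # "whichever pulse completes first wins" at each stage.
    n = len(D)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda j: D[tour[-1]][j])   # the winning "frequency"
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                                       # close the loop
    return tour, sum(D[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

# Hypothetical 4-node distance matrix, just to exercise the function.
D = [[0, 1, 4, 2],
     [1, 0, 3, 5],
     [4, 3, 0, 1],
     [2, 5, 1, 0]]
print(nearest_neighbour_tour(D))   # each min() call plays the role of one "race" of pulses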
I feel like I've wandered into a Charles Stross novel; he uses this several times in various settings.
Chronological Incontinence: Time warps around the poster. The thread topic winks out of existence and reappears in 1d10 posts.
Out of Context Theatre, this week starring Darth Nostril.
-'If you really want to fuck with these idiots tell them that there is a vaccine for chemtrails.'
drachefly wrote:I figured out a way around my objection.
You shoot a different frequency laser at each of the N nodes, and see which completes first. That way you get the first node. Then start the lasers at that point and continue. The winner becomes the next starting node. Repeat until you've reduced the problem to triviality.
You can use a polychromatic light source and a prism in the nodes to do the frequency separation.
As I count it, time complexity has been reduced to N^2 (N cycles, with each cycle taking N time to execute), though in practice because light is a hell of a lot faster than your ability to reconfigure the system, it'd be done in N time.
Yeah, but now you have the energy problem. Temperature concerns mean it won't really work for anything less than 50, so let's use that as a minimum - assume the standard photon here is 1.8 eV. 50^50 * 1.8 eV, converted to joules, is something like 10^66 joules.
Getting more energy than there is in the universe is a bit tricky.
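Running the numbers (one photon per candidate path as above, 1.8 eV each):

EV_TO_JOULES = 1.602e-19           # one electron-volt in joules
photons = 50 ** 50                 # ~8.9e84 candidate paths
energy = photons * 1.8 * EV_TO_JOULES
print(f"{energy:.1e} J")           # ~2.6e66 J

which is where the ~10^66 J figure comes from.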
drachefly wrote:I figured out a way around my objection.
You shoot a different frequency laser at each of the N nodes, and see which completes first. That way you get the first node. Then start the lasers at that point and continue. The winner becomes the next starting node. Repeat until you've reduced the problem to triviality.
You can use a polychromatic light source and a prism in the nodes to do the frequency separation.
As I count it, time complexity has been reduced to N^2 (N cycles, with each cycle taking N time to execute), though in practice because light is a hell of a lot faster than your ability to reconfigure the system, it'd be done in N time.
Unless I'm misreading, you're proposing a greedy algorithm, which is not guaranteed to give the shortest path. You're finding the closest node from the start, setting that node as the new start and repeating. By visiting the nearest unvisited node from your current node, you can end up being forced to traverse a very long path to the next node. The traveling salesman problem requires visiting every node in a graph by using the shortest overall route.
Basically, you have to pay the piper eventually if you want mathematical certainty. You'll have to take every possible path (somehow) and check which is the shortest.
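A tiny concrete example (made-up coordinates) where the greedy route loses to checking every path:

import itertools
import math

cities = [(0, 0), (1, 0), (4, 0), (1.5, 1)]      # hypothetical 4-city layout

def d(i, j):
    return math.dist(cities[i], cities[j])

def length(path):
    return sum(d(path[k], path[k + 1]) for k in range(len(path) - 1))

# Greedy: always hop to the closest unvisited city, then return home.
tour, todo = [0], {1, 2, 3}
while todo:
    nxt = min(todo, key=lambda j: d(tour[-1], j))
    tour.append(nxt)
    todo.remove(nxt)
tour.append(0)

# Exhaustive: check every tour that starts and ends at city 0.
best = min(((0,) + p + (0,) for p in itertools.permutations([1, 2, 3])), key=length)

print("greedy :", tour, round(length(tour), 3))        # [0, 1, 3, 2, 0], ~8.81
print("optimal:", list(best), round(length(best), 3))  # [0, 1, 2, 3, 0], ~8.50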
You have one frequency for each node, and shine your pulses so they simultaneously arrive at their nodes and proceed. One of them will complete the circuit first. This one is the overall winner. You measure the frequency of that one. That tells you what the first step in the overall-winning path was. Then you knock that one off and repeat.
Ender wrote:Yeah, but now you have the energy problem. Temperature concerns mean it won't really work for anything less than 50
50 what? From context I am hazarding that you mean nodes. I have no idea why you think fewer nodes would be a problem.
This might clobber normal computers on anything from, say, 20 nodes on up (not sure, but I get a factor of 20^20/19^19 >> 10 leeway in either direction if I'm off). No need to demand that it do 50^50/20^20 ~~= 5*10^58 times better.
By the way, and not that it makes much difference compared to the above issue, 1.8 eV for a photon is seriously excessive. Room Temperature is 1/40 eV. And we are going to be cooling this thing, right?
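For reference, the 1/40 eV figure:

K_B_EV = 8.617e-5        # Boltzmann constant in eV per kelvin
T_ROOM = 300             # room temperature, roughly, in kelvin
print(K_B_EV * T_ROOM)   # ~0.026 eV, i.e. about 1/40 eV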
Traditional exhaustive algorithms have to check (n-1)!/2 distinct tours to solve the problem exactly (see the sketch below).
Tree pruning allows this to become somewhat faster (or much faster on some instances). In order for the optical approach to actually become superior, it's going to have to work well and efficiently on at least 26 nodes (considering the sheer immensity of constructing the model in the first place - a supercomputer network will be cheaper until N gets larger than that).
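The (n-1)!/2 count comes from fixing the start city and treating each tour and its mirror image as the same thing; a quick sketch of that enumeration (made-up distance matrix, symmetric distances assumed):

import itertools

def brute_force_tsp(D):
    n = len(D)
    best_len, best_tour = float("inf"), None
    for perm in itertools.permutations(range(1, n)):
        if perm[0] > perm[-1]:
            continue                      # skip the mirror image of a tour already counted
        tour = (0,) + perm + (0,)
        length = sum(D[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

# Made-up 5-city distance matrix, just to exercise the function.
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
print(brute_force_tsp(D))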