In 2006 and 2010, I logged the price of 16 "xiao long bao" (soup) dumplings at the restaurant Nanxiang, located in Shanghai. (This restaurant invented the tasty treat.)
If you recall from my previous post about this, prices were:
1997: 4 RMB
2006: 8 RMB
2010: 12 RMB
This represented an annualized inflation rate of 8%
from 1997 to 2006, and an inflation rate of 10.67%
from 2006 to 2010.
I am (un)happy to report that the price in 2013 is now 20 RMB. From 2010 to 2013, this would represent an annual inflation rate of about 18.6%.
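For anyone who wants to check the arithmetic, these figures are just compound annual growth rates. A minimal sketch in Python, using the prices and years from the posts above:

```python
def annualized_rate(old_price, new_price, years):
    """Compound annual growth rate implied by a price change over `years` years."""
    return (new_price / old_price) ** (1 / years) - 1

# Prices for 16 xiao long bao at Nanxiang, from the posts above.
print(f"1997-2006: {annualized_rate(4, 8, 9):.2%}")    # ~8.01%
print(f"2006-2010: {annualized_rate(8, 12, 4):.2%}")   # ~10.67%
print(f"2010-2013: {annualized_rate(12, 20, 3):.2%}")  # ~18.6%
```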
I mean this in the most unsarcastic way possible: Wow.
I visited Shanghai for the first time in 1997. It was then that I discovered the famous "Nanxiang Xiao Long Bao Restaurant". Xiao Long Bao are commonly known in the US as soup dumplings or steamed dumplings. They are exceedingly bad for you and exceedingly delicious.
I remembered this store in particular because they were so cheap too. This famous store sold 16 pieces for 4 RMB (about 50 cents at the time). The next time I visited, in 2006, the prices had risen so much in 9 years that I decided to take a picture of their menu. 16 pieces now cost 8 RMB. This works out to about 8% inflation per year, which in a fast-growing place like China is not surprising.
(2006 - 16 pieces cost 8RMB)
This year prices have risen to 12 RMB per 16 pieces, an annualized inflation rate of about 10.67%.
(2010 - 16 pieces cost 12 RMB)
The bigger realization is how much a number as small as "10%" compounds. Can you imagine what life would be like if we experienced 10% inflation in the US? I checked interest rates at local banks and they were fairly low, comparable to the US. No wonder the equity and stock markets have been such a bubble over the last 10 years in China! Putting the money in the bank is akin to throwing it away.
Studies have shown that if you leave USB sticks on the ground outside an office building, 60% of them will get picked up and plugged into a computer in the building. If you put the company logo on the sticks, closer to 90% of them will get picked up and plugged in.
USB sticks, as you probably know, can pretend to be CD-ROMs and that means on many Windows systems, the computer will execute an “autorun” binary on the stick, giving it control of your machine. (And many people run as administrator.) While other systems may not do this, almost every system allows a USB stick to pretend to be a keyboard, and as a keyboard it also can easily take full control of your machine, waiting for the machine to be idle so you won’t see it if need be. Plugging malicious sticks into computers is how Stuxnet took over Iranian centrifuges, and yet we all do this.
I wish we could trust unknown USB and Bluetooth devices, but we can’t, not when they can be keyboards and pointing devices and drives we might run code from.
New OS generations have to create a trust framework for plug-in hardware, which includes USB and FireWire and, to a lesser degree, even eSATA.
When we plug in any device that might have power over the machine, the system should ask us if we wish to trust it, and how much. By default, we would give minimum trust to drives, and no trust to pointing devices or keyboards and the like. CD-ROMs would not get the ability to autorun, though that ability could be granted by those willing to take the risk, poor a choice as that is.
Once we grant the trust, the devices should be able to store a provided key. After that, the device can then use this key to authenticate itself and regain that trust when plugged in again. Going forward all devices should do this.
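As a rough illustration of the kind of handshake I have in mind (just a sketch; the real protocol, key sizes, and message framing would need proper design, and the names here are made up), the host provisions a secret at first trust and later challenges the device to prove it still holds it:

```python
import hashlib
import hmac
import os

class TrustedDeviceRegistry:
    """Host-side sketch: hand a secret to a device at first trust, verify it on re-plug."""

    def __init__(self):
        self.keys = {}  # device_id -> provisioned secret

    def provision(self, device_id):
        secret = os.urandom(32)       # secret handed to the device when the user confirms trust
        self.keys[device_id] = secret
        return secret

    def verify(self, device_id, challenge, response):
        secret = self.keys.get(device_id)
        if secret is None:
            return False
        expected = hmac.new(secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def device_respond(secret, challenge):
    """Device side (would live on the cheap authentication chip): answer the host's challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

registry = TrustedDeviceRegistry()
key = registry.provision("keyboard-1234")   # user confirmed trust once
challenge = os.urandom(16)                  # host issues a fresh challenge on re-plug
assert registry.verify("keyboard-1234", challenge, device_respond(key, challenge))
```

A challenge-response exchange like this, rather than sending the key itself, also means a malicious hub in the middle can’t simply replay an old authentication.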
The problem is they currently don’t, and people won’t accept obsoleting all their devices. Fortunately devices that look like writable drives can just have a token placed on the drive. This token would change every time, making it hard to clone.
Some devices can be given a unique identifier, or a semi-unique one. For devices that have any form of serial number, this can be remembered and the trust level associated with it. Most devices at least have a lot of identifiers related to the make and model of device. Trusting this would mean that once you trusted a keyboard, any keyboard of the same make and model would also be trusted. This is not super-secure but prevents generic attacks — attacks would have to be directly aimed at you. To avoid a device trying to pretend to be every type of keyboard until one is accepted, the attempted connection of too many devices without a trust confirmation should lock out the port until a confirmation is given.
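Here is a sketch of what that host-side trust table and lockout rule might look like; the device classes, identifiers, and the threshold of three unconfirmed devices are illustrative choices, not a spec:

```python
from enum import Enum

class Trust(Enum):
    NONE = 0      # default for keyboards and pointing devices
    MINIMAL = 1   # default for drives: storage only, no autorun
    FULL = 2      # explicitly confirmed by the user

class PortPolicy:
    """Trust decisions keyed by serial number, falling back to make/model."""

    MAX_UNCONFIRMED = 3   # illustrative: lock the port after this many unknown devices

    def __init__(self):
        self.by_serial = {}
        self.by_model = {}
        self.unconfirmed = 0
        self.locked = False

    def trust_for(self, serial, make_model, device_class):
        if self.locked:
            return Trust.NONE
        if serial in self.by_serial:
            return self.by_serial[serial]
        if make_model in self.by_model:
            # e.g. "any keyboard of this make and model" is trusted
            return self.by_model[make_model]
        # Unknown device: count it, and lock the port if too many appear
        # without the user ever confirming trust.
        self.unconfirmed += 1
        if self.unconfirmed > self.MAX_UNCONFIRMED:
            self.locked = True
        return Trust.MINIMAL if device_class == "drive" else Trust.NONE

    def confirm(self, serial, make_model, level):
        """User explicitly granted trust; remember it and clear the lockout."""
        self.by_serial[serial] = level
        self.by_model[make_model] = level
        self.unconfirmed = 0
        self.locked = False
```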
The protocol for verification should be simple so it can be placed on an inexpensive chip that can be mass produced. In particular, the industry would mass produce small USB pass-through authentication devices that should cost no more than $1. These devices could be stuck on the plugs of old devices to make it possible for them to authenticate. They could look like hubs, or be truly pass-through.
All of this would make USB attacks harder. In the other direction, I believe as I have written before that there is value in creating classes of untrusted or less trusted hardware. For example, an untrusted USB drive might be marked so that executable code can’t be loaded from it, only classes of files and archives that are well understood by the OS. And an untrusted keyboard would only be allowed to type in boxes that say they will accept input from an untrusted keyboard. You could write the text of emails with the untrusted keyboard, but not enter URLs into the URL bar or passwords into password boxes. (Browser forms would have to indicate that an untrusted keyboard could be used.) In all cases, a mini text-editor would be available for use with the untrusted keyboard, from where one could cut and paste using a trusted device into other boxes.
A computer that as yet has no trusted devices of a given class would have to trust the first one plugged in. That is, if you have a new computer that’s never had a keyboard, it has to trust its first keyboard unless there is another way to confirm trust when that first keyboard is plugged in. Fortunately, mobile devices all have built-in input hardware that can be trusted at manufacture, avoiding this issue.
For an even stronger level of trust, we might want to be able to encrypt the data going through. This stops the insertion of malicious hubs or other MITM intercepts that might try to log keystrokes or other data. Encryption may not be practical in low power devices that need to be drives and send data very fast, but it would be fine for all low speed devices.
Of course, we should not trust our networks, even our home networks. Laptops and mobile devices constantly roam outside the home network where they are not protected, and then come back inside able to attack if trusted. However, some security designers know this and design for this.
Yes, this adds some extra UI the first time you plug something in. But that’s hopefully rare and this is a big gaping hole in the security of most of our devices, because people are always plugging in USB drives, dongles and more.
Earlier this week Georgia Tech announced the Online Master of Science in Computer Science, a MOOC-based degree with a total tuition of about $7000. This degree came out of a collaboration between Sebastian Thrun of Udacity and my dean Zvi Galil, with some significant financial support from AT&T. We've spent several months getting faculty input and buy-in to the program, and we're very excited about taking a new leading role in the MOOCs revolution.
We will roll out slowly, offering smaller-scale courses to corporate affiliates to work out the kinks, with a plan to open to the general public in fall 2014. Read the FAQ to get more information about the program.
It's been fun watching the development of this degree, in particular hearing Sebastian talk about his MOOC 2.0 plans to scale courses with a small amount of expense that we pull from the tuition. No doubt we will have challenges in making this degree truly work at a large scale, but I'm truly bullish that we'll build a self-sustaining, quality Masters program that will reach tens if not hundreds of thousands of students.
Here we go.
Hints from the release this week of the 2014 Mercedes S-Class suggest that it doesn’t have the promised traffic jam assist. Update: Other reports suggest it might still be present.
The S-class only gets major updates infrequently, though an intermediate update will come in 2017.
A story on Auto Express quotes Mercedes as saying “We can do it now, but there are rules in place that we have to accept” but that a fully autonomous car will come before the next full-revision of the S class due in 2021.
Instead, this car features a lanekeep + ACC mode that requires that your hands be “touching” the wheel, and starts complaining if you take your hands off for a while.
This is a setback for what was to be the first commercially released car with traffic jam assist. While the various state laws do not tend to cover cars that provide an autopilot requiring constant visual attention from the driver, Mercedes may have been afraid of the regulatory environment in Europe.
In addition, there has always been a special risk to this approach. Even if you insist to the driver that they must pay attention, they will surely ignore that warning once they get away with occasional inattention — after all, they will send text messages now with no auto-driving at all. Car companies can build a lane-keeping car today, but to stop you from trusting it too much they end up with systems like “keep touching the wheel” or a gaze detector that makes sure you keep watching the road, and people don’t like these systems very much.
Will Volvo and Audi, who have also announced plans for lanekeep+ACC super-cruise cars, also pull back? Cadillac, which actually uses the name super-cruise, has pulled back from their 2015 date while at the same time talking to the press about their testing program.
In other news, the hearings in the Senate yesterday focused mostly on these early technologies, and as expected, both David Strickland of NHTSA and the various industry folks were gung-ho on DSRC for V2V and very eager to recommend that the FCC not be allowed to convert the DSRC spectrum to unlicensed as it wishes to do. Here is a summary of the meeting, which was attended by only a few senators. Both Johnson and Rockefeller surprised me with the skill of their questions. While Johnson was not up on all the ADAS technologies, he was able to see through a number of the industry claims.
Update (May 17): Daniel Lidar emailed me to clarify his views about error-correction and the viability of D-Wave’s approach. He invited me to share his clarification with others—something that I’m delighted to do, since I agree with him wholeheartedly. Without further ado, here’s what Lidar says:
I don’t believe D-Wave’s approach is scalable without error correction. I believe that the incorporation of error correction is a necessary condition in order to ever achieve a speedup with D-Wave’s machines, and I don’t believe D-Wave’s machines are any different from other types of quantum information processing in this regard. I have repeatedly made this point to D-Wave over several years, and I hope that in the future their designs will allow more flexibility in the incorporation of error correction.
Lidar also clarified that he not only doesn’t dispute what Matthias Troyer told me about the lack of speedup of the D-Wave device compared to classical simulated annealing in their experiments, but “fully agrees, endorses, and approves” of it—and indeed, that he himself was part of the team that did the comparison.
In other news, this Hacker News thread, which features clear, comprehending discussions of this blog post and the backstory that led up to it, has helped to restore my faith in humanity.
Two years ago almost to the day, I announced my retirement as Chief D-Wave Skeptic. But—as many readers predicted at the time—recent events (and the contents of my inbox!) have given me no choice except to resume my post. In an all-too-familiar pattern, multiple rounds of D-Wave-related hype have made it all over the world before the truth has had time to put its pants on and drop its daughter off in daycare. And the current hype is particularly a shame, because once one slices through all the layers of ugh—the rigged comparisons, the “dramatic announcements” that mean nothing, the lazy journalists cherry-picking what they want to hear and ignoring the inconvenient bits—there really has been a huge scientific advance this past month in characterizing the D-Wave devices. I’m speaking about the experiments on the D-Wave One installed at USC, the main results of which finally appeared in April. Two of the coauthors of this new work—Matthias Troyer and Daniel Lidar—were at MIT recently to speak about their results, Troyer last week and Lidar this Tuesday. Intriguingly, despite being coauthors on the same paper, Troyer and Lidar have very different interpretations of what their results mean, but we’ll get to that later. For now, let me summarize what I think their work has established.
Evidence for Quantum Annealing Behavior
For the first time, we have evidence that the D-Wave One is doing what should be described as “quantum annealing” rather than “classical annealing” on more than 100 qubits. (Note that D-Wave itself now speaks about “quantum annealing” rather than “quantum adiabatic optimization.” The difference between the two is that the adiabatic algorithm runs coherently, at zero temperature, while quantum annealing is a “messier” version in which the qubits are strongly coupled to their environment throughout, but still maintain some quantum coherence.) The evidence for quantum annealing behavior is still extremely indirect, but despite my “Chief Skeptic” role, I’m ready to accept what the evidence indicates with essentially no hesitation.
So what is the evidence? Basically, the USC group ran the D-Wave One on a large number of randomly generated instances of what I’ll call the “D-Wave problem”: namely, the problem of finding the lowest-energy configuration of an Ising spin glass, with nearest-neighbor interactions that correspond to the D-Wave chip’s particular topology. Of course, restricting attention to this “D-Wave problem” tilts the tables heavily in D-Wave’s favor, but no matter: scientifically, it makes a lot more sense than trying to encode Sudoku puzzles or something like that. Anyway, the group then looked at the distribution of success probabilities when each instance was repeatedly fed to the D-Wave machine. For example, would the randomly-generated instances fall into one giant clump, with a few outlying instances that were especially easy or especially hard for the machine? Surprisingly, they found that the answer was no: the pattern was strongly bimodal, with most instances either extremely easy or extremely hard, and few instances in between. Next, the group fed the same instances to Quantum Monte Carlo: a standard classical algorithm that uses Wick rotation to find the ground states of “stoquastic Hamiltonians,” the particular type of quantum evolution that the D-Wave machine is claimed to implement. When they did that, they found exactly the same bimodal pattern that they found with the D-Wave machine. Finally they fed the instances to a classical simulated annealing program—but there they found a “unimodal” distribution, not a bimodal one. So, their conclusion is that whatever the D-Wave machine is doing, it’s more similar to Quantum Monte Carlo than it is to classical simulated annealing.
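To make the methodology concrete, here is a toy sketch of the kind of experiment described above: generate random Ising instances, run a stochastic solver (plain simulated annealing here, standing in for whatever machine or algorithm is being characterized) many times per instance, and histogram the per-instance success probabilities to see whether the distribution looks unimodal or bimodal. The ring graph, couplings, cooling schedule, and sizes below are invented for illustration; they are nothing like the actual chip topology or the optimized code used in the paper.

```python
import itertools
import math
import random

def random_ising_instance(n, seed):
    """Random +/-1 couplings on a ring of n spins (a toy stand-in for the chip graph)."""
    rng = random.Random(seed)
    return {(i, (i + 1) % n): rng.choice([-1, 1]) for i in range(n)}

def energy(spins, couplings):
    return sum(-J * spins[i] * spins[j] for (i, j), J in couplings.items())

def simulated_annealing(couplings, n, sweeps, rng):
    """Plain single-spin-flip simulated annealing with a linear cooling schedule."""
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for sweep in range(sweeps):
        T = max(0.05, 2.0 * (1 - sweep / sweeps))
        for i in range(n):
            before = energy(spins, couplings)
            spins[i] = -spins[i]
            dE = energy(spins, couplings) - before
            if dE > 0 and rng.random() >= math.exp(-dE / T):
                spins[i] = -spins[i]   # reject the uphill flip
    return energy(spins, couplings)

def success_probability(couplings, n, ground, runs, rng):
    """Fraction of independent anneals that reach the known ground-state energy."""
    return sum(simulated_annealing(couplings, n, 30, rng) == ground
               for _ in range(runs)) / runs

rng = random.Random(0)
n = 14
for seed in range(10):
    couplings = random_ising_instance(n, seed)
    # Brute force the ground energy; only feasible at toy sizes like this.
    ground = min(energy([1 if b else -1 for b in bits], couplings)
                 for bits in itertools.product([0, 1], repeat=n))
    p = success_probability(couplings, n, ground, 20, rng)
    print(f"instance {seed}: success probability {p:.2f}")
```

Collecting these per-instance probabilities over many instances and plotting their histogram is what distinguishes the "one giant clump" (unimodal) picture from the "mostly very easy or very hard" (bimodal) picture described above.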
Curiously, we don’t yet have any hint of a theoretical explanation for why Quantum Monte Carlo should give rise to a bimodal distribution, while classical simulated annealing should give rise to a unimodal one. The USC group simply observed the pattern empirically (as far as I know, they’re the first to do so), then took advantage of it to characterize the D-Wave machine. I regard explaining this pattern as an outstanding open problem raised by their work.
In any case, if we accept that the D-Wave One is doing “quantum annealing,” then despite the absence of a Bell-inequality violation or other direct evidence, it’s reasonably safe to infer that there should be large-scale entanglement in the device. I.e., the true quantum state is no doubt extremely mixed, but there’s no particular reason to believe we could decompose that state into a mixture of product states. For years, I tirelessly repeated that D-Wave hadn’t even provided evidence that its qubits were entangled—and that, while you can have entanglement with no quantum speedup, you can’t possibly have a quantum speedup without at least the capacity to generate entanglement. Now, I’d say, D-Wave finally has cleared the evidence-for-entanglement bar—and, while they’re not the first to do so with superconducting qubits, they’re certainly the first to do so with so many superconducting qubits. So I congratulate D-Wave on this accomplishment. If this had been advertised from the start as a scientific research project—”of course we’re a long way from QC being practical; no one would ever claim otherwise; but as a first step, we’ve shown experimentally that we can entangle 100 superconducting qubits with controllable couplings”—my reaction would’ve been, “cool!” (Similar to my reaction to any number of other steps toward scalable QC being reported by research groups all over the world.)
No Speedup Compared to Classical Simulated Annealing
But of course, D-Wave’s claims—and the claims being made on its behalf by the Hype-Industrial Complex—are far more aggressive than that. And so we come to the part of this post that has not been pre-approved by the International D-Wave Hype Repeaters Association. Namely, the same USC paper that reported the quantum annealing behavior of the D-Wave One, also showed no speed advantage whatsoever for quantum annealing over classical simulated annealing. In more detail, Matthias Troyer’s group spent a few months carefully studying the D-Wave problem—after which, they were able to write optimized simulated annealing code that solves the D-Wave problem on a normal, off-the-shelf classical computer, about 15 times faster than the D-Wave machine itself solves the D-Wave problem! Of course, if you wanted even more classical speedup than that, then you could simply add more processors to your classical computer, for only a tiny fraction of the ~$10 million that a D-Wave One would set you back.
Some people might claim it’s “unfair” to optimize the classical simulated annealing code to take advantage of the quirks of the D-Wave problem. But think about it this way: D-Wave has spent ~$100 million, and hundreds of person-years, optimizing the hell out of a special-purpose annealing device, with the sole aim of solving this one problem that D-Wave itself defined. So if we’re serious about comparing the results to a classical computer, isn’t it reasonable to have one professor and a few postdocs spend a few months optimizing the classical code as well?
As I said, besides simulated annealing, the USC group also compared the D-Wave One’s performance against a classical implementation of Quantum Monte Carlo. And maybe not surprisingly, the D-Wave machine was faster than a “direct classical simulation of itself” (I can’t remember how many times faster, and couldn’t find that information in the paper). But even here, there’s a delicious irony. The only reason the USC group was able to compare the D-Wave One against QMC at all, is that QMC is efficiently implementable on a classical computer! (Albeit probably with a large constant overhead compared to running the D-Wave annealer itself—hence the superior performance of classical simulated annealing over QMC.) This means that, if the D-Wave machine can be understood as reaching essentially the same results as QMC (technically, “QMC with no sign problem”), then there’s no real hope for using the D-Wave machine to get an asymptotic speedup over a classical computer. The race between the D-Wave machine and classical simulations of the machine would then necessarily be a cat-and-mouse game, a battle of constant factors with no clear asymptotic victor. (Some people might conjecture that it will also be a “Tom & Jerry game,” the kind where the classical mouse always gets the better of the quantum cat.)
At this point, it’s important to give a hearing to three possible counterarguments to what I’ve written above.
The first counterargument is that, if you plot both the runtime of simulated annealing and the runtime of the D-Wave machine as functions of the instance size n, you find that, while simulated annealing is faster in absolute terms, it can look like the curve for the D-Wave machine is less steep. Over on the blog “nextbigfuture”, an apparent trend of this kind has been fearlessly extrapolated to predict that with 512 qubits, the D-Wave machine will be 10 billion times faster than a classical computer. But there’s a tiny fly in the ointment. As Troyer carefully explained to me last week, the “slow growth rate” of the D-Wave machine’s runtime is, ironically, basically an artifact of the machine being run too slowly on small values of n. Run the D-Wave machine as fast as it can run for small n, and the difference in the slopes disappears, with only the constant-factor advantage for simulated annealing remaining. In short, there seems to be no evidence, at present, that the D-Wave machine is going to overtake simulated annealing for any instance size.
The second counterargument is that the correlation between the two “bimodal distributions”—that for the D-Wave machine and that for the Quantum Monte Carlo simulation—is not perfect. In other words, there are a few instances (not many) that QMC solves faster than the D-Wave machine, and likewise a few instances that the D-Wave machine solves faster than QMC. Not surprisingly, the latter fact has been eagerly seized on by the D-Wave boosters (“hey, sometimes the machine does better!”). But Troyer has a simple and hilarious response to that. Namely, he found that his group’s QMC code did a better job of correlating with the D-Wave machine, than the D-Wave machine did of correlating with itself! In other words, calibration errors seem entirely sufficient to explain the variation in performance, with no need to posit any special class of instances (however small) on which the D-Wave machine dramatically outperforms QMC.
The third counterargument is just the banal one: the USC experiment was only one experiment with one set of instances (albeit, a set one might have thought would be heavily biased toward D-Wave). There’s no proof that, in the future, it won’t be discovered that the D-Wave machine does something more than QMC, and that there’s some (perhaps specially-designed) set of instances on which the D-Wave machine asymptotically outperforms both QMC and Troyer’s simulated annealing code. (Indeed, I gather that folks at D-Wave are now assiduously looking for such instances.) Well, I concede that almost anything is possible in the future—but “these experiments, while not supporting D-Wave’s claims about the usefulness of its devices, also don’t conclusively disprove those claims” is a very different message than what’s currently making it into the press.
Comparison to CPLEX is Rigged
Unfortunately, the USC paper is not the one that’s gotten the most press attention—perhaps because half of it inconveniently told the hypesters something they didn’t want to hear (“no speedup”). Instead, journalists have preferred a paper released this week by Catherine McGeoch and Cong Wang, which reports that quantum annealing running on the D-Wave machine outperformed the CPLEX optimization package running on a classical computer by a factor of ~3600, on Ising spin problems involving 439 bits. Wow! That sounds awesome! But before rushing to press, let’s pause to ask ourselves: how can we reconcile this with the USC group’s result of no speedup?
The answer turns out to be painfully simple. CPLEX is a general-purpose, off-the-shelf exact optimization package. Of course an exact solver can’t compete against quantum annealing—or for that matter, against classical annealing or other classical heuristics! Noticing this problem, McGeoch and Wang do also compare the D-Wave machine against tabu search, a classical heuristic algorithm. When they do so, they find that an advantage for the D-Wave machine persists, but it becomes much, much smaller (they didn’t report the exact time comparison). Amusingly, they write in their “Conclusions and Future Work” section:
It would of course be interesting to see if highly tuned implementations of, say, tabu search or simulated annealing could compete with Blackbox or even QA [i.e., the D-Wave machines] on QUBO [quadratic binary optimization] problems; some preliminary work on this question is underway.
As I said above, at the time McGeoch and Wang’s paper was released to the media (though maybe not at the time it was written?), the “highly tuned implementation” of simulated annealing that they ask for had already been written and tested, and the result was that it outperformed the D-Wave machine on all instance sizes tested. In other words, their comparison to CPLEX had already been superseded by a much more informative comparison—one that gave the “opposite” result—before it ever became public. For obvious reasons, most press reports have simply ignored this fact.
Troyer, Lidar, and Stone Soup
Much of what I’ve written in this post, I learned by talking to Matthias Troyer—the man who carefully experimented with the D-Wave machine and figured out how to beat it using simulated annealing, and whom I regard as probably the world’s #1 expert right now on what exactly the machine does. Troyer wasn’t shy about sharing his opinions, and while couched in qualifications, they tended toward the extremely skeptical. For example, Troyer conjectured that, if D-Wave ultimately succeeds in getting a speedup over classical computers in a fair comparison, then it will probably be by improving coherence and calibration, incorporating error-correction, and doing other things that “traditional,” “academic” quantum computing researchers had said all along would need to be done.
As I said, Danny Lidar is another coauthor on the USC paper, and also recently visited MIT to speak. Lidar and Troyer agree on the basic facts—yet Lidar noticeably differed from Troyer, in trying to give each fact the most “pro-D-Wave spin” it could possibly support. Lidar spoke at our quantum group meeting, not about the D-Wave vs. simulated annealing performance comparison (which he agrees with), but about a proposal of his for incorporating quantum error-correction into the D-Wave device, together with some experimental results. He presented his proposal, not as a reductio ad absurdum of D-Wave’s entire philosophy, but rather as a positive opportunity to get a quantum speedup using D-Wave’s approach.
So, to summarize my current assessment of the situation: yes, absolutely, D-Wave might someday succeed—ironically, by adapting the very ideas from “the gate model” that its entire business plan has been based on avoiding, and that D-Wave founder Geordie Rose has loudly denigrated for D-Wave’s entire history! If that’s what happens, then I predict that science writers, and blogs like “nextbigfuture,” will announce from megaphones that D-Wave has been vindicated at last, while its narrow-minded, theorem-obsessed, ivory-tower academic naysayers now have egg all over their faces. No one will care that the path to success—through quantum error-correction and so on—actually proved the academic critics right, and that D-Wave’s “vindication” was precisely like that of the deliciousness of stone soup in the old folktale. As for myself, I’ll probably bang my head on my desk until I sustain so much brain damage that I no longer care either. But at least I’ll still have tenure, and the world will have quantum computers.
The Messiah’s Quantum Annealer
Over the past few days, I’ve explained the above to at least six different journalists who asked. And I’ve repeatedly gotten a striking response: “What you say makes sense—but then why are all these prestigious people and companies investing in D-Wave? Why did Bo Ewald, a prominent Silicon Valley insider, recently join D-Wave as president of its US operations? Why the deal with Lockheed Martin? Why the huge deal with NASA and Google, just announced today? What’s your reaction to all this news?”
My reaction, I confess, is simple. I don’t care—I actually told them this—if the former Pope Benedict has ended his retirement to become D-Wave’s new marketing director. I don’t care if the Messiah has come to Earth on a flaming chariot, not to usher in an age of peace but simply to spend $10 million on D-Wave’s new Vesuvius chip. And if you imagine that I’ll ever care about such things, then you obviously don’t know much about me. I’ll tell you what: if peer pressure is where it’s at, then come to me with the news that Umesh Vazirani, or Greg Kuperberg, or Matthias Troyer is now convinced, based on the latest evidence, that D-Wave’s chip asymptotically outperforms simulated annealing in a fair comparison, and does so because of quantum effects. Any one such scientist’s considered opinion would mean more to me than 500,000 business deals.
The Argument from Consequences
Let me end this post with an argument that several of my friends in physics have explicitly made to me—not in the exact words below but in similar ones.
“Look, Scott, let the investors, government bureaucrats, and gullible laypeople believe whatever they want—and let D-Wave keep telling them whatever’s necessary to stay in business. It’s unsportsmanlike and uncollegial of you to hold D-Wave’s scientists accountable for whatever wild claims their company’s PR department might make. After all, we’re in this game too! Our universities put out all sorts of overhyped press releases, but we don’t complain because we know that it’s done for our benefit. Besides, you’d doubtless be trumpeting the same misleading claims, if you were in D-Wave’s shoes and needed the cash infusions to survive. Anyway, who really cares whether there’s a quantum speedup yet or no quantum speedup? At least D-Wave is out there trying to build a scalable quantum computer, and getting millions of dollars from Jeff Bezos, Lockheed, Google, the CIA, etc. etc. to do so—resources more of which would be directed our way if we showed a more cooperative attitude! If we care about scalable QCs ever getting built, then the wise course is to celebrate what D-Wave has done—they just demonstrated quantum annealing on 100 qubits, for crying out loud! So let’s all be grownups here, focus on the science, and ignore the marketing buzz as so much meaningless noise—just like a tennis player might ignore his opponent’s trash-talking (‘your mother is a whore,’ etc.) and focus on the game.”
I get this argument: really, I do. I even concede that there’s something to be said for it. But let me now offer a contrary argument for the reader’s consideration.
Suppose that, unlike in the “stone soup” scenario I outlined above, it eventually becomes clear that quantum annealing can be made to work on thousands of qubits, but that it’s a dead end as far as getting a quantum speedup is concerned. Suppose the evidence piles up that simulated annealing on a conventional computer will continue to beat quantum annealing, if even the slightest effort is put into optimizing the classical annealing code. If that happens, then I predict that the very same people now hyping D-Wave will turn around and—without the slightest acknowledgment of error on their part—declare that the entire field of quantum computing has now been unmasked as a mirage, a scam, and a chimera. The same pointy-haired bosses who now flock toward quantum computing, will flock away from it just as quickly and as uncomprehendingly. Academic QC programs will be decimated, despite the slow but genuine progress that they’d been making the entire time in a “parallel universe” from D-Wave. People’s contempt for academia is such that, while a D-Wave success would be trumpeted as its alone, a D-Wave failure would be blamed on the entire QC community.
When it comes down to it, that’s the reason why I care about this matter enough to have served as “Chief D-Wave Skeptic” from 2007 to 2011, and enough to resume my post today. As I’ve said many times, I really, genuinely hope that D-Wave succeeds at building a QC that achieves an unambiguous speedup! I even hope the academic QC community will contribute to D-Wave’s success, by doing careful independent studies like the USC group did, and by coming up with proposals like Lidar’s for how D-Wave could move forward. On the other hand, in the strange, unlikely event that D-Wave doesn’t succeed, I’d like people to know that many of us in the QC community were doing what academics are supposed to do, which is to be skeptical and not leave obvious questions unasked. I’d like them to know that some of us simply tried to understand and describe what we saw in front of us—changing our opinions repeatedly as new evidence came in, but disregarding “meta-arguments” like my physicist friends’ above. The reason I can joke about how easy it is to bribe me is that it’s actually kind of hard.
Some interesting robocar surveys are out.
Today, a survey conducted by Cisco showed very high numbers of people saying yes, they would ride in a robocar: 57% said yes globally, with 60% in the USA and an incredible 95% in Brazil. (Perhaps it is the truly horrible traffic in the big cities of Brazil which drives this number.) A bit more surprising was the 28% number for Japan.
When they asked people if they would put their kids in such a car, the answer was lower, but only slightly lower, which surprises me, as I felt it should take a bit more demonstrated trust for people to do that. The reality is that if 60% are saying yes right now, without having seen the technology at all, the real number is actually quite a bit higher.
The Japanese number is also curious, since our stereotype is that the Japanese are the people most accepting of robotics in the world.
A British survey reported similar results, with the highest desire in London — possibly also related to the amount of traffic.
Another survey from the UK asked the question “which company would you trust to improve car safety” with astonishing results. The winner was Apple, which has no announced car safety plans, with Google in 2nd place. What is shocking is that Volvo comes 3rd — really a close tie with Google, and Mercedes 4th. Volvo’s entire brand is to be the car safety leader, and Mercedes has been trying to take that status away, but I would never have guessed that the silicon valley tech companies would win this.
It’s even more surprising that Apple beats Google. While Apple certainly has a quality brand, Google is the only one known to be working on cars and safety. One has to wonder just how the questions were put to these new-car buyers.
Yesterday’s KALW radio show went pretty well, the phone-in questions were pretty reasonable. The MP3 is up on their site.
Problem: On Mother's Day (May 12 this year) restaurants are very crowded because many people take their mothers, grandmothers, great-grandmothers, etc. out to lunch. (Grandparents Day is in September, but I think most people ignore that and honor their grandmothers on Mother's Day and their grandfathers on Father's Day.)
My solution: Take mom out to lunch the FOLLOWING week. Some of my friends tell me NO- you can't just MOVE Mother's Day- what are you--- The Master of Space and Time? The key is that my mom AGREES with me and in fact raised me with these values: (1) Never do X when everyone else is doing X, it's too crowded, and (2) Learn the polynomial VDW theorem.
While this solution may work for me, it may not work for everyone. Here are some options to alleviate the restaurant crunch:
- Declare the second WEEKEND in May to be MOTHERS WEEKEND. People take their moms out to lunch SATURDAY or SUNDAY. This would split the restaurant load in half.
- Declare May MOTHERS MONTH. People take their moms out to lunch ONE Sunday in May. This would split the restaurant load by 4.
- Declare May MOTHERS MONTH. People take their moms out to lunch ONE Saturday OR Sunday in May. This would split the restaurant load by 8.
- Declare May MOTHERS MONTH. People take their moms out to breakfast OR lunch OR Dinner ONE Saturday OR Sunday in May. This would split the restaurant load by 24.
How would people DECIDE which day to do it:
- The last day of April, have mom either (depending on which of the above schemes) flip a coin, roll a 4-sided die, roll an 8-sided die, or roll two 12-sided dice to determine which day to be taken to lunch. Fortunately, due to the Dungeons-and-Dragons craze that girls got into about 40 years ago, most mothers have these dice. But in case she does not, here is a nice MATH PROBLEM (I am sure already solved): USE fair coins and fair 6-sided dice to simulate other random choices fairly. In our case 4-sided, 8-sided, and 24-sided (see the sketch after this list). Which random choices can be simulated? Which can't?
- Say we do the Saturday/Sunday/breakfast/lunch/dinner solution. Everyone with a last name beginning with A goes to breakfast on the first Saturday. Everyone with a last name beginning with B goes to lunch on the first Saturday. Etc. There are only 24 meal slots and 26 letters, so merge P and Q, and merge Y and Z.
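For anyone who wants to play with the dice question, here's a small sketch of one direction (my own illustration, not a full solution): a d4 is two coin flips, a d8 is three, and a fair 24-way choice can be built exactly from one d6 roll combined with two coin flips, since 24 = 6 x 4. More generally, I believe a fair n-way choice can be simulated exactly in a bounded number of throws exactly when n has no prime factors other than 2 and 3; anything else (a fair 5-way choice, say) requires rejection sampling.

```python
import random

def flip():
    """Fair coin: 0 or 1."""
    return random.randrange(2)

def d6():
    """Fair six-sided die: 1..6."""
    return random.randrange(1, 7)

def d4():
    # Two coin flips give 4 equally likely outcomes.
    return 2 * flip() + flip() + 1

def d8():
    # Three coin flips give 8 equally likely outcomes.
    return 4 * flip() + 2 * flip() + flip() + 1

def d24():
    # 24 = 6 * 4: one d6 roll combined with two coin flips.
    return (d6() - 1) * 4 + 2 * flip() + flip() + 1

# d4 picks one of 4 Sundays, d8 one of 8 weekend days,
# d24 one of 24 (weekend day, meal) slots.
print(d4(), d8(), d24())
```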
How likely is any of this to come about? It would need to evolve naturally as a social custom. It also would have to not be that hard to implement. As such, the 24-meal plan probably won't catch on. Also, if Mother's Day becomes Mother's one-of-24-meals-day it may lose something. Hence the 2-meal-plan solution is probably the best.
However, the entire tradition of taking mom out to lunch on Mother's Day may fade. The origin is that mom cooks for the family most days, so on this ONE day they take her out. Nice! But more and more households share responsibilities (NOTE- I have no facts or stats to back this up, but it has a certain truthiness about it), hence the notion of taking mom out to lunch may seem more and more odd over time. Then again, it's still nice being taken out to lunch.
I will be a guest on Monday the 13th (correction — I originally said the 14th) on the “City Visions” program, produced by one of San Francisco’s NPR affiliates, KALW. The show runs at 7pm, and you can listen live and phone in (415-841-4134), or listen to the podcast later. Details are on the page about the show.
Other guests include Bryant Walker Smith of Stanford, Martin Sierhuis of the Nissan robocar lab and Bernard Soriano from the California DMV. Should be a good panel.
In other news — it develops fast these days:
- South Carolina has introduced a robocar bill. Here is the text.
- The Senate Commerce, Science and Transportation Committee will be holding hearings on robocars on May 15. Expect to see more of this.
Here’s a roundup of various recent news items on robocars. There are now a few locations, such as DriverlessCarHQ and the LinkedIn self-driving car group which feature very extensive listing of news items related to robocars. Robocars are now getting popular enough that there are articles every day, but only a few of them contain actual real news for readers of this site or others up on the technology.
An offhand remark from Elon Musk reveals he is interested in an “autopilot” some day for Tesla models, and has spoken to Google about it. Google declined comment. Musk says he wants a cheaper, camera based system, a surprising mistake for him. (Cameras are indeed much cheaper but not yet up to the task. LIDARs are super expensive but Musk’s mistake is in not remembering that electronics technology that’s expensive in early, small volume models does not stay expensive.)
The Tesla Model S is not a good car to make into a robocar though — it’s super fun to drive, and that’s part of why you pay so much money for it. Nothing wrong with fun to drive cars, but you should automate the boring car and leave the fun car on manual, at least for now.
Shuttles driven by maps
The Cybergo, made by French company Induct, is a low speed robotic shuttle for campus use. Particularly interesting is that it drives using a laser and mapping for localization — similar to the Google car and other DARPA challenge cars. It is able to mingle with pedestrians by virtue of just going slow enough to be able to stop in time and be safe.
More states are debating robocar laws. This wiki at Stanford has a good summary of existing actions. Notable is that Oregon pulled back on their law, and New York has just drafted theirs. Massachusetts is also working on one.
The Oregon pullback is notable because one of the cited reasons was the desire to study V2V. While I have written recently on issues with V2V this moves it out of the “mostly harmless” category. V2V efforts will be useful for robocars, but not for decades, and I strongly believe it would be extreme folly to allow V2V issues to affect the progress of robocars.
Unlike Nevada’s law, many of the other state laws do not cover unmanned operation. While the reasons for this are obvious, because it’s harder to understand unmanned operation in the context of existing law, we should not forget that unmanned operation is where most of the real benefits of robocars accrue — self-delivery, mobility on demand, parking, self-refueling, service to the elderly and disabled and much more. Not that manned operation is a slouch, offering the reduced accidents and recovery of productive time as benefits.
California’s DMV recently held hearings in Sacramento as part of their process of writing the regulations required by the California bill, passed in 2012. The regulations need to be done by 2015 but may be done sooner. The US DOT also solicited comments last month.
Google hits 500,000
I noted earlier that Google announced it had hit 500,000 miles of autonomous operation on ordinary streets. Even more notable was chief engineer Chris Urmson’s report of over 90,000 miles without a safety-critical incident. (This is an incident where the safety drivers had to take over where the vehicle would have probably caused an accident.) That’s not as good as a human yet — humans have an accident about every 250,000 miles in the USA, but it’s getting much closer. 500,000 miles, by the way, is more than the distance to the moon and back — Google [X] always talks about moonshots — and more than many people will drive in their lifetimes.
Cadillac & Car Companies
Cadillac has pushed back the supposed 2015 delivery for their “super cruise” product. It now will come later in the decade. Car maker conservatism is to be expected, but other makers are pushing their dates forward. The Mercedes 2014 S-Class is still on track to be first.
BMW has announced a partnership with Continental, the major auto parts supplier. Continental has been pushing their cruising car for a while — I’ve ridden in it — but BMW has its own impressive effort in ConnectedDrive Connect. Today, it is quite common for systems branded by a car maker to actually be made entirely by a supplier, who gives up the branding and limelight for money. It will be interesting to see how this collaboration works. They will be testing on the autobahn.
Car company date forecasts continue to be long term, with dates in the range of 2025 for full autonomy as cited by BMW.
Bosch, another top supplier, has been making its own announcements of advanced sensors and other tools.
Princeton slide deck
Many more papers and reports on robocars are being written. This slide deck from Princeton PAVE’s Kornhauser is notable for providing a number of worthwhile statistics on road use and related issues.
Fake Google Car in New York
A crew created a fake Google car and drove it around NYC. What’s impressive is how many people thought they were seeing the real thing.
While there have been scores of articles, I will point to my friend Virginia Postrel’s Bloomberg article on Silicon Valley and robocars since I was her prime source — so it must be good. :-)
A nice trick from Daimler which I liked — a system to be kind to pedestrians as they walk down the street near parked robocars that sense them. Their plan is to light the way for these pedestrians as a favour.
Whole magazine issue
The military magazine Mission Critical has devoted an entire issue to civilian robocars which includes an article on insurance by Guy Fraker (formerly of State Farm) and a few other items of interest.
More news to come. I have also updated my Robocar Teams page with more details on teams around the world building robocars.
Back around 1980, I used to write
computer games for the Apple II. Plotting a point on the Apple II screen required dividing by 7, a lengthy process for the 6502 microprocessor. Asking around, we learned how to make division by 7 much faster--lookup tables.
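To illustrate the trick (in Python rather than 6502 assembly, and from memory of the hi-res screen layout): precompute quotient and remainder tables indexed by the pixel column, so plotting a point needs only two table lookups instead of a division routine.

```python
# The Apple II hi-res screen is 280 pixels wide, with 7 pixels packed per screen byte,
# so plotting column x needs x // 7 (which byte) and x % 7 (which bit within it).
SCREEN_WIDTH = 280
DIV7 = [x // 7 for x in range(SCREEN_WIDTH)]   # precomputed quotients
MOD7 = [x % 7 for x in range(SCREEN_WIDTH)]    # precomputed remainders

def byte_and_bit(x):
    """Return (byte offset within the row, bit position) for pixel column x."""
    return DIV7[x], MOD7[x]

print(byte_and_bit(100))   # (14, 2)
```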
As computer gaming got more intense in the decades that followed, we first had graphics cards designed to speed up the process and later Graphics Processing Units
or GPUs, dedicated processors devoted to graphics.
Around the turn of the century, people started using GPUs for more than just graphics. GPUs did certain kinds of vector manipulation quickly and one could use these for a variety of mostly scientific computing. But GPUs weren't really well designed for other purposes. Following the cupholder principle, GPUs began to evolve to offer easier-to-access APIs from more common programming languages, becoming General Purpose GPUs or GPGPUs. Several systems researchers at Georgia Tech and elsewhere are now redesigning chip layouts to make the best, most efficient use of CPUs and GPGPUs.
The theory community doesn't seem to have caught on yet. There should be some nice theoretical model that captures the vector and other operations of a GPGPU, and then we should search for algorithms that make the best use of the hardware. The theory world writes algorithms for non-existent quantum computers but not for the machines that we currently use.
You’ve probably noticed that with many of our portable devices, especially phones and tablets, a large fraction of the size and weight are the battery. Battery technology keeps improving, and costs go down, and there are dreams of fancy new chemistries and even ultracapacitors, but this has become a dominant issue.
Every device seems to have a different battery. Industrial designers work very hard on the design of their devices, and they don’t want to be constrained by having to standardize the battery space. In many devices, they are even giving up the replaceable battery in the interests of good design. The existing standard battery sizes, such as the AA, AAA and even the AAAA and other less common sizes are just not suitable for a lot of our devices, and while cylindrical form factors make the most sense for many cell designs they don’t fit well in the design of small devices.
So what’s holding back a new generation of standardization in batteries? Is it the factors named above, the fact that tech is changing rapidly, or something else?
I would propose a small, thin modular battery that I would call the EStick, for energy stick. The smaller EStick sizes would be thin enough for cell phones. The goal would be to have more than one EStick, or at least more than one battery, in a typical device. Because of the packaging and connections, that would mean a modest reduction in battery capacity — normally a horrible idea — but some of the advantages might make it worth it.
There are several reasons to have multiple sticks or batteries in a device. In particular, you want the ability to quickly and easily swap at least one stick while the device is still operating, though it might switch to a lower power mode during the swap. The stick slot would have a spring loaded snap, as is common in many devices like cameras, though there may be desire for a door in addition.
Swapping presents the issue that not all the cells are at the same charge level and voltage. This is generally a bad thing, but modern voltage-control electronics has reached the level where handling it should be possible with smaller and smaller circuitry. With some devices it is possible to simply use one stick at a time, as long as that provides enough current. This uses up the battery lifetime faster, and means less capacity, but is simpler.
The quick hot swap offers the potential for indefinite battery life. In particular, it means that very small devices, such as wearable computers (watches, glasses and the like) could run a long time. They might run only 3-4 hours on a single stick, but a user could keep a supply of sticks in a pocket or bag to get arbitrary lifetime. Tiny devices that nobody would ever use because “that would only last 2 hours” could become practical.
While 2 or more sticks would be best for swapping, a single stick and an internal battery or capacitor, combined with a sleep mode that can survive for 20-30 seconds without a battery, could be OK.