Protecting communication networks from malicious hackers

Distributed planning, communication, and control algorithms for autonomous robots make up a major area of research in computer science. But in the literature on multirobot systems, security has gotten relatively short shrift.

In the latest issue of the journal Autonomous Robots, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and their colleagues present a new technique for preventing malicious hackers from commandeering robot teams’ communication networks. The technique could provide an added layer of security in systems that encrypt communications, or an alternative in circumstances in which encryption is impractical.

“The robotics community has focused on making multirobot systems autonomous and increasingly more capable by developing the science of autonomy. In some sense we have not done enough about systems-level issues like cybersecurity and privacy,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper.

“But when we deploy multirobot systems in real applications, we expose them to all the issues that current computer systems are exposed to,” she adds. “If you take over a computer system, you can make it release private data — and you can do a lot of other bad things. A cybersecurity attack on a robot has all the perils of attacks on computer systems, plus the robot could be controlled to take potentially damaging action in the physical world. So in some sense there is even more urgency that we think about this problem.”

Identity theft

Most planning algorithms in multirobot systems rely on some kind of voting procedure to determine a course of action. Each robot makes a recommendation based on its own limited, local observations, and the recommendations are aggregated to yield a final decision.

A natural way for a hacker to infiltrate a multirobot system would be to impersonate a large number of robots on the network and cast enough spurious votes to tip the collective decision, a technique called “spoofing.” The researchers’ new system analyzes the distinctive ways in which robots’ wireless transmissions interact with the environment, to assign each of them its own radio “fingerprint.” If the system identifies multiple votes as coming from the same transmitter, it can discount them as probably fraudulent.
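The aggregation idea can be illustrated with a minimal sketch. The researchers’ actual system derives fingerprints from the physics of each robot’s wireless transmissions; here, fingerprints are simply assumed to be given strings, and each distinct transmitter’s votes are collapsed to a total weight of one, so a spoofer forging many votes from a single radio gains almost nothing. The function name and data shapes are illustrative, not from the paper.

```python
from collections import Counter

def fingerprint_weighted_vote(votes):
    """Aggregate (fingerprint, choice) pairs. Each distinct radio
    fingerprint contributes a total weight of 1, so many votes from
    one transmitter count roughly the same as a single vote."""
    per_fp = Counter(fp for fp, _ in votes)
    tally = Counter()
    for fp, choice in votes:
        tally[choice] += 1.0 / per_fp[fp]
    return tally.most_common(1)[0][0]

# Three honest robots vote "left"; one spoofer forges five "right"
# votes that all share the same radio fingerprint.
votes = [("r1", "left"), ("r2", "left"), ("r3", "left")] + [("spoof", "right")] * 5
print(fingerprint_weighted_vote(votes))  # -> left
```

With naive majority voting the spoofer would win 5–3; with fingerprint weighting its five votes collapse to weight 1.0 against the honest robots’ 3.0.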

“There are two ways to think of it,” says Stephanie Gil, a research scientist in Rus’ Distributed Robotics Lab and a co-author on the new paper. “In some cases cryptography is too difficult to implement in a decentralized form. Perhaps you just don’t have that central key authority that you can secure, and you have agents continually entering or exiting the network, so that a key-passing scheme becomes much more challenging to implement. In that case, we can still provide protection.”

Making a ubiquitous model of decision processes more accurate

Markov decision processes are mathematical models used to determine the best courses of action when both current circumstances and future consequences are uncertain. They’ve had a huge range of applications — in natural-resource management, manufacturing, operations management, robot control, finance, epidemiology, scientific-experiment design, and tennis strategy, just to name a few.

But analyses involving Markov decision processes (MDPs) usually make some simplifying assumptions. In an MDP, a given decision doesn’t always yield a predictable result; it could yield a range of possible results. And each of those results has a different “value,” meaning the chance that it will lead, ultimately, to a desirable outcome.

Characterizing the value of a given decision requires the collection of empirical data, which can be prohibitively time-consuming, so analysts usually just make educated guesses. That means, however, that the MDP analysis doesn’t guarantee the best decision in all cases.

In the Proceedings of the Conference on Neural Information Processing Systems, published last month, researchers from MIT and Duke University took a step toward putting MDP analysis on more secure footing. They show that, by adopting a simple trick long known in statistics but little applied in machine learning, it’s possible to accurately characterize the value of a given decision while collecting much less empirical data than had previously seemed necessary.

In their paper, the researchers described a simple example in which the standard approach to characterizing probabilities would require the same decision to be performed almost 4 million times in order to yield a reliable value estimate.

With the researchers’ approach, it would need to be run 167,000 times. That’s still a big number — except, perhaps, in the context of a server farm processing millions of web clicks per second, where MDP analysis could help allocate computational resources. In other contexts, the work at least represents a big step in the right direction.
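The paper does not spell out its estimator in this article, but one classic statistics trick of exactly this flavor is the median-of-means estimator, sketched below purely as an illustration: split the samples into groups, average each group, and take the median of the group means. Against heavy-tailed reward distributions this is far more robust than the plain sample mean, which is why fewer samples can suffice for a reliable value estimate. All names and numbers here are illustrative assumptions.

```python
import random
import statistics

def median_of_means(samples, k=10):
    """Split the samples into k equal groups, average each group,
    and return the median of those group means. Rare extreme
    outliers can corrupt at most a few groups, so the median of
    the group means stays near the typical value."""
    samples = list(samples)
    random.shuffle(samples)
    size = len(samples) // k
    groups = [samples[i * size:(i + 1) * size] for i in range(k)]
    return statistics.median(statistics.fmean(g) for g in groups)

# A reward stream with a few huge outliers: the plain mean is
# dragged upward, while median-of-means stays near the typical
# reward of about 1.0.
random.seed(0)
rewards = [random.gauss(1.0, 0.1) for _ in range(997)] + [500.0] * 3
plain = statistics.fmean(rewards)        # inflated to roughly 2.5
robust = median_of_means(rewards, k=10)  # stays near 1.0
```

With three outliers and ten groups, at most three group means are contaminated, so the median is always taken from clean groups; the plain mean, by contrast, degrades with every additional outlier.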

“People are not going to start using something that is so sample-intensive right now,” says Jason Pazis, a postdoc at the MIT Laboratory for Information and Decision Systems and first author on the new paper. “We’ve shown one way to bring the sample complexity down. And hopefully, it’s orthogonal to many other ways, so we can combine them.”

Graduate engineering and economics programs ranked No. 1

U.S. News and World Report has again placed MIT’s graduate program in engineering at the top of its annual rankings, continuing a trend that began in 1990, when the magazine first ranked such programs.

The MIT Sloan School of Management also placed highly; it shares with Stanford University the No. 4 spot for the best graduate business program.

This year, U.S. News also ranked graduate programs in the social sciences and humanities. The magazine awarded MIT’s graduate program in economics a No. 1 ranking, along with Harvard University, Princeton University, Stanford, the University of California at Berkeley, and Yale University.

Among individual engineering disciplines, MIT placed first in six areas: biomedical/bioengineering (tied with Johns Hopkins University — MIT’s first-ever No. 1 U.S. News ranking in this discipline); chemical engineering; computer engineering; electrical/electronic/communications engineering; materials engineering; and mechanical engineering (tied with Stanford). The Institute placed second in aerospace/aeronautical/astronautical engineering (tied with Georgia Tech) and nuclear engineering.

In the rankings of graduate programs in business, MIT Sloan moved up one step from its No. 5 spot last year. U.S. News awarded a No. 1 ranking to the school’s specialties in information systems and production/operations, and a No. 2 ranking for supply chain/logistics.

U.S. News does not issue annual rankings for all doctoral programs but revisits many every few years. In its new evaluation of programs in the social science and humanities, the magazine gave MIT’s economics program a No. 1 ranking overall and either first- or second-place rankings for all eight economics specialties listed. MIT’s political science and psychology programs also placed among the top 10 in the nation.

In the magazine’s 2014 evaluation of PhD programs in the sciences, five MIT programs earned a No. 1 ranking: biological sciences (tied with Harvard and Stanford); chemistry (tied with Caltech and Berkeley, and with a No. 1 ranking in the specialty of inorganic chemistry); computer science (tied with Carnegie Mellon University, Stanford, and Berkeley); mathematics (tied with Princeton University, and with a No. 1 ranking in the specialty of discrete mathematics and combinatorics); and physics.

U.S. News bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of programs in the sciences, social sciences, and humanities are based solely on reputational surveys.

Simple method for making smaller microchip patterns

For the last few decades, microchip manufacturers have been on a quest to find ways to make the patterns of wires and components in their microchips ever smaller, in order to fit more of them onto a single chip and thus continue the relentless progress toward faster and more powerful computers. That progress has become more difficult recently, as manufacturing processes bump up against fundamental limits involving, for example, the wavelengths of the light used to create the patterns.

Now, a team of researchers at MIT and in Chicago has found an approach that could break through some of those limits and make it possible to produce some of the narrowest wires yet, using a process with the potential to be economically viable for mass manufacturing with standard types of equipment.

The new findings are reported this week in the journal Nature Nanotechnology, in a paper by postdoc Do Han Kim, graduate student Priya Moni, and Professor Karen Gleason, all at MIT, and by postdoc Hyo Seon Suh, Professor Paul Nealey, and three others at the University of Chicago and Argonne National Laboratory. While there are other methods that can achieve such fine lines, the team says, none of them are cost-effective for large-scale manufacturing.

The new approach includes a technique in which polymer thin films are formed on a surface, first by heating precursors so they vaporize, and then by allowing them to condense and polymerize on a cooler surface, much as water condenses on the outside of a cold drinking glass on a hot day.

“People always want smaller and smaller patterns, but achieving that has been getting more and more expensive,” says Gleason, who is MIT’s associate provost as well as the Alexander and I. Michael Kasser (1960) Professor of Chemical Engineering. Today’s methods for producing features smaller than about 22 nanometers (billionths of a meter) across generally require either extreme ultraviolet light with very expensive optics or building up an image line by line, by scanning a beam of electrons or ions across the chip surface — a very slow process and therefore expensive to implement at large scale.

The new process uses a novel integration of three existing methods. First, a pattern of lines is produced on the chip surface using well-established lithographic techniques, in which an electron beam is used to “write” the pattern on the chip.

A learning app that integrates with email and messaging

Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustrations with waiting a few extra seconds for our emails to push through don’t mean we have to simply stand by.

To help us make the most of these “micro-moments,” researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a series of apps called “WaitSuite” that test you on vocabulary words during idle moments, like when you’re waiting for an instant message or for your phone to connect to WiFi.

Building on micro-learning apps like Duolingo, WaitSuite aims to leverage moments when a person wouldn’t otherwise be doing anything — a practice that its developers call “wait-learning.”

“With stand-alone apps, it can be inconvenient to have to separately open them up to do a learning task,” says MIT PhD student Carrie Cai, who leads the project. “WaitSuite is embedded directly into your existing tasks, so that you can easily learn without leaving what you were already doing.”

WaitSuite covers five common daily tasks: waiting for WiFi to connect, emails to push through, instant messages to be received, an elevator to come, or content on your phone to load. When using the system’s instant messaging app “WaitChatter,” users learned about four new words per day, or 57 words over just two weeks.

Ironically, Cai found that the system actually enabled users to better focus on their primary tasks, since they were less likely to check social media or otherwise leave their app.

WaitSuite was developed in collaboration with MIT Professor Rob Miller and former MIT student Anji Ren. A paper on the system will be presented at ACM’s CHI Conference on Human Factors in Computing Systems next month in Colorado.

WaitSuite’s apps include “WiFiLearner,” which gives users a learning prompt when it detects that their computer is seeking a WiFi connection. Meanwhile, “ElevatorLearner” automatically detects when a person is near an elevator by sensing Bluetooth iBeacons, and then sends users a vocabulary word to translate.

Though the team used WaitSuite to teach vocabulary, Cai says that it could also be used for learning things like math, medical terms, or legal jargon.

“The vast majority of people made use of multiple kinds of waiting within WaitSuite,” says Cai. “By enabling wait-learning during diverse waiting scenarios, WaitSuite gave people more opportunities to learn and practice vocabulary words.”

Still, some types of waiting were more effective than others, making the “switch time” a key factor. For example, users liked “ElevatorLearner” because the wait was typically 50 seconds while opening the flashcard app took only about 10, leaving usable time left over. Doing a flashcard while waiting for WiFi, by contrast, didn’t seem worth it when the connection came up quickly, though users with slow WiFi felt it made the wait less frustrating.
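The switch-time trade-off can be sketched in a few lines. This is not WaitSuite’s actual implementation; the function names, the 10-second switch-time constant, and the event kinds are all illustrative assumptions based on the numbers described above.

```python
FLASHCARD_SWITCH_TIME = 10  # assumed seconds to open and answer a card

def should_prompt(expected_wait_seconds):
    """Prompt a flashcard only when the expected wait leaves usable
    time after the cost of switching attention to the card."""
    return expected_wait_seconds > FLASHCARD_SWITCH_TIME

def on_wait_event(kind, expected_wait_seconds, next_card):
    """Called when a wait begins (kind might be 'wifi', 'elevator',
    'im', 'email', or 'load'). Returns a prompt, or None if the
    wait is too short to be worth interrupting."""
    if should_prompt(expected_wait_seconds):
        return f"[{kind}] Translate: {next_card}"
    return None

print(on_wait_event("elevator", 50, "la bicicleta"))  # prompt shown
print(on_wait_event("wifi", 3, "la bicicleta"))       # None: wait too short
```

A 50-second elevator wait comfortably absorbs the 10-second switch cost, while a 3-second WiFi reconnect does not, matching the user feedback above.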

In the future, the team hopes to test other formats for micro-learning, like audio for on-the-go users. They even picture having the app remind users to practice mindfulness to avoid reaching for their phones in moments of impatience, boredom, or frustration.