
WannaCry ransomware targets Microsoft Windows

The ransomware program WannaCry, launched on May 12, targets the Microsoft Windows operating system. While this malware has infected over 200,000 computers worldwide, the attack affected only around 100 of the 50,000 devices on the MIT network.

This limited impact is due to the many security services provided to the community by MIT Information Systems and Technology (IS&T).

“MIT values an open network to foster research, innovation and collaborative learning,” says IS&T Associate Vice President Mark Silis. “We continuously strive to balance potential security risks with the benefits of our open network environment by offering a number of security services to our community, including Sophos anti-virus, CrowdStrike anti-malware, and CrashPlan backup.

“IS&T staff are working with faculty, staff, and students to secure their devices and address any remaining issues related to WannaCry. In the weeks ahead, our department will continue to educate and advise the MIT community.”

A post on the Cisco Talos blog provides in-depth technical details about the WannaCry ransomware attack.

Preventive measures

IS&T strongly recommends that community members take this opportunity to make sure their Windows machines are fully patched, especially with the MS17-010 Security Update. Microsoft has even released patches for Windows XP, Windows 8, and Windows Server 2003, which are no longer officially supported.

In addition, IS&T recommends installing Sophos and CrowdStrike. These programs successfully block the execution of WannaCry ransomware on machines where they have been installed. A third program, CrashPlan, is also recommended. This cloud-based offering, which runs continuously in the background, securely encrypts and backs up data on computers. Should files be lost due to ransomware or a computer breakdown, restoring data is straightforward.

IS&T offers these three programs to the MIT community at no cost and can help with installation questions. The department also encourages users to enable operating system firewalls on computers and laptops.

Drones that drive suggest another approach to developing flying cars

Being able to both walk and take flight is typical in nature — many birds, insects, and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: Imagine machines that could fly into construction areas or disaster zones that aren’t near roads and then squeeze through tight spaces on the ground to transport objects or rescue people.

The problem is that robots that are good at one mode of transportation are usually bad at another. Airborne drones are fast and agile, but their batteries are generally too limited for long-distance travel. Ground vehicles, on the other hand, are more energy efficient, but slower and less mobile.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are aiming to develop robots that can both maneuver around on land and take to the skies. In a new paper, the team presented a system of eight quadcopter drones that can fly and drive through a city-like setting with parking spots, no-fly zones, and landing pads.

“The ability to both fly and drive is useful in environments with a lot of barriers, since you can fly over ground obstacles and drive under overhead obstacles,” says PhD student Brandon Araki, lead author on the paper. “Normal drones can’t maneuver on the ground at all. A drone with wheels is much more mobile while having only a slight reduction in flying time.”

Araki and CSAIL Director Daniela Rus developed the system, along with MIT undergraduate students John Strang, Sarah Pohorecky, and Celine Qiu, and Tobias Naegeli of ETH Zurich’s Advanced Interactive Technologies Lab. The team presented their system at IEEE’s International Conference on Robotics and Automation (ICRA) in Singapore earlier this month.

How it works

The project builds on Araki’s previous work developing a “flying monkey” robot that crawls, grasps, and flies. While the monkey robot could hop over obstacles and crawl about, there was still no way for it to travel autonomously.

To address this, the team developed various “path-planning” algorithms designed to keep the drones from colliding with one another. To make them capable of driving, the team put two small motors with wheels on the bottom of each drone. In simulations, the robots could fly for 90 meters or drive for 252 meters before their batteries ran out.

Adding the driving components slightly reduced the drone’s battery life, cutting its maximum flight distance by 14 percent, to about 300 feet. But because driving is far more energy efficient than flying, the range gained by driving more than offsets the small loss in flight range caused by the extra weight.
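To make the idea concrete, here is a minimal path-planning sketch in Python. It is not the CSAIL team’s algorithm; it only illustrates the tradeoff the researchers describe, in which driving is cheap on the battery while flying costs more but can cross ground obstacles. The grid, the obstacle layout, and the per-cell energy costs are all made up for illustration.

```python
# Minimal sketch of mixed-mode (fly/drive) path planning on a grid.
# NOT the CSAIL team's algorithm -- just an illustration of the idea that
# driving costs less energy per cell than flying, while flying can cross
# ground obstacles. Grid, costs, and obstacle layout are invented.
import heapq

DRIVE_COST = 1.0   # hypothetical energy units per cell when driving
FLY_COST = 2.8     # flying drains the battery faster (~252 m vs ~90 m range)

def plan(grid, start, goal):
    """Dijkstra over grid cells; edge cost depends on locomotion mode."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            # '#' marks a ground obstacle: it can only be flown over.
            step = FLY_COST if grid[nr][nc] == "#" else DRIVE_COST
            nd = d + step
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

grid = ["....#....",
        "..###....",
        "....#..#.",
        "........."]
path, energy = plan(grid, (0, 0), (3, 8))
print(f"energy used: {energy:.1f}, path length: {len(path)} cells")
```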

“This work provides an algorithmic solution for large-scale, mixed-mode transportation and shows its applicability to real-world problems,” says Jingjin Yu, a computer science professor at Rutgers University who was not involved in the research.

The team also tested the system using everyday materials such as pieces of fabric for roads and cardboard boxes for buildings. They tested eight robots navigating from a starting point to an ending point on a collision-free path, and all were successful.

Rus says that systems like theirs suggest that another approach to creating safe and effective flying cars is not to simply “put wings on cars,” but to build on years of research in adding driving capabilities to drones.

Low-cost hardware offers alternative to 3-D scanners

Last year, a team of forensic dentists got authorization to perform a 3-D scan of the prized Tyrannosaurus rex skull at the Field Museum of Natural History in Chicago, in an effort to try to explain some strange holes in the jawbone.

Upon discovering that their high-resolution dental scanners couldn’t handle a jaw as big as a tyrannosaur’s, they contacted the Camera Culture group at MIT’s Media Lab, which had recently made headlines with a prototype system for producing high-resolution 3-D scans.

The prototype wasn’t ready for a job that big, however, so Camera Culture researchers used $150 in hardware and some free software to rig up a system that has since produced a 3-D scan of the entire five-foot-long T. rex skull, which a team of researchers — including dentists, anthropologists, veterinarians, and paleontologists — is using to analyze the holes.

The Media Lab researchers report their results in the latest issue of the journal PLOS ONE.

“A lot of people will be able to start using this,” says Anshuman Das, a research scientist at the Camera Culture group and first author on the paper. “That’s the message I want to send out to people who would generally be cut off from using technology — for example, paleontologists or museums that are on a very tight budget. There are so many other fields that could benefit from this.”

Das is joined on the paper by Ramesh Raskar, a professor of media arts and sciences at MIT, who directs the Camera Culture group, and by Denise Murmann and Kenneth Cohrn, the forensic dentists who launched the project.

The system uses a Microsoft Kinect, a depth-sensing camera designed for video gaming. The Kinect’s built-in software produces a “point cloud,” a 3-D map of points in a visual scene from which short bursts of infrared light have been reflected back to a sensor. Free software called MeshLab analyzes the point cloud and infers the shape of the surfaces that produced it.
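For readers who want to try something similar, the sketch below shows the same pipeline idea in Python: load a depth-camera point cloud, estimate normals, and fit a surface. It uses the open-source Open3D library rather than the MeshLab GUI the researchers used, and the file names are placeholders.

```python
# Illustrative sketch: turning a depth-camera point cloud into a surface mesh.
# The researchers used MeshLab for this step; this shows the same idea
# (normal estimation + Poisson surface reconstruction) with Open3D instead.
# File names are placeholders.
import open3d as o3d

# Load a point cloud exported from the depth camera (e.g., a .ply file).
pcd = o3d.io.read_point_cloud("kinect_scan.ply")

# Poisson reconstruction needs per-point normals; estimate them from neighbors.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Fit a surface to the oriented points; a higher depth gives a finer mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("skull_mesh.ply", mesh)
print(f"reconstructed mesh with {len(mesh.triangles)} triangles")
```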

A high-end commercial 3-D scanner costs tens of thousands of dollars and has a depth resolution of about 50 to 100 micrometers. The Kinect’s resolution is only about 500 micrometers, but it costs roughly $100. And 500 micrometers appears to be good enough to shed some light on the question of the mysterious holes in the jaw of the T. rex skull.

Cretaceous conundrum

Discovered in 1990, the Field Museum’s T. rex skeleton, known as Sue, is the largest and most complete yet found. For years, it was widely assumed that the holes in the jaw were teeth marks, probably from an attack by another tyrannosaur. Ridges of growth around the edges of the holes show that Sue survived whatever caused them.

But the spacing between the holes is irregular, which is inconsistent with bite patterns. In 2009, a group of paleontologists from the University of Wisconsin suggested that the holes could have been caused by a protozoal infection, contracted from eating infected prey, that penetrated Sue’s jaw from the inside out.

The 3-D scan produced by the MIT researchers and their collaborators casts doubt on both of these hypotheses. It shows that the angles at which the holes bore through the jaw are inconsistent enough that they almost certainly weren’t caused by a single bite. But it also shows that the holes taper from the outside in, which undermines the hypothesis of a mouth infection.

New generation of computers

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).
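A rough back-of-the-envelope calculation illustrates why the off-chip link, rather than the processor, becomes the limiting factor. The figures in this sketch are assumed for illustration only, not measurements from the paper.

```python
# Back-of-the-envelope illustration of the compute/memory "bottleneck."
# All numbers below are rough, assumed figures for illustration only.
DATA_BYTES = 1e9            # 1 GB of data to analyze
LINK_BANDWIDTH = 25e9       # ~25 GB/s off-chip link between memory and CPU
CHIP_THROUGHPUT = 500e9     # ~500 GB/s the processor could consume internally

transfer_time = DATA_BYTES / LINK_BANDWIDTH   # time spent just moving data
compute_time = DATA_BYTES / CHIP_THROUGHPUT   # time spent actually processing

print(f"moving the data:  {transfer_time * 1e3:.0f} ms")
print(f"processing it:    {compute_time * 1e3:.0f} ms")
# Data movement dominates -- which is what stacking memory directly on top of
# logic, connected by dense vertical wires, is meant to address.
```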

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making it the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

Photon-photon interactions at room temperature

Ordinarily, light particles — photons — don’t interact. If two photons collide in a vacuum, they simply pass through each other.

An efficient way to make photons interact could open new prospects for both classical optics and quantum computing, an experimental technology that promises large speedups on some types of calculations.

In recent years, physicists have enabled photon-photon interactions using atoms of rare elements cooled to very low temperatures.

But in the latest issue of Physical Review Letters, MIT researchers describe a new technique for enabling photon-photon interactions at room temperature, using a silicon crystal with distinctive patterns etched into it. In physics jargon, the crystal introduces “nonlinearities” into the transmission of an optical signal.

“All of these approaches that had atoms or atom-like particles require low temperatures and work over a narrow frequency band,” says Dirk Englund, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper. “It’s been a holy grail to come up with methods to realize single-photon-level nonlinearities at room temperature under ambient conditions.”

Joining Englund on the paper are Hyeongrak Choi, a graduate student in electrical engineering and computer science, and Mikkel Heuck, who was a postdoc in Englund’s lab when the work was done and is now at the Technical University of Denmark.

Photonic independence

Quantum computers harness a strange physical property called “superposition,” in which a quantum particle can be said to inhabit two contradictory states at the same time. The spin, or magnetic orientation, of an electron, for instance, could be both up and down at the same time; the polarization of a photon could be both vertical and horizontal.

If a string of quantum bits — or qubits, the quantum analog of the bits in a classical computer — is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.
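As a toy illustration (not taken from the paper), the NumPy sketch below classically simulates a small register of qubits: a Hadamard gate applied to each qubit puts the register into an equal superposition over all 2^n basis states, and a single measurement collapses it to just one of them.

```python
# Toy classical simulation of superposition, for illustration only.
# A real quantum computer does not store these amplitudes explicitly; the
# point is just that n qubits in superposition carry 2**n amplitudes at once.
import numpy as np

n = 3                                          # a small register of three qubits
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

# Hadamard on every qubit: tensor (Kronecker) product of n copies of H.
U = H
for _ in range(n - 1):
    U = np.kron(U, H)

state = np.zeros(2**n)
state[0] = 1.0                                 # start in the basis state |000>
state = U @ state                              # equal superposition of all 8 states

print("amplitudes:", np.round(state, 3))
outcome = int(np.random.choice(2**n, p=np.abs(state) ** 2))
print("one measurement collapses it to:", format(outcome, f"0{n}b"))
```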

Most experimental qubits use ions trapped in oscillating electromagnetic fields, superconducting circuits, or — like Englund’s own research — defects in the crystal structure of diamonds. With all these technologies, however, superpositions are difficult to maintain.