Page 27 - GCN, Oct/Nov 2017

EMERGING TECH
BY PATRICK MARSHALL
Goosing flash memory for more efficient data centers
DATA CENTERS depend on reliable, fast caching to respond to database queries. Web services such as Google or Facebook might have 1,000 servers dedicated to storing the results of common queries to speed user access.

Those cache servers generally rely on dynamic RAM. It’s fast, but it’s also expensive and power-hungry. The price tag on flash memory, on the other hand, is only about one-tenth that of DRAM. In addition, flash memory uses only about 5 percent of the energy and has about 100 times the storage density of DRAM. What’s more, if the power goes out, the data doesn’t disappear from flash memory.
So if you run a data center, wouldn’t it make sense to lower your energy costs and boost your capacity by moving to flash memory?
The snag has always been speed. Flash memory lags far behind DRAM, particularly in write operations. Flash memory does not overwrite existing data at the byte level and must instead write in entire blocks. Therefore, if a block contains data, the entire block must be erased before it can be written to, which is time-consuming.
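The erase-before-write behavior described above can be sketched with a toy model. Everything here is illustrative (the class, the 4 KB block size, the 0xFF erased state) and stands in for real NAND-flash firmware, which also has to relocate live data rather than copy it back in place; the point is just that a tiny overwrite triggers a whole-block erase.

```python
# Toy model of flash write semantics: overwriting existing data forces
# the entire block to be erased first. Block size and the 0xFF erased
# state are typical conventions, not taken from any specific device.

BLOCK_SIZE = 4096  # bytes per erase block (illustrative)

class FlashBlock:
    def __init__(self):
        self.data = bytearray(b"\xff" * BLOCK_SIZE)  # erased state
        self.erase_count = 0

    def erase(self):
        self.data = bytearray(b"\xff" * BLOCK_SIZE)
        self.erase_count += 1

    def write(self, offset, payload):
        """Write bytes at offset; if any target byte already holds
        data, the whole block must be erased and rewritten first."""
        target = self.data[offset:offset + len(payload)]
        if any(b != 0xFF for b in target):
            saved = bytes(self.data)      # preserve the block's live data
            self.erase()                  # slow: whole-block erase
            self.data = bytearray(saved)  # rewrite the untouched bytes
        self.data[offset:offset + len(payload)] = payload

block = FlashBlock()
block.write(0, b"hello")   # fresh (erased) block: no erase needed
block.write(0, b"world")   # updating in place: costs a full block erase
print(block.erase_count)   # 1 erase just to change 5 bytes
```

A 5-byte update paying for a 4,096-byte erase is exactly the asymmetry that makes flash writes so much slower than DRAM writes.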
Researchers at the Massachusetts Institute of Technology, however, have created a new system for data center caching using flash memory that is competitive with existing DRAM implementations. The system, BlueCache, relies on three techniques, the first of which is a tried-and-true method of improving processing performance called pipelining.

Flash memory takes approximately 200 microseconds to process a single query, but with pipelining, subsequent queries are sent to the cache before the result of the first query is received. Although pipelining has been in use for some time, “we are doing it in a very deep fashion” with more dependent steps, said MIT Computer Science and Engineering Professor Arvind.

MIT researchers have solved flash memory’s speed problem, making it possible to reduce power consumption of data center caches by 90 percent.

The second trick to boosting flash memory performance was adding a small amount of DRAM to the system — a few megabytes of DRAM for each million megabytes of flash memory. The faster-performing DRAM is used to store tables that pair data queries with flash memory addresses.

Finally, Arvind’s team developed hardware for performing read, write and delete operations in flash memory — tasks that existing cache servers perform in software.

“Just imagine that all these key-value stores are sitting in flash memory, and you are sending me a constant stream of these three commands,” Arvind said. “So I look at the first one and I have special-purpose hardware, which will actually go and access the flash store. As soon as I have issued that, I can instantly go and look at the second one and the next one and the next one. So all this is happening in hardware. We don’t talk to the processor while we are doing this.”

Those operations can be performed much faster in hardware than in software, he added.

“The flash-based KV store architecture developed by Arvind and his MIT team resolves many of the issues that limit the ability of today’s enterprise systems to harness the full potential of flash,” Vijay Balakrishnan, director of the Data Center Performance and Ecosystem program at Samsung’s Memory Solutions Lab, told MIT News. “The viability of this type of system extends beyond caching since many data-intensive applications use a KV-based software stack, which the MIT team has proven can now be eliminated.”

When asked when we might see BlueCache in the marketplace, Arvind said, “Many people are saying, ‘Why don’t you guys start a company?’ But really, industry has already picked up the idea, so it is happening.” •
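The two software-visible ideas in the article — a small RAM-resident index mapping keys to flash addresses, and issuing queries without waiting for each result — can be roughly sketched as follows. This is not the MIT team's implementation (BlueCache does this in hardware); the names, the thread pool standing in for pipelined hardware, and the simulated 200-microsecond latency are all illustrative assumptions.

```python
# Sketch of a BlueCache-style lookup path: a few entries of fast RAM
# index point at values stored in slow flash, and a batch of GET
# requests is issued concurrently rather than one at a time, mimicking
# the deep pipelining described in the article.
import time
from concurrent.futures import ThreadPoolExecutor

FLASH_LATENCY_S = 200e-6  # ~200 microseconds per flash query (per article)

flash_store = {0: b"alice", 1: b"bob", 2: b"carol"}  # address -> value
ram_index = {"user:1": 0, "user:2": 1, "user:3": 2}  # key -> flash address

def flash_read(address):
    time.sleep(FLASH_LATENCY_S)  # simulated flash access latency
    return flash_store[address]

def get_many(keys):
    """Resolve each key to a flash address via the RAM index, then
    issue all flash reads in flight at once instead of serially."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(flash_read, ram_index[k]) for k in keys]
        return [f.result() for f in futures]

print(get_many(["user:1", "user:2", "user:3"]))
# -> [b'alice', b'bob', b'carol']
```

With serial reads, three queries would cost roughly three full flash latencies; issued in flight together, they overlap — the same win the article attributes to pipelining, which BlueCache's dedicated hardware achieves without involving the processor at all.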