PIRL’20 kicks off this week with keynotes from MemVerge and Oracle’s TimesTen group, and presentations from independent PMEM developers and universities in the US and Europe.
Software-defined Memory Service Combining Persistent Memory and DRAM
Charles Fan (MemVerge)
Abstract: While persistent memory offers higher density and lower cost than DRAM, it is also slower, and its byte-addressable persistence cannot be fully exploited without application changes. These are a few of the obstacles to adopting persistent memory. MemVerge developed software called Memory Machine that manages DRAM and persistent memory together and makes them available to applications as a software-defined service. This software-defined memory paradigm makes it possible to deliver a memory service that is large, low cost, and performant. MemVerge also developed an in-memory snapshot capability that takes advantage of the persistence characteristics of persistent memory to protect the data.
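The tiering idea behind this kind of software-defined memory can be sketched in a few lines. The toy `TieredStore` below is not MemVerge's Memory Machine; it is a hypothetical illustration in which small, hot objects stay in a fast in-process "DRAM" tier while larger objects spill to a slower file-backed region standing in for persistent memory:

```python
import os
import tempfile

class TieredStore:
    """Toy model of a software-defined memory tier (illustrative only).

    Objects at or below dram_limit bytes stay in a fast dict ("DRAM");
    larger objects are appended to a file-backed region standing in
    for slower, cheaper persistent memory.
    """
    def __init__(self, dram_limit=64):
        self.dram_limit = dram_limit
        self.dram = {}                                  # fast tier: key -> bytes
        self.pmem_file = tempfile.NamedTemporaryFile(delete=False)
        self.pmem_index = {}                            # slow tier: key -> (offset, length)

    def put(self, key, value: bytes):
        if len(value) <= self.dram_limit:
            self.dram[key] = value
        else:
            off = self.pmem_file.seek(0, os.SEEK_END)   # append to slow tier
            self.pmem_file.write(value)
            self.pmem_file.flush()
            self.pmem_index[key] = (off, len(value))

    def get(self, key):
        if key in self.dram:
            return self.dram[key]
        off, length = self.pmem_index[key]
        self.pmem_file.seek(off)
        return self.pmem_file.read(length)
```

A real implementation would place data by access frequency rather than size, and its snapshot support would exploit the fact that the persistent tier already survives power loss.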
Biography: Charles Fan is co-founder and CEO of MemVerge, an early-stage startup building Memory-Converged Infrastructure software on top of the new persistent memory technologies. Prior to MemVerge, Charles was a SVP/GM at VMware, responsible for VMware’s storage business unit and the big data group. He led the teams that created industry-transforming products including Virtual SAN. Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.
The Reality of Using Intel Optane Persistent Memory With a SQL In-Memory Database
Doug Hood (Oracle TimesTen)
Abstract: Everybody wants faster access to their data, but DRAM is expensive and limited in size. Persistent memory promises larger capacity at a cheaper price point, but with a higher latency than DRAM. This talk investigates how Intel Optane persistent memory used in both App Direct Mode and Memory Mode works with the Oracle TimesTen In-Memory Database. SQL benchmarks based on customer workloads are used to measure persistent writes, database load times, and latency/throughput. The effects of cache hit ratios, concurrency, and reads vs writes are considered.
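A microbenchmark of the kind the abstract describes boils down to a tight timing loop plus a way to vary the cache hit ratio. The sketch below is a generic harness, not TimesTen's workload generator; `bench` and `make_read` are hypothetical names:

```python
import random
import time

def bench(read_fn, n=10000):
    """Time n calls of read_fn; return (avg latency in s, throughput in ops/s)."""
    t0 = time.perf_counter()
    for _ in range(n):
        read_fn()
    dt = time.perf_counter() - t0
    return dt / n, n / dt

def make_read(hit_ratio, fast, slow):
    """Simulated read path: with probability hit_ratio the access is a
    DRAM-speed hit; otherwise it pays the slower PMem-like path."""
    def read():
        if random.random() < hit_ratio:
            return fast()
        return slow()
    return read
```

Sweeping `hit_ratio` from 0 to 1 with realistic `fast`/`slow` costs shows how average latency degrades as more reads fall through to the slower tier, which is the effect the talk measures with customer SQL workloads.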
Biography: As a developer and product manager for an in-memory database, Doug considers persistent memory a big deal. Understanding the engineering tradeoffs among DRAM, PMem, NVMe storage, and RDMA is critical to the design and execution of the world’s fastest databases. Benchmarking technologies like Intel Optane persistent memory with customer workloads reveals the reality of these technologies beyond the marketing hype. Doug is an evangelist for Oracle TimesTen In-Memory Database, Oracle Database In-Memory, and Oracle NoSQL.
Transactional Graph Processing in Persistent Memory
Philipp Götze (TU Ilmenau)
Abstract: We start the presentation by introducing the lessons learned from applying Persistent Memory (PMem) in our preliminary work. Afterwards, we examine PMem as a very promising technology for graph processing. We present a novel architecture for transactional processing of queries and updates on a property graph model. Its design builds on the previously introduced lessons learned and on the special characteristics of Intel’s Optane Persistent Memory. In particular, we look at the storage model, query processing, and transaction processing, and present first evaluation results.
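One common way to make graph updates transactional on a persistent medium is a write-ahead log that is replayed on restart. The toy `PropertyGraph` below illustrates that pattern only; the talk's actual storage model and PMem-specific layout are not reproduced here, and `os.fsync` stands in for a PMem cache-line flush:

```python
import json
import os

class PropertyGraph:
    """Toy property-graph store with a write-ahead log (illustrative only)."""
    def __init__(self, log_path):
        self.log_path = log_path
        self.nodes = {}            # node id -> properties
        self.edges = {}            # (src, dst) -> properties
        if os.path.exists(log_path):
            self._replay()

    def _append(self, record):
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())   # durability point, analogous to a PMem flush

    def add_node(self, nid, **props):
        self._append({"op": "node", "id": nid, "props": props})
        self.nodes[nid] = props

    def add_edge(self, src, dst, **props):
        self._append({"op": "edge", "src": src, "dst": dst, "props": props})
        self.edges[(src, dst)] = props

    def _replay(self):
        with open(self.log_path) as f:
            for line in f:
                r = json.loads(line)
                if r["op"] == "node":
                    self.nodes[r["id"]] = r["props"]
                else:
                    self.edges[(r["src"], r["dst"])] = r["props"]
```

Reopening the store after a simulated crash replays the log and recovers the committed nodes and edges; a PMem design can shortcut this by keeping the primary structures themselves persistent.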
Planning to Fail With Reverse Psychology
Steve Heller (2Misses Company)
Abstract: Many people talk about power failure resilience and testing but not many people show how they accomplish these tasks. In this talk, I present a minimal working example of how to write code to allow a program to resume where it left off when restarting after a power failure.
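The shape of such a resume-after-failure example is worth sketching, though the code below is a generic illustration rather than the talk's actual example. Progress is checkpointed durably after each completed item, and an atomic rename guarantees the checkpoint itself is never torn by a power failure:

```python
import os

def process_items(items, ckpt_path):
    """Process items in order, resuming after a crash from the last
    durably recorded index (a generic sketch, not the talk's code)."""
    start = 0
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            start = int(f.read() or 0)
    done = []
    for i in range(start, len(items)):
        done.append(items[i])            # the real work would go here
        tmp = ckpt_path + ".tmp"
        with open(tmp, "w") as f:
            f.write(str(i + 1))
            f.flush()
            os.fsync(f.fileno())         # checkpoint reaches stable media
        os.replace(tmp, ckpt_path)       # atomic rename: never a torn checkpoint
    return done
```

After an interruption, rerunning the function skips the items already recorded as complete. On persistent memory the same discipline applies, with cache-line flushes and store fences playing the role of `fsync` and the atomic rename.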
Cross-Failure Bug Detection in Persistent Memory Programs
Sihang Liu (University of Virginia)
Abstract: Ensuring consistent recovery in the event of a failure is one of the key requirements for programs based on persistent memory. Recoverability depends not only on the execution before the failure but also on the recovery and resumption after it. We refer to these two stages as the pre- and post-failure execution stages. An incorrect interaction between the pre- and post-failure stages can cause inconsistencies in persistent data. In this talk, I will first categorize the causes of such incorrect cross-failure interactions, and then introduce XFDetector, our tool that detects them.
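A classic cross-failure bug arises when the pre-failure code persists a "valid" flag before the data it guards, so the post-failure recovery code trusts a record that was never fully written. The sketch below shows the safe ordering; it is a generic file-based illustration (with `fsync` standing in for PMem flushes), not XFDetector itself:

```python
import os
import struct

def commit(path, payload: bytes):
    """Persist payload, then the valid flag, with a durability barrier
    between them so recovery never trusts half-written data."""
    with open(path, "wb") as f:
        f.write(b"\x00")                       # valid flag, initially clear
        f.write(struct.pack("I", len(payload)))
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())                   # data reaches stable media first
        f.seek(0)
        f.write(b"\x01")                       # only now mark the record valid
        f.flush()
        os.fsync(f.fileno())

def recover(path):
    """Post-failure stage: trust the record only if the flag is set."""
    with open(path, "rb") as f:
        if f.read(1) != b"\x01":
            return None                        # pre-failure commit never completed
        (n,) = struct.unpack("I", f.read(4))
        return f.read(n)
```

Reversing the two flushes (flag before data) is exactly the kind of incorrect pre-/post-failure interaction the talk categorizes: recovery would read garbage behind a flag that claims validity.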
Steven Swanson is a professor in the Department of Computer Science and Engineering at the University of California, San Diego and the director of the Non-Volatile Systems Laboratory. His research interests include the systems, architecture, security, and reliability issues surrounding heterogeneous memory/storage systems, especially those that incorporate non-volatile, solid-state memories. He has received an NSF CAREER Award, Google Faculty Awards, and a Facebook Faculty Award, and has been a NetApp Faculty Fellow. He is a co-founder of the Non-Volatile Memories Workshop.