
Ellison Unfurls In-Memory Database & Big Memory Machine At Oracle OpenWorld


Fresh off two wins on Sunday afternoon, which kept Oracle TEAM USA in the thick of the America’s Cup sailing race, Larry Ellison kicked off Oracle OpenWorld on Sunday evening with his traditional welcome keynote. Speaking before a crowd of more than 10,000 partners and customers packed into San Francisco’s Moscone Center, Oracle’s CEO was in high spirits, hailing his crew for “great tactical calls, great boat handling [on] a tough and tricky day.”

Ellison segued to speed of another sort, this time involving Oracle Database 12c. A major architectural upgrade was big news earlier this year, adding 500 new features and positioning the database as the foundation for Oracle’s extensive set of cloud services. But Ellison made equally significant news on Sunday, announcing the addition of a new in-memory option that accelerates performance by several orders of magnitude.

“When you load your database into memory, one of the reasons you do that is to make your system go faster,” he said. “We had a number of design goals. One of them was to make queries go 100 times faster.”

To stoke performance even further, Ellison took the wraps off Oracle’s M6-32 Big Memory Machine, which he characterized as “a machine that’s ideal for in-memory databases.” The high-powered server features a new processor in the form of the SPARC M6. The chip incorporates 12 cores—twice the number in the current-generation M5. Most impressively, it can run 96 threads per processor.

Ellison’s excitement about the new offerings was evident in his talk, as his on-stage demeanor shifted from that of a proud CEO unveiling new products to that of a database expert talking deep tech.

Setting the historical stage, Ellison explained that databases traditionally store data in row format, where one row might correspond to a sale or a transaction. So, when the next transaction takes place, another row is added to the database.

“These row-format databases, which have been around since the beginning of relational database management systems (RDBMS), have been designed to run very fast when you’re adding a few rows,” he explained. “In the last several years, database researchers proposed an alternative format:  Don’t store your data in rows; store it in columns.” The intent was to speed query processing.
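
To make the row-versus-column distinction concrete, here is a minimal sketch of a toy table held in both layouts. The table, column names, and query are hypothetical illustrations, not Oracle code; real engines add compression, vectorized execution, and much more on top of this idea.

```python
# Minimal sketch of row-format vs. column-format storage for a hypothetical "sales" table.

# Row format: each record is stored together, which keeps inserts cheap.
row_store = [
    ("2013-09-22", "widget", 3, 29.97),
    ("2013-09-22", "gadget", 1, 14.99),
    ("2013-09-23", "widget", 5, 49.95),
]

# Column format: each column is stored contiguously, which keeps scans cheap.
column_store = {
    "sale_date": ["2013-09-22", "2013-09-22", "2013-09-23"],
    "product":   ["widget", "gadget", "widget"],
    "quantity":  [3, 1, 5],
    "amount":    [29.97, 14.99, 49.95],
}

# Adding a transaction to the row store touches one contiguous record ...
row_store.append(("2013-09-23", "gadget", 2, 29.98))

# ... while the same insert in a column store must append to every column.
for col, value in zip(column_store, ("2013-09-23", "gadget", 2, 29.98)):
    column_store[col].append(value)

# An analytic query ("total revenue") needs only one column; the column layout
# lets it scan just that data instead of stepping over every field of every row.
total_from_rows = sum(record[3] for record in row_store)
total_from_columns = sum(column_store["amount"])
assert total_from_rows == total_from_columns
```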

“As long as we’re speeding up queries, we wanted to be very careful not to slow down transactions,” Ellison said. “We figured out a way not only to speed up query processing by several orders of magnitude, but at the same time, at least double your transaction processing rate.”

That one-two boost to both queries and transactions came from a “better idea” hit upon by Oracle’s engineers. “What if we stored the data in both formats simultaneously?” he said. “We’ve already got that data in row format. Keep that, and add a column format. And while we’re at it, put it all into memory.”

While storing the columns along with the rows might seem like an obvious next step, Ellison explained that it’s really counterintuitive. That’s because it’s important to keep both queries and online transaction processing (OLTP) running fast. However, if a system must start updating columns as well as the rows it’s already updating, at first glance you’d expect OLTP to sag, rather than surge.

“If you have those two formats and you maintain those two formats—especially the column store—in memory, you get that hundred-times speedup in performance,” Ellison emphasized. “But, ironically, when you maintain those two formats, transactions go faster.”

The reason, he explained, is that there’s actually very little overhead consumed by maintaining that added in-memory column store. Equally important, putting the database in memory eliminated the need for maintaining analytical indexes, which put a major performance drag on OLTP. (Such indexes are roughly analogous to the indexing Windows does on folders, so that they’re rapidly searchable by users.)
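
To make the dual-format idea concrete, here is a minimal, hypothetical sketch of a table that keeps its rows as the system of record while maintaining a lightweight in-memory column copy alongside them. It is an illustration of the concept only, not Oracle’s actual mechanism, but it shows why the extra bookkeeping can be cheap for transactions once separate analytic indexes are out of the picture.

```python
# Hypothetical sketch of dual-format maintenance: the row store is the system of
# record, and an in-memory column copy is refreshed alongside it.

class DualFormatTable:
    def __init__(self, columns):
        self.columns = columns
        self.rows = []                                # row format: source of truth for OLTP
        self.column_store = {c: [] for c in columns}  # in-memory column format for scans
        self.stale_from = 0                           # rows not yet copied into the column store

    def insert(self, record):
        # The transaction only appends one row; there are no analytic indexes to update.
        self.rows.append(record)

    def refresh_columns(self):
        # Catch the column copy up to the row store (done in the background in practice).
        for record in self.rows[self.stale_from:]:
            for name, value in zip(self.columns, record):
                self.column_store[name].append(value)
        self.stale_from = len(self.rows)

    def scan_sum(self, column):
        # An analytic query reads only the one column it needs.
        self.refresh_columns()
        return sum(self.column_store[column])


sales = DualFormatTable(["product", "quantity", "amount"])
sales.insert(("widget", 3, 29.97))
sales.insert(("gadget", 1, 14.99))
print(sales.scan_sum("quantity"))  # 4
```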

“So we handle simple scans, complex scans, and complex table joins an order or two of magnitude faster than we do today,” Ellison added.

There’s one additional benefit that accrues to the in-memory approach. “Queries you never thought to index will also run faster,” explained Ellison. That’s because everything’s in memory.

For users, Ellison emphasized that the in-memory enhancements are essentially transparent. “There are no changes to SQL, there are no changes to your application,” he said. “Everything you have today works with the in-memory option. And the in-memory option runs beautifully with the multitenancy option in Oracle Database 12c, so it runs beautifully in the cloud.”

Big Memory Machine

Ellison made clear that the in-memory database will run just fine on commodity servers and even better on Exadata-class systems.

However, his inner technologist was clearly entranced by the innards of the M6-32 Big Memory Machine. The computational heft of the 12-core, 96-thread SPARC M6 gets a further boost from the silicon-based communications network built into the system. “It’s terabyte-scale computing,” Ellison explained. Specification-wise, the system can move data among its multiple processors at 3 terabytes per second.

Ellison concluded his talk with a look at Oracle Database Backup Logging Recovery Appliance, smiling when he said that he decided on the product name. The appliance is designed specifically for the idiosyncrasies of database backup and recovery. It stores database logs, updates, inserts, and changes, making it possible for a system administrator to do point-in-time recovery of databases in the event of a glitch, attack, or other recovery scenario. “You don’t lose any data—you don’t lose anything,” said Ellison.

Oracle will also offer database backup, logging, and recovery as a cloud service. Thus, businesses can choose to implement the Oracle Database Backup Logging Recovery Appliance in their data center, access those capabilities from the Oracle Public Cloud, or use them in combination. “There’s nothing like this in the marketplace,” said Ellison.

Proud Partner

While Ellison was the rock star of Oracle OpenWorld on Sunday evening, he had some help from accomplished colleagues and partners, and even from Oracle’s legal team. The latter posted a Safe Harbor slide just as the session commenced, noting that the keynoters were speaking for informational purposes only, and that their talks were not to be construed as indicative of specific product launches or contractual commitments.

As for real live people, Oracle chief marketing officer Judith Sim was first to take the stage, welcoming attendees and noting that some 3,600 partners and customers were scheduled to speak in sessions stretching through Thursday. She was followed by San Francisco mayor Edwin Lee, who lauded Oracle TEAM USA’s performance, remarking that he couldn’t recall ever seeing two wins in a single day.

Next, Noriyuki Toyoki, corporate senior vice president of Fujitsu, was introduced by Edward Screven, Oracle’s chief corporate architect. Toyoki emphasized how the co-engineering of Fujitsu’s hardware with Oracle’s database technology delivered maximum performance.

Toyoki positioned his talk under the banner of Fujitsu’s technological objective, which he said is to forge a human-centric intelligent society. That sparked a discourse on big data. He said Fujitsu’s M10 server is the fastest server on which to run Oracle’s database.

Toyoki presaged Ellison’s appearance by talking up the advantages of the in-memory database concept. “Processing data in memory is the best way to achieve drastic leaps in performance,” he said.

He pointed to software-on-chip as one implementation path to achieving the performance benefit of fast response. “Software-on-chip is one of the fruits of our strategic cooperation with Oracle,” Toyoki said. “The key concept is moving some routines from software into the CPU, thereby achieving significant speed increases. The current Fujitsu M10 [server] already includes some software-on-chip technology.”

Toyoki mentioned Fujitsu’s upcoming SPARC64X+ (aka Athena+) processor. He also noted his company’s use of coherent memory interconnect (CMI) in its servers. CMI provides extremely fast intercommunication between nodes, minimizing latency in modern multi-core, multi-threaded servers. That’s important, because fast processors such as the SPARC64X+ can’t perform their computational magic if they’re sitting there waiting for data. CMI ensures such chips are continuously fed, the better to speed their big-data tasks.

Toyoki was joined on stage by Andy Mendelsohn, Oracle senior vice president for database server technologies. Mendelsohn sent some love back Fujitsu’s way, emphasizing the importance of the two companies’ collaboration.

Mendelsohn also picked up the theme of fast in-system communications, but noted that the design imperatives are shifting. “As we move to this new generation of in-memory databases, what becomes really important for database performance is the chip the database is running on,” he said. “I/O is no longer a big determinant of performance once you can [put the software] into memory.”

Wrapping up with an endorsement of the in-memory approach, Mendelsohn added: “This is the future of databases—we’re going to deliver huge performance improvements to our customers.”
