The in-memory database has been a buzzword in the industry for quite some time, as major companies like SAP, IBM, and Microsoft have released products that promise breakthrough performance over regular disk-based databases. Database pioneer Oracle is now joining the league by adding the 'Oracle Database In-Memory' capability to its flagship product, Oracle Database 12c.
Larry Ellison, CEO of Oracle Corporation, announced Oracle Database In-Memory at a special event held at the Oracle headquarters on June 10th, 2014.
Oracle Database In-Memory promises extreme performance, availability and simplicity. Here is a brief summary of his launch keynote:
The main goals of developing the Oracle In-Memory database are to provide:
- 100X faster query processing
- 2X faster OLTP processing
- Complete transparency to existing applications
He said that all the in-memory databases in the market right now start and end with the first goal of faster query processing. Oracle insisted on meeting all three goals, which is why it entered the market a bit late. OLTP processing and analytic query processing place competing demands on a database when it comes to performance, so making both fast with little trade-off was the biggest challenge. Oracle has now come up with a smart solution that will make the product stand out in the industry.
Migrating an existing application to the In-Memory option is a cakewalk. Larry says, "You have got to install the In-Memory option, throw a switch, and then all your applications run faster". No code changes are required at all.
There are two different database formats:
- Traditional relational databases, in which the data is organized as rows
- Analytic databases, in which the data is organized as columns.
Relational databases are great for transaction processing, where you read or update all the columns of a small number of rows, while analytic databases excel at analytic calculations and report generation. To push the performance limits further we need the best of both formats, and this is where Oracle's innovation comes in.
Oracle Database In-Memory stores the data in two formats. The row format, which has been the de facto standard till now, is kept as it is, while the new in-memory column format scales up query performance.
Highly compressed column-format data is brought into the column cache (in memory) from flash or disk as and when needed. The column data is not persistent and hence is not logged; the authoritative copy of the data remains in the row format.
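As a minimal sketch of how a table opts in to this dual-format storage (the table and column names here are hypothetical), the `INMEMORY` clause requests a columnar copy in the column store while the on-disk row format is untouched:

```sql
-- Hypothetical schema: the INMEMORY attribute asks Oracle to populate
-- a compressed columnar copy of this table into the In-Memory column
-- store; rows on disk and in the buffer cache stay in row format.
CREATE TABLE sales (
  sale_id  NUMBER PRIMARY KEY,
  region   VARCHAR2(20),
  amount   NUMBER
) INMEMORY;
```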
SIMD vector processing, until now used mostly for hardware acceleration, enables a single CPU core to scan billions of rows per second. Join operations can be converted into fast column scans in the In-Memory option, which ensures at least a 10X performance improvement. Dropping the analytic indexes maintained for reporting can improve OLTP performance many fold, since the column store replaces them.
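The kind of query that benefits is a typical analytic aggregation, which the column store can answer with vectorized scans instead of index lookups (table and column names are hypothetical, matching nothing from the keynote):

```sql
-- With the sales table populated in the column store, this aggregation
-- is served by SIMD scans over the compressed region and amount columns
-- rather than by an analytic index.
SELECT region, SUM(amount) AS total
FROM   sales
WHERE  amount > 1000
GROUP BY region;
```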
The main features of Oracle Database In-Memory are:
- Scale out
The In-Memory feature simply scales out from a multi-socket server to an Exadata machine or even to a SPARC M6-32 server. There is an inherent memory hierarchy in the system, so not all data needs to be in memory at once. Suppose we have 200 TB of data of which only 10 TB is in active use: the 10 TB can be accommodated in memory and the rest spread out between the flash storage and the disks.
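Since population is declared per table or partition, hot and cold data can be steered through this hierarchy explicitly; a sketch with hypothetical partition names:

```sql
-- Keep the actively used partition in the column store, and leave
-- historical data on flash/disk (partition names are hypothetical).
ALTER TABLE sales MODIFY PARTITION sales_2014 INMEMORY PRIORITY HIGH;
ALTER TABLE sales MODIFY PARTITION sales_2009 NO INMEMORY;
```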
- Fault Tolerance
"If one of your Exadata nodes fails, then what? The application simply runs." That's what Larry says. The column cache is mirrored across at least two nodes, which makes it a fault-tolerant system. Larry claims that Oracle Database In-Memory is the only in-memory product in the market to be fault tolerant.
By keeping the time-tested, row-oriented database structure in place, Oracle claims high reliability and availability. The data stored on disk is row oriented, and the whole system is protected against node failure, site failure, corruption and human error.
Setting up the Oracle Database In-Memory option involves only these three tasks:
- Configure the main memory capacity that the In-Memory option can use for the column cache.
- Configure the tables or partitions to be kept in memory.
- *Drop the analytic indexes to speed up the OLTP. (*optional)
“You are done, that’s it, nothing else”, says Larry.
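In SQL, the three tasks above amount to something like the following (table, partition and index names are hypothetical; `INMEMORY_SIZE` is a static parameter, so it takes effect after a restart):

```sql
-- 1. Size the column cache (static parameter; effective after restart).
ALTER SYSTEM SET INMEMORY_SIZE = 16G SCOPE=SPFILE;

-- 2. Mark tables or partitions for in-memory population.
ALTER TABLE sales INMEMORY;
ALTER TABLE orders MODIFY PARTITION orders_2014 INMEMORY;

-- 3. (Optional) drop analytic indexes to speed up OLTP; the column
--    store takes over the reporting workload.
DROP INDEX sales_region_idx;
```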
The Oracle Database In-Memory is cloud ready. All the applications can run faster by utilizing this feature. Larry showcased the Oracle E-Business Suite, Oracle PeopleSoft, Fusion Applications and Oracle Siebel in his presentation. The ability to run legacy applications like PeopleSoft and the latest Fusion Applications with no code change at all proves the mettle of Oracle Database In-Memory.
An analytic program in Oracle E-Business Suite that took 58 hours before now takes just 13 minutes. A financial analyzer based on PeopleSoft performs 1300X faster, going from 4.3 hours to 11.5 seconds. All these statistics are from real applications running on real databases at real companies.
Companies are going to benefit a lot when this kind of performance overhaul comes in. Information will arrive faster than it does today, so they will ask questions more frequently, and those questions can be more complicated. The whole business process will get a boost from this technology.
Juan Loaiza, Senior Vice President, Systems Technology, Oracle Corporation, demonstrated the high availability, improved performance and scaling features on a 2-socket server, an Exadata machine and an M6-32 server. Trillions of rows were processed in mere seconds.
Oracle Database In-Memory is scheduled for general availability in July and can be used on all hardware platforms on which Oracle Database 12c is supported.