
Do Oracle's Claims About AWS Pass Scrutiny?


Last week was a big one for enterprise IT events. In addition to Pure Storage's Accelerate event, Oracle held its annual customer and partner pilgrimage, OpenWorld 2019. I attended Pure's event in my hometown of Austin and had analyst Mark Vena attend the Oracle event in San Francisco.

I was able to view the OpenWorld 2019 keynotes and monitor Twitter, though, and wow, was it spicy! Oracle mentioned AWS more than I have ever seen a large company talk about a competitor. I had a few press members, and even other analysts, ask me about some of Oracle's claims related to Amazon's AWS. I wanted to get underneath them here and compare them to my own compass. Net-net, I don't believe Oracle made its case against AWS.

Background

Oracle and AWS have very different business models. AWS is a pure cloud vendor primarily in IaaS and PaaS with some hybrid offerings like Snowball and Outposts, and Oracle is primarily an on-prem database and apps vendor with some SaaS and IaaS offerings. That doesn't stop the two from colliding at many, many customers. So let's dive in.

I summarize the Oracle claim, directly quote Oracle CEO Larry Ellison from the keynote, summarize what I think Ellison is saying, and then provide my take.

1/ Oracle Claim: Autonomous systems eliminate human labor, and when you eliminate human labor you eliminate pilot error.

Ellison’s Quote: “Autonomous systems eliminate human labor and when you eliminate human labor you eliminate pilot error. If you eliminate human error in autonomous systems you eliminate data theft. Clouds are complicated. Human beings make mistakes. The Amazon data breach, where Capital One had 100 million of their customers lose their personal information happened because someone made a mistake. Someone made a configuration error. Now, Amazon takes what I think is a very reasonable position, saying, hey, you misconfigured the system. That’s your mistake. We at Amazon can’t be responsible. In the Oracle Autonomous Cloud, when you use the Oracle Autonomous Database, it configures itself. It’s not possible for customers to make configuration errors because there are no pilots to make errors. The system configures itself. So in the AWS cloud, if you make an error and it leads to catastrophic data loss, it's on you. In the Oracle Cloud, when you use the autonomous database, the database automatically provisions itself. The system automatically configures itself. It automatically encrypts itself. It automatically backs itself. All the security systems are automatic. Human beings aren't involved. There can be no human error.” You can find this at 4:20 in the session here.

Pat's Summary: The premise here was that if you eliminate human error with an autonomous system, you eliminate data theft. Capital One was used as an example where 100M people were impacted by a hacker exploiting a misconfigured third-party Web Application Firewall, a human error. Someone made a mistake, and Amazon does not accept responsibility for customer configuration errors. According to Oracle, the solution to all this is Oracle's Autonomous Cloud services, which configure themselves automatically, so Oracle customers are not able to make configuration errors.

Pat's Take: I believe it's impossible for any cloud provider, including Oracle, Microsoft Azure, Google Cloud, or IBM Cloud, to automagically avoid "configuration errors" because the very same action by one customer can be completely intentional and necessary and, by another customer, a configuration error. One person's open bucket is another's closed bucket. Everyone's situation is different; you really can't say for certain that an open security group or even an open proxy is an error. I am looking forward to researching more about Oracle's autonomous database and Linux, as the promise is interesting. Most interesting for me would be for an enterprise customer to say it has had no issues at all with the Autonomous Database after a year of use.
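To make that ambiguity concrete, here is a minimal sketch, assuming boto3 and configured AWS credentials, of why tooling can flag a world-open security group rule but cannot decide whether it is intentional; the group names in the comments are hypothetical.

```python
# Illustration of the point above: the same "open" rule can be intentional or
# a mistake, so a scanner can only flag it, not judge intent.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2")

def world_open_rules():
    """Yield (group name, port) pairs for ingress rules open to 0.0.0.0/0."""
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                yield sg["GroupName"], rule.get("FromPort")

for name, port in world_open_rules():
    # Port 443 on a hypothetical "public-web" group is probably intentional;
    # port 5432 on a hypothetical "internal-db" group is probably the kind of
    # human error Ellison describes. Only the owner knows which is which.
    print(f"World-open ingress: group={name} port={port} -- intentional or error?")
```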

2/ Oracle claim: A single multi-purpose database is better than several single-purpose specialized databases.

Ellison’s quote: “This is just the beginning of a divergent architecture strategy. The one at Oracle where we say we are going to keep adding features and data types and application types to the Oracle database, a single database, a single converged database that handles all your data types and all your applications versus Amazon saying that when something new comes up like the internet of things, we’ll give you a real fast IoT database. We have all the capabilities in one database. Amazon has a separate database for all of this and that creates a bunch of problems. Each database has a fragment of your data. You have to have experts to maintain these databases.” You can find this at 30:40 in the session here.

Pat’s Summary: The premise here is that many unique and specialized databases create issues and that each database has different APIs, security models, recovery procedures, and scalability procedures. Each single-purpose database has different operational characteristics that require a different team with unique skills. Each database has a fragment of customer data. Oracle offers a single converged database that supports various data types like relational, document, spatial, and graph and application types such as transactions, analytics, ML, and IoT. 

Pat's Take: I believe an approach of using a relational database as the only place for your applications is an outdated viewpoint. Hasn't this been the notion since the '90s? When has a one-size-fits-all approach ever worked successfully in tech in the last 10 years? A lot has changed since then. With a fully managed cloud database service, developers work with APIs and really don't care what is running in the background as long as it offers performance, security, and reliability at the right price point. Purpose-built, managed databases allow developers to break complex applications into smaller pieces and therefore use the best tool to solve each problem, whether it be a hammer, screwdriver, or saw. AWS can roll out many customer examples like Airbnb, which uses DynamoDB for quick lookups and personalized search, ElastiCache for faster (sub-ms) site rendering, and Amazon Aurora as its primary transactional database. I will be closely monitoring Oracle's Swiss Army knife database and, if it can deliver on the promise, will give it kudos.
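As a rough illustration of that purpose-built pattern, here is a minimal sketch of a hot-path lookup that tries a cache first (as with ElastiCache/Redis) and falls back to a key-value store (DynamoDB); it assumes boto3 and redis-py are installed and credentials are configured, and the endpoint, table, and key names are hypothetical.

```python
# Read-through cache: sub-millisecond path via Redis, durable store on a miss.
# Endpoint, table, and attribute names are hypothetical examples.
import json
import boto3
import redis

cache = redis.Redis(host="my-cache.example.internal", port=6379)
table = boto3.resource("dynamodb").Table("listings")  # hypothetical table

def get_listing(listing_id: str) -> dict:
    """Serve from cache when possible; otherwise read DynamoDB and backfill."""
    cached = cache.get(f"listing:{listing_id}")
    if cached:
        return json.loads(cached)
    item = table.get_item(Key={"id": listing_id}).get("Item", {})
    # default=str handles non-JSON-native types (e.g. Decimal) in the item
    cache.set(f"listing:{listing_id}", json.dumps(item, default=str), ex=300)
    return item
```

Each store does one narrow job well, and the application simply composes them behind its own API.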

3/ Oracle claim: Oracle can cut your AWS bill in half.

Ellison’s quote: “It costs way less to run Oracle Autonomous Database than to run Redshift, Aurora, or any Amazon database. Well, the Oracle Autonomous Database not only eliminates human errors, but it’s configured in such a way that the network can fail, and the system keeps running, that a server can fail, and the system keeps running. It’s a fault-tolerant system. That’s why we’re at least 25 times more reliable than Amazon. I think I might change it next year to 100 times. Oracle Autonomous database is much, much, much faster than Redshift. Now, we showed actually, the Oracle Autonomous Database being seven, eight times faster than Redshift when you are doing analytics. Aurora is their best transactional database. We were, again around eight or nine times faster. They’re 7x slower. That means they’re 7x more expensive. That’s why it's so easy for us to guarantee. You take any application off an Amazon database, move it to Oracle we'll guarantee bringing your Amazon bill, we'll guarantee that bill will go in half.” You can find this at 19:30 in the session here.

Pat's Summary: The claim is that it costs a lot less to run Oracle Autonomous Database than to run Redshift, Aurora, or any Amazon database, and that Oracle's fault-tolerant design makes it at least 25X more reliable than Amazon, perhaps 100X next year. Amazon is 7X slower, which Oracle equates to 7X more cost. Oracle doubles down and says customers can bring Oracle their Amazon contract, and Oracle will guarantee the bill will be cut in half if the customer goes with Oracle.

Pat's Take: Ellison is infamous in the industry for making boastful claims. Therefore, it was important to look at the fine print, which says the claim applies to database and data warehouse only. The expense claims don't cover other services, including compute, storage, or any of the hundreds of other AWS services. The head-scratcher for me is that AWS databases like Amazon Aurora can be 10% of the price of Oracle databases, and AWS says it has reduced prices 73 times since it launched in 2006. The other thing I realized in the past year about AWS is that it tries its best to build trust with customers via "downshifting," or recommending to the customer how to lower its costs. AWS Trusted Advisor looks at how a customer is utilizing services and makes recommendations on how to spend differently. I think Oracle would be best served by having large enterprises give testimonials on cutting their cloud bills in half by shifting from AWS to Oracle.
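For readers curious what that "downshifting" looks like in practice, here is a minimal sketch of pulling Trusted Advisor's cost-optimization checks through the AWS Support API; it assumes boto3, configured credentials, and a Business or Enterprise support plan, since the Support API requires one and is served out of us-east-1.

```python
# List Trusted Advisor cost-optimization checks and their current status.
# Assumes boto3 is installed, credentials are configured, and the account has
# a support plan that enables the Support API.
import boto3

support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    if check["category"] != "cost_optimizing":
        continue
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    # e.g. prints a check name such as "Low Utilization Amazon EC2 Instances"
    # alongside its status ("ok", "warning", or "error")
    print(check["name"], "->", result["status"])
```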

4/ Oracle claim: Oracle is the only cloud that offers secure data isolation.

Ellison’s quote: “All the other clouds have a shared Intel computer. Who shares it? Well, Amazon has code in that computer and you have code in that computer. You might be the only tenant in that computer but you share that computer with Amazon. Amazon also has code in there. That’s not how ours work. In our case, you’re the only tenant and our network control code is in a separate computer with separate memory and that forms these secure isolation zones. Threats can’t get into (our) cloud. Gen 1 cloud, one shared Intel computer. Amazon can see your data and you can see Amazon’s code. Both really bad ideas. You should not be able to access cloud control code.” You can find this at 27:30 in the session here.

Pat's Summary: The premise here is that all the other clouds share an Intel-based server and that both the customer and Amazon have code on that server, even if it's single-tenant. According to Oracle, in its cloud the customer is the only tenant, and the network control code runs on a separate server with separate memory, which creates "secure isolation zones." With this, threats can't get into the cloud. Ellison goes on to say that AWS can see customer data and the customer can see AWS's code, both of which are bad ideas. Customers shouldn't have access to cloud control code, and AWS shouldn't have access to customer data.

Pat’s Take: Ellison is likely referring to the fact that in AWS’s previous architecture, when it used the Xen hypervisor, AWS had system code running in the main system. Oracle’s first-gen cloud worked like this too. This is theoretically more vulnerable than when virtualization code is run off the main system, as AWS does in its more recent Nitro architecture, and Oracle does on its second-generation cloud. I don't believe there's anything here.

5/ Oracle Claim: AWS Cloud Databases are not serverless or elastic. 

Ellison’s quote: “Most people don’t use DynamoDB. Most people use Aurora, Redshift, RDS and a bunch of the others. None of those are serverless and none of those are elastic. You want to scale up? Take the system down. The system is not running? You still have to pay for it. No servers are running? Too bad. You have to pick a shape. 10 cores and what happens when the application stops running? You pay for it. AWS Redshift not serverless. Amazon you want to scale up or down? That’s downtime. Amazon, you patch? More downtime. Regarding benefits of Oracle: We’re talking about basic compute. Basic storage. Serverless when not running. Dynamically scale itself up in a number of cores and amount of memory while it is running. That’s what we’re doing. No downtime. Scales up while it’s running. What about storage? Pick your starting amount of storage. As you need more storage it will automatically scale up while it is running. No downtime.” You can find this at 43:40 in the session here.

Pat's Summary: AWS DynamoDB is a serverless, functionally limited database that few customers use, with most instead opting for Aurora or Redshift, which are neither serverless nor elastic. Therefore, with Aurora or Redshift, you have to shut down the database to manually scale up or down and pay for larger configurations than you need.

Pat's Take: Three AWS databases (Amazon Aurora, Amazon DynamoDB, Amazon Neptune) are serverless and elastic. AWS says that more than a hundred thousand customers use DynamoDB, including Lyft, Airbnb, Samsung, Toyota, and Capital One, to support mission-critical workloads. Amazon Aurora is serverless and elastic, offering features such as read replicas, serverless configurations, and global databases for single-region and cross-region failover. AWS says instance failover typically takes less than 30 seconds. I'd like to see some sort of Oracle and AWS cloud performance and reliability "shoot-out".
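As a counterpoint to the "Aurora is not serverless" claim, here is a minimal sketch of provisioning an Aurora Serverless v1 cluster whose capacity scales within a configured range and can pause when idle; it assumes boto3 and configured credentials, and the identifier, credentials, and capacity values are hypothetical placeholders.

```python
# Create an Aurora Serverless v1 cluster that scales capacity on its own and
# pauses when idle. All names and values below are hypothetical examples.
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless",   # hypothetical identifier
    Engine="aurora",                         # MySQL-compatible Aurora
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder only
    ScalingConfiguration={
        "MinCapacity": 1,                    # capacity units; scales within this range
        "MaxCapacity": 16,
        "AutoPause": True,                   # pause compute after an idle period
        "SecondsUntilAutoPause": 300,
    },
)
```

DynamoDB's on-demand billing mode offers a similar pay-for-what-you-use model on the key-value side.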

6/ Oracle Claim: Oracle has a larger footprint than AWS. 

Ellison’s quote: “We have 16 hyperscale regions around the world today. All the Oracle regions run all the Oracle services. All (our) services are available in all of the clouds and that’s our policy. Amazon doesn’t do that. Amazon has some services some place, some services elsewhere... When we meet next year, we’ll have more regions than AWS.” You can find this at 57:25 in the session here.

Pat's Summary: The argument is that Oracle's policy is to make all of its services available in all of its regions, which Amazon does not do, as it offers some services only in some regions. By next year, Oracle said it will have more regions (36) than AWS (25). It goes on to say that enterprise customers worldwide require geographically distributed regions for true business continuity, disaster protection, and regional compliance requirements, and that multiple availability domains within a region will not address this issue.

Pat's Take: I believe Oracle is comparing apples and oranges here because of the differing definitions of a "Region." AWS has 69 Availability Zones (AZs) across 22 Regions, and unlike Oracle, each AZ has a datacenter. I believe AWS's AZ architecture is unique, as it provides elasticity (scaling and disaster recovery) at a scale that Oracle doesn't have. Also of note is that Oracle included Azure and Azure datacenters under construction in its analysis, kind of like if AWS were to partner with GCP and add GCP's capacity to its own figures. Oracle isn't even in the top 10 of IaaS players, whereas AWS is ranked #1.
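To see the apples-and-oranges problem directly, here is a minimal sketch that enumerates the Availability Zones inside each AWS Region, assuming boto3 and configured credentials; counting Regions alone hides the zone-level redundancy underneath.

```python
# Enumerate AWS Regions and the Availability Zones inside each one.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2")

for region in ec2.describe_regions()["Regions"]:
    name = region["RegionName"]
    zones = boto3.client("ec2", region_name=name).describe_availability_zones()
    azs = [z["ZoneName"] for z in zones["AvailabilityZones"]]
    print(f"{name}: {len(azs)} AZs -> {', '.join(azs)}")
```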

Wrapping up

Oracle spent an incredible amount of time talking about AWS in its OpenWorld 2019 keynotes. I don’t believe Oracle made the case on its boastful claims against AWS, but I urge you to watch the Oracle keynote here and the AWS re:Invent 2018 keynote here.

Disclosure: Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including Amazon.com, Advanced Micro Devices, Apstra, ARM Holdings, Aruba Networks, AWS, A-10 Strategies, Bitfusion, Cisco Systems, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Digital Optics, Dreamchain, Echelon, Ericsson, Foxconn, Frame, Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Google, HP Inc., Hewlett Packard Enterprise,  Huawei Technologies, IBM,  Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, MACOM (Applied Micro), MapBox, Marvell, Mavenir, Mesosphere, Microsoft, National Instruments, NetApp, NOKIA, Nortek, NVIDIA, ON Semiconductor, ONUG, OpenStack Foundation, Panasas, Peraso, Pixelworks, Plume Design, Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Samsung Electronics, Silver Peak, SONY, Springpath, Sprint, Stratus Technologies, Symantec, Synaptics, Syniverse, TensTorrent, Tobii Technology, Twitter, Unity Technologies, Verizon Communications, Vidyo, Wave Computing, Wellsmith, Xilinx, Zebra, which may be cited in this article. 

 
