AWS Announces New Capabilities for Amazon Aurora and Amazon DynamoDB, Introduces Amazon Neptune Graph Database

SEATTLE–(BUSINESS WIRE)–Nov. 29, 2017– Today at AWS re:Invent, Amazon Web Services Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced new database capabilities for Amazon Aurora and Amazon DynamoDB, and introduced Amazon Neptune, a new fully managed graph database service. Amazon Aurora now includes the ability to scale out database reads and writes across multiple data centers for even higher performance and availability. Amazon Aurora Serverless is a new deployment option that makes it easy and cost-effective to run applications with unpredictable or cyclical workloads by auto-scaling capacity with per-second billing. With Global Tables, Amazon DynamoDB is now the first fully managed database service to provide true multi-master, multi-region reads and writes, offering high performance and low latency for globally distributed applications and users. Amazon Neptune is AWS’s new fast, reliable, and fully managed graph database service that makes it easy for developers to build and run applications that work with highly connected datasets. To get started with Amazon Aurora and Amazon DynamoDB, and to learn more about Amazon Neptune, visit: https://aws.amazon.com/products/databases.

The days of the one-size-fits-all database are over. For many years, the relational database was the only option available to application developers. And while relational databases are great for applications that log transactions and store up to terabytes of structured data, today’s developers need a variety of databases to serve the needs of modern applications. These applications need to store petabytes of unstructured data, access it with sub-millisecond latency, process millions of requests per second, and scale to support millions of users all around the world. It is common for modern companies not only to use multiple database types across their various applications, but also to use multiple database types within a single application. Since introducing Amazon Relational Database Service (Amazon RDS) in 2009, AWS has expanded its database offerings to provide customers the right database for the right job. This includes the ability to run six relational database engines with Amazon RDS (including Amazon Aurora, a fully MySQL- and PostgreSQL-compatible database engine that offers the durability and availability of commercial-grade databases at one-tenth the cost); a highly scalable, fully managed NoSQL database service with Amazon DynamoDB; and a fully managed in-memory data store and cache with Amazon ElastiCache. Now, with the introduction of Amazon Neptune, developers can extend their applications to work with highly connected data for use cases such as social feeds, recommendations, drug discovery, and fraud detection.

“Nobody provides a better, more varied selection of databases than AWS, and it’s part of why hundreds of thousands of customers have embraced AWS database services, with hundreds more migrating every day,” said Raju Gulabani, Vice President, Databases, Analytics, and Machine Learning, AWS. “These customers are moving to our built-for-the-cloud database services because they scale better, are more cost-effective, are well integrated with AWS’s other services, provide customers relief (and freedom) from onerous old guard database providers, and free them from the constraints of a one-database-for-every-workload model. We will continue to listen to what customers tell us they want to solve, and relentlessly innovate and iterate on their behalf so they have the right tool for each job.”

Amazon Aurora Multi-Master scales reads and writes across multiple data centers for applications with stringent performance and availability needs

Tens of thousands of customers are using Amazon Aurora because it delivers the performance and availability of the highest-grade commercial databases at a cost more commonly associated with open source, making it the fastest-growing service in AWS history. Amazon Aurora’s scale-out architecture lets customers seamlessly add up to 15 low-latency read replicas across three Availability Zones (AZs), achieving millions of reads per second. With its new Multi-Master capability, Amazon Aurora now supports multiple write master nodes across multiple AZs. Amazon Aurora Multi-Master is designed to allow applications to transparently tolerate the failure of any master, or even a service-level disruption in an entire AZ, with zero application downtime and sub-second failover. This means customers can scale out performance and minimize downtime for applications with the most demanding throughput and availability requirements. Amazon Aurora Multi-Master will add multi-region support for globally distributed database deployments in 2018.

Expedia.com is one of the world’s largest full service travel sites, helping millions of travelers per month easily plan and book travel. “Expedia’s high-volume data needs were met easily with Amazon Aurora by scaling out while maintaining high performance,” said Gurmit Singh Ghatore, Principal Database Engineer, Expedia. “Amazon Aurora Multi-Master will take its scale and uptime even further, which is really exciting. Amazon Aurora is now the first choice database for most of our relational database needs.”

Amazon Aurora Serverless provides database capacity that starts, scales, and shuts down with application workload

Many AWS customers have applications with unpredictable, intermittent, or cyclical usage patterns that may not need the power and performance of Amazon Aurora all of the time. For example, dev/test environments run only a portion of each day, and blogs spike usage with new posts. With Amazon Aurora Serverless, customers no longer have to provision or manage database capacity. The database automatically starts, scales, and shuts down based on application workload. Customers simply create an endpoint through the AWS Management Console, specify the minimum and maximum capacity needs of their application, and Amazon Aurora handles the rest. Customers pay by the second for database capacity when the database is in use.
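In API terms, the "minimum and maximum capacity" described above corresponds to a scaling configuration attached to the cluster at creation time. A minimal sketch of the request shape, modeled on boto3's `rds.create_db_cluster` parameters (the cluster identifier, credentials, and capacity values below are hypothetical, and the request is only built, not sent):

```python
# Sketch: parameters for an Aurora Serverless cluster, shaped like
# boto3's rds.create_db_cluster arguments. All names and values here
# are hypothetical placeholders, not a recommended configuration.
serverless_cluster_params = {
    "DBClusterIdentifier": "blog-db",      # hypothetical cluster name
    "Engine": "aurora",                    # MySQL-compatible Aurora
    "EngineMode": "serverless",            # enables auto start/scale/stop
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",
    "ScalingConfiguration": {
        "MinCapacity": 2,                  # smallest capacity to scale to
        "MaxCapacity": 16,                 # largest capacity to scale to
        "AutoPause": True,                 # shut down when idle...
        "SecondsUntilAutoPause": 300,      # ...after 5 minutes of no load
    },
}

# In a real environment one would send this with:
#   import boto3
#   boto3.client("rds").create_db_cluster(**serverless_cluster_params)
```

Because billing is per second of capacity actually used, the `AutoPause` setting is what makes intermittent workloads (dev/test, low-traffic blogs) cost-effective: a paused database accrues no compute charges.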

Zendesk builds software for better customer relationships. It empowers organizations to improve customer engagement and better understand their customers. “Responsiveness and reliability are incredibly important to the organizations around the world who use Zendesk to engage with their customers. We’ve designed our enterprise-level operations and technology architecture to exacting standards, and we’re big fans of Amazon Aurora because it provides the high performance and availability we need in a database,” said David Bernstein, Director of Operations Services Management at Zendesk. “We’re excited about the introduction of Amazon Aurora Serverless because it means we can more efficiently apply that same high performance and availability to our less predictable workloads, without requiring granular management of database capacity to do so.”

Amazon DynamoDB adds multi-master, multi-region and backup/restore capabilities

Amazon DynamoDB is a fully managed, seamlessly scalable NoSQL database service. More than a hundred thousand AWS customers use Amazon DynamoDB to deliver consistent, single-digit millisecond latency for some of the world’s largest mobile, web, gaming, ad tech, and Internet of Things (IoT) applications. As customers build geographically distributed applications, they find they need the same low latency and scalability for their users around the world. With Global Tables, Amazon DynamoDB now supports multi-master capability across multiple regions. This allows applications to perform low-latency reads and writes to local Amazon DynamoDB tables in the same region where the application is being used. This means a consumer using a mobile app in North America experiences the same response times when they travel to Europe or Asia without requiring developers to add complex application logic. Amazon DynamoDB Global Tables also provide redundancy across multiple regions, so databases remain available to the application even in the unlikely event of a service level disruption in a single AZ or single region. Developers can set up Amazon DynamoDB Global Tables with just a few clicks in the AWS Management Console, simply selecting the regions where they want their tables to be replicated. Amazon DynamoDB handles the rest.
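The same "select the regions to replicate to" step is available programmatically. A sketch of the request shape, modeled on boto3's `dynamodb.create_global_table` (the table name and region list are hypothetical; identical tables with streams enabled must already exist in each listed region):

```python
# Sketch: a DynamoDB Global Table replicated across three regions,
# shaped like boto3's dynamodb.create_global_table arguments.
# Table name and regions are hypothetical examples.
global_table_request = {
    "GlobalTableName": "user-sessions",
    "ReplicationGroup": [
        {"RegionName": "us-east-1"},       # North America
        {"RegionName": "eu-west-1"},       # Europe
        {"RegionName": "ap-northeast-1"},  # Asia
    ],
}

# In a real environment:
#   import boto3
#   boto3.client("dynamodb", region_name="us-east-1") \
#        .create_global_table(**global_table_request)
regions = [r["RegionName"] for r in global_table_request["ReplicationGroup"]]
```

The application then simply reads and writes to the table endpoint in its nearest region; DynamoDB propagates the writes to the other replicas.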

Customers also need a quick, easy, and cost-effective way to back up their Amazon DynamoDB tables – whether just a few gigabytes or hundreds of terabytes – for long-term archival and compliance, and for short-term retention and data protection. With On-demand backup, Amazon DynamoDB customers can now instantly create full backups of their data in just one click, with no performance impact on their production applications. And, Point in Time Restore (PITR) allows customers to restore their data up to the minute for the past 35 days, providing protection from data loss due to application errors. On-demand backup is generally available today, with point-in-time restore coming in early 2018.
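Both operations reduce to single API requests. A sketch of the two request shapes, modeled on boto3's `dynamodb.create_backup` and `dynamodb.restore_table_to_point_in_time` (table names, backup name, and timestamp are hypothetical):

```python
# Sketch: on-demand backup and point-in-time restore requests for a
# DynamoDB table, shaped like the corresponding boto3 call arguments.
# All names and the timestamp are hypothetical.
from datetime import datetime, timezone

backup_request = {
    "TableName": "orders",
    "BackupName": "orders-pre-migration",   # full backup, no perf impact
}

restore_request = {
    "SourceTableName": "orders",
    "TargetTableName": "orders-restored",   # restores into a new table
    # Any point within the 35-day retention window:
    "RestoreDateTime": datetime(2017, 11, 28, 12, 0, tzinfo=timezone.utc),
}

# Real calls would be:
#   dynamodb.create_backup(**backup_request)
#   dynamodb.restore_table_to_point_in_time(**restore_request)
```

Note that a point-in-time restore writes into a new target table rather than overwriting the source, which is what makes it safe to use as a recovery path for application errors.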

“Customers around the world use Amazon retail websites every day to shop online. To provide the best possible discovery, purchasing, and delivery experience to every customer no matter where they live, Amazon increasingly needs databases capable of millisecond read/write latency with data that’s available globally,” said Dave Treadwell, VP of eCommerce Foundation, Amazon.com. “We already use Amazon DynamoDB for its scalability and speed, and we need that same performance with globally synchronized data. Global Tables enable us to process Amazon.com customer requests in the nearest AWS region for optimal performance, and provides peace of mind by keeping data in sync across each of our application stacks, all without having to write complex failover logic.”

Customers can build powerful applications over highly connected data with Amazon Neptune

Many applications being built today need to understand and navigate relationships between highly connected data to enable use cases like social applications, recommendation engines, and fraud detection. For example, a developer building a news feed into a social app will want the feed to prioritize showing users the latest updates from their family, from friends whose updates they “like” a lot, and from friends who live close to them. Amazon Neptune efficiently stores and navigates highly connected data, allowing developers to create sophisticated, interactive graph applications that can query billions of relationships with millisecond latency. Amazon Neptune’s query processing engine is optimized for both of the leading graph models, Property Graph and W3C’s Resource Description Framework (RDF), and their associated query languages, Apache TinkerPop Gremlin and RDF SPARQL, providing customers the flexibility to choose the right approach based on their specific graph use case.
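The news-feed example above can be phrased in either supported query language. A hedged sketch of what the two styles look like, held as strings here; the vertex labels, edge labels, and predicate names are hypothetical illustrations, not a fixed Neptune schema:

```python
# Sketch: "recent posts from people alice follows" in both graph models.
# All labels, properties, and predicates below are hypothetical.

# Property Graph / Gremlin traversal: start at the person, walk the
# 'follows' edges, then back along 'posted_by' to their posts.
gremlin_query = (
    "g.V().has('person', 'name', 'alice')"
    ".out('follows')"               # hypothetical edge label
    ".in('posted_by')"              # hypothetical edge label
    ".order().by('created_at', decr)"
    ".limit(10)"
)

# RDF / SPARQL: the equivalent pattern over hypothetical predicates.
sparql_query = """
SELECT ?post WHERE {
  ?alice  ex:name     "alice" .
  ?alice  ex:follows  ?friend .
  ?post   ex:author   ?friend .
  ?post   ex:created  ?time .
}
ORDER BY DESC(?time)
LIMIT 10
"""
```

The practical difference is mostly ergonomic: Gremlin expresses the query as a step-by-step traversal, while SPARQL expresses it as a declarative pattern match, and Neptune's engine accepts either model.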

Amazon Neptune storage scales automatically, with no downtime or performance degradation. Amazon Neptune is highly available and durable, automatically replicating data across multiple AZs and continuously backing up data to Amazon Simple Storage Service (Amazon S3). Amazon Neptune is designed to offer greater than 99.99 percent availability and automatically detect and recover from most database failures in less than 30 seconds. Amazon Neptune also provides advanced security capabilities, including network security through Amazon Virtual Private Cloud (VPC), encryption at rest using AWS Key Management Service (KMS), and encryption in transit using Transport Layer Security (TLS).

Thomson Reuters is the world’s leading source of news and information for professional markets. “Our customers are increasingly required to navigate a complex web of global tax policies and regulations. We needed an approach to model the sophisticated corporate structures of our largest clients to deliver an end-to-end tax solution,” said Tim Vanderham, Chief Technology Officer, Thomson Reuters Tax and Accounting. “We use a microservices architecture approach for our platforms and are beginning to leverage Amazon Neptune as a graph-based system to quickly create links within the data.”

Siemens is a global technology powerhouse that has stood for engineering excellence, innovation, quality, reliability and internationality for 170 years. “At Siemens, we need to manage data, make it available, and enable users to rapidly innovate,” said Thomas Hubauer, Portfolio Project Manager for Knowledge Graph and Semantics at Siemens Corporate Technology. “Siemens utilizes knowledge graph technology for applications ranging from semantic master data management and production monitoring, to finance and risk management. We are looking forward to investigating how Amazon Neptune can drive a range of knowledge graph use cases for our business and for our customers.”

About Amazon Web Services

For more than 11 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 100 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, and application development, deployment, and management from 44 Availability Zones (AZs) across 16 geographic regions in the U.S., Australia, Brazil, Canada, China, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world–including the fastest-growing startups, largest enterprises, and leading government agencies–to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit https://aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit www.amazon.com/about and follow @AmazonNews.

Ben

I am the owner of Cerebral-overload.com and the Verizon Wireless Reviewer for Techburgh.com. My love of gadgets came from my lack of a Nintendo Game Boy as a child. I vowed from that day on to get my hands on as many tech products as possible. My approach to a review is to make it informative for the technophile while still keeping it understandable to everyone. I am a new voice in the tech industry and am looking to make a mark wherever I go. When not reviewing products, I am also a 911 Telecommunicator just outside of Pittsburgh, PA. Twitter: @gizmoboaks
