
[AWS Certificate]-DynamoDB Accelerator (DAX)

What is DAX (DynamoDB Accelerator)?

 

  • In-memory caching, microsecond latency
  • Sits between DynamoDB and the client application (acts as a proxy)
  • Saves costs due to reduced read load on DynamoDB
  • Helps prevent hot partitions
  • Minimal code changes required to add DAX to your existing DynamoDB app
  • Supports only eventual consistency (strongly consistent requests are passed through to DynamoDB)
  • Not for write-heavy applications
  • Runs inside the VPC
  • Multi-AZ (3 nodes minimum recommended for production)
  • Secure (encryption at rest with KMS, VPC, IAM, CloudTrail ...)
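The proxy behaviour and the consistency caveat above can be sketched in plain Python. This is a toy simulation, not the real DAX client: `DaxLikeProxy`, `backend`, and the key names are all illustrative. It shows why only eventually consistent reads benefit from the cache, while strongly consistent reads pass straight through.

```python
# Toy stand-in for a DynamoDB table: a plain dict keyed by primary key.
backend = {"user#1": {"name": "alice"}}

class DaxLikeProxy:
    """Illustrative read-through proxy (NOT the real DAX API):
    eventually consistent reads are served from a cache; strongly
    consistent reads pass through to the table and are never cached."""

    def __init__(self, table):
        self.table = table
        self.cache = {}

    def get_item(self, key, consistent_read=False):
        if consistent_read:
            return self.table[key]        # pass-through, no caching
        if key not in self.cache:         # cache miss -> read-through
            self.cache[key] = self.table[key]
        return self.cache[key]

proxy = DaxLikeProxy(backend)
proxy.get_item("user#1")                  # miss: populates the cache
backend["user#1"] = {"name": "bob"}       # table updated behind the proxy
stale = proxy.get_item("user#1")          # eventual consistency: stale hit
fresh = proxy.get_item("user#1", consistent_read=True)  # pass-through
print(stale["name"], fresh["name"])       # alice bob
```

Once the cached copy exists, the eventually consistent read never touches the table again until the entry is evicted, which is exactly why DAX reduces read load but is unsuitable when every read must be current.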

 


 

DAX Architecture

  • DAX has two types of caches (internally)
    • Item Cache
    • Query Cache
  • Item cache stores results of item-level reads (= GetItem and BatchGetItem)
    • Default TTL of 5 min (specified while creating DAX cluster)
    • When cache becomes full, older and less popular items get removed
  • Query cache stores results of Query and Scan operations 
    • Default TTL of 5 min
  • Updates to the item cache or to the underlying DynamoDB table do not invalidate the query cache, so choose the query cache's TTL accordingly
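The two independent caches and the TTL behaviour can be modelled with a few lines of Python. This is a sketch under stated assumptions: `TtlCache` and the cache keys are invented for illustration, and a fake `now` clock replaces waiting 5 real minutes. It demonstrates the pitfall from the last bullet: updating an item does not invalidate a cached query result.

```python
TTL_SECONDS = 300  # DAX default TTL: 5 minutes for both caches

class TtlCache:
    """Tiny TTL cache modelling the item/query caches (illustrative only)."""
    def __init__(self, ttl=TTL_SECONDS):
        self.ttl = ttl
        self.entries = {}  # key -> (expires_at, value)

    def get(self, key, now):
        hit = self.entries.get(key)
        if hit and now < hit[0]:
            return hit[1]
        return None        # expired or never cached

    def put(self, key, value, now):
        self.entries[key] = (now + self.ttl, value)

item_cache = TtlCache()
query_cache = TtlCache()

# Cache a Query result, then update the item: the query cache is untouched.
query_cache.put("Query:status=open", [{"id": 1, "status": "open"}], now=0)
item_cache.put("id=1", {"id": 1, "status": "closed"}, now=10)  # newer write

stale_query = query_cache.get("Query:status=open", now=20)
print(stale_query)                                   # still the old result
print(query_cache.get("Query:status=open", now=400))  # None: TTL expired
```

Until the query entry's TTL lapses at t=300, readers of the cached Query result keep seeing `status=open` even though the item cache already holds the newer item, which is why the query cache TTL should match how much staleness the application can tolerate.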

DAX Operations

  • Only for item-level operations
  • Table-level operations must be sent directly to DynamoDB
  • Write operations use a write-through approach
  • Data is first written to DynamoDB and then to DAX, and the write operation is considered successful only if both writes succeed
  • You can use a write-around approach to bypass DAX, e.g. for writing a large amount of data you can write directly to DynamoDB (the item cache goes out of sync)
  • For reads, if DAX has the data (= cache hit), it's simply returned without going through DynamoDB
  • If DAX doesn't have the data (= cache miss), it's returned from DynamoDB and updated in DAX on the primary node
  • Strongly consistent reads are served directly from DynamoDB and will not be updated in DAX
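The write-through vs write-around trade-off above can be sketched against an in-memory "table". All names here (`WriteModes`, `write_through`, `write_around`) are illustrative, not real DAX or DynamoDB APIs; the point is to show how write-around leaves a stale copy in the item cache.

```python
class WriteModes:
    """Illustrative model of the two write paths described above."""
    def __init__(self):
        self.table = {}   # stands in for the DynamoDB table
        self.cache = {}   # stands in for the DAX item cache

    def write_through(self, key, value):
        # DAX behaviour: DynamoDB first, then the cache; both must succeed.
        self.table[key] = value
        self.cache[key] = value

    def write_around(self, key, value):
        # Bypass DAX (e.g. a bulk load): the item cache goes out of sync.
        self.table[key] = value

    def read(self, key):
        if key in self.cache:            # cache hit: DynamoDB is not touched
            return self.cache[key]
        value = self.table[key]          # cache miss: fetch and populate
        self.cache[key] = value
        return value

db = WriteModes()
db.write_through("a", 1)
db.write_around("a", 2)     # table now holds 2, cache still holds 1
print(db.read("a"))         # 1 -> the stale cached copy, as warned above
```

After the write-around, a cache-hit read keeps returning the old value until the entry expires or is evicted, which is exactly the "item cache goes out of sync" caveat from the bullet list.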

 

DynamoDB - DAX vs ElastiCache

  • DAX is purpose-built for DynamoDB: it is API-compatible, so it caches individual items and Query/Scan results with almost no code changes
  • ElastiCache (Redis/Memcached) is a general-purpose cache: better suited for storing things like aggregation results or application-side objects, but you manage the caching logic (keys, invalidation) in your own code

Implementing DAX

 

  • To implement DAX, we create a DAX cluster
  • A DAX cluster consists of one or more nodes (up to 10 nodes per cluster)
  • Each node is an instance of DAX
  • One node is the master (primary) node
  • The remaining nodes act as read replicas
  • DAX internally handles load balancing between these nodes
  • A minimum of 3 nodes is recommended for production
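The primary/replica layout above can be sketched as a small routing model. This is a simplified simulation, not the real DAX routing logic: `DaxClusterSketch` and the node names are assumptions. It captures the idea that writes go through the primary while reads are load-balanced across all nodes.

```python
import itertools

class DaxClusterSketch:
    """Illustrative model of the cluster layout described above: one primary
    node takes writes; reads are spread round-robin across every node."""
    def __init__(self, node_names):
        self.primary, *self.replicas = node_names
        self._rr = itertools.cycle(node_names)   # simple round-robin

    def route_write(self):
        return self.primary            # writes always go to the primary

    def route_read(self):
        return next(self._rr)          # reads spread over all nodes

cluster = DaxClusterSketch(["node-a", "node-b", "node-c"])  # 3-node minimum
print(cluster.route_write())                     # node-a (the primary)
print([cluster.route_read() for _ in range(4)])  # cycles through all nodes
```

With three or more nodes, losing one node still leaves replicas serving reads in the other AZs, which is why the 3-node minimum is the production recommendation.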

Demo