Kinesis: Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Data will be available for your Amazon Kinesis Applications to read and process from the stream. An Amazon Kinesis Application is a data consumer that reads and processes data from an Amazon Kinesis data stream. You can build your applications using either the Amazon Kinesis API or the Amazon Kinesis Client Library (KCL). Amazon Kinesis Data Streams manages the infrastructure, storage, networking, and configuration needed to stream your data at the level of your data throughput. You do not have to worry about provisioning, deployment, or ongoing maintenance of hardware, software, or other services for your data streams. In addition, Amazon Kinesis Data Streams synchronously replicates data across three Availability Zones, providing high availability and data durability.

The throughput of an Amazon Kinesis data stream is designed to scale without limits by increasing the number of shards within a data stream. By default, records of a stream are accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to up to 7 days by enabling extended data retention. The maximum size of a data blob (the data payload before Base64 encoding) within one record is 1 megabyte (MB). Each shard can support up to 1,000 PUT records per second.

Kinesis vs SQS: Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows.

A record is the unit of data stored in an Amazon Kinesis data stream. A record is composed of a sequence number, partition key, and data blob. The data blob is the data of interest your data producer adds to a data stream. The maximum size of a data blob (the data payload before Base64 encoding) is 1 megabyte (MB). The partition key is used to segregate and route records to different shards of a data stream. A partition key is specified by your data producer while adding data to an Amazon Kinesis data stream. For example, assume you have a data stream with two shards (shard 1 and shard 2). You can configure your data producer to use two partition keys (key A and key B) so that all records with key A are added to shard 1 and all records with key B are added to shard 2. A sequence number is a unique identifier for each record. The sequence number is assigned by Amazon Kinesis when a data producer calls the PutRecord or PutRecords operation to add data to an Amazon Kinesis data stream. Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become.
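As a rough illustration of the PutRecord flow and partition-key routing described above, here is a minimal boto3 sketch; the stream name, region, and payloads are illustrative assumptions, not part of the original notes.

```python
# Minimal sketch: writing records to a Kinesis data stream with boto3.
# The stream name "clickstream-demo" and the region are illustrative assumptions.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

for key, payload in [("key-A", {"page": "/home"}), ("key-B", {"page": "/cart"})]:
    # Records sharing a partition key are routed to the same shard;
    # Kinesis assigns the sequence number and returns it in the response.
    response = kinesis.put_record(
        StreamName="clickstream-demo",
        Data=json.dumps(payload).encode("utf-8"),  # data blob, max 1 MB before Base64
        PartitionKey=key,
    )
    print(key, response["ShardId"], response["SequenceNumber"])
```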
Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with the existing business intelligence tools and dashboards you're already using today. Amazon Kinesis Data Firehose synchronously replicates data across three facilities in an AWS Region, providing high availability and durability for the data as it is transported to the destinations.

A source is where your streaming data is continuously generated and captured. For example, a source can be a logging server running on Amazon EC2 instances, an application running on mobile devices, a sensor on an IoT device, or a Kinesis stream.

A shard is a uniquely identified group of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity. Each shard can support up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second, and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys). The data capacity of your stream is a function of the number of shards that you specify for the stream. The total capacity of the stream is the sum of the capacities of its shards.
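To make the Firehose ingestion path concrete, here is a minimal boto3 sketch that puts a single record onto a delivery stream; the delivery stream name and record contents are illustrative assumptions.

```python
# Minimal sketch: sending a record to a Kinesis Data Firehose delivery stream.
# The delivery stream "web-logs-to-s3" is an illustrative assumption; Firehose
# buffers the data and delivers it to the configured destination (e.g. Amazon S3).
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

response = firehose.put_record(
    DeliveryStreamName="web-logs-to-s3",
    Record={"Data": b'{"event": "page_view", "path": "/home"}\n'},
)
print(response["RecordId"])
```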
RDS SQL & NoSQL. RDS/ElastiCache: On a MySQL DB instance, avoid letting tables in your database grow too large. Provisioned storage limits restrict the maximum size of a MySQL table file to 16 TB. Instead, partition your large tables so that file sizes are well under the 16 TB limit.

Read Replicas are supported by Amazon Aurora, Amazon RDS for MySQL, MariaDB, and PostgreSQL. Unlike Multi-AZ deployments, Read Replicas for these engines use each engine's built-in replication technology and are subject to its strengths and limitations. In particular, updates are applied to your Read Replica(s) after they occur on the source DB instance ("asynchronous" replication), and replication lag can vary significantly. Multi-AZ deployments for the MySQL, MariaDB, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up to date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology.

Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing: data blocks and disks are continuously scanned for errors and replaced automatically. Amazon Aurora Replicas share the same underlying storage as the primary instance. Any Amazon Aurora Replica can be promoted to become primary without any data loss and therefore can be used for enhancing fault tolerance in the event of a primary DB instance failure. To increase database availability, simply create 1 to 15 replicas in any of 3 AZs, and Amazon RDS will automatically include them in failover primary selection in the event of a database outage.

DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability. Read consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item. Reads are eventually consistent by default; strongly consistent reads are also supported. A table is a collection of data items, just like a table in a relational database is a collection of rows. Each table must have a primary key. The primary key can be a single attribute key or a "composite" attribute key that combines two attributes. The attribute(s) you designate as a primary key must exist for every item in the table.
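To illustrate the read-consistency options mentioned above, here is a minimal boto3 sketch that reads the same item with the default eventually consistent read and then with a strongly consistent read; the table name and key attribute are illustrative assumptions.

```python
# Minimal sketch: eventually consistent vs. strongly consistent reads in DynamoDB.
# The table "Orders" and its key attribute "OrderId" are illustrative assumptions.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Orders")

# Default read: eventually consistent (may not reflect a very recent write).
eventual = table.get_item(Key={"OrderId": "1001"})

# Strongly consistent read: reflects all writes acknowledged before the read.
strong = table.get_item(Key={"OrderId": "1001"}, ConsistentRead=True)

print(eventual.get("Item"), strong.get("Item"))
```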
VPC: In order for you to ping between instances, you need to allow ICMP traffic in the security group. A security group is a virtual firewall, and it is a stateful firewall. When you launch an instance in a VPC, you can assign up to a maximum of 5 security groups. Security groups are at the instance level and not at the subnet level. If you don't specify a security group at instance launch, the instance is automatically assigned the default security group for the VPC. You can specify "allow" rules in security groups but not "deny" rules. You can specify separate rules for inbound and outbound traffic.
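As a concrete example of the ICMP point above, the following boto3 sketch adds an inbound ICMP (ping) rule to a security group; the security group ID and CIDR range are illustrative assumptions.

```python
# Minimal sketch: allowing inbound ICMP (ping) on a security group.
# The security group ID and CIDR range are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "icmp",
            "FromPort": -1,  # -1 means all ICMP types
            "ToPort": -1,    # -1 means all ICMP codes
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "allow ping within the VPC"}],
        }
    ],
)
# Because security groups are stateful, the echo reply is allowed back automatically.
```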
DNS/Route 53 & Lambda. DNS/Route 53: The Route 53 weighted routing policy is a way to add elasticity to application deployments.

Lambda: The AWS Lambda function memory allocation range is a minimum of 128 MB and a maximum of 1536 MB (recently increased to 3008 MB in 11/2017). The AWS Lambda ephemeral disk (temporary) size default is 512 MB. The AWS Lambda maximum execution duration for a function is 300 sec (5 minutes). The AWS Lambda maximum request payload size is 6 MB for synchronous invocation. The AWS Lambda maximum request payload size is 128 KB for asynchronous invocation. The AWS Lambda maximum number of concurrent invocations per region is 5000.
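Here is a minimal boto3 sketch of creating a function with explicit memory and timeout settings that stay within the limits listed above; the function name, IAM role ARN, and deployment package path are illustrative assumptions.

```python
# Minimal sketch: creating a Lambda function with explicit memory and timeout
# settings. The function name, IAM role ARN, and zip file path are assumptions.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

with open("handler.zip", "rb") as package:
    lambda_client.create_function(
        FunctionName="demo-handler",
        Runtime="python3.9",
        Role="arn:aws:iam::123456789012:role/demo-lambda-role",
        Handler="handler.lambda_handler",
        Code={"ZipFile": package.read()},
        MemorySize=512,  # between the 128 MB minimum and the documented maximum
        Timeout=300,     # 5 minutes, the maximum execution duration noted above
    )
```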
API Gateway: APIs built on Amazon API Gateway can accept any payloads sent over HTTP. Typical data formats include JSON, XML, query string parameters, and request headers. All of the APIs created with Amazon API Gateway expose HTTPS endpoints only; Amazon API Gateway does not support unencrypted (HTTP) endpoints. Amazon API Gateway is integrated with AWS CloudTrail to give you a fully auditable history of the changes to your REST APIs. All API calls made to the Amazon API Gateway APIs to create, modify, delete, or deploy REST APIs are logged to CloudTrail in your AWS account.
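To illustrate the CloudTrail integration described above, this boto3 sketch looks up recent management events recorded for the API Gateway service; the time window and the overall querying approach are illustrative assumptions rather than part of the original notes.

```python
# Minimal sketch: querying CloudTrail for recent Amazon API Gateway management
# events (create/modify/delete/deploy calls). The 7-day window is an assumption.
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "apigateway.amazonaws.com"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"])
```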
AWS Import/Export: AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to AWS. AWS Import/Export is a good choice if you have 16 terabytes (TB) or less of data to import into Amazon Simple Storage Service (Amazon S3) or Amazon Elastic Block Store (Amazon EBS). You can also export data from Amazon S3 with AWS Import/Export. AWS Import/Export doesn't support export jobs from Amazon Elastic Block Store (EBS). An Amazon S3 export transfers individual objects from Amazon S3 buckets to your device, creating one file for each object. You can export from more than one bucket, and you can specify which files to export using manifest file options. You cannot export Amazon S3 objects that have been transitioned to the Amazon Glacier storage class using Amazon S3 Object Lifecycle Management.

Network Load Balancer provides TCP (Layer 4) load balancing. It is architected to handle millions of requests per second and sudden, volatile traffic patterns, and it provides extremely low latencies. In addition, Network Load Balancer preserves the source IP of the clients, provides static IP support and zonal isolation, and supports long-running connections that are very useful for WebSocket-type applications. Network Load Balancer preserves the source IP of the client, which the Classic Load Balancer does not; customers can use proxy protocol with Classic Load Balancer to get the source IP. Network Load Balancer automatically provides a static IP per Availability Zone to the load balancer and also enables assigning an Elastic IP to the load balancer per Availability Zone. This is not supported with Classic Load Balancer. Application Load Balancer supports load balancing of applications using HTTP and HTTPS (Secure HTTP) protocols.

An Auto Scaling group contains a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management. A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI) and the instance type; a key pair, security groups, and block storage devices are optional. An Auto Scaling policy is a policy used by Auto Scaling that uses CloudWatch alarms to determine when your Auto Scaling group should scale out or scale in. Each CloudWatch alarm watches a single metric and sends messages to Auto Scaling when the metric breaches a threshold that you specify in the policy. An instance that you want to attach to an Auto Scaling group must meet the following criteria: the instance is in the running state; the AMI used to launch the instance must still exist; the instance is not a member of another Auto Scaling group; the instance is in the same Availability Zone as the Auto Scaling group; if the Auto Scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC; if the Auto Scaling group has an attached target group, the instance and the load balancer must both be in the same VPC.

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time. There is no additional charge for access logs. You are charged storage costs for Amazon S3, but not charged for the bandwidth used by Elastic Load Balancing to send log files to Amazon S3.
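As a sketch of enabling the access logging described above, assuming an Application or Network Load Balancer managed through the ELBv2 API; the load balancer ARN, S3 bucket, and prefix are illustrative assumptions, and the bucket policy allowing log delivery is not shown.

```python
# Minimal sketch: enabling Elastic Load Balancing access logs for an ALB/NLB
# via the ELBv2 API. Load balancer ARN, bucket, and prefix are assumptions;
# the S3 bucket policy must already allow ELB log delivery.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/demo-alb/50dc6c495c0c9188"
    ),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "demo-elb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "demo-alb"},
    ],
)
```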
CloudFormation: CloudFormation templates have the following sections: Metadata, Parameters, Mappings, Conditions, Transform, Resources, and Outputs. Only the Resources section is required; all other sections are optional. CloudFormer is a template creation beta tool that creates an AWS CloudFormation template from the existing AWS resources in your account. You select any supported AWS resources that are running in your account, and CloudFormer creates a template in an Amazon S3 bucket. Regular expressions (commonly known as regexes) can be specified in a number of places within an AWS CloudFormation template, such as for the AllowedPattern property when creating a template parameter. You can also configure your AWS CloudFormation template so that logs are published to Amazon CloudWatch, which displays logs in the AWS Management Console so you don't have to connect to your Amazon EC2 instance.
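To make the template sections concrete, here is a minimal boto3 sketch that builds a template with Parameters (including an AllowedPattern regex), Resources, and Outputs sections and submits it with create_stack; the stack name and the example S3 bucket resource are illustrative assumptions.

```python
# Minimal sketch: a CloudFormation template with Parameters, Resources, and
# Outputs sections (only Resources is required), submitted via create_stack.
# The stack name and the example bucket resource are illustrative assumptions.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "BucketSuffix": {
            "Type": "String",
            "AllowedPattern": "[a-z0-9-]+",  # regex via the AllowedPattern property
        }
    },
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": {"Fn::Sub": "demo-bucket-${BucketSuffix}"}},
        }
    },
    "Outputs": {
        "BucketName": {"Value": {"Ref": "DemoBucket"}}
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="demo-stack",
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "BucketSuffix", "ParameterValue": "team-a"}],
)
```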
SWF: Amazon SWF enables applications for a range of use cases, including media processing, web application back ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks. Tasks are processed by workers, which are programs that interact with Amazon SWF to get tasks, process them, and return the results. A worker implements an application processing step. You can build workers in different programming languages and even reuse existing components to quickly create the worker. SWF ensures that a task is assigned only once and is never duplicated. The maximum duration for a workflow within SWF is 1 year.

The AWS Flow Framework is a programming framework that enables you to develop Amazon SWF-based applications quickly and easily. It abstracts the details of task-level coordination and asynchronous interaction with simple programming constructs. Amazon SWF provides long polling, which significantly reduces the number of polls that return without any tasks. When workers and deciders poll Amazon SWF for tasks, the connection is retained for a minute if no task is available. If a task does become available during that period, it is returned in response to the long-poll request. With Amazon SWF you can use any programming language to write a worker or a decider, as long as you can communicate with Amazon SWF using the web service APIs.

SWF limits: 100 SWF domains per account; 10,000 workflow and activity types per domain. At any given time you can have 100,000 open executions in a domain.
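A minimal sketch of the worker-side long polling described above, using boto3; the domain name, task list, and the "processing" performed are illustrative assumptions.

```python
# Minimal sketch: an Amazon SWF activity worker that long-polls for tasks.
# The domain, task list, and the work performed are illustrative assumptions.
import boto3

swf = boto3.client("swf", region_name="us-east-1")

while True:
    # The poll holds the connection open (long polling) and returns an empty
    # response if no task becomes available within the polling window.
    task = swf.poll_for_activity_task(
        domain="demo-domain",
        taskList={"name": "demo-task-list"},
        identity="worker-1",
    )
    if not task.get("taskToken"):
        continue  # no task this round; poll again

    result = task.get("input", "").upper()  # placeholder processing step
    swf.respond_activity_task_completed(taskToken=task["taskToken"], result=result)
```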
SNS: Amazon SNS is a cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. It is designed to make web-scale computing easier for developers. Amazon SNS follows the "publish-subscribe" (pub-sub) messaging paradigm, with notifications being delivered to clients using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates. The SNS service can support a wide variety of needs, including event notification, monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and any other application that generates or consumes notifications. A common pattern is to use SNS to publish messages to Amazon SQS message queues to reliably send messages to one or many system components asynchronously.
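A minimal boto3 sketch of the SNS-to-SQS fan-out pattern mentioned above; the topic and queue ARNs are illustrative assumptions, and the SQS queue policy that permits SNS delivery is assumed to exist and is not shown.

```python
# Minimal sketch: the SNS -> SQS fan-out pattern. Topic and queue ARNs are
# illustrative assumptions; the queue policy allowing SNS to send messages
# is assumed to be in place.
import boto3

sns = boto3.client("sns", region_name="us-east-1")

topic_arn = "arn:aws:sns:us-east-1:123456789012:demo-orders-topic"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:demo-orders-queue"

# Subscribe the queue to the topic so every published message is pushed to it.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publish once; SNS pushes the message to all subscribers of the topic.
sns.publish(TopicArn=topic_arn, Message='{"orderId": "1001", "status": "created"}')
```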
IAM/Security. CloudWatch/Monitoring: Amazon CloudWatch metrics are retained as follows: data points with a period of less than 1 minute are kept for 3 hours; 1-minute data points for 15 days; 5-minute data points for 63 days; 1-hour data points for 455 days. CloudWatch data points published at a higher resolution are still available after that period but are aggregated to a lower resolution. CloudWatch metrics cannot be deleted; they expire on their own. The CloudWatch Logs agent sends log data from an EC2 instance to CloudWatch every 5 seconds by default, and this interval can be changed. Basic Monitoring for Amazon EC2 instances provides metrics at 5-minute frequency and three status check metrics at 1-minute frequency, for no additional charge. Detailed Monitoring for Amazon EC2 instances provides all metrics available to Basic Monitoring at 1-minute frequency, for an additional charge.
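As a small illustration of metric resolution, the following boto3 sketch publishes a high-resolution (1-second) custom metric, the kind of sub-minute data point that is retained for 3 hours before being aggregated to lower resolutions; the namespace, metric name, and value are illustrative assumptions.

```python
# Minimal sketch: publishing a high-resolution (1-second) custom metric.
# Sub-minute data points like this are retained for 3 hours and then
# aggregated to lower resolutions. Namespace and metric name are assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="Demo/App",
    MetricData=[
        {
            "MetricName": "QueueDepth",
            "Value": 42,
            "Unit": "Count",
            "StorageResolution": 1,  # 1 = high resolution; 60 = standard resolution
        }
    ],
)
```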