29 November 2020

AWS-Database-index

AWS Database Types

RDS - OLTP (Online Transaction Processing)

  - SQL Server

  - Oracle

  - MySQL

  - PostgreSQL

  - Amazon Aurora

  - MariaDB

- DynamoDB

- Redshift - OLAP (Online Analytical Processing, Data Warehousing)

- Elasticache


Non-Relational Database Structure

- Database

  - Collection (table)

    - Document (row)

      - Key/Value Pairs (fields)


Data Warehousing

- Used for Business Intelligence (Cognos, Jaspersoft, etc.)

- OLTP Vs. OLAP


OLTP (Online Transaction Processing)

- Order number 2120121

- Pulls up a row of data (name, date, address, status)


OLAP (Online Analytical Processing, used for Data Warehousing)

- Pulls in large numbers of records

- Uses a different type of architecture, for both the database and the infrastructure


Elasticache

- A web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud

- Types - Memcached, Redis
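
A minimal sketch of using an ElastiCache Redis cluster from application code, assuming the redis-py client and a hypothetical cluster endpoint (ElastiCache hands you an endpoint; you talk to it with a standard Redis client):

```python
import redis

# Connect to a hypothetical ElastiCache Redis endpoint (placeholder hostname).
cache = redis.Redis(
    host="my-cluster.abc123.0001.use1.cache.amazonaws.com",
    port=6379,
)

cache.set("session:42", "jordan", ex=300)  # cache a value with a 5-minute TTL
print(cache.get("session:42"))             # b'jordan'
```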


Multi-AZ

- Used for DR

- Not used for performance gains


Read Replicas

- Used for scaling, performance gains

- You can have up to five Read Replicas
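
Adding a Read Replica to an existing RDS instance is a single API call; a sketch with boto3, using hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a Read Replica of an existing source instance to offload read traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",  # hypothetical replica name
    SourceDBInstanceIdentifier="mydb",      # hypothetical source instance
)
```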


Aurora scaling

- 2 copies of data in each AZ, 3 AZs minimum (total of 6 copies)

- Designed to handle losses transparently

- Self-healing storage


Aurora Replicas - Up to 15 Replicas

MySQL Replicas - Up to 5 Replicas


DynamoDB vs RDS

- DynamoDB offers "push button" scaling

- RDS requires bigger instance size or to add Read Replica


DynamoDB

- Stored on SSD storage

- Spread across 3 geographically distinct data centers

- Types

  - eventually consistent reads (default)

  - strongly consistent reads
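
The read type is chosen per request. A sketch with boto3, using a hypothetical Orders table keyed on the order number from the OLTP example above:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Eventually consistent read (default): cheaper, may not reflect a very recent write.
dynamodb.get_item(
    TableName="Orders",                 # hypothetical table
    Key={"OrderId": {"S": "2120121"}},
)

# Strongly consistent read: returns the most up-to-date data.
dynamodb.get_item(
    TableName="Orders",
    Key={"OrderId": {"S": "2120121"}},
    ConsistentRead=True,
)
```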


Redshift Configuration

Single Node (160GB)

Multi-Node

- Leader Node (manages client connections)

- Compute Node (stores data, performs queries, up to 128 nodes)
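
Clients never talk to the compute nodes directly; they connect to the leader node over a standard PostgreSQL-compatible interface. A sketch using psycopg2 with a hypothetical cluster endpoint and credentials:

```python
import psycopg2

# Connect to the leader node, which plans queries and distributes work to compute nodes.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,          # Redshift's default port
    dbname="dev",
    user="awsuser",
    password="...",     # placeholder
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM sales;")  # hypothetical table
    print(cur.fetchone())
```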


Elasticache

Memcached - Multi-AZ NOT available

Redis - Multi-AZ available

AWS-EC2


  • A Linux-based/Windows-based virtual server that you can provision.
  • You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region.

  • Server environments called instances.

  • Package OS and additional installations in a reusable template called Amazon Machine Images.
  • Various configurations of CPU, memory, storage, and networking capacity for your instances, known as instance types.

    1. t-type and m-type for general purpose
    2. c-type for compute optimized
    3. r-type, x-type and z-type for memory optimized
    4. d-type, h-type and i-type for storage optimized
    5. f-type, g-type and p-type for accelerated computing

    • Secure login information for your instances using key pairs.
    • Storage volumes for temporary data that are deleted when you STOP or TERMINATE your instance, known as instance store volumes.
    • Multiple physical locations for deploying your resources, such as instances and EBS volumes, known as regions and Availability Zones.
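
A minimal sketch of provisioning one instance with boto3; the AMI ID and key pair name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single general-purpose (t-type) instance from an AMI.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",          # t-type: general purpose
    KeyName="my-key-pair",            # key pair used for SSH login
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```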

    AWS-DNS

    SOA Record (Start of Authority) contains:

    - Name of the server that supplied data for the zone

    - Admin of the zone

    - Current version of the data file

    - Number of seconds a secondary name server should wait before checking for updates

    - Number of seconds a secondary name server should wait before retrying a failed zone transfer

    - Maximum number of seconds a secondary name server can use data before it must refresh or expire

    - Default number of seconds for the time-to-live (TTL) file on resource records


    NS Records (Name Server Record) - used by top-level domain servers to direct traffic to the content DNS server, which contains the authoritative DNS records

    A Records (Address Record) - used to translate domain name to IP address

    TTL Record (Time-To-Live Record) - The length of time that a record is cached on either the resolving server or the user's local PC

    CName Record (Canonical Name Record) - Can be used to resolve one domain name to another (jordanviolet.com points to violetfamily.com)


    Alias Record

    - Works like CName record in that you can map one DNS name to another

    - A CName can't be used for naked domain names: you can't have a CName for violetfamily.com; it must be either an A Record or an Alias
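
A sketch of creating an Alias record for the naked domain with boto3; the hosted zone ID and ELB values are placeholders:

```python
import boto3

route53 = boto3.client("route53")

# UPSERT an Alias A record at the zone apex, pointing at an ELB.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE12345",  # hypothetical hosted zone for violetfamily.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "violetfamily.com",  # naked domain: must be A/Alias, not CName
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2EXAMPLE6789",  # the ELB's hosted zone ID (placeholder)
                    "DNSName": "my-elb-123456.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```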


    Exam Tips

    - ELBs do not have pre-defined IPv4 addresses; you must resolve to them using their DNS name

    - Understand the difference between Alias Record and CName Record

    - Given the choice, always choose an Alias Record over CName

    28 November 2020

    AWS-IAM (Identity Access Management)

    • Allows you to manage users and their level of access to the AWS Console. 
    • It is used to set users, permissions and roles. It allows you to grant access to the different parts of the AWS platform.
    • PAM and IAM are not the same. Whilst PAM protects users with privileged access to sensitive data, IAM deals with a business's everyday users.

      • Users - End users
      • Groups - Collection of users under one set of permissions (Admins, HR, etc.)
      • Roles - Create roles and assign them to AWS resources (e.g. giving an EC2 instance a role that allows it to write to S3)
      • Policies - Documents that define one or more permissions. Apply policies to users, groups, and roles (see the policy sketch after this list)
    • Centralized control of AWS account
    • Shared access to AWS account
    • Granular permissions
    • Identity Federation (AD, FB, LinkedIn, etc.)
    • Multifactor Authentication
    • Provide temporary access for users/devices/services
    • Allows you to setup password rotation policy
    • Integrates with many services
    • Supports PCI DSS Compliance
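
A sketch of a policy document and creating it, assuming hypothetical names; the JSON structure (Version/Statement/Effect/Action/Resource) is the standard IAM policy grammar:

```python
import json
import boto3

iam = boto3.client("iam")

# A policy document granting read-only access to one S3 bucket (names are hypothetical).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-bucket",
            "arn:aws:s3:::my-bucket/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="MyBucketReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```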

    AWS-CloudFront

    • CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users.
    • It delivers your content through a worldwide network of data centers called edge locations.

    AWS-Lightsail

    • Amazon Lightsail offers bundles of cloud compute power and memory for new or less experienced cloud users. 
    • It derives its compute power from an EC2 instance and repackages it for customers who are new or inexperienced with the cloud.
    • A Virtual Private Server: a simple server with a fixed IP and SSH/RDP access that does not take full advantage of AWS services

    AWS-WorkMail

    • It is a secure, managed business email and calendar service with support for existing desktop and mobile email client applications. 
    • You can also set up interoperability with Microsoft Exchange Server, and programmatically manage users, groups, and resources using the Amazon WorkMail SDK

    AWS-CLB

    CLB - Classic Load Balancer

    Load balancer routes traffic between clients and backend servers based on IP address and TCP port

    AWS-SNS

    • It is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.
    • It makes it easy to set up, operate, and send notifications from the cloud.
    • It follows the publish-subscribe(pub-sub) messaging paradigm with notification being delivered to the client using a push mechanism. 
    • To prevent the message from being lost, all messages published to Amazon SNS are stored redundantly across multiple Availability Zones.
    • SNS is push and SQS is poll.
    • SNS has three major components - Publisher, Topic and Subscriber

    Publisher

    • The entity that triggers the sending of a message(e.g. CloudWatch Alarm, Any application or S3 events)
    • Publishers are also known as producers; they produce and send messages to an SNS topic, which is a logical access point

    Topic

    • Object to which you publish your message
    • Subscriber subscribe to the topic to receive the message
    • By default, SNS offers 100,000 topics per account (Soft limit)
    • With the exception of SMS messages, Amazon SNS messages can contain up to 256 KB of text data, including XML, JSON and unformatted text

    Subscriber

    • An endpoint to which a message is sent. Messages are simultaneously pushed to subscribers.
    • Subscribers such as web servers, email addresses, Amazon SQS queues, AWS Lambda functions receive the message or notification from the SNS over one of the supported protocols (Amazon SQS, email, Lambda, HTTP, SMS).
    • Subscriber subscribe to the topic to receive the message.
    • By default, SNS offers 10 million subscriptions per topic (Soft limit).
    • SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. 
    • You can use Amazon SQS and SNS to decouple and scale micro services, distributed systems, and serverless applications, and improve reliability.
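
A sketch of the publisher/topic/subscriber flow with boto3; the topic name and email address are placeholders:

```python
import boto3

sns = boto3.client("sns")

# Create a topic (the logical access point publishers send to).
topic = sns.create_topic(Name="order-events")

# Subscribe an email endpoint; SNS also supports SQS, Lambda, HTTP & SMS protocols.
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="ops@example.com",  # placeholder address
)

# Publish once; SNS pushes the message to every subscriber simultaneously.
sns.publish(
    TopicArn=topic["TopicArn"],
    Subject="Order update",
    Message="Order 2120121 has shipped.",
)
```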

    Benefits of SNS

    • Instantaneous delivery - SNS is based on push-based delivery. This is the key difference between SNS and SQS. SNS is pushed once you publish the message in a topic and the message is delivered to multiple subscribers.
    • Flexible - SNS supports multiple endpoint types. Multiple endpoint types can receive the message over multiple transport protocols such as email, SMS, Lambda, Amazon SQS, HTTP, etc.
    • Inexpensive - SNS service is quite inexpensive as it is based on pay-as-you-go model, i.e., you need to pay only when you are using the resources with no up-front costs.
    • Ease of use - SNS service is very simple to use as Web-based AWS Management Console offers the simplicity of the point-and-click interface.
    • Simple Architecture - SNS simplifies messaging architecture by offloading message filtering logic from subscribers and message routing logic from publishers. Instead of receiving every message published to the topic, each subscriber receives only the messages of interest to it.

    27 November 2020

    Solr

    #Apache Mahout

    Apache Mahout
    What is Apache Mahout?
    What does Apache Mahout do?
    What is the History of Apache Mahout? When did it start?
    What are the features of Apache Mahout?
    What is the Roadmap for Apache Mahout version 1.0?
    What is the difference between Apache Mahout and Apache Spark’s MLlib?
    What motivated you to work on Apache Mahout? How do you compare Mahout with Spark and H2O?
    Explain about Collaborative Filtering
    Explain about Item-based Collaborative Filtering
    Explain about Matrix Factorization with Alternating Least Squares
    Explain about Matrix Factorization with Alternating Least Squares on Implicit Feedback
    Explain about Classification
    Explain about Naive Bayes
    Explain about Complementary Naive Bayes
    Explain about Random Forest
    Explain about Clustering
    Explain about Canopy Clustering
    Explain about k-Means Clustering
    Explain about Fuzzy k-Means
    Explain about Streaming k-Means
    Explain about Spectral Clustering
    Explain about Dimensionality Reduction
    Explain about Lanczos Algorithm
    Explain about Stochastic SVD
    Explain about Principal Component Analysis
    Explain about Topic Models
    Explain about Latent Dirichlet Allocation
    Explain about Miscellaneous
    Explain about Frequent Pattern Matching
    Explain about RowSimilarityJob
    Explain about ConcatMatrices
    Explain about Colocations
    How is it different from doing machine learning in R or SAS?
    How can we scale Apache Mahout in Cloud?
    Compare Mahout & MLlib
    Is “talent crunch” a real problem in Big Data? What has been your personal experience around it?
    Mention some machine learning algorithms exposed by Mahout?
    Mention some use cases of Apache Mahout?

    #Apache Drill

    Apache Drill

    #Oozie

    Oozie
    What Is Apache Oozie?
    What Are The Alternatives To Oozie Workflow Scheduler?
    What Is Oozie Workflow Application?
    What Are The Properties That We Have To Mention In .properties?
    What Are The Extra Files We Need When We Run A Hive Action In Oozie?
    What Is Decision Node In Oozie?
    What Is Application Pipeline In Oozie?
    What Are All The Actions Can Be Performed In Oozie?
    What are the types Of Oozie Jobs?
    What is Oozie Workflow?
    What is Oozie Coordinator?
    What is Oozie Bundle ?
    Why we need For Oozie?
    Why We Use Fork And Join Nodes Of Oozie?
    Why Oozie Security?
    How Does Oozie Work?
    How To Deploy Application?
    How To Execute Job?
    Mention Some Features Of Oozie?
    Mention Workflow Job Parameters?

    #Zookeeper

    Zookeeper
    What is ZooKeeper?
    What are the Benefits Of Distributed Applications?
    What are the challenges Of Distributed Applications?
    What are the possible Job roles?
    What must we know to work on Zookeeper well?
    What is Apache Zookeeper Meant For?
    What are the Benefits Of Zookeeper?
    What do you mean by ZNode?
    What is the model of a ZooKeeper cluster?
    What is the zookeeper daemon name?
    What is the ZooKeeper ensemble?
    What is ZooKeeper quorum?
    What is the difference between the ZooKeeper ensemble and ZooKeeper quorum?
    What is ZooKeeper Atomic Broadcast (ZAB) protocol?
    What are the key elements in ZooKeeper Architecture?
    What is the Data model, and the hierarchical namespace?
    What are Watches in ZooKeeper?
    What is org.apache.jute package?
    What are the barriers?
    What is ZooKeeper Client?
    What is Zookeeper Cluster?
    What are the applications of Apache ZooKeeper?
    What is CLI In Zookeeper?
    What are Zookeeper Queues?
    What is Zookeeper Leader election?
    Explain the types Of Znodes?
    Explain the Methods Of ZooKeeper class?
    Constituents of Apache ZooKeeper Architecture?
    Containerizing ZooKeeper With Docker?
    State about ZooKeeper WebUI?

    #Apache Flink

    Apache Flink

    20 November 2020

    AWS-CodeArtifact

    • CodeArtifact makes it easy for organizations of any size to securely store, publish & share software packages used in the software development process.
    • Users can configure CodeArtifact to fetch software packages from public repositories such as the npm registry, Maven Central & PyPI with just a few clicks.
    • Users can use existing package managers such as npm, pip, yarn, twine & Maven to publish developed packages.
    • Users can approve packages for use by building automated workflows using CodeArtifact APIs and AWS EventBridge.
    • It operates in multiple Availability Zones and stores artifact data and metadata in S3 and DynamoDB.
    • It is a highly available service that scales to meet the needs of any software development team.
    • It integrates with IAM & CloudTrail, offering control over who can access software packages & visibility into who has access to software packages.
    • It integrates with AWS Key Management Service for package encryption.
    • Users can increase the security of their repositories by configuring CodeArtifact to use PrivateLink endpoints.
    • CodeArtifact repositories support resource policies to enable cross-account access.

    AWS-Device-Farm

    Amazon-Corretto

    AWS -Command-Line-Interface

    AWS-X-Ray

    AWS-Cloud9

    • Cloud9 is a cloud-based IDE that lets users write, run & debug their code with just a browser.
    • It supports over 40 programming languages, including Node.js, Python, PHP, Ruby, Go & C++.
    • It is fully supported on the recent versions of Google Chrome, Safari, Firefox & Microsoft Edge.
    • A Cloud9 development environment is where the project code files are stored and where the tools used to develop the application run.
    • AWS Cloud9 environments - AWS Cloud9 EC2 environment and AWS Cloud9 SSH environment.
    • Users can use SSH environments to connect an existing Linux-based EC2 or Lightsail instance with AWS Cloud9.
    • Cloud9 EC2 environments come preinstalled with commonly used development tools such as Git and Docker.
    • The Cloud9 IDE has a run button in the toolbar and built-in runners for over 10 different languages that will automatically start the user's application with the latest code changes.
    • The Cloud9 IDE has a built-in terminal window that can interactively run CLI commands.
    • It provides a default auto-hibernation setting of 30 minutes for EC2 instances created through Cloud9.

    AWS-CodePipeline

    AWS-CodeDeploy

    AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.

    The service scales to match your deployment needs.

    AWS-CodeBuild

    AWS-CodeCommit

    AWS-CodeStar

    AWS-DocumentDB

    Amazon-Keyspaces

    Amazon-Timestream

    AWS-ElastiCache

    Amazon-Neptune

    AWS-Data-Exchange

    • Data Exchange makes it easy for customers to securely exchange and use third-party data in AWS.
    • It has a single, globally available product catalog offered by providers.
    • It scans data published by providers before making it available to subscribers.
    • AWS explicitly prohibits the use of AWS Data Exchange for any illegal or fraudulent activities.
    • Its catalog includes a large selection of free and open data products sourced from academic institutions, government entities, research institutions & private companies.
    • It has step-by-step registration wizards to help data providers create a profile page & complete registration in minutes.
    • Data in transit is secured using SSL/TLS and data at rest is protected using server-side encryption.
    • It integrates with AWS CloudTrail to enable providers and subscribers to audit all AWS Data Exchange API calls made by a user, role or any AWS service in their AWS account.
    • Data providers can create a Bring-Your-Own-Subscription (BYOS) offer specifying existing agreement and subscriber details for no additional cost.

    AWS-Lake-Formation

    Amazon Managed Streaming for Kafka

    AWS-Glue

    • Glue is a fully-managed, pay-as-you-go, extract, transform & load (ETL) service that automates the time-consuming steps of data preparation for analytics.
    • It also allows users to setup, orchestrate & monitor complex data flows.
    • Users should use Glue to discover the properties of the data they own, transform it & prepare it for analytics.
    • It automatically generates Scala or Python code for users ETL jobs that they can further customize using tools they are already familiar with.
    • The metadata stored in the Glue Data Catalog can be readily accessed from Athena, EMR & Redshift Spectrum.
    • The AWS Glue Data Catalog is a central repository to store structural and operational metadata for all data assets.
    • Users can import custom Python libraries and Jar files into AWS Glue ETL job.
    • It provides a robust set of orchestration features that allow users to manage dependencies between multiple jobs to build end-to-end ETL workflows.
    • Glue ETL jobs can either be triggered on a schedule or on a job completion event (see the sketch after this list).
    • It manages dependencies between two or more jobs or dependencies on external events using triggers.
    • It consists of a Data Catalog which is a central metadata repository, an ETL engine that can automatically generate Scala or Python code, and a flexible scheduler that handles dependency resolution, job monitoring & retries.
    • It can automatically discover both structured and semi-structured data stored in data lake on S3, data warehouse in Redshift & various databases running on AWS.
    • The AWS Glue Data Catalog is Apache Hive Metastore compatible and is a drop-in replacement for the Apache Hive Metastore for Big Data applications running on Amazon EMR.
    • Glue crawlers scan the various data stores users own to automatically infer schemas and partition structure and populate the Glue Data Catalog with corresponding table definitions and statistics.
    • It monitors job event metrics and errors & pushes all notifications to CloudWatch.
    • Glue ETL is batch-oriented & users can schedule their ETL jobs at a minimum of 5-minute intervals.
    • Glue's FindMatches ML Transform makes it easy to find and link records that refer to the same entity but don't share a reliable identifier.
    • It works on top of the Apache Spark environment to provide a scale-out execution environment for data transformation jobs.
    • The Glue SLA guarantees a Monthly Uptime Percentage of at least 99.9% for AWS Glue.
    • It takes a data first approach and allows users to focus on the data properties and data manipulation to transform the data to a form where they can derive business insights.
    • Users can use AWS Glue to build a data warehouse to organize, cleanse, validate & format data.
    • It supports data encryption at rest for Authoring Jobs in AWS Glue and Developing Scripts Using Development Endpoints.
    • AWS provides Secure Sockets Layer (SSL) encryption for data in motion. Users can configure encryption settings for crawlers, ETL jobs & development endpoints using security configurations in AWS Glue.
    • A development endpoint is an environment that users can use to develop and test their AWS Glue scripts.
    • It tags Amazon EC2 instances with a name that is prefixed with aws-glue-dev-endpoint.
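
As referenced above, a sketch of triggering a crawler and an ETL job with boto3; the crawler name, job name and arguments are hypothetical:

```python
import boto3

glue = boto3.client("glue")

# Run a crawler to infer schemas and populate the Glue Data Catalog.
glue.start_crawler(Name="sales-s3-crawler")  # hypothetical crawler

# Trigger an ETL job run with job parameters.
run = glue.start_job_run(
    JobName="sales-to-parquet",                              # hypothetical job
    Arguments={"--target_path": "s3://my-bucket/curated/"},  # hypothetical argument
)

# Poll the run state (STARTING, RUNNING, SUCCEEDED, FAILED, ...).
state = glue.get_job_run(JobName="sales-to-parquet", RunId=run["JobRunId"])
print(state["JobRun"]["JobRunState"])
```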

    AWS-ElasticSearch

    AWS-Data-Pipeline

    • Data Pipeline is a web service that makes it easy to schedule regular data movement and data processing activities in the AWS cloud.
    • It provides built-in support for the following activities: CopyActivity, HiveActivity, EMRActivity & ShellCommandActivity.
    • It provides built-in support for the following preconditions: DynamoDBDataExists, DynamoDBTableExists, S3KeyExists, S3PrefixExists and ShellCommandPrecondition.
    • Types of compute resources: AWS Data Pipeline–managed and self-managed.
    • It handles running and monitoring user's processing activities on a highly reliable, fault-tolerant infrastructure.
    • It is specifically designed to facilitate the specific steps that are common across a majority of data-driven workflows.
    • To enable running activities using on-premises resources, it supplies a Task Runner package that can be installed on on-premises hosts.
    • If failures occur in activity logic or data sources, it automatically retries the activity.
    • It provides a library of pipeline templates.
    • It is inexpensive to use & is billed at a low monthly rate.

    AWS-Kinesis

    - Kinesis Stream
    - Kinesis Firehose
    - Kinesis Analytics

    Kinesis Streams
    - data stored for 24 hours by default
    - data stored in shards
    - data consumers (e.g. EC2 instances) read records from shards for analysis
    - per shard: 5 read transactions per second (up to a total read rate of 2 MB/second) and 1,000 records per second for writes (up to a total write rate of 1 MB/second)
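
A sketch of a producer writing to a stream with boto3; the stream name is hypothetical. The partition key is hashed to pick the shard that stores the record:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Write one record; records with the same partition key land on the same shard.
kinesis.put_record(
    StreamName="clickstream",  # hypothetical stream
    Data=json.dumps({"user": "42", "page": "/home"}).encode(),
    PartitionKey="user-42",
)
```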

    Kinesis Firehose
    - Automated
    - no dealing with shards

    Kinesis Analytics
    - Way of analyzing data in Kinesis using SQL-like queries

    SSL

    • SSL, or Secure Sockets Layer, is an encryption-based Internet security protocol.
    • Its modern successor, TLS, is a protocol for encrypting Internet traffic and verifying server identity.

    • Any website with an HTTPS web address uses SSL/TLS.
    • Types of SSL

      • Single-domain - A single-domain SSL certificate applies to only one domain (a "domain" is the name of a website, like www.cloudflare.com).
      • Wildcard - Like a single-domain certificate, a wildcard SSL certificate applies to only one domain. However, it also includes that domain's subdomains. 
      • Multi-domain -  As the name indicates, multi-domain SSL certificates can apply to multiple unrelated domains.

    TLS

    • TLS is a cryptographic protocol for providing secure communication.

    • TLS is an improved version of SSL. 
    • It works in much the same way as the SSL, using encryption to protect the transfer of data and information
    • The handshake establishes a shared session key that is then used to secure messages and provide message integrity.

    • Sessions are temporary, and once ended, must be re-established or resumed.

    • Digital certificates are provided and verified by trusted third parties known as Certificate Authorities (CA)
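
A sketch of the client side of this in Python's standard library: the wrapped socket performs the handshake, negotiates a session key, and verifies the server's certificate against trusted CAs:

```python
import socket
import ssl

# Load the system's trusted CA certificates for server verification.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    # The handshake happens here: key exchange plus certificate verification.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])  # identity vouched for by the CA
```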

    AWS-Chatbot

    • Chatbot makes it easy to securely integrate multiple AWS services with Slack channels and Amazon Chime chat rooms for ChatOps.
    • Users can run commands from Slack to retrieve diagnostic information, invoke AWS Lambda functions or create AWS Support cases.
    • Users can use Chatbot to receive notifications from AWS services, like CloudWatch alarms, Health events, Security Hub findings, Budgets alerts & CloudFormation stack events.
    • It supports read-only commands for most AWS services.
    • It commands use the already-familiar AWS Command Line Interface syntax.
    • It is available at no additional charge. Users only pay for the AWS resources that are used with it.
    • It is a global service and can be used in all commercial AWS regions.
    • Users can provision Slack channel configurations using AWS CloudFormation.
    • It integrates with Slack via an AWS Chatbot Slack app that users can install to their Slack workspace from the AWS Chatbot console.
    • An AWS Chatbot configuration is a mapping of a Slack channel or an Amazon Chime chat room with SNS topics and an IAM role.
    • Users can use SNS topics from multiple public AWS regions in the same AWS Chatbot configuration.
    • Chatbot notification formatting is not customizable.
    • Chatbot configurations use IAM roles that the AWS Chatbot service assumes when making API calls and running commands on behalf of AWS Chatbot users.
    • The AWS Chatbot command syntax is the same as users would use in a terminal: @aws service command --options
    • It does not support commands to create, delete or configure AWS resources.
    • Users may experience some latency when invoking CLI commands through Chatbot.
    • Users cannot display or decrypt secret keys or key pairs for any AWS service, or pass IAM credentials.
    • Users cannot add attachments to support cases from the Slack channel.
    • Slack channels do not support standard AWS CLI pagination.
    • The Chatbot Lambda-Invoke Command Permissions policy allows users to invoke AWS Lambda functions in Slack channels.
    • Chatbot tracks users' use of command options and prompts them for any missing parameters before it runs the command they want.
    • Third-party auditors assess the security and compliance of AWS Chatbot as part of multiple AWS compliance programs.
    • Users' compliance responsibility when using AWS Chatbot is determined by the sensitivity of their data, their company's compliance objectives & applicable laws and regulations.
    • It is protected by the AWS global network security procedures.


    AWS-Textract

    • Textract is a document analysis service that detects and extracts text, structured data and tables from images and scans of documents.
    • Its ML models have been trained on millions of documents so that virtually any document type users upload is automatically recognized & processed for text extraction.
    • It can detect Latin-script characters from the standard English alphabet and ASCII symbols.
    • It currently supports PNG, JPEG, and PDF formats.
    • It supports logging of the following actions as CloudTrail events - DetectDocumentText, AnalyzeDocument, StartDocumentTextDetection, StartDocumentAnalysis, GetDocumentTextDetection & GetDocumentAnalysis.
    • It charges users based on the number of pages and images processed.
    • Data from Textract is encrypted and stored at rest in the AWS region where users are using Textract.
    • It is compliant with SOC-1, SOC-2, SOC-3, ISO 9001, ISO 27001, ISO 27017 and ISO 27018.
    • It uses Optical Character Recognition (OCR) technology to automatically detect printed text and numbers in a scan or rendering of a document, such as a legal document or a scan of a book.
    • It enables users to detect key-value pairs in document images automatically so that they can retain the inherent context of the document without any manual intervention.
    • It preserves the composition of data stored in tables during extraction.
    • It is directly integrated with Amazon A2I so users can easily implement human review of text extracted from documents.
    • Users can easily process millions of documents using Textract's text extraction APIs.
    • With synchronous processing, Textract can analyze single-page documents for applications where latency is critical.
    • It provides asynchronous operations to extend support to multipage documents.
    • With AWS Batch, Textract is able to process multiple document images in a single operation.
    • To detect text asynchronously, use StartDocumentTextDetection to start processing an input document file.
    • To detect text synchronously, use the DetectDocumentText API operation and pass a document file as input.
    • It analyzes documents and forms for relationships between detected text.
    • It analysis operations return three categories of text extraction: text, forms and tables
    • For Textract synchronous operations, users can use input documents that are stored in S3 bucket, or they can pass base64-encoded image bytes.
    • It can detect selection elements such as option buttons and check boxes on a document page.
    • It conforms to the AWS shared responsibility model, which includes regulations and guidelines for data protection.
    • It communicates exclusively via HTTPS endpoints, which are supported in all Regions where Textract is available.
    • It is protected by the AWS global network security procedures.
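
A sketch of the synchronous path mentioned above, calling DetectDocumentText on a scan stored in S3 (bucket and object names are hypothetical):

```python
import boto3

textract = boto3.client("textract")

# Synchronous text detection on a single-page image in S3.
resp = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "scan.png"}}  # placeholders
)

# Print each detected line of text.
for block in resp["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```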

    #Heat

    Heat

    It allows developers to store the requirements of a cloud application in a file so that all the resources necessary for a program are available at hand. 

    It thus provides an infrastructure to manage a cloud application. 

    It is an orchestration instrument of OpenStack.


    Ceilometer

    • Ceilometer provides telemetry services to its users. 
    • It performs a close regulation of each user's cloud components' usage and provides a bill for the services used. 
    • Think of Ceilometer as a component to meter the usage and report the same to individual users.

    #Cinder

    Cinder

    • Cinder is known as the block storage component of OpenStack. 
    • This functions in a way, analogous to the traditional ways of locating and accessing specific locations on a disk or a drive.


    #Neutron

    Neutron

    • It allows manual or automatic management of networks or IP addresses.
    • There is a flat network or VLAN to separate traffic or servers.
    • It offers management of intrusion detection systems, firewalls, load balancing, or virtual private networks etc.


    #Keystone

    Keystone

    • It offers a unified authentication system across the cloud OS.
    • It can be quickly integrated with existing backend directory such as LDAP.
    • The service has various authentication methods like token-based authentication, username/password authentication, and AWS style logins.
    • It offers a single repository of all deployed services with a programmatic determination of access for users and third-party tools.


    19 November 2020

    #Horizon

    Horizon

    • It offers a GUI to access and automate cloud-based resources for administrators and users.
    • It allows third-party billing, monitoring, and management tool integration.
    • It offers a customized dashboard with EC2 compatibility.



    #Swift

    Swift
    What is Swift Messages?
    What is IOS Swift?
    What is Dictionary in Swift?
    What is the difference between a single (?) and a double (??) question mark in Swift?
    What are the important data types found in Objective-C?
    What are the control transfer statements in swift?
    How to post an HTTP request with a JSON body in Swift?
    Can you explain Regular expression and Responder chain?
    Can you explain any three-shift pattern matching techniques?
    Can you explain completion handler?
    Distinguish between @synthesize and @dynamic in Objective –C?
    Explain Enum in Swift?
    Explain Functions in Swift Programming?
    Explain some common execution states in iOS?
    Explain the Adapter and Memento Pattern?
    Explain the difference between let and var in Swift Programming?
    How can you make a property optional in Swift?
    How can you write a multi-line comment in Swift?
    How can you define a base class in swift?
    How can you prioritize the usability of the demand Process?
    List some control transfer statements used in Swift?
    List the features of Swift Programming?
    What do you mean by the term “defer”?
    What do you do when you realize that your App is prone to crashing?
    Why do we use swift? Mention some advantages of Swift?

    #Glance

    Glance

    • This service is used to discover, register, and deliver disk and server images.
    • It allows template building with stored images.
    • It facilitates unlimited backups, and the chances of failure are very low.
    • There is a REST interface for querying disk image information.
    • It helps to streamline images with servers.
    • It helps to maintain image metadata.
    • It creates, deletes, and identifies duplicate images.


    #Nova

    Nova


    18 November 2020

    #Serverless

    • Serverless allows developers to build and run applications and services without thinking about the servers actually running the code. 
    • It can help create an environment that allows DevOps teams to focus on improving code, processes and upgrade procedures, instead of on provisioning, scaling and maintaining servers.
    • Serverless platforms provided by different CSPs:

    1. Amazon: Lambda
    2. Google: Cloud Functions
    3. Microsoft: Azure Functions

    • AWS Lambda lets you run code without provisioning or managing servers.
    • AWS Lambda executes code only when needed and scales automatically, from a few requests per day to thousands per second.
    • Google Cloud Functions is a lightweight compute solution for developers to create single-purpose, stand-alone functions that respond to Cloud events without the need to manage a server or run time environment.
    • Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, or in on-premises systems.
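
A minimal Lambda handler sketch in Python: the platform invokes the function per event, so there is no server to provision or scale. The event shape shown is a hypothetical example:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (e.g. an API Gateway request or an S3 event).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```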


    AWS-SageMaker

    • SageMaker Autopilot is the industry’s first automated machine learning capability that gives complete control and visibility into ML models.
    • Autopilot automatically inspects raw data, applies feature processors, picks the best set of algorithms, trains & tunes multiple models.
    • Users get full visibility into how the model was created and what’s in it & SageMaker Autopilot integrates with SageMaker Studio.
    • SageMaker Autopilot can be used by people without machine learning experience to easily produce a model.
    • SageMaker Studio provides a single, web-based visual interface where users can perform all ML development steps.
    • SageMaker Studio gives users complete access, control & visibility into each step required to build, train & deploy models.
    • SageMaker Autopilot is a generic automatic ML solution for classification and regression problems, such as fraud detection, churn analysis & targeted marketing.
    • Users can train models using SageMaker Autopilot and get full access to the models as well as the pipelines that generated the models.
    • SageMaker Autopilot supports 2 built-in algorithms at launch: XGBoost and Linear Learner.
    • SageMaker Autopilot built-in algorithms support distributed training out of the box.
    • SageMaker Notebooks provide one-click Jupyter notebooks that users can start working with in seconds.
    • With SageMaker Notebooks users can sign in with their corporate credentials using SSO and start working with notebooks within seconds.
    • SageMaker Notebooks give users access to all SageMaker features, such as distributed training, batch transform, hosting & experiment management.
    • SageMaker Ground Truth provides automated data labeling using machine learning.
    • SageMaker Ground Truth will first select a random sample of data and send it to Mechanical Turk to be labeled.
    • SageMaker Experiments helps users organize and track iterations to machine learning models.
    • SageMaker Experiments helps users manage iterations by automatically capturing the input parameters, configurations and results, and storing them as experiments.
    • SageMaker Debugger makes the training process more transparent by automatically capturing real-time metrics during training such as training and validation, confusion matrices & learning gradients to help improve model accuracy.
    • The metrics from SageMaker Debugger can be visualized in SageMaker Studio for easy understanding.
    • SageMaker Debugger can also generate warnings and remediation advice when common training problems are detected.
    • It is a fully-managed service that enables data scientists and developers to quickly and easily build, train & deploy machine learning models.
    • It enables developers and scientists to build machine learning models for use in intelligent, predictive apps.
    • It is designed for high availability. There are no maintenance windows or scheduled downtimes.
    • It APIs run in Amazon’s proven, high-availability data centers, with service stack replication configured across three facilities in each AWS region to provide fault tolerance in the event of a server failure or AZ outage.
    • It ensures that ML model artifacts and other system artifacts are encrypted in transit and at rest.
    • Requests to the SageMaker API and console are made over a secure (SSL) connection.
    • It stores code in ML storage volumes, secured by security groups and optionally encrypted at rest.
    • It allows users to select the number and type of instance used for the hosted notebook, training & model hosting.
    • It provides a full end-to-end workflow, but users can continue to use their existing tools with it.
    • Users pay for the ML compute, storage and data processing resources they use for hosting the notebook, training the model, performing predictions & logging the outputs.
    • It supports Jupyter notebooks.
    • Users can persist their notebook files on the attached ML storage volume.
    • Users can modify the notebook instance and select a larger profile through the SageMaker console, after saving their files and data on the attached ML storage volume.
    • Managed Spot Training with SageMaker lets users train their machine learning models using EC2 Spot instances, reducing the cost of training their models by up to 90% (see the sketch after this list).
    • Managed Spot Training is supported on all AWS regions where Amazon SageMaker is currently available.
    • There are no fixed limits to the size of the dataset users can use for training models with Amazon SageMaker.
    • It includes built-in algorithms for linear regression, logistic regression, k-means clustering, principal component analysis, factorization machines, neural topic modeling, latent dirichlet allocation, gradient boosted trees, sequence2sequence, time series forecasting, word2vec & image classification.
    • It also provides optimized Apache MXNet, Tensorflow, Chainer & PyTorch containers.
    • It supports users custom training algorithms provided through a Docker image adhering to the documented specification.
    • Users can train reinforcement learning models in SageMaker in addition to supervised and unsupervised learning models.
    • It RL supports a number of different environments for training reinforcement learning models.
    • It RL includes RL toolkits such as Coach and Ray RLLib that offer implementations of RL agent algorithms such as DQN, PPO, A3C & many more.
    • Users can bring their own RL libraries and algorithm implementations in Docker Containers and run those in SageMaker RL.
    • SageMaker Neo is a capability that enables machine learning models to train once and run anywhere in the cloud and at the edge.
    • SageMaker Neo contains two major components: a compiler and a runtime.
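
A sketch of Managed Spot Training (referenced above) with the SageMaker Python SDK; the container image, IAM role and S3 paths are placeholders:

```python
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",  # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerRole",                          # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    use_spot_instances=True,   # train on EC2 Spot capacity for up to ~90% savings
    max_run=3600,              # cap on actual training seconds
    max_wait=7200,             # cap on training time plus waiting for Spot capacity
    output_path="s3://my-bucket/models/",  # placeholder
)

# Start the training job against a hypothetical training channel in S3.
estimator.fit({"train": "s3://my-bucket/train/"})
```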

    AWS-Route 53

    • It provides highly available and scalable DNS, domain name registration and health-checking web services.
    • Users can combine their DNS with health-checking services to route traffic to healthy endpoints or to independently monitor or alarm on endpoints.
    • With Route 53, users can create and manage their public DNS records.
    • It is designed to automatically answer queries from the optimal location depending on network conditions.
    • Each Route 53 hosted zone is served by its own set of virtual DNS servers.
    • A hosted zone is analogous to a traditional DNS zone file; it represents a collection of records that can be managed together, belonging to a single parent domain name.
    • It charges are based on actual usage of the service for Hosted Zones, Queries, Health Checks and Domain Names.
    • It SLA provides for a service credit if a customer’s monthly uptime percentage is below the service commitment in any billing cycle.
    • Hosted zones are billed once when they are created and then on the first day of each month.
    • Anycast is a networking and routing technology that helps end users’ DNS queries get answered from the optimal Route 53 location given network conditions.
    • It supports importing standard DNS zone files which can be exported from many DNS providers as well as standard DNS server software such as BIND.
    • It also offers alias records, which are an Amazon Route 53-specific extension to DNS.
    • A wildcard entry is a record in a DNS zone that will match requests for any domain name based on the configuration user set.
    • Users can also use Alias records to map their sub-domains to their ELB load balancers, CloudFront distributions or S3 website buckets.
    • It allows users to list multiple IP addresses for an A record and responds to DNS requests with the list of all configured IP addresses.
    • It allows DNSSEC on domain registration.
    • It supports both forward (AAAA) and reverse (PTR) IPv6 records.
    • Weighted Round Robin allows users to assign weights to resource record sets in order to specify the frequency with which different responses are served.
    • It Geo DNS lets users balance load by directing requests to specific endpoints based on the geographic location from which the request originates.
    • It supports multivalue answers in response to DNS queries.
    • A traffic policy is the set of rules that users define to route end users’ requests to one of their application’s endpoints.
    • Traffic Flow supports all Route 53 DNS routing policies, including latency, endpoint health, multivalue answers, weighted round robin and geo.
    • Users can manage private IP addresses within VPCs using Route 53’s Private DNS feature.
    • Users can resolve internal DNS names from resources within their VPC that do not have Internet connectivity.
    • DNS Failover consists of two components: health checks and failover.
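
A sketch of the health-check half of DNS Failover with boto3; the endpoint details are hypothetical:

```python
import uuid
import boto3

route53 = boto3.client("route53")

# Create a health check that failover record sets can reference.
hc = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # idempotency token
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "primary.violetfamily.com",  # placeholder endpoint
        "Port": 80,
        "ResourcePath": "/health",
        "RequestInterval": 30,  # seconds between checks
        "FailureThreshold": 3,  # consecutive failures before unhealthy
    },
)
print(hc["HealthCheck"]["Id"])
```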

    AWS-QuickSight

    • QuickSight is a very fast, easy-to-use, cloud-powered business analytics service.
    • It makes it easy for users to build visualizations, perform ad-hoc analysis & quickly get business insights from their data, anytime, on any device.
    • It enables organizations to scale their business analytics capabilities to hundreds of thousands of users & delivers fast & responsive query performance by using a robust in-memory engine.
    • It is built with 'SPICE' – a Super-fast, Parallel, In-memory Calculation Engine.
    • A QuickSight Author is a user who can connect to data sources, create visuals & analyze data.
    • A QuickSight Reader is a user who consumes interactive dashboards.
    • Individual end-users can be provisioned to access QuickSight as Readers. Reader pricing applies to manual session interactions only.
    • Readers can be easily upgraded to authors via the QuickSight user management options.
    • A QuickSight Admin is a user who can manage QuickSight users and account-level preferences, as well as purchase SPICE capacity and annual subscriptions for the account.
    • QuickSight Authors and Readers can be upgraded to Admins at any time.
    • QuickSight Reader sessions are 30 minutes each. Each session is charged at $0.30, with a maximum charge of $5 per Reader per month.
    • It admins can also upgrade Standard Edition accounts to Enterprise Edition if needed.
    • The iPhone app for QuickSight lets users access their data anywhere & explore analyses, stories & dashboards.
    • It supports the latest versions of Mozilla Firefox, Chrome, Safari, Internet Explorer version 10 & above and Edge.
    • Users can also upload Excel spreadsheets or flat files (CSV, TSV, CLF, and ELF), connect to on-premises databases like SQL Server, MySQL and PostgreSQL and import data from SaaS applications like Salesforce.
    • It has an innovative technology called AutoGraph that allows it to select the most appropriate visualizations based on the properties of the data, such as cardinality and data type.
    • Dashboards are a collection of visualizations, tables & other visual displays arranged and visible together.
    • Users can perform typical arithmetic and comparison functions; conditional functions such as if,then; and date, numeric, and string calculations.
    • Users have several options to get their data into QuickSight: file upload, connect to AWS data sources, connect to external data stores over JDBC/ODBC, or through other API-based data store connectors.
    • Row-level security (RLS) enables QuickSight dataset owners to control access to data at row granularity based on permissions associated with the user interacting with the data.
    • Users can share an analysis, dashboard, or story using the share icon from the QuickSight service interface.
    • Users will not be able to downgrade from QuickSight Enterprise Edition to Standard Edition.
    • Private VPC Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC.
    • The QuickSight auto-discovery feature detects data sources only within the AWS region of the QuickSight endpoint to which users are connected.

    AWS-QLDB

    • Quantum Ledger Database is a purpose-built ledger database that provides a complete and cryptographically verifiable history of all changes made to application data.
    • Data in Amazon QLDB is written to an append-only journal, providing the developer with full data lineage.
    • Data in Amazon QLDB's journal is immutable and verifiable, meaning users can trust the data in their ledger.
    • It is not a blockchain or distributed ledger technology.
    • It supports transactions with ACID semantics, a flexible document data model & a familiar SQL-like API.
    • It is fully managed and automatically scales to meet the needs of user's application with no provisioning required.
    • To connect to QLDB and transact with the data in the ledger, users need to use the AWS-provided QLDB driver (see the sketch after this list).
    • It can execute 2-3X as many transactions as ledgers in common blockchain frameworks.
    • It has a centralized design, allowing its transactions to execute without the need for multi-party consensus.
    • It allows users to access and manipulate data using PartiQL, which is a new open standard query language.
    • The QLDB ledger is deployed across multiple AZs with multiple copies per AZ.
    • It is strongly durable.
    • It does not support cross-region replication.
    • It is integrated with AWS Private Link.
    • It uses AWS-owned keys to encrypt customer data. By default, all data in transit and at rest is encrypted.
    • The QLDB streaming capability provides an at-least-once delivery guarantee.
    • It stores data using a document-oriented data model, which provides users the flexibility to store structured and semi-structured data.
    • QLDB’s data model supports nested data structures.
    • It transactions have full serializability - the highest level of isolation.
    • It backs up data continuously while maintaining consistent performance, allowing it to transparently recover from any instance or physical storage failures.
    • It uses a cryptographic hash function (SHA-256) to generate a secure output file of user data’s change history, known as a digest.
    • Manufacturers can use QLDB to easily trace the history of the entire production and distribution lifecycle of a product.
    • With QLDB, retail companies can look back and track the full history of inventory and supply chain transactions at every logistical stage of their products.
    • With QLDB, customers can easily maintain a trusted and complete record of the digital history of their employees, in a single place.
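
A sketch using the AWS-provided Python driver (pyqldb), as referenced above; the ledger, table and VIN are hypothetical:

```python
from pyqldb.driver.qldb_driver import QldbDriver

driver = QldbDriver(ledger_name="vehicle-registration")  # hypothetical ledger

def read_vehicle(txn):
    # PartiQL statement executed inside a fully serializable transaction.
    cursor = txn.execute_statement(
        "SELECT * FROM Vehicles WHERE VIN = ?", "1N4AL11D75C109151"  # placeholders
    )
    return list(cursor)

print(driver.execute_lambda(read_vehicle))
```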

    AWS-Polly

    • Polly is a service that turns text into lifelike speech.
    • It supports Speech Synthesis Markup Language (SSML) tags like prosody so users can adjust the speech rate, pitch or volume.
    • It is a secure service that delivers benefits at high scale and at low latency.
    • Users can cache and replay Amazon Polly’s generated speech at no additional cost.
    • Users can use Polly to power their application with high-quality spoken output.
    • Users can synthesize speech for certain Neural voices using the Newscaster style, to make them sound like a TV or Radio newscaster.
    • Users can detect when specific words or sentences in the text are being spoken to the user based on the metadata included in the audio stream.
    • It generates Speech Marks using the following four elements: Sentence, Word, Viseme and SSML.
    • It can be used in announcement systems in public transportation and industrial control systems for notifications and emergency announcements.
    • Applications such as quiz games, animations, avatars or narration generation are common use-cases for cloud-based Text-to-speech solution like Polly.
    • Cloud-based text-to-speech (Polly) is platform independent, so it minimizes development time and effort.
    • It supports all the programming languages included in the AWS SDK (Java, Node.js, .NET, PHP, Python, Ruby, Go and C++) and AWS Mobile SDK (iOS/Android).
    • It supports an HTTP API so users can implement their own access layer.
    • It supports MP3, Vorbis and raw PCM audio stream formats.
    • It is a HIPAA Eligible Service covered under the AWS Business Associate Addendum (AWS BAA).
    • It makes it easy to request an additional stream of metadata with information about when particular sentences, words and sounds are being pronounced.
    • Polly's pay-per-use model means there are no setup costs. Users can start small and scale up as their application grows.
    • It provides simple API operations that users can easily integrate with their existing applications.
    • It has a Neural TTS (NTTS) system that can produce even higher quality voices than its standard voices. The NTTS system produces the most natural and human-like text-to-speech voices possible.
    • Neural voices aren't available in all AWS Regions, nor do they support all Polly features.
    • It provides API operations that users can use to store lexicons in an AWS region.
    • Lexicons give additional control over how Polly pronounces words uncommon to the selected language.
    • The SynthesizeSpeech operation produces audio in near-real time, with relatively little latency in most cases.
    • Polly's Asynchronous Synthesis feature overcomes the challenge of processing a larger text document by changing the way the document is both synthesized and returned.
    • With the Polly plugin for WordPress, users can provide visitors to their WordPress website audio recordings of their content.
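
A sketch of synthesizing speech to an MP3 file with boto3; Joanna is one of Polly's built-in voices:

```python
import boto3

polly = boto3.client("polly")

# Turn text into lifelike speech and stream the audio back.
resp = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# Save the returned audio stream to disk.
with open("hello.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())
```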

    AWS Pinpoint

    • Pinpoint is AWS’s Digital User Engagement Service that enables AWS customers to effectively communicate with their end users and measure user engagement across multiple channels including email, Text Messaging (SMS) and Mobile Push Notifications.
    • It  is built on a service-based architecture.
    • It provides tools that enable audience management & segmentation, campaign management, scheduling, template management, A/B testing, analytics and data integration.
    • It  offers developers a single API layer, CLI support & client-side SDK support to be able to extend the communication channels through which their applications engage users.
    • It  allows Marketers to create and execute a unified messaging strategy across all engagement channels relevant to their end-users.
    • Enterprises can use Pinpoint as their Digital User Engagement Service.
    • It  helps users to understand user behavior, define which users to target, determine which messages to send, schedule the best time to deliver the messages & then track the results of their campaign.
    • It  has no upfront costs, no minimum charges and no subscription fees. (Pay-as-you-go)
    • With Pinpoint, users can create message templates, delivery schedules, highly-targeted segments and full campaigns.
    • With Pinpoint Voice, users can engage with their customers by delivering voice messages over the phone.
    • It  can store four different types of data: Configuration Data, User Data, User Engagement Data and External Data.
    • It  automatically stores user's analytics data for 90 days.
    • In Pinpoint, journeys are fully automated, end-to-end messaging solutions for engaging with user's customers.
    • It  stores user, endpoint and event data.
    • In Pinpoint, data is encrypted at rest and during transit.
    • Users can use Amazon QuickSight to create custom visualizations that combine their Pinpoint engagement metrics with data from external systems.
    • In Pinpoint, users can connect to a certain type of ML model, referred to as a recommender model, to predict which items a user will interact with and to send those items to message recipients as personalized recommendations.
    • There are two types of segments that users can create in Pinpoint: Dynamic segments and Imported segments.
    • When users create a new Pinpoint account, their emails are sent from IP addresses that are shared with other Pinpoint users.
    • In Pinpoint, an originating number or originating ID is the phone number or sender ID that appears on customer's devices when they receive messages from users.
    • Users can use CloudWatch to collect, view & analyze several important metrics related to their Pinpoint account and projects.
    • It  helps to design consistent messages and reuse content more effectively by creating and using message templates.
    • It  is available in several AWS Regions in North America, Europe, Asia & Oceania.
    • The Pinpoint API is available in several AWS Regions, with an endpoint for each of these Regions.
    • It  provides a resource-based API that uses Hypertext Application Language (HAL).

    AWS Lex

    • Lex is a service for building conversational interfaces using voice and text.
    • The most common use-cases of Lex include: Informational bot, Application/Transactional bot, Enterprise Productivity bot and Device Control bot.
    • It leverages Lambda for Intent fulfillment, Cognito for user authentication & Polly for text to speech.
    • It scales to customers' needs and does not impose bandwidth constraints.
    • It is a completely managed service so users don’t have to manage scaling of resources or maintenance of code.
    • It uses deep learning to improve over time.
    • It bot can be created both via Console and REST APIs.
    • It provides the option of returning parsed intent and slots back to the client for business logic implementation.
    • Users can track metrics for their bot on the ‘Monitoring’ dashboard in the Lex Console.
    • It provides SDKs for iOS and Android.
    • Users can use AWS Mobile Hub to build, test & monitor bots for their mobile platforms.
    • It bots can be published to messaging platforms like Facebook Messenger, Slack, Kik & Twilio SMS.
    • Every version of an Amazon Lex bot will have an ARN.
    • It supports up to 15 seconds of speech input.
    • It supports the following formats for input audio: LPCM and Opus; Supported output audio formats: MPEG, OGG, PCM.
    • It can be accessed from VPC via public endpoints for building and running a bot.
    • It does not support wake word functionality.
    • It provides the ability for users to export their Lex bot schema into a JSON file that is compatible with Amazon Alexa.
    • Any content processed by Lex is encrypted and stored at rest in the AWS region where users are using Lex.
    • Users can build bots using SDKs: Java, JavaScript, Python, CLI, .NET, Ruby on Rails, PHP, Go & CPP.
    • It is supported under Developer Support, Business Support and Enterprise Support plans.
    • Every input to Lex bot is counted as a request.
    • To build a Lex bot, users need to identify a set of actions, known as 'intents'.
    • To fulfill an intent, the Lex bot needs information from the user. This information is captured in ‘slots’.
    • It is capable of eliciting multiple slot values via a multi-turn conversation.
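
A sketch of one conversational turn against a published bot using the Lex (V1) runtime API; the bot name and alias are hypothetical:

```python
import boto3

lex = boto3.client("lex-runtime")

# Send one turn of user text; Lex parses the intent and fills slots.
resp = lex.post_text(
    botName="OrderFlowers",  # hypothetical bot
    botAlias="prod",         # hypothetical alias
    userId="user-42",        # identifies the conversation session
    inputText="I would like to order roses",
)

print(resp["intentName"])  # the recognized intent
print(resp["slots"])       # slot values captured so far
print(resp["message"])     # bot's next prompt or fulfillment message
```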

    AWS-IoT-Greengrass

    • AWS IoT Greengrass is software that lets users run local compute, messaging, data caching, sync & ML inference capabilities on connected devices in a secure way.
    • It  consists of three software distributions: AWS IoT Greengrass Core, AWS IoT Device SDK & the AWS IoT Greengrass SDK.
    • It  also works together with Amazon FreeRTOS.
    • It  supports Lambda functions authored in the following languages: Python 3.7, Node v8.10.0, Java 8, C, C++ or Any language that supports importing C libraries.
    • Greengrass local resources allow user Lambda functions to securely interact with hardware such as sensors and actuators.
    • Secure element vendors have configured their secure elements to use a set of PKCS#11 standard APIs to integrate with AWS IoT Greengrass.
    • It  supports OPC-UA, a popular information exchange standard for industrial communication.
    • AWS IoT Device Tester for AWS IoT Greengrass is free to use.
    • The Greengrass Core software runs on a hub, gateway or other device to automatically sync and interact with the cloud.
    • Users can connect devices locally to AWS IoT Greengrass Core using Amazon FreeRTOS or the AWS IoT Device SDK.
    • Greengrass Connectors allow users to easily build complex workflows on AWS IoT Greengrass without having to worry about understanding device protocols, managing credentials or interacting with external APIs.
    • With AWS IoT Greengrass Over the Air Updates (OTA), customers can get all these benefits without having to manually download and reinstall the AWS IoT Greengrass Core software.
    • AWS IoT Device Tester for AWS IoT Greengrass is a test automation tool that lets users self-test and qualify AWS IoT Greengrass on their Linux-based devices.
    • It  authenticates and encrypts device data for both local and cloud communications so that data is never exchanged between devices and the cloud without proven identity.
    • It  synchronizes the data on the device with AWS IoT Core, providing seamless functionality regardless of connectivity.
    • It lets users execute AWS Lambda functions locally, reducing the complexity of developing embedded software.
    • Greengrass Secrets Manager allows users to securely store, access, rotate & manage secrets (device credentials, keys, endpoints & configurations) at the edge.
    • Greengrass Secrets Manager is fully integrated with AWS IoT Greengrass Connectors.
    • It  offers customers the option to store their device private key on a hardware secure element.
    • Users are charged based on the number of AWS IoT Greengrass Core devices that interact with the AWS Cloud in a given month.
    • Users can also connect to third-party applications, on-premises software & AWS services out-of-the-box with AWS IoT Greengrass Connectors.
    • Greengrass groups are used to organize entities in a user's edge environment.
    • All devices that communicate with an AWS IoT Greengrass core must be a member of a Greengrass group.
    • It  is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role or an AWS service in AWS IoT Greengrass.
    • With AWS IoT Greengrass, users can perform machine learning (ML) inference at the edge on locally generated data using cloud-trained models.
