<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The Serverless Terminal]]></title><description><![CDATA[Everything AWS, Serverless and Architectures]]></description><link>https://blog.theserverlessterminal.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1736418201614/077ef436-5276-419b-9e07-850058bb90b8.png</url><title>The Serverless Terminal</title><link>https://blog.theserverlessterminal.com</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 12:50:11 GMT</lastBuildDate><atom:link href="https://blog.theserverlessterminal.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Did you know you are a co-owner who secures your workloads with Serverless?]]></title><description><![CDATA[As developers focus on building great things, security sometimes takes the backseat. 
However, with Serverless, we know that developers also work on the infrastructure of using multiple Serverless microservices, while this integration and bringing the...]]></description><link>https://blog.theserverlessterminal.com/did-you-know-you-are-a-co-owner-who-secures-your-workloads-with-serverless</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/did-you-know-you-are-a-co-owner-who-secures-your-workloads-with-serverless</guid><category><![CDATA[serverless]]></category><category><![CDATA[APIs]]></category><category><![CDATA[Security]]></category><category><![CDATA[lambda]]></category><category><![CDATA[API Gateway]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Mon, 17 Feb 2025 12:21:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739291297131/9a9a4961-b89b-4897-8b30-b8201ae8449a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As developers focus on building great things, security sometimes takes the backseat. However, with Serverless, developers also work on the infrastructure, wiring together multiple Serverless microservices, and securing this integration across the architecture’s multiple layers is important.</p>
<p>In this blog, we will look into how security in the Serverless ecosystem is a shared responsibility between AWS and the developer, and how a Serverless developer can follow best practices for Lambda security. This also applies to the Cloud in general.</p>
<p>This blog is the extended version of a talk at AWS Community Day DevSecOps Edition Pune 2024 and CorpCon 2025 at Christ University, Bengaluru.</p>
<h1 id="heading-its-a-shared-responsibility">It’s a Shared Responsibility</h1>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUdg62oBWwVlKpCey3KkyoT_KRJUfvpGlSjqr1blCjvHGW22QO6wHqD4M4mxncgllok6D3CP3tYmfpAP5bDg4UiKUoATdtT0C3o5113oKreUkMG7qwO7nKJk-EjyoaiLrXZNEwc-2-12s43b0VrdxSp2HT46INuA77G6TA1Gyfq-RQ=s2048?key=Npd2PX-89yOEcisb2lGltQ" alt="Shared responsibility with Security for Lambda functions" class="image--center mx-auto" /></p>
<p>In the <a target="_blank" href="https://docs.aws.amazon.com/whitepapers/latest/security-overview-aws-lambda/the-shared-responsibility-model.html">AWS whitepaper</a>, you can learn about “Security and compliance” when building applications with AWS Lambda Functions, a shared responsibility between AWS and the developer/customer for a fully secure application.</p>
<p>The whitepaper showcases how the shared responsibility model is divided: the developer (AWS’s customer) takes ownership of security <em>in the cloud</em>, covering the Lambda function’s code, the right resource configurations, and the IAM roles and policies, while the cloud provider (AWS) takes responsibility for security <em>of the cloud</em> by keeping the infrastructure and environment in which Lambda executes secure.</p>
<h1 id="heading-a-simple-serverless-api">A Simple Serverless API</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739213293385/03934978-775f-4388-8318-5c87f178848b.png" alt="A simple API with AWS Serverless stack - API Gateway, Lambda, DynamoDB" class="image--center mx-auto" /></p>
<p>Let’s consider building a simple Serverless API for a CRUD operation invoked from a web app UI. The API is hosted on Amazon API Gateway which triggers a Lambda function to perform CRUD operations on the data in DynamoDB.</p>
<p>Let’s see how the shared responsibility model could be brought into action -</p>
<ul>
<li><p><strong>Identity and Access</strong></p>
</li>
<li><p><strong>Code</strong></p>
</li>
<li><p><strong>Data</strong></p>
</li>
<li><p><strong>Infrastructure</strong></p>
</li>
</ul>
<h1 id="heading-identity-and-access">Identity and Access</h1>
<p>Identity and Access Management (IAM) plays an important role in building the right access control for the AWS Services. For the same Serverless API, let’s understand the different IAM permissions needed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739290019815/0f18e69e-539d-42ae-8310-0fb96365b910.png" alt="IAM permissions for API Gateway, Lambda to access the underlying resources" class="image--center mx-auto" /></p>
<p>API Gateway endpoints require authorizers when the API is used by an application that authenticates users through an identity provider (IdP), such as Cognito Identity Pools. Authorized users should have an IAM policy that allows them to make API calls that trigger Lambda Functions. The Lambda functions need an IAM execution role to perform SDK API actions with DynamoDB for CRUD operations. While this is a basic example of a simple Serverless API, the architecture can become complex, and the associated IAM policies can also become complicated.</p>
<h2 id="heading-least-privileges-policy">Least Privileges Policy</h2>
<p>One mistake that could end up being expensive is wildcard <code>*</code> permissions that allow all actions on a specific AWS Service or a set of them. Instead, you can follow the best practices of -</p>
<ul>
<li><p><strong>Granting only the permissions</strong> needed for that execution, by scoping access to specific resources through their Resource ARNs and allowing only the API actions that are required.</p>
</li>
<li><p><strong>Using managed policies</strong> for commonly needed permission sets, such as <code>DynamoDBCrudPolicy</code>, which allows the API actions <code>dynamodb:GetItem</code>, <code>dynamodb:DeleteItem</code>, <code>dynamodb:PutItem</code>, <code>dynamodb:Scan</code>, <code>dynamodb:Query</code>, <code>dynamodb:UpdateItem</code>, <code>dynamodb:BatchWriteItem</code>, <code>dynamodb:BatchGetItem</code>, <code>dynamodb:DescribeTable</code>, and <code>dynamodb:ConditionCheckItem</code> on the specified DynamoDB table and the indexes defined on it.</p>
</li>
</ul>
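<p>As a sketch of the first point, here is what a least-privilege policy statement for the Lambda function’s execution role might look like when built by hand; the account ID and table ARN below are made-up placeholders.</p>

```python
import json

# Hypothetical table ARN, used only for illustration.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/Users"

# Least-privilege statement: only the CRUD actions the code needs,
# scoped to one table, instead of "Action": "*" on "Resource": "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
            ],
            "Resource": [TABLE_ARN],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

If the function later needs <code>dynamodb:Query</code>, you add that one action rather than widening the wildcard.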
<p>Again, for the simple Serverless API, the IAM execution role for the Lambda function would look something like the below.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Resources:</span>
  <span class="hljs-attr">Users:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::DynamoDB::Table</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">AttributeDefinitions:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">AttributeName:</span> <span class="hljs-string">id</span>
          <span class="hljs-attr">AttributeType:</span> <span class="hljs-string">S</span>
  <span class="hljs-string">.</span>
  <span class="hljs-string">.</span>
  <span class="hljs-attr">PutItemsFunction:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::Serverless::Function</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-string">.</span>
      <span class="hljs-string">.</span>
      <span class="hljs-string">.</span>
      <span class="hljs-attr">Policies:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">DynamoDBCrudPolicy:</span>
            <span class="hljs-attr">TableName:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">Users</span>
</code></pre>
<p>When using IAM execution policies, the <a target="_blank" href="https://blog.theserverlessterminal.com/allow-only-what-your-lambda-code-needs">allow only what your Lambda code needs</a> blog talks in depth about the <em>least privileges</em> policy.</p>
<h1 id="heading-code">Code</h1>
<p>Security with code in a Serverless application comes into play at both the infrastructure layer and the application code level.</p>
<h2 id="heading-enabling-request-validations-with-model">Enabling Request Validations with Model</h2>
<p>When you use API Gateway, it offers Models that define the schema for your request and response. Models help validate the request body and parameters against the schema to ensure the correct parameters are passed with the right data type. This prevents data injections and invalid request bodies that could break the system.</p>
<p>One such way is defining the Model as part of your application’s IaC code as seen below.</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUeCR6fzovweRQqrAiRxRGopx2jMMrfM_MMnnvrkaINBrCpL2k-3k_SpWxYC8wu9xhjPPl-onGO2-nJLi9bLaswG1t15eZhdf_s631LMjErBJ1zBCjWjy8otkubKzCIdWC-mq81Z4cEn0JHtkE-BgmLgX-SOR2--Y9QoK2RKH7aX_Q=s2048?key=Npd2PX-89yOEcisb2lGltQ" alt="API Gateway using Models to define the request validations" class="image--center mx-auto" /></p>
<p>Additionally, when adding the API trigger to the Lambda function, enable Model mapping and configure the request parameters as needed to ensure the correct parameters are passed to the Lambda function.</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUe_bB9fSVsZC9xitALPtcATJYIN8qCTbm-zs8RtdMTwzDJUgii4V7m3P5B85al6KgCjLFFmpSSXK0bjyJShAMaCtShGTPCYJtBOgl_Fb8CPa4FEnwKTRgD9RXhUFmiKFN_Vhx_TY6khTbhpAk2t9wXnAxCOGVK3p_XS53hspZlZIQ=s2048?key=Npd2PX-89yOEcisb2lGltQ" alt="Reference to the Model defined in the Lambda triggers" class="image--center mx-auto" /></p>
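<p>To make the idea concrete, here is a hedged sketch of a model schema and the kind of check it implies. The field names are assumptions, and this is plain Python, not API Gateway itself; real models are JSON Schema documents that API Gateway evaluates before the request ever reaches the Lambda function.</p>

```python
# Minimal JSON-Schema-like model for a "create user" request body.
# Field names are illustrative only.
USER_MODEL = {
    "required": ["id", "email"],
    "properties": {"id": str, "email": str, "age": int},
}

def validate(body: dict, model: dict) -> bool:
    """Reject bodies with missing required keys, unknown keys, or wrong types."""
    for key in model["required"]:
        if key not in body:
            return False
    for key, value in body.items():
        expected = model["properties"].get(key)
        if expected is None or not isinstance(value, expected):
            return False
    return True

print(validate({"id": "u1", "email": "a@b.com"}, USER_MODEL))  # valid body
print(validate({"id": "u1", "age": "forty"}, USER_MODEL))      # rejected
```

With the model enforced at the gateway, malformed payloads are rejected with a 400 before consuming any Lambda execution time.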
<h2 id="heading-using-secrets-in-the-codebase">Using Secrets in the Codebase</h2>
<p>One of the best practices in applications is “<em>Not hardcoding credentials and secrets</em>” in your application code base. So, how do you use credentials in your Lambda function code? There are options with <em>Lambda environment variables</em>, <em>Secrets Manager,</em> and <em>Systems Manager Parameter Store</em>.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Lambda Environment Variable</strong></td><td><strong>Secrets Manager</strong></td><td><strong>Systems Manager Parameter Store</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Data type ⌗</strong></td><td>Key-value pair</td><td>Secrets (API keys, credentials, SSH key pairs).</td><td>Configurations (environment details) and secrets.</td></tr>
<tr>
<td><strong>Encryption 🔐</strong></td><td>Optional KMS Encryption</td><td>Always encrypted with KMS</td><td>Optional with SecureString/KMS</td></tr>
<tr>
<td><strong>Key rotation 🔁</strong></td><td>Values updated as per deployments</td><td>Built-in key rotation supported</td><td>Manual key rotation</td></tr>
<tr>
<td><strong>Size 💾</strong></td><td>4 KB</td><td>64 KB</td><td>Up to 8 KB (advanced tier)</td></tr>
<tr>
<td><strong>Cross Account Access 🗝️</strong></td><td>Not even cross-Lambda access</td><td>Yes</td><td>No</td></tr>
</tbody>
</table>
</div><p>Among the options for using credentials and secrets, AWS Secrets Manager is the top choice. It provides built-in KMS encryption and supports key rotation, which is essential for any production system. It also offers fine-grained IAM access control.</p>
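<p>When reading from Secrets Manager inside a Lambda function, a common pattern is to fetch the secret once and cache it at module scope so warm invocations skip the API call. A sketch below: the client is injected so the example runs without AWS credentials; with boto3 you would pass <code>boto3.client("secretsmanager")</code>, whose <code>get_secret_value</code> call is what is shown. The secret name is a made-up placeholder.</p>

```python
# Module-scope cache: survives across invocations in a warm environment.
_cache: dict = {}

def get_secret(client, secret_id: str) -> str:
    """Fetch SecretString once per execution environment, then reuse it."""
    if secret_id not in _cache:
        resp = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = resp["SecretString"]
    return _cache[secret_id]

# Stub standing in for boto3's Secrets Manager client, so this sketch
# is runnable locally.
class StubClient:
    def __init__(self):
        self.calls = 0
    def get_secret_value(self, SecretId):
        self.calls += 1
        return {"SecretString": "s3cr3t-api-key"}

client = StubClient()
get_secret(client, "third-party/api-key")
get_secret(client, "third-party/api-key")
print(client.calls)  # the secret was fetched only once
```

Caching also keeps Secrets Manager API costs and latency down; pair it with rotation-aware TTLs in production.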
<h2 id="heading-check-for-vulnerability">Check for Vulnerabilities</h2>
<p>The codebase often uses third-party libraries and dependencies that can be vulnerable to security issues. One of the easiest ways to protect against vulnerabilities is to keep your packages updated to the latest versions. To do this, you need to scan the codebase. <a target="_blank" href="https://blog.theserverlessterminal.com/amazon-inspector-can-now-scan-aws-lambda-functions">Amazon Inspector can now scan your AWS Lambda functions</a>, detecting affected packages and dependencies that are vulnerable to security issues and providing steps to fix them. Amazon Inspector supports codebase scans, which can be scheduled or run manually with the deployed codebase.</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUcpwZ1qPgk2CxPOJjzqioEYweEIZKhpUwFgR8iHHGXIQvtqz7n_yMuAzBsuCcsafWYntzW_P4UC0GVgWFoAO5X1hjjAGpZNjSgStMKww3YsGNt5O19A-68jV8MD2JKLXxKoQqRrSaKoP8D8Cr6SImlGdMw8fZ1tjaFOGsw358zxPg=s2048?key=Npd2PX-89yOEcisb2lGltQ" alt="Amazon Inspector scan from a Lambda Function with layers that detects vulnerabilities. " class="image--center mx-auto" /></p>
<h1 id="heading-data">Data</h1>
<p>Even in the simple Serverless API, some data is involved, whether stored in the database or transferred between AWS Services such as the Lambda function and API Gateway, or from API Gateway to the API client.</p>
<h2 id="heading-encryption-for-data-at-rest">Encryption for Data at Rest</h2>
<p>Data stores such as Amazon DynamoDB, Amazon Aurora, and Amazon S3 support encryption of data at rest natively with AWS Key Management Service (KMS), using either AWS-owned keys or the customer’s own KMS keys.</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUe1JcIohSBBmofUDYv5i6rwjuaqWjqq2aaATjYjprt928qGBaZw1Ce-Fnr0J84WVZGNEr7BglmOCezh98OJdWBn7bbCHwQMtCzYu6FPcDnmQOEW80g_fmW75TPhSK_A2KVq5V-kFhQQE3NyYrdmfKH6C99MVAqWAp6WLpXrDRteDg=s2048?key=Npd2PX-89yOEcisb2lGltQ" alt="Encryption at rest with DynamoDB" class="image--center mx-auto" /></p>
<h2 id="heading-encryption-for-data-in-transit">Encryption for Data in Transit</h2>
<p>To ensure data in transit is encrypted and follows secure protocols, use services like SNS and SQS, which support encryption in transit by default. When using Lambda functions with other AWS services, Lambda automatically uses Transport Layer Security (TLS) for data transfers. As part of the shared responsibility model, AWS ensures the security of Lambda's TLS.</p>
<p>Additionally, the HTTPS protocol is used for endpoints exposed by API Gateway and Lambda function URLs, which are managed natively by the API Gateway and Lambda Function Service.</p>
<h1 id="heading-infrastructure">Infrastructure</h1>
<p>With Serverless, the infrastructure is maintained by the cloud provider (AWS), but we as developers still configure things to ensure the infrastructure is well protected.</p>
<h2 id="heading-enabling-waf">Enabling WAF</h2>
<p>Web Application Firewall (WAF) ensures the applications are protected from common exploits and abuses such as DDoS attacks, SQL injections, bot traffic, and cross-site scripting, and ensures the underlying AWS resources are not attacked.</p>
<p>As a developer, you can enable WAF for the endpoints created by API services like API Gateway and AppSync to secure API traffic. You can also secure distribution endpoints from CloudFront and S3 web hosting, as well as S3 objects, by enabling WAF.</p>
<h2 id="heading-throttling-and-rate-limits">Throttling and Rate-limits</h2>
<p>In addition to making sure web traffic comes from trusted sources, you also need to prevent traffic spikes from overloading your system. API Gateway supports throttling and rate limits for API stages, which can be configured to ensure that traffic spikes don't harm or crash the system.</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUdj_CnJLaug1_ltgGHggcjqeh_p3GDGCjR8znokhCxGZP6H2chiYdNvBkSMSFbEQfb9hbBvc6h42stFrf9YoF4TB1uCNt6VGMEi-4rsT4dD2AycTa5nwghi_sSu9CCUisIytTpnD5u6F8_kR6HMu0yEuKNBDFxwopUmVMHHrEnpIA=s2048?key=Npd2PX-89yOEcisb2lGltQ" alt="Enabling throttling and rate limits with API Gateway" class="image--center mx-auto" /></p>
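<p>API Gateway’s stage throttling follows a token-bucket model: a steady-state rate of requests per second plus a burst capacity for spikes. A toy sketch of the idea (the numbers are illustrative, not API Gateway defaults):</p>

```python
class TokenBucket:
    """Toy rate limiter: 'rate' tokens refill per second, up to 'burst'."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # API Gateway would answer this request with HTTP 429

bucket = TokenBucket(rate=10, burst=5)
# A spike of 6 simultaneous requests: the burst absorbs 5, the 6th is throttled.
results = [bucket.allow(0.0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```

Usage plans layer per-client quotas on top of the same mechanism, so one noisy API key cannot starve the rest.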
<h1 id="heading-bringing-it-all-together">Bringing It All Together!</h1>
<p>For the same Serverless API that performs CRUD operations on DynamoDB and additionally makes API requests to third-party services, let’s apply the security practices we discussed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739729023629/c981e36a-42be-489f-b68a-aa9b20cdaf48.png" alt="Applying the best security practices to the Serverless API" class="image--center mx-auto" /></p>
<p>The web app is hosted on Amazon S3, with the distribution URL served through CloudFront and protected by WAF. This ensures users can securely access the web app from which the API requests are made.</p>
<p>The API layer with API Gateway has WAF enabled on the endpoint along with the defined usage plan with rate limits and throttling limits. These APIs are authenticated with IAM roles to trigger the Lambda function for execution.</p>
<p>The Lambda function runs inside a VPC so that its outbound traffic carries an Elastic IP address that can be whitelisted by the third-party service. It also leverages Secrets Manager to store the API credentials of the third-party service and uses IAM to authorize access to Secrets Manager and DynamoDB.</p>
<p>Does this sound like too much additional overhead for developing a simple Serverless API? Well, security is one of the pillars of the AWS Well-Architected Framework, so it is not an overhead but a practice to follow for all kinds of workloads to ensure better security.</p>
<p>Hope with this blog, you have understood how you are a co-owner of security for your applications when it comes to Serverless (and the cloud in general), and how, as a developer, you can implement the recommended security best practices.</p>
]]></content:encoded></item><item><title><![CDATA[Amazon S3 is more than storage and brings in a lot for the analytics ecosystem]]></title><description><![CDATA[At AWS re:Invent 2024, Amazon S3 announced S3 Tables and S3 Metadata (preview) specifically for analytics workloads. Although not extensively into building Analytics workloads, coming from the background of building ETL pipelines that use S3 and parq...]]></description><link>https://blog.theserverlessterminal.com/amazon-s3-is-more-than-storage-and-brings-in-a-lot-for-the-analytics-ecosystem</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/amazon-s3-is-more-than-storage-and-brings-in-a-lot-for-the-analytics-ecosystem</guid><category><![CDATA[S3]]></category><category><![CDATA[AWS]]></category><category><![CDATA[storage]]></category><category><![CDATA[table]]></category><category><![CDATA[analytics]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Tue, 28 Jan 2025 15:01:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737998472957/9eba840a-fed3-4e00-bd38-40ef4dc68315.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At AWS re:Invent 2024, Amazon S3 announced <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-s3-tables-apache-iceberg-tables-analytics-workloads/">S3 Tables</a> and <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-s3-metadata-preview/">S3 Metadata (preview)</a> specifically for analytics workloads. Although not extensively into building Analytics workloads, coming from the background of building ETL pipelines that use S3 and parquet data and I wanted to explore this new capability!</p>
<h1 id="heading-parquet-data-in-s3">Parquet data in S3</h1>
<p>Parquet is a columnar storage format that is efficient for data storage and retrieval and is widely used by different ETL and big data processing frameworks such as Apache Spark, Hive, and Amazon Athena. This makes parquet data easily queryable by services like Athena, which run SQL queries directly on it. Parquet-formatted data is stored like any other object in an S3 Bucket, with an additional tag specifically mentioning that it is in <code>parquet</code> format.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737213615751/10a7fa74-b05e-40d1-969e-cdeec752bb34.png" alt="How Parquet data is stored and queried from S3 Buckets" class="image--center mx-auto" /></p>
<p>Storing this parquet data in Amazon S3 opens up the opportunity to leverage cloud storage by integrating AWS Services such as AWS Glue for ETL, Amazon Redshift for data warehouse, and Amazon SageMaker for ML workloads. Parquet stored on S3 is not only performant but also cost-efficient as the files are smaller than CSV and can be much cheaper to store on S3 with different S3 storage classes based on how frequently they are queried.</p>
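<p>As a toy illustration of why a columnar format like Parquet suits analytics: a query that aggregates one column only has to read that column, not every field of every row. This is plain Python, not the actual Parquet encoding, and the records are made up.</p>

```python
# Row-oriented layout (CSV-like): every record carries every field.
rows = [
    {"id": 1, "country": "IN", "amount": 120},
    {"id": 2, "country": "US", "amount": 80},
    {"id": 3, "country": "IN", "amount": 200},
]

# Column-oriented layout (Parquet-like): one contiguous array per field.
columns = {
    "id": [1, 2, 3],
    "country": ["IN", "US", "IN"],
    "amount": [120, 80, 200],
}

# SELECT SUM(amount): the columnar layout touches a single array.
# Keeping each column together is also what makes per-column
# compression so effective, which is why parquet files are smaller
# than their CSV equivalents.
print(sum(columns["amount"]))  # 400
```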
<h1 id="heading-s3-tables">S3 Tables</h1>
<p>Amazon S3 launched a new bucket type - <code>tables</code> - designed for structured data, using Apache Iceberg tables that store data in the Apache Parquet format. This offers up to <strong>3x faster query performance</strong> and <strong>10x higher transactions per second</strong> compared with parquet data stored as regular objects in an S3 bucket, making it ideal for data analytics workloads.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737991474611/6f3e9916-6275-4f5b-aee7-87f40d3fce68.png" alt="Architecture of S3 tables" class="image--center mx-auto" /></p>
<p>S3 Tables structure the data through Table Buckets with a <code>namespace</code>, which contains the <code>tables</code> that can be queried from services like Amazon Athena. The catch is that tables must be set up inside a namespace, and loading tables is supported only with Amazon EMR and open-source Apache Spark, which forces developers through the environment setup of EMR clusters or self-hosted Spark just to get started.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737991850576/cf9a29c9-cb85-494f-9041-5e257138a5de.png" alt="S3 table setup in Table Buckets" class="image--center mx-auto" /></p>
<h2 id="heading-setting-up-one-time-integration-with-aws-analytics-services">Setting up one-time integration with AWS Analytics Services</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737989203601/cddd5e0b-7a5a-45a2-870a-72ae379e1b53.png" alt="S3 tables from the AWS Console." class="image--center mx-auto" /></p>
<p>Additionally, there is a region-wide, one-time setup of the AWS Analytics services integration, an extra environment setup process one should follow.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737991917925/c78c7970-9db8-4688-a164-85f88b062600.png" alt class="image--center mx-auto" /></p>
<p>Honestly, this could have been done natively by AWS, leaving developers to only manage permissions to the different AWS Analytics services.</p>
<h2 id="heading-s3-tables-with-built-in-management">S3 Tables with built-in management</h2>
<p>What makes S3 Tables performant is the way S3 Tables handles the typical table management with -</p>
<ul>
<li><p><strong>Data compaction</strong> - combines small table objects into larger objects, with a target object size configurable between 64MB and 512MB, written as a new snapshot.</p>
</li>
<li><p><strong>Snapshot management</strong> - ensures the snapshot lifecycle retains a minimum number of snapshots and a maximum snapshot age, eventually deleting the expired snapshots.</p>
</li>
</ul>
<p>These factors weigh in to make S3 Tables performant. You can read about <a target="_blank" href="https://aws.amazon.com/blogs/storage/how-amazon-s3-tables-use-compaction-to-improve-query-performance-by-up-to-3-times/">how S3 Tables use compaction to improve query performance by up to 3 times</a>, along with some benchmarks of uncompacted tables in general-purpose buckets v/s compacted tables in table buckets.</p>
<h1 id="heading-s3-metadata">S3 Metadata</h1>
<p>Along with S3 Tables, Amazon S3 also announced S3 Metadata, a bucket property that can be enabled to capture metadata about each S3 object in a general-purpose S3 bucket. S3 Metadata uses S3 Tables with the power of parquet data, so the metadata of the objects is now queryable, making searching for S3 objects by metadata much more efficient.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737993643054/e0216843-b745-4e12-b6f7-202b9612a20b.png" alt="S3 Metadata using S3 Tables to store object metadata" class="image--center mx-auto" /></p>
<h2 id="heading-types-of-s3-metadata">Types of S3 Metadata</h2>
<p>S3 Metadata supports, for each object in a bucket, metadata of two categories -</p>
<ul>
<li><p><strong>System defined</strong> - metadata controlled natively by Amazon S3, such as <code>Date</code>, <code>Content-Length</code>, <code>Last-Modified</code>, and <code>ETag</code>, which are immutable to the user, along with metadata such as <code>Cache-Control</code>, <code>Content-Disposition</code>, and <code>Content-Type</code>, whose values the user can change.</p>
</li>
<li><p><strong>User defined -</strong> the metadata that can be assigned by the user when the object is uploaded; these keys are prefixed with <code>x-amz-meta-</code>.</p>
</li>
</ul>
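<p>The <code>x-amz-meta-</code> prefix is how S3 distinguishes user-defined keys on the wire; the SDKs usually add and strip it for you. A small sketch of that convention (the metadata keys here are made up; S3 stores metadata keys lowercased):</p>

```python
PREFIX = "x-amz-meta-"

def to_wire(metadata: dict) -> dict:
    """Prefix user-defined metadata keys the way S3 stores them.
    Keys are lowercased, matching S3's case-insensitive handling."""
    return {PREFIX + k.lower(): v for k, v in metadata.items()}

def from_wire(headers: dict) -> dict:
    """Recover the user-defined metadata from response headers."""
    return {k[len(PREFIX):]: v for k, v in headers.items() if k.startswith(PREFIX)}

wire = to_wire({"Team": "analytics", "env": "prod"})
print(wire)             # {'x-amz-meta-team': 'analytics', 'x-amz-meta-env': 'prod'}
print(from_wire(wire))  # {'team': 'analytics', 'env': 'prod'}
```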
<h2 id="heading-s3-metadata-in-action">S3 Metadata in action</h2>
<ul>
<li><p>Enable S3 Metadata for the S3 Bucket from the console, which requires you to set the name of the S3 Table; following the creation, it also creates the namespace <code>aws_s3_metadata</code>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737997436299/703394f3-9224-4e30-8a1e-2b0dfed6e730.png" alt="S3 Console with S3 Metadata enabled" class="image--center mx-auto" /></p>
</li>
<li><p>And when you navigate to <code>Table Buckets</code>, you can see the S3 Table bucket with the tables created and listed.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737998132449/fb360e18-47fe-40f0-8e21-62e770599f8e.png" alt="S3 Tables listing the table for S3 Metadata" class="image--center mx-auto" /></p>
</li>
<li><p>When you try to upload a new object to the S3 bucket, you can optionally define the Metadata - <code>system-defined</code> or <code>user-defined</code>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737998273683/5dcddeae-598b-479f-8197-c69bf91dfd73.png" alt="Uploading S3 object with S3 Metadata" class="image--center mx-auto" /></p>
</li>
<li><p>Once uploaded, you can view the metadata from the S3 Console and also query the same with Athena.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737998353012/32e1e41a-1589-41b3-86fa-db6a7dd3e80a.png" alt="Viewing the defined S3 Metadata" class="image--center mx-auto" /></p>
</li>
</ul>
<h1 id="heading-why-would-i-choose-s3-metadata">Why would I choose S3 Metadata?</h1>
<p>Since the data is stored as Apache Iceberg tables with S3 Tables, one question that hit me was - <em>why would I enable S3 Metadata instead of directly implementing it with S3 Tables</em>?</p>
<p>S3 Metadata is crucial for organizing and managing the data in S3 Buckets. Imagine a system that extensively uses S3 Buckets to store data of any format: leveraging S3 Metadata means you can attach user-defined metadata to each object to help with data categorization and management. Additionally, with object metadata available as extra attributes, data discovery can play a pivotal role for applications.</p>
]]></content:encoded></item><item><title><![CDATA[Serverless Applications at Scale? Now Go Build with Best Practices]]></title><description><![CDATA[At AWS re:Invent 2024, Arshad Zackeriya, Darshit Pandya, Jones Zachariah Noel N, Pubudu Jayawardana, and Sean Kendall hosted a PeerTalk meet-up about Serverless application best practices at scale. The discussion was very insightful although given th...]]></description><link>https://blog.theserverlessterminal.com/serverless-applications-at-scale-now-go-build-with-best-practices</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/serverless-applications-at-scale-now-go-build-with-best-practices</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[best practices]]></category><category><![CDATA[serverless]]></category><category><![CDATA[scalability]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Thu, 09 Jan 2025 10:55:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736248240861/cacd83c5-d237-40b2-ba99-6e9854d80484.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At AWS re:Invent 2024, <a target="_blank" href="https://www.linkedin.com/in/arshad-zackeriya-713b821b/">Arshad Zackeriya</a>, <a target="_blank" href="https://www.linkedin.com/in/darshitpandya/">Darshit Pandya</a>, <a target="_blank" href="https://www.linkedin.com/in/jones-zachariah-noel-n/">Jones Zachariah Noel N</a>, <a target="_blank" href="https://www.linkedin.com/in/pubudusj/">Pubudu Jayawardana</a>, and <a target="_blank" href="https://www.linkedin.com/in/sdkyyc/">Sean Kendall</a> hosted a PeerTalk meet-up about <strong>Serverless application best practices at scale.</strong> The discussion was very insightful although given the limited time and Serverless is a broad topic, we focused our discussion on <strong>AWS Lambda at Scale</strong>, with all of us sharing our expertise with AWS Lambda to be specific 
in the Serverless space, helping folks with their concerns and queries when building on AWS.</p>
<p><a target="_blank" href="https://aws.amazon.com/lambda/">AWS Lambda Functions</a> are the core of our compute layer. The reasons that weigh in for Lambda as a choice are scalability, a focus on business logic rather than infrastructure management, cost efficiency with a pay-as-you-go billing model, and the ability to bring your own programming (BYOP) language for the workloads. There are also caveats you should be aware of, and different best practices that help you build better architectures at scale.</p>
<h1 id="heading-choice-of-runtime"><strong>Choice of runtime</strong></h1>
<p>AWS Lambda makes it feasible to choose the runtime/programming language for the workload you are building. Some runtimes are managed by AWS, such as Java, Python, NodeJS, and a few more. When you want a specific runtime like PHP, you can run a custom runtime with a container image, in which case you, as the developer, are responsible for managing updates to that runtime.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfi1nFUv-GB6SL7D2OhcmhMD8Zf5TgKCupvx_qh0OhrbmooojtTEfnM0cqKP_H9T2bWrGQ9RfIyyEToxg0iZpCsQJRMhKdOQ2ymULVHGXD11m9TfOwJ5dH-bQapS7hX75Y_tD0Kqw?key=w3xiS6QZ-re-bh-OQeYYtwsT" alt="a man in a suit giving a thumbs up in front of a white board that says it depends on the situation" class="image--center mx-auto" /></p>
<p>The choice of the runtime depends on the workload and the requirements of the Lambda function. For example, a simple web application with CRUD could use NodeJS, whereas for a data-heavy application with extensive Python library support, you may choose Python as your Lambda’s runtime. At times, the team’s skillset and the learning curve also factor into choosing a popular or performant runtime. For instance, if the team is well skilled in NodeJS and Python but knows that Rust is performant, one should not blindly start their Lambdas with Rust.</p>
<h1 id="heading-cold-starts-are-a-thing"><strong>Cold starts are a thing!</strong></h1>
<p>Even in 2024, with Lambda being around for a decade, Cold Starts with Lambda functions are still something one has to factor in. Cold Starts occur when a Lambda function has not been invoked for a period of time and its execution environment has been de-provisioned to allow for larger scalability. When the Lambda function is then invoked, an init phase initializes the function with the respective runtime, its runtime dependencies, and additional code dependencies, which adds latency to that first Lambda execution.</p>
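<p>The split between the init phase and per-invocation work is visible in how handlers are usually structured: anything at module scope runs once per execution environment (during the cold start), while the handler body runs on every invocation. A runnable sketch simulating two invocations in one warm environment (the counters exist only to make the phases observable):</p>

```python
init_runs = 0
handler_runs = 0

# --- init phase: runs once per execution environment (the cold start) ---
# In a real Lambda this is where SDK clients and config are created,
# so warm invocations can reuse them.
init_runs += 1
config = {"table": "Users"}  # illustrative config

def handler(event, context=None):
    # --- invoke phase: runs on every invocation, warm or cold ---
    global handler_runs
    handler_runs += 1
    return {"table": config["table"], "invocation": handler_runs}

# Two invocations against the same (warm) environment:
handler({})
handler({})
print(init_runs, handler_runs)  # 1 2
```

This is also why heavy initialization work is best kept out of the handler body: it is paid once per cold start instead of on every request.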
<p>Suppose you aren’t aware of how Cold Start and Lambda performance vary with different runtimes. In that case, Maxime David has built a tool, <a target="_blank" href="https://maxday.github.io/lambda-perf/">Lambda Perf,</a> which benchmarks Cold Start across multiple runtimes supported by the Lambda function.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXd3A5VcjWJck7WyKQT1f5U_DWL-Eqf_gidflZP_CRgcgQMzNkeeO3HXrXD6D4rwes1Q7ryn975Ampb96A0H7w7U2nbSBdMQaF61oRvVkUUe0pT44ACK_JwCdawxtO3jWIRx-of9?key=w3xiS6QZ-re-bh-OQeYYtwsT" alt="Different Cold Starts with runtimes and memory" class="image--center mx-auto" /></p>
<h1 id="heading-snapstart-is-coming-in-hot"><strong>SnapStart is coming in hot</strong></h1>
<p>SnapStart is AWS Lambda's feature for reducing Cold Starts on specific runtimes. At AWS re:Invent 2022, SnapStart debuted with support for Java 11 and was later extended to Java 17. After a worthwhile wait, AWS announced at AWS re:Invent 2024 that SnapStart now supports Python 3.12 and later and .NET 8 and later.</p>
<p>SnapStart addresses the Cold Start problem by creating a pre-initialized snapshot of the Lambda function’s environment during deployment that is used for subsequent invocations. This snapshot contains an image of all the necessary dependencies, resources, and configurations for that Lambda function to run.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdChA507k4IQMMOGPejzUqXWqw8KunaQJ3n810Q526FftCF3IXTXMHWcfaq3nlwPiuxuIClMTX5OZQfMLVr4hkLgbDyZs54jHHSdbTmcCSTjzPiQjOFfBQ2rZuDHe1DRFNItTXEsQ?key=w3xiS6QZ-re-bh-OQeYYtwsT" alt="Lambda's SnapStart illustration" class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://www.linkedin.com/in/vadymkazulkin/">Vadym Kazulkin</a> has authored a whole series about <a target="_blank" href="https://dev.to/vkazulkin/series/24979">AWS SnapStart with Java blog series</a> that benchmarks Cold Starts, reduced Cold Starts with SnapStart and end-to-end deployment with Java runtimes.</p>
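<p>Enabling SnapStart is a deployment-time setting rather than a code change. A minimal sketch in an AWS SAM template (assuming a Java 17 function; SnapStart applies to published versions, hence the alias):</p>

```yaml
# Sketch: SnapStart enabled for published versions of a Java function.
MyJavaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.App::handleRequest
    Runtime: java17
    AutoPublishAlias: live        # SnapStart works on published versions/aliases
    SnapStart:
      ApplyOn: PublishedVersions  # snapshot is taken when a version is published
```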
<h1 id="heading-is-lambdalith-a-good-approach"><strong>Is Lambdalith a good approach?</strong></h1>
<p>Fat Lambdas or monolithic Lambdas, often referred to as a Lambdalith, is the approach many developers take of putting all the business logic into one single Lambda function, invoked via API Gateway / AppSync so that there is one entry point for most executions. Unfortunately, not every invocation needs every bit of that business logic or every imported dependency. This approach can bring cost efficiency and allows frameworks such as Express on the Node runtime (which most cases don't actually need); however, it introduces scalability issues: concurrency in production becomes a major concern, and Cold Starts grow because the large codebase and its dependencies must be loaded during the initialization phase of the first invocation. Code management also suffers, given that the one Lambda function performs multiple tasks.</p>
<p>Some lessons from Dr. Werner's keynote: bring in simplexity while building Lambda functions. Have a Lambda function for each specific task, so that each function carries only the dependencies it needs and its configuration (reserved concurrency, timeout, memory, runtime) can be tuned to the workload it handles.</p>
<h1 id="heading-purpose-fit-dependencies-with-managing-lambda-layers"><strong>Purpose fit dependencies with managing Lambda Layers</strong></h1>
<p>As the previous section called out, break the Lambda function down by task. Apply a similar approach to Lambda Layers, which contain additional code dependencies and packages that can be shared across multiple Lambdas.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfNSFaJoW2cdirQ8BCAZmOaBUebvZBKX159ZXOd1E3LrfuaXD8bIgCI6LYYCP4ZEa4_ZLh4_6DBTCIXE-CKLkUGycOf5-Ew89WEwPmW8fp5l-DAUYfexJlGBfaui7kbYeHe_TM3?key=w3xiS6QZ-re-bh-OQeYYtwsT" alt="Lambda Layer's illustration" class="image--center mx-auto" /></p>
<p>Creating Lambda Layers that contain only what the Lambda function requires not only helps the function perform better but also helps with code management. Achieving this means understanding the purpose of each Lambda Layer and how it is reused by other Lambda functions. Keeping Layers small avoids bloating both Layers and Functions, especially during initialization and execution.</p>
<h1 id="heading-aws-lambda-power-tuning-for-optimizing-performance-and-cost"><strong>AWS Lambda Power Tuning for optimizing performance and cost</strong></h1>
<p>The <a target="_blank" href="https://github.com/alexcasalboni/aws-lambda-power-tuning">AWS Lambda Power Tuning</a> tool is an AWS Step Functions state machine that runs multiple concurrent versions of your Lambda function with different memory allocations (128MB to 10GB), analyses the execution logs for the cost and duration of each invocation, and recommends the configuration that best balances cost and performance.</p>
<p>AWS Lambda Power Tuning helps you make data-driven decisions about Lambda configuration with an automated approach. The tool can be integrated into your CI/CD pipeline so that the analysis runs on each deployment of the Lambda function, and the configuration can then be tuned for the best performance and cost.</p>
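<p>The state machine takes a JSON execution input describing what to test; per the tool's README, a typical input looks like the following (the ARN and payload are placeholders):</p>

```json
{
  "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "powerValues": [128, 256, 512, 1024, 2048],
  "num": 50,
  "payload": {},
  "parallelInvocation": true,
  "strategy": "balanced"
}
```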
<h1 id="heading-choice-of-deployment-canary-vs-blue-green"><strong>Choice of deployment - Canary v/s Blue-Green</strong></h1>
<p>The deployment strategy depends on how your Lambda function is invoked and on its traffic patterns. The factors to consider are how you test the feature/change end-to-end, what your rollback plan is if something breaks, and the traffic at the time of deployment.</p>
<p>Canary is a strategy where you roll the change out to a subset of users first. For instance, if the rollout spans regions/AWS accounts, start with one region in one AWS account and test for completeness. In a sunny-day scenario, you then continue with a region-by-region rollout.</p>
<p>Blue-Green is a strategy that switches traffic instantly between two environments, which means changes must be tested end-to-end in the standby environment first. It is the best option for applications and workloads that require extensive, isolated testing before a zero-downtime deployment.</p>
<p>Choosing between a Canary or Blue-Green deployment strategy comes down to how your application is architected and your tolerance for impact in production.</p>
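<p>As a sketch of how a canary rollout can be declared with AWS SAM (the function and alarm names are placeholders; CodeDeploy shifts alias traffic according to the chosen preference):</p>

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes  # 10% of traffic for 5 minutes, then the rest
      Alarms:
        - !Ref MyErrorAlarm          # automatic rollback if this alarm fires
```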
<h1 id="heading-wrap-up"><strong>Wrap up!</strong></h1>
<p>This blog covers the specific factors we spoke about at the PeerTalk meet-up. Best practices for a Serverless application at scale also include factors such as concurrency, and how to handle a Lambda function's concurrency with reserved or provisioned concurrency once it is deployed in production; this blog, however, focuses mostly on the factors and trade-offs during development and deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736417517491/ff3e2073-fd96-46a8-b705-18e1984dc25d.png" alt="Serverless application best practices at scale PeerTalk Meet-up at re:Invent '24" class="image--center mx-auto" /></p>
<p>Thanks to <a target="_blank" href="https://www.linkedin.com/in/arshad-zackeriya-713b821b/">Arshad Zackeriya</a>, <a target="_blank" href="https://www.linkedin.com/in/darshitpandya/">Darshit Pandya</a>, <a target="_blank" href="https://www.linkedin.com/in/jones-zachariah-noel-n/">Jones Zachariah Noel N</a>, <a target="_blank" href="https://www.linkedin.com/in/pubudusj/">Pubudu Jayawardana</a>, and <a target="_blank" href="https://www.linkedin.com/in/sdkyyc/">Sean Kendall</a> for sharing their expertise and also contributing to this blog.</p>
]]></content:encoded></item><item><title><![CDATA[Serverless APIs? This will help you choose the right one!]]></title><description><![CDATA[APIs are the core component that is bringing in the integrations with Server or backend with the client that could be a front-end or another service or even a third-party application that is invoking them.

Application Programming Interface (APIs) ar...]]></description><link>https://blog.theserverlessterminal.com/serverless-apis-this-will-help-you-choose-the-right-one</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/serverless-apis-this-will-help-you-choose-the-right-one</guid><category><![CDATA[APIs]]></category><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[API Gateway]]></category><category><![CDATA[AppSync]]></category><category><![CDATA[lambda-function-urls]]></category><category><![CDATA[API basics ]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Mon, 14 Oct 2024 18:22:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728835436902/cec93864-927d-46d7-9213-ea317bca614b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>APIs are the core component that integrates a server or backend with a client, which could be a front-end, another service, or even a third-party application invoking the API.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728906392000/88be7065-84a9-4c9d-9a4f-6be5a27240b1.png" alt="API architecture in a nutshell" class="image--center mx-auto" /></p>
<p>Application Programming Interfaces (APIs) are not only external-facing and consumed by other systems or clients; they are also broadly used for inter-service and inter-layer communication.</p>
<p>Are you someone building APIs with the AWS Serverless stack? Architecting them means you will primarily work with 3 specific AWS services - <strong>Amazon API Gateway, AWS AppSync</strong> and <strong>AWS Lambda Function URLs</strong>.</p>
<h1 id="heading-api-considersations">API Considerations</h1>
<p>Before looking into what these 3 API services from AWS offer, let's understand the key features an architect/developer should evaluate before choosing an API service.</p>
<h2 id="heading-api-type">API Type</h2>
<p>The key deciding factor is what kind of API you are trying to build.</p>
<ul>
<li><p><strong>RESTful APIs</strong>: Stateless architecture that uses the standard HTTP protocol and methods (GET, POST, PUT, DELETE) for CRUD operations. It is one of the most widely used API types for modern APIs.</p>
</li>
<li><p><strong>GraphQL</strong>: A query-language-based API server that lets the client request exactly the data it needs, and supports complex queries as well as real-time updates with subscriptions.</p>
</li>
<li><p><strong>WebSockets</strong>: A protocol for bidirectional communication that uses full-duplex channels over a TCP connection, used in real-time systems where low latency is crucial.</p>
</li>
</ul>
<h2 id="heading-api-interactions">API Interactions</h2>
<p>How the data exchange between the client and the backend works in practice.</p>
<ul>
<li><p><strong>Unidirectional</strong>: Communication flows in one direction, either from the client to the server or vice versa. For example, a client requests data from a server without expecting any further communication.</p>
</li>
<li><p><strong>Bidirectional:</strong> Allows communication in both directions, enabling continuous interaction between the client and server. This is essential for applications needing real-time updates, such as chat apps.</p>
</li>
</ul>
<h2 id="heading-api-response">API Response</h2>
<p>How the API is expected to respond to a request.</p>
<ul>
<li><p><strong>Synchronous APIs:</strong> The client waits for a response after making a request. This blocking behavior is suitable for operations requiring immediate feedback.</p>
</li>
<li><p><strong>Asynchronous APIs:</strong> The client can continue executing other tasks while waiting for a response. This non-blocking behavior enhances efficiency and scalability.</p>
</li>
</ul>
<h2 id="heading-api-response-structure">API Response Structure</h2>
<p>The structure of API responses can vary:</p>
<ul>
<li><p><strong>Standardized Formats:</strong> Many APIs return responses in standardized formats like JSON or XML.</p>
</li>
<li><p><strong>Customizable Structures:</strong> Some APIs allow customization of response formats based on client needs.</p>
</li>
</ul>
<h2 id="heading-supported-content-type">Supported content-type</h2>
<p>APIs can support various content types including:</p>
<ul>
<li><p>Standard content-types like <code>application/json</code>, <code>application/xml</code>, <code>text/html</code> and more.</p>
</li>
<li><p>Custom content-types defined by the API layer or application.</p>
</li>
</ul>
<h2 id="heading-authentication-and-authorization">Authentication and Authorization</h2>
<p>Support for various methods of authentication and authorization makes the APIs more secure.</p>
<ul>
<li><p><strong>API keys</strong>: Keys passed to the API via headers that grant access to API actions.</p>
</li>
<li><p><strong>IAM roles</strong>: Roles that use policies to grant access to API actions and resources.</p>
</li>
<li><p><strong>Identity providers</strong>: Identity providers such as Amazon Cognito, Okta and others, which authenticate users and authorize them with tokens passed to the API layer.</p>
</li>
<li><p><strong>Custom Lambda authorizers</strong>: Lambda functions that act as authorizers, granting access to APIs for requests with valid bearer tokens.</p>
</li>
</ul>
<h1 id="heading-amazon-api-gateway">Amazon API Gateway</h1>
<p>Amazon API Gateway is a fully managed AWS service for the API layer, supporting RESTful APIs and WebSocket APIs. API Gateway supports various integrations with Lambda functions and ECS, and also direct integrations with other AWS services via the Velocity Template Language (VTL).</p>
<p>API Gateway also facilitates creating and managing API keys with different rate-limiting and throttling capabilities, which not only improves security but also ensures the application is not abused with excessive API invocations.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Protocol</td><td>HTTPs, WebSocket</td></tr>
</thead>
<tbody>
<tr>
<td>API Type</td><td>RESTful, WebSocket</td></tr>
<tr>
<td>Security</td><td>API keys, IAM, Cognito, Custom authorizers</td></tr>
<tr>
<td>Caching</td><td>Various caching options available</td></tr>
<tr>
<td>Offline capability</td><td>N/A</td></tr>
<tr>
<td>Integration type</td><td>REST APIs, Lambda functions, selective AWS Services</td></tr>
<tr>
<td>API operation</td><td>CRUD with HTTP methods</td></tr>
<tr>
<td>Direct integration with AWS services with transformation</td><td>Mapping request and response templates</td></tr>
<tr>
<td>Throttling and rate-limits</td><td>Granular control with usage plan</td></tr>
</tbody>
</table>
</div><h1 id="heading-aws-appsync">AWS AppSync</h1>
<p>AWS AppSync is a fully managed GraphQL API service that allows clients (front-end mobile/web apps and other GraphQL clients) to query data from multiple data sources. Data sources integrate with multiple AWS services directly via the Velocity Template Language (VTL) or JavaScript resolvers, making it easy to design queries, mutations and subscriptions in AppSync.</p>
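<p>As a hypothetical example of the three GraphQL operation types in an AppSync schema (the type and field names are illustrative; <code>@aws_subscribe</code> wires the subscription to the mutation):</p>

```graphql
type Todo {
  id: ID!
  title: String!
}

type Query {
  getTodo(id: ID!): Todo
}

type Mutation {
  addTodo(title: String!): Todo
}

type Subscription {
  onAddTodo: Todo
    @aws_subscribe(mutations: ["addTodo"])
}
```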
<div class="hn-table">
<table>
<thead>
<tr>
<td>Protocol</td><td>HTTPs, WebSocket</td></tr>
</thead>
<tbody>
<tr>
<td>API Type</td><td>GraphQL</td></tr>
<tr>
<td>Security</td><td>API keys, IAM, Cognito</td></tr>
<tr>
<td>Caching</td><td>Simple caching option available</td></tr>
<tr>
<td>Offline capability</td><td>Supported with AppSync resolvers and SDK</td></tr>
<tr>
<td>Integration type</td><td>Data Sources - REST APIs, Lambda functions, AWS Services</td></tr>
<tr>
<td>API operation</td><td>GraphQL operations - Query, Mutation and Subscription</td></tr>
<tr>
<td>Direct integration with AWS services with transformation</td><td>AppSync resolvers with VTL and JS runtime - both unit resolvers and pipeline resolvers</td></tr>
<tr>
<td>Throttling and rate-limits</td><td>Limited throttling options for the APIs</td></tr>
</tbody>
</table>
</div><p>Here is <a target="_blank" href="https://blog.theserverlessterminal.com/series/appsync">a series on AppSync</a> that explains various features of AppSync as well as architectures leveraging it.</p>
<h1 id="heading-aws-lambda-function-urls">AWS Lambda Function URLs</h1>
<p>AWS Lambda now supports Function URLs that can trigger the Lambda function directly without an API Gateway, making them ideal for internal APIs; for public APIs they also support authentication with IAM. Although the API schema is defined and structured in the core logic of the Lambda function, Function URLs support different content types defined by the Lambda function.</p>
<p>The added value of Lambda Function URLs is that they enable streaming responses from the Lambda function, which improves Time To First Byte (TTFB), performance, and user experience when serving web and multimedia content. Read more about <a target="_blank" href="https://blog.theserverlessterminal.com/streaming-responses-via-aws-lambda">Streaming Responses via Lambda function</a>.</p>
<p>These Lambda Function URLs can sit behind CloudFront distributions to make them available at the Edge.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Protocol</td><td>HTTPs</td></tr>
</thead>
<tbody>
<tr>
<td>API Type</td><td>RESTful in nature</td></tr>
<tr>
<td>Security</td><td>IAM roles</td></tr>
<tr>
<td>Caching</td><td>N/A</td></tr>
<tr>
<td>Offline capability</td><td>N/A</td></tr>
<tr>
<td>Integration type</td><td>Lambda function based integrations</td></tr>
<tr>
<td>API operation</td><td>Defined by Lambda function</td></tr>
<tr>
<td>Direct integration with AWS services with transformation</td><td>N/A</td></tr>
<tr>
<td>Throttling and rate-limits</td><td>N/A</td></tr>
</tbody>
</table>
</div><h1 id="heading-when-to-choose">When to choose</h1>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Use-case</td><td>Amazon API Gateway</td><td>AWS AppSync</td><td>AWS Lambda Function URL</td></tr>
</thead>
<tbody>
<tr>
<td>Building RESTful APIs which use different HTTP methods and request/response transformation</td><td>✅ Ideal for this.</td><td>❌ Follows GraphQL instead, though request/response transformations can be handled in VTL/JS resolvers and pipeline resolvers.</td><td>✅ Possible, but Lambda Function URLs require more management and overhead to maintain routes and request/response transformation in the Lambda function logic.</td></tr>
<tr>
<td>APIs which stream responses with multi-media content that requires longer load times</td><td>❌ It may time out based on the data payload/size.</td><td>❌ Streaming is not supported with queries; a workaround with subscriptions is possible but is not cost-friendly and adds management effort.</td><td>✅ Ideal with response streaming.</td></tr>
<tr>
<td>APIs for mobile applications with a chat-heavy workload</td><td>❌ Possible with WebSockets, but managing connections and connection timeouts is a hassle, so for chat it's a no.</td><td>✅ AppSync Subscriptions are the best fit for this use-case.</td><td>❌ Works on the principle of request-response or streaming.</td></tr>
<tr>
<td>APIs that perform CRUD operations on a DynamoDB table</td><td>✅ Possible with direct integrations.</td><td>✅ Possible with direct integrations. In terms of performance, AppSync is faster and more reliable.</td><td>✅ Possible with a Lambda function using the AWS SDK.</td></tr>
<tr>
<td>APIs with different monetized API keys for different usage purposes</td><td>✅ A simple configuration with API keys and usage plans with rate-limits and throttling set.</td><td>❌ Doesn't support rate-limiting and throttling, which makes it hard to track usage based on keys.</td><td>❌ Supports only IAM authentication, and tracking usage based on it is not possible.</td></tr>
<tr>
<td>Preflight validation of input parameters for the API</td><td>✅ Supported with models.</td><td>✅ Follows a strict GraphQL schema.</td><td>❌ Not available; validation must be done manually in the Lambda function.</td></tr>
<tr>
<td>APIs with multiple data sources and aggregating response</td><td>❌ Ideally performed in the compute layer.</td><td>✅ AppSync supports merged APIs and pipeline resolvers help with aggregating the response.</td><td>❌ Lambda function can perform it but not built for this case.</td></tr>
</tbody>
</table>
</div><h1 id="heading-wrap-up">Wrap up</h1>
<p>In a nutshell, how do you decide between API Gateway, AppSync, and Lambda Function URLs?</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>AWS Service</td><td>Amazon API Gateway</td><td>AWS AppSync</td><td>AWS Lambda Function URL</td><td></td></tr>
</thead>
<tbody>
<tr>
<td>Protocol</td><td>HTTPs, WebSocket</td><td>HTTPs, WebSocket</td><td>HTTPs</td><td></td></tr>
<tr>
<td>API Type</td><td>RESTful, WebSocket</td><td>GraphQL</td><td>RESTful in nature</td><td></td></tr>
<tr>
<td>Security</td><td>API keys, IAM, Cognito, Custom authorizers</td><td>API keys, IAM, Cognito</td><td>IAM roles</td><td></td></tr>
<tr>
<td>Caching</td><td>Various caching options available</td><td>Simple caching option available</td><td>-</td><td></td></tr>
<tr>
<td>Offline capability</td><td>-</td><td>Supported with AppSync resolvers and SDK</td><td>-</td><td></td></tr>
<tr>
<td>Integration type</td><td>REST APIs, Lambda functions, selective AWS Services</td><td>Data Sources - REST APIs, Lambda functions, AWS Services</td><td>Lambda function based integrations</td><td></td></tr>
<tr>
<td>API operation</td><td>CRUD with HTTP methods</td><td>GraphQL operations - Query, Mutation and Subscription</td><td>Defined by Lambda function</td><td></td></tr>
<tr>
<td>Direct integration with AWS services with transformation</td><td>Mapping request and response templates</td><td>AppSync resolvers with VTL and JS runtime - both unit resolvers and pipeline resolvers</td><td>-</td><td></td></tr>
<tr>
<td>Throttling and rate-limits</td><td>Granular control with usage plans</td><td>Limited throttling options for the APIs</td><td>-</td><td></td></tr>
<tr>
<td>Pricing</td><td>Number of requests per month, with additional cost for caching</td><td>Priced separately for queries, data changes and real-time updates</td><td>As per Lambda function execution, with additional data cost for response streaming</td><td></td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Streaming Responses via AWS Lambda]]></title><description><![CDATA[Building APIs on a Serverless stack includes using AWS Lambda Function URLs where the client traditionally requests the Lambda function and it responds after the Lambda function has completed execution.

This pattern for large payloads induces latenc...]]></description><link>https://blog.theserverlessterminal.com/streaming-responses-via-aws-lambda</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/streaming-responses-via-aws-lambda</guid><category><![CDATA[response streaming]]></category><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[APIs]]></category><category><![CDATA[lambda-function-urls]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Mon, 29 Jul 2024 18:09:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722276542777/25a68913-378b-4f61-b9e9-31bcba66b7b3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building APIs on a Serverless stack includes using <a target="_blank" href="https://blog.theserverlessterminal.com/lambda-functions-over-urls">AWS Lambda Function URLs</a> where the client traditionally requests the Lambda function and it responds after the Lambda function has completed execution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722263777130/65ce6110-849f-4e6f-8a39-93160c7b3fc2.png" alt="Traditional API request-response via Lambda Function URL" class="image--center mx-auto" /></p>
<p>This pattern for large payloads induces latency that can negatively affect the application's performance.</p>
<p>In this blog, we will look at how Lambda function's Response Streaming would be helpful and identify when it would be ideal to use Response Streaming for your workloads.</p>
<h2 id="heading-lambda-response-streaming">Lambda Response Streaming</h2>
<p>AWS Lambda launched support for <a target="_blank" href="https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/">response streaming</a>, a pattern of API invocation that allows the Lambda function to progressively send the response back to the client in chunks of data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722264838781/c7d4cf5e-3f42-4f04-8d78-b1c589933623.png" alt="Lambda functions' Response Streaming" class="image--center mx-auto" /></p>
<h3 id="heading-what-does-this-pattern-bring-in">What does this pattern bring in?</h3>
<ul>
<li><p>Improved time to first byte (TTFB), since the latency of the API request is reduced and chunks of the response are received as and when they are available.</p>
</li>
<li><p>Support for larger response payloads, with a soft limit of 20MB, without the need to send the whole response at once.</p>
</li>
<li><p>Asynchronous data streaming, where parts of the API response are sent as the data becomes available.</p>
</li>
</ul>
<h3 id="heading-gotchas-of-this-pattern">Gotchas of this pattern</h3>
<ul>
<li><p><strong>Runtime support</strong>: Among the managed runtimes, AWS Lambda supports Response Streaming only on NodeJS; with custom runtimes, you can leverage the Runtime API to implement it yourself.</p>
</li>
<li><p><strong>Payload size</strong>: The maximum response payload size is 20MB, which is a soft limit you can raise via AWS Support. The first 6MB of the response streams without bandwidth constraints, but once the data exceeds 6MB, throughput is capped at 2MB/s.</p>
</li>
<li><p><strong>Network cost</strong>: Similar to the payload limits, the first 6MB of the response is free; beyond that, data transfer charges apply to the total data streamed from the Lambda function.</p>
</li>
<li><p><strong>Function timeout</strong>: It goes without saying that you need to configure the right timeout for the Lambda function; the default 3s may be too short, while going beyond 60s may result in the API client terminating with a timeout error.</p>
</li>
<li><p><strong>API endpoints</strong>: Response Streaming is available only with Lambda Function URLs and requires the Function URL's invocation mode to be set to <code>RESPONSE_STREAM</code>. APIs are best managed with Amazon API Gateway or an Application Load Balancer (ALB), neither of which supports response streaming yet.</p>
</li>
</ul>
<h2 id="heading-building-aws-lambda-with-response-streaming">Building AWS Lambda with Response Streaming</h2>
<p><img src="https://media1.tenor.com/m/45kUy0Dpbi0AAAAd/we-learn-something-new-everyday-here-learning.gif" alt="We Learn Something New Everyday Here Learning GIF" class="image--center mx-auto" /></p>
<p>AWS Lambda uses Node's Writable Stream API, so you can use <code>write()</code> to write into the stream, or you can use <code>pipeline()</code> from Node's <code>stream</code> module (promisified via <code>util</code>), which extends the streaming capability.</p>
<h3 id="heading-nodes-write">Node's <code>write()</code></h3>
<p>Response Streaming with <code>write()</code> writes directly into the stream, and whenever data is available in the stream it is sent to the client as part of the response. This method expects you to end the stream manually with <code>end()</code>.</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">exports</span>.handler = awslambda.streamifyResponse(
    <span class="hljs-keyword">async</span> (event, responseStream, context) =&gt; {
        <span class="hljs-keyword">const</span> httpResponseMetadata = {
            <span class="hljs-attr">statusCode</span>: <span class="hljs-number">200</span>,
            <span class="hljs-attr">headers</span>: {
                <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"text/html"</span>,
            }
        };

        responseStream = awslambda.HttpResponseStream.from(responseStream, httpResponseMetadata);

        responseStream.write(<span class="hljs-string">"&lt;html&gt;"</span>);
        responseStream.write(<span class="hljs-string">"&lt;p&gt;Hello!&lt;/p&gt;"</span>);

        responseStream.write(<span class="hljs-string">"&lt;h1&gt;Let's start streaming response&lt;/h1&gt;"</span>);
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
        responseStream.write(<span class="hljs-string">"&lt;h2&gt;Serverless&lt;/h2&gt;"</span>);
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
        responseStream.write(<span class="hljs-string">"&lt;h3&gt;Is&lt;/h3&gt;"</span>);
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
        responseStream.write(<span class="hljs-string">"&lt;h3&gt;Way&lt;/h3&gt;"</span>);
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
        responseStream.write(<span class="hljs-string">"&lt;h3&gt;More&lt;/h3&gt;"</span>);
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
        responseStream.write(<span class="hljs-string">"&lt;h3&gt;Mature&lt;/h3&gt;"</span>);
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
        responseStream.write(<span class="hljs-string">"&lt;p&gt;DONE!&lt;/p&gt;"</span>);
        responseStream.end();
    }
);
</code></pre>
<p>When you publish the above Lambda function and invoke the Function URL via a web browser, you can notice how the streaming responses are received.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722273473238/4a609a62-3d62-4e06-9627-11774e0422ab.gif" alt="Streaming Response with NodeJS write()" class="image--center mx-auto" /></p>
<h3 id="heading-lambdas-pipeline">Lambda's <code>pipeline()</code></h3>
<p>The <code>pipeline()</code> approach handles the end of the stream automatically. It comes from Node's <code>stream</code> module, promisified with <code>util.promisify</code>; call <code>pipeline(requestStream, responseStream)</code> to pipe data into the response stream.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> util <span class="hljs-keyword">from</span> <span class="hljs-string">'util'</span>;
<span class="hljs-keyword">import</span> stream <span class="hljs-keyword">from</span> <span class="hljs-string">'stream'</span>;
<span class="hljs-keyword">const</span> { Readable } = stream;
<span class="hljs-keyword">const</span> pipeline = util.promisify(stream.pipeline);

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> handler = awslambda.streamifyResponse(<span class="hljs-keyword">async</span> (event, responseStream, _context) =&gt; {
  <span class="hljs-keyword">const</span> httpResponseMetadata = {
    <span class="hljs-attr">statusCode</span>: <span class="hljs-number">200</span>,
    <span class="hljs-attr">headers</span>: {
      <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"text/html"</span>,
    }
  };

  responseStream = awslambda.HttpResponseStream.from(responseStream, httpResponseMetadata);
  <span class="hljs-keyword">let</span> requestStream = Readable.from(Buffer.from(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Array</span>(<span class="hljs-number">1024</span> * <span class="hljs-number">1024</span>).join(<span class="hljs-string">'🚀'</span>)));
  <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
  requestStream = Readable.from(Buffer.from(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Array</span>(<span class="hljs-number">1024</span> * <span class="hljs-number">1024</span>).join(<span class="hljs-string">'⚡️'</span>)));
  <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">r</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(r, <span class="hljs-number">1000</span>));
  requestStream = Readable.from(Buffer.from(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Array</span>(<span class="hljs-number">1024</span> * <span class="hljs-number">1024</span>).join(<span class="hljs-string">'🚀 Serverless is not dead!'</span>)));
  <span class="hljs-keyword">await</span> pipeline(requestStream, responseStream);
});
</code></pre>
<p>Let's publish the Lambda function and invoke it via the Lambda Function URL.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722275041437/ad4f0c78-bdf0-41ea-a767-741c5971367f.gif" alt="Response Streaming via pipeline()" class="image--center mx-auto" /></p>
<h3 id="heading-iac-to-publish-lambda-function-url">IaC to publish Lambda Function URL</h3>
<p>While deploying and publishing the Lambda function, make sure to set a sufficient <code>Timeout</code> and a <code>FunctionUrlConfig</code> with <code>InvokeMode</code> set to <code>RESPONSE_STREAM</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Resources:</span>
  <span class="hljs-attr">responseStreamingLambda:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::Serverless::Function</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">Timeout:</span> <span class="hljs-number">20</span>
      <span class="hljs-attr">Handler:</span> <span class="hljs-string">index.handler</span>
      <span class="hljs-attr">Runtime:</span> <span class="hljs-string">nodejs20.x</span>
      <span class="hljs-attr">Architectures:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">x86_64</span>
      <span class="hljs-attr">FunctionUrlConfig:</span>
        <span class="hljs-attr">AuthType:</span> <span class="hljs-string">NONE</span>
        <span class="hljs-attr">InvokeMode:</span> <span class="hljs-string">RESPONSE_STREAM</span>
</code></pre>
<h2 id="heading-response-streaming-is-the-best-fit-in">Where Response Streaming is the best fit</h2>
<p>It's important to understand when Response Streaming is the ideal choice. Some of the use cases include -</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Use Case</th><th>Why it's the best fit?</th></tr>
</thead>
<tbody>
<tr>
<td>Real-time chat applications</td><td>Improved User Experience with low latency.</td></tr>
<tr>
<td>Streaming large files from S3</td><td>Enabling downloading of large (&gt;6MB) S3 objects and receiving the data as and when available.</td></tr>
<tr>
<td>Server-side rendering</td><td>SSR with incremental updates improves the time to first byte (TTFB), as parts of the page render as their data becomes available.</td></tr>
<tr>
<td>Streaming data from IoT devices</td><td>Enabling near real-time monitoring of data without delays and latency.</td></tr>
</tbody>
</table>
</div><h2 id="heading-wrap-up">Wrap up!</h2>
<p>Lambda functions' Response Streaming is ideal for web applications and monitoring systems where near real-time data is crucial. However, keep its limitations in mind: it requires a Lambda Function URL with the NodeJS runtime, and it comes with constraints on cost and network bandwidth.</p>
]]></content:encoded></item><item><title><![CDATA[Orchestration of HTTP invocation made possible with Step Functions]]></title><description><![CDATA[When building applications, we often interact with various systems, such as in-house microservices or third-party systems with HTTP endpoints as applications would need data and depend on third-party systems for business-critical tasks or data.

HTTP...]]></description><link>https://blog.theserverlessterminal.com/orchestration-of-http-invocation-made-possible-with-step-functions</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/orchestration-of-http-invocation-made-possible-with-step-functions</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Step Functions]]></category><category><![CDATA[http]]></category><category><![CDATA[APIs]]></category><category><![CDATA[events]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Sat, 22 Jun 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719493763931/27ca4191-564f-4dce-afaf-d1ad090b9226.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When building applications, we often interact with various systems over HTTP endpoints, such as in-house microservices or third-party services, since applications depend on them for data and for business-critical tasks.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715266608652/ef3cccdf-bbde-4b76-8bfd-4097f48ba512.png" alt="Application service integrating with third party systems via HTTP invocations" class="image--center mx-auto" /></p>
<h2 id="heading-http-api-invocation-from-lambda-functions">HTTP API invocation from Lambda functions</h2>
<p>This process may require using AWS Lambda functions that utilize HTTP libraries (such as <code>request</code> or <code>axios</code> for NodeJS) to construct the request with the appropriate <code>headers</code> and <code>body</code>, ensuring that the request is sent and the response is processed successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719493021893/e997e0c8-910e-4e1e-a62d-bb9fc76d15eb.png" alt="State machine with Lambda states to invoke HTTP endpoint" class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Increased complexity</strong>: Introducing the Lambda function to invoke HTTP endpoints adds components that must be managed and maintained.</p>
</li>
<li><p><strong>Increased latency</strong>: Since the request and response hop through another architectural component, it adds to the latency of the response.</p>
</li>
<li><p><strong>Increased cost</strong>: Running a separate Lambda function to handle the HTTP invocation will incur additional costs, as you'll be charged for the Lambda function execution in addition to the Step Functions execution.</p>
</li>
</ul>
<p>On the other hand, this approach does offer flexibility: invoking HTTP endpoints from whitelisted IP addresses, and tightening security by keeping HTTP credentials in Secrets Manager for the Lambda function to retrieve.</p>
<h2 id="heading-http-api-invocation-with-eventbridge-api-destinations">HTTP API invocation with EventBridge API destinations</h2>
<p>In an Event-Driven world, Amazon EventBridge's API destination feature allows you to configure an HTTP endpoint along with credentials stored in Secrets Manager, the latter managed through an EventBridge Connection.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719496333854/e3570b20-4d66-4fad-92b6-2f855d9d755a.png" alt="API Destinations to invoke HTTP endpoint" class="image--center mx-auto" /></p>
<p>In a blog about <a target="_blank" href="https://medium.com/freshworks-developer-blog/a-guide-to-understanding-freshworks-integrations-with-amazon-eventbridge-9782e54f1076">How Freshworks Developer Platform integrates with AWS via EventBridge</a>, I've explained in detail the integration but in this section of the blog, we will focus on the API destination part of the architecture.</p>
<pre><code class="lang-yaml"><span class="hljs-comment">########################   API Connection   ############################</span>
<span class="hljs-attr">MyConnection:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::Events::Connection</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">AuthorizationType:</span> <span class="hljs-string">BASIC</span>
      <span class="hljs-attr">Description:</span> <span class="hljs-string">'My connection with username/password'</span>
      <span class="hljs-attr">AuthParameters:</span>
        <span class="hljs-attr">BasicAuthParameters :</span>
          <span class="hljs-attr">Username :</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">APIKeyValue</span>
          <span class="hljs-attr">Password :</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">Password</span>
<span class="hljs-comment">########################   API Destination   ############################</span>
<span class="hljs-attr">MyApiDestination:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::Events::ApiDestination</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">Name:</span> <span class="hljs-string">'FreshdeskAPI'</span>
      <span class="hljs-attr">ConnectionArn:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">MyConnection.Arn</span>
      <span class="hljs-attr">InvocationEndpoint:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">FreshdeskURL</span>
      <span class="hljs-attr">HttpMethod:</span> <span class="hljs-string">PUT</span>
      <span class="hljs-attr">InvocationRateLimitPerSecond:</span> <span class="hljs-number">10</span>
<span class="hljs-comment">########################   Event Rule to invoke API destination   ############################</span>
<span class="hljs-attr">EventRule:</span> 
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::Events::Rule</span>
    <span class="hljs-attr">Properties:</span> 
      <span class="hljs-attr">Description:</span> <span class="hljs-string">"EventRule"</span>
      <span class="hljs-attr">State:</span> <span class="hljs-string">"ENABLED"</span>
      <span class="hljs-attr">EventBusName:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">MyEventBus</span>
      <span class="hljs-attr">EventPattern:</span> 
        <span class="hljs-attr">source:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"fw-sentiment"</span>       
      <span class="hljs-attr">Targets:</span> 
        <span class="hljs-bullet">-</span> <span class="hljs-attr">Arn:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">MyApiDestination.Arn</span>
          <span class="hljs-attr">RoleArn:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">EventBridgeTargetRole.Arn</span>
          <span class="hljs-attr">Id:</span> <span class="hljs-string">"MyAPIdestination"</span>
          <span class="hljs-attr">InputTransformer:</span>
            <span class="hljs-attr">InputPathsMap:</span>
              <span class="hljs-attr">id:</span> <span class="hljs-string">$.detail.id</span>
              <span class="hljs-attr">priority:</span> <span class="hljs-string">$.detail.priority</span>
            <span class="hljs-attr">InputTemplate:</span> <span class="hljs-string">&gt;
                {
                  "priority": &lt;priority&gt;
                }
</span>          <span class="hljs-attr">HttpParameters:</span>
            <span class="hljs-attr">PathParameterValues:</span>
              <span class="hljs-bullet">-</span> <span class="hljs-string">$.detail.id</span>
          <span class="hljs-attr">DeadLetterConfig:</span>
            <span class="hljs-attr">Arn:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">MyDLQueue.Arn</span>
</code></pre>
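<p>The <code>InputTransformer</code> above extracts values with <code>InputPathsMap</code> and substitutes them into <code>InputTemplate</code>. Below is a simplified simulation of that substitution (not the EventBridge implementation; it only handles simple <code>$.detail.x</code> paths):</p>

```javascript
// Simplified simulation of the InputTransformer above: InputPathsMap pulls
// values out of the event and InputTemplate substitutes them. Real
// EventBridge supports a richer path syntax; this handles "$.detail.x".
function transform(event, inputPathsMap, inputTemplate) {
  let output = inputTemplate;
  for (const [name, path] of Object.entries(inputPathsMap)) {
    // Resolve a "$.a.b" path against the event object.
    const value = path
      .replace(/^\$\./, "")
      .split(".")
      .reduce((obj, key) => obj[key], event);
    // "\u003c" / "\u003e" are just the angle-bracket placeholder markers.
    output = output.split("\u003c" + name + "\u003e").join(String(value));
  }
  return output;
}

const rendered = transform(
  { detail: { id: "42", priority: 2 } },
  { id: "$.detail.id", priority: "$.detail.priority" },
  '{ "priority": \u003cpriority\u003e }'
);
console.log(rendered);
```

The rule delivers only the transformed body to the API destination, so the third-party endpoint never sees the full EventBridge envelope.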
<h3 id="heading-api-destination-brings-to-the-table">What API Destinations bring to the table</h3>
<ul>
<li><p><strong>Improved Security</strong>: API connections use AWS Secrets Manager to securely store and manage authentication credentials, reducing the risk of exposing sensitive information, which is more secure than hardcoding credentials in Lambda functions.</p>
</li>
<li><p><strong>Reduced complexity</strong>: Using API destinations with EventBridge rules eliminates the need to write and maintain Lambda functions solely for making HTTP requests, simplifying the architecture and reducing the amount of code to manage.</p>
</li>
<li><p><strong>Dependency on an EventBridge Bus</strong>: Introduces a dependency on an Event Bus to invoke API destinations. Although it's managed, adding this component means you also have to define the Event Rules.</p>
</li>
</ul>
<h2 id="heading-http-invoke-on-step-functions">HTTP invoke on Step Functions</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719496941010/aee9cefc-7776-4f91-9424-07674ba3f0ff.png" alt class="image--center mx-auto" /></p>
<p>The above image from Application Composer shows that creating a State Machine with an <code>HTTPInvoke</code> state also requires an EventBridge Connection. You can also refer to the SAM template of the State Machine.</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">StateMachine:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::Serverless::StateMachine</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">Definition:</span>
        <span class="hljs-attr">StartAt:</span> <span class="hljs-string">Call</span> <span class="hljs-string">third-party</span> <span class="hljs-string">API</span>
        <span class="hljs-attr">States:</span>
          <span class="hljs-attr">Call third-party API:</span>
            <span class="hljs-attr">Type:</span> <span class="hljs-string">Task</span>
            <span class="hljs-attr">Resource:</span> <span class="hljs-string">arn:aws:states:::http:invoke</span>
            <span class="hljs-attr">Parameters:</span>
              <span class="hljs-attr">Method:</span> <span class="hljs-string">GET</span>
              <span class="hljs-attr">ApiEndpoint:</span> <span class="hljs-string">${ApiEndpoint}</span>
              <span class="hljs-attr">Authentication:</span>
                <span class="hljs-attr">ConnectionArn:</span> <span class="hljs-string">${EBConnectionARN}</span>
            <span class="hljs-attr">Retry:</span>
              <span class="hljs-bullet">-</span> <span class="hljs-attr">ErrorEquals:</span>
                  <span class="hljs-bullet">-</span> <span class="hljs-string">States.ALL</span>
                <span class="hljs-attr">BackoffRate:</span> <span class="hljs-number">2</span>
                <span class="hljs-attr">IntervalSeconds:</span> <span class="hljs-number">1</span>
                <span class="hljs-attr">MaxAttempts:</span> <span class="hljs-number">3</span>
                <span class="hljs-attr">JitterStrategy:</span> <span class="hljs-string">FULL</span>
            <span class="hljs-attr">ResultPath:</span> <span class="hljs-string">$.ApiResponse</span>
            <span class="hljs-attr">End:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">Logging:</span>
        <span class="hljs-attr">Level:</span> <span class="hljs-string">ALL</span>
        <span class="hljs-attr">IncludeExecutionData:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">Destinations:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">CloudWatchLogsLogGroup:</span>
              <span class="hljs-attr">LogGroupArn:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">StateMachineLogGroup.Arn</span>
      <span class="hljs-attr">Policies:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">AWSXrayWriteOnlyAccess</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">Statement:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
              <span class="hljs-attr">Action:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:CreateLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:GetLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:UpdateLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:DeleteLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:ListLogDeliveries</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:PutResourcePolicy</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:DescribeResourcePolicies</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:DescribeLogGroups</span>
              <span class="hljs-attr">Resource:</span> <span class="hljs-string">'*'</span>
      <span class="hljs-attr">Tracing:</span>
        <span class="hljs-attr">Enabled:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">Type:</span> <span class="hljs-string">STANDARD</span>
      <span class="hljs-attr">DefinitionSubstitutions:</span>
        <span class="hljs-attr">ApiEndpoint:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">APIEndpoint</span>
        <span class="hljs-attr">EBConnectionARN:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">EventBridgeConnection.Arn</span>
</code></pre>
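<p>The <code>Retry</code> block in the definition above combines <code>IntervalSeconds: 1</code>, <code>BackoffRate: 2</code> and <code>JitterStrategy: FULL</code>. Assuming the commonly documented full-jitter formula, the delays it produces can be sketched as:</p>

```javascript
// Sketch of the retry delays the Retry block above would produce, assuming
// the usual full-jitter formula: the base delay for attempt i (0-indexed) is
// intervalSeconds * backoffRate^i, and FULL jitter picks a random point
// between 0 and that base.
function retryDelays({ intervalSeconds, backoffRate, maxAttempts }, random = Math.random) {
  return Array.from({ length: maxAttempts }, (_, i) =>
    random() * intervalSeconds * Math.pow(backoffRate, i)
  );
}

// Pin the random source to 1 to see the un-jittered schedule.
console.log(retryDelays({ intervalSeconds: 1, backoffRate: 2, maxAttempts: 3 }, () => 1));
```

With the random source pinned to 1 you can see the un-jittered schedule of 1s, 2s and 4s; FULL jitter spreads each real delay anywhere between zero and those values, which avoids retry storms against the downstream API.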
<p>Now that the State Machine is defined, let's also define EventBridge Connection along with the authorization credentials.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">EventBridgeConnection:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::Events::Connection</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">AuthorizationType:</span> <span class="hljs-string">BASIC</span>
      <span class="hljs-attr">AuthParameters:</span>
        <span class="hljs-attr">BasicAuthParameters:</span>
          <span class="hljs-attr">Password:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">EventBridgeConnectionPassword</span>
          <span class="hljs-attr">Username:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">EventBridgeConnectionUsername</span>
</code></pre>
<p>The resources - <code>StateMachine</code> and <code>EventBridgeConnection</code> - are defined, but for the State Machine to successfully invoke the HTTP endpoint, you need to grant its IAM execution role the following permissions -</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">Statement:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
      <span class="hljs-attr">Action:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">events:RetrieveConnectionCredentials</span>
      <span class="hljs-attr">Resource:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">EventBridgeConnection.Arn</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
      <span class="hljs-attr">Action:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">secretsmanager:GetSecretValue</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">secretsmanager:DescribeSecret</span>
      <span class="hljs-attr">Resource:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-type">!Sub</span> <span class="hljs-string">arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:events!connection/*</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
      <span class="hljs-attr">Action:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">states:InvokeHTTPEndpoint</span>
      <span class="hljs-attr">Resource:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-type">!Sub</span> <span class="hljs-string">arn:aws:states:${AWS::Region}:${AWS::AccountId}:stateMachine:*</span>
</code></pre>
<p>Note that EventBridge Connections use AWS Secrets Manager under the hood, so you also need to allow <code>GetSecretValue</code> and <code>DescribeSecret</code>.</p>
<h3 id="heading-http-invoke-state-with-its-advantages">HTTP Invoke state and its advantages</h3>
<p>Since HTTP invocation on Step Functions uses an EventBridge Connection, it brings added advantages over invoking HTTP endpoints via Lambda functions.</p>
<ul>
<li><p><strong>Enhanced execution insights</strong>: Step Functions provides you with built-in monitoring and logging capability of the State Machine execution which makes it easier to trace and debug the HTTP requests within the context of the workflow.</p>
</li>
<li><p><strong>Streamlined workflow management</strong>: Directly integrating HTTP requests within Step Functions allows for more streamlined and cohesive workflow definitions. This makes it easier to manage and update workflows without needing to modify and redeploy Lambda functions.</p>
</li>
<li><p><strong>Error retry and handling</strong>: Step Functions' error-handling techniques give you the flexibility to define which errors to retry and to handle failures gracefully. Additionally, EventBridge Connections retry failed requests by default.</p>
</li>
</ul>
<h3 id="heading-http-invoke-state-with-its-disadvantages">HTTP Invoke state and its disadvantages</h3>
<ul>
<li><p><strong>Dependency on EventBridge Connections</strong>: The HTTP invoke state depends on EventBridge connections for managing authentication credentials. This adds extra setup steps and dependencies, which can make the process more complicated. Users need to create and manage EventBridge connections and related secrets in AWS Secrets Manager, which can be confusing for some.</p>
</li>
<li><p><strong>No Support for Private Endpoints</strong>: The HTTP invoke state does not support private endpoints within a VPC. This limitation means it can't be used for secure, internal communication, forcing developers to use Lambda functions or other methods to interact with private APIs.</p>
</li>
<li><p><strong>IAM Permissions Complexity</strong>: To use the HTTP invoke state, you need to set up specific IAM permissions for the state machine's role. This includes permissions to make HTTP requests, use the EventBridge connection, and access the connection's secret. Managing these permissions can add complexity and potential security risks if not configured correctly.</p>
</li>
<li><p><strong>Error debugging</strong>: While error retry and handling are supported, debugging becomes a challenge: HTTP requests can fail for various reasons - timeouts, authentication failures, or a missing header or parameter - and the errors can be hard to trace because an EventBridge Connection is used under the hood.</p>
</li>
</ul>
<h2 id="heading-wrap-up">Wrap up</h2>
<p>Yes, orchestrating HTTP invocations is made possible with Step Functions, but it comes with both advantages and disadvantages. While building Serverless applications or workflows, these trade-offs are important to consider.</p>
<p>At times, you would need the flexibility to make dynamic HTTP invocations, which is easier from a Lambda function than via an EventBridge Connection. Additionally, be careful with the IAM execution policy to allow access only to the needed secrets.</p>
]]></content:encoded></item><item><title><![CDATA[Serverless with orchestration and choreography]]></title><description><![CDATA[Building Serverless on AWS would be using multiple AWS Services which are building either following an event-driven architecture or a microservice architecture which needs either orchestrated or a choreographed architecture so that execution happens ...]]></description><link>https://blog.theserverlessterminal.com/serverless-with-orchestration-and-choreography</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/serverless-with-orchestration-and-choreography</guid><category><![CDATA[Amazon EventBridge]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Orchestration]]></category><category><![CDATA[Choreography]]></category><category><![CDATA[AWS Step Functions]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Tue, 23 Apr 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714067942569/b94629d0-5b83-4c4b-9051-cbf9aa3df7b8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building Serverless on AWS usually means using multiple AWS services, following either an event-driven architecture or a microservice architecture, which in turn needs to be either orchestrated or choreographed so that execution happens smoothly! Sometimes, it also involves using a hybrid of orchestration and choreography.</p>
<h1 id="heading-orchestration">Orchestration</h1>
<p><img src="https://lh6.googleusercontent.com/izRpzDSJq6syGarWedXayMM6MiiQow7mbbf0b140zprmuSPxXMTnGStolqmjr3Zz433k9ekmbCUhphKaBhq-ZbrhspxiYFEgHqbe51aUijbaahquabe5Nw9MyfJrLlWzM8GdkZyDfoT9mdoUhHdw9w1uDQ=s2048" alt="Orchestration of music GIF from the internet" class="image--center mx-auto" /></p>
<p>Just as in a musical orchestra, where a concertmaster or lead controls how each musician plays their instrument, in the orchestration pattern in Serverless architecture the orchestrator controls the flow and execution of the different tasks and processes based on the defined workflow.</p>
<h2 id="heading-why-do-you-need-orchestration">Why do you need orchestration?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714044592591/ba6dc9f8-ca43-46dc-a156-3f71364cc01c.png" alt="Perks of orchestrations" class="image--center mx-auto" /></p>
<p>Orchestration as a pattern helps Serverless applications execute in a structured and less chaotic manner.</p>
<ul>
<li><p><strong>Centralized control</strong> - With centralized control and a defined workflow for task flow and execution, orchestration assists in promptly adhering to the workflow definition. Furthermore, error-handling techniques for both the workflow and each task can be integrated into the workflow definition.</p>
</li>
<li><p><strong>Scalability</strong> - The orchestration pattern enables scaling individual tasks within the same workflow, so instead of scaling the entire workflow, only the tasks that require it scale up or down. These tasks could include Lambda functions, ECS tasks, or asynchronous tasks.</p>
</li>
<li><p><strong>Monitoring</strong> - The centralized control enhances visibility into individual tasks and the workflow as a whole. This improves monitoring capabilities for each iteration of the execution, including the state and data passed between tasks. This makes debugging easier and provides a better understanding of the workflow's execution.</p>
</li>
</ul>
<h2 id="heading-aws-step-functions">AWS Step Functions</h2>
<p>In the AWS ecosystem, AWS Step Functions enables orchestration of a workflow (a <code>State Machine</code> in Step Functions terminology) and the execution of the different tasks in it.</p>
<p><img src="https://lh7-us.googleusercontent.com/_p2tEkkFFBVlcRDh6Xh_39VZUPx40raPfFRg1ChCVZSI12XyY2oWtMGrQ9nvIP_gEZ5hWl6fAAZE1LuNgL28mOMmq7muNE39Cj33ZIWqA1QTvC86Wn8EcYHYyjHC62JG3MUPbATe1c4K01c=s2048" alt="A State Machine for User sign-up and it's related process" class="image--center mx-auto" /></p>
<h3 id="heading-serverless-orchestration-of-workflows">Serverless orchestration of workflows</h3>
<p>AWS Step Functions supports different kinds of flows - <code>parallel</code>, <code>map</code>, <code>pass</code>, <code>choice</code> and more - which gives you the flexibility to define a workflow with a combination of flows and <code>state</code>s.</p>
<p>Step Functions is Serverless by nature, as there is no infrastructure to manage or resources to provision for the execution. Pricing for Step Functions is based on the types of workflows offered -</p>
<ul>
<li><p><code>Standard workflow</code> - for longer-running workflows, priced per state transition.</p>
</li>
<li><p><code>Express workflow</code> - for shorter workflow durations, where pricing is based on the number of requests for the workflow and its duration.</p>
</li>
</ul>
<p>The <code>State Machine</code>, or the workflow, is defined with Amazon States Language (ASL), a JSON-based definition language. The <code>State Machine</code> definition can be used in the application's Infrastructure as Code (IaC) definitions.</p>
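<p>For illustration, here is a minimal ASL definition, sketched as a JavaScript object and serialized to the JSON that Step Functions expects (the state name and its contents are made up for this example):</p>

```javascript
// A minimal, hypothetical ASL definition: a single Pass state. ASL itself is
// plain JSON; building it as an object and serializing keeps it well-formed.
const definition = {
  Comment: "A minimal state machine",
  StartAt: "SayHello",
  States: {
    SayHello: {
      Type: "Pass",
      Result: { message: "Hello from ASL" },
      End: true,
    },
  },
};

const asl = JSON.stringify(definition, null, 2);
console.log(asl);
```

The serialized JSON is what you would embed in the <code>Definition</code> of a SAM or CloudFormation State Machine resource.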
<p>Step Functions supports <a target="_blank" href="https://blog.theserverlessterminal.com/handling-errors-with-stepfunctions-sns-sdk-integration">error-handling techniques</a> and the <a target="_blank" href="https://blog.theserverlessterminal.com/the-jitter-strategy-for-step-functions-error-retries-on-the-new-workflow-studio">JitterStrategy</a>. Additionally, Step Functions integrates with over 1000 API actions from 200+ AWS Services using <a target="_blank" href="https://blog.theserverlessterminal.com/why-aws-step-functions-and-sdk-integrations">SDK Integrations</a> and <a target="_blank" href="https://blog.theserverlessterminal.com/intrinsic-functions-to-level-up-your-step-functions">intrinsic functions</a>. These features can enhance integrations and improve how data is accessed and processed with Step Functions.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.theserverlessterminal.com/series/aws-step-functions">https://blog.theserverlessterminal.com/series/aws-step-functions</a></div>
<p> </p>
<h1 id="heading-choreography">Choreography</h1>
<p><img src="https://lh7-us.googleusercontent.com/dWznCrziGVioiGhZtYbI-IkBLFuyRz2CCmKM6mo0xyR_QivASNaip1Q5icr-ANgP3B7UVd_Cz1IAe-4_nOPo10GCFXuV03JXh7LQE8aFeBuItejTRDbRPS51NofmwRrWZZjCyn0yEjsauAU=s2048" alt="Choreography GIF from the internet" class="image--center mx-auto" /></p>
<p>In the world of Event-Driven Architecture (EDA), the choreography pattern works well to ensure that microservices or architectural components are coordinated based on events. It's like a dance choreography where you wait for the right music beats and cues from your dance partner to make the next move.</p>
<h2 id="heading-why-do-you-need-choreography">Why do you need choreography?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714064685487/01256ad3-55a7-4af2-9415-a5d52dd9a9d9.png" alt="Perks of choreography" class="image--center mx-auto" /></p>
<p>In Event-Driven Architecture (EDA), where the application relies on real-time events and reactive responses, choreography allows individual microservices to react independently to each event.</p>
<ul>
<li><p><strong>Loosely-coupled</strong> - The microservices and independent services are decoupled to ensure they operate independently, even though they are triggered by events and share events to communicate with other services. Keeping the microservices loosely coupled also enhances fault tolerance. If one microservice goes down, the system remains operational because the other microservices are still running as intended.</p>
</li>
<li><p><strong>Scalability</strong> - Microservices can scale independently according to the need to scale up or down, promoting efficient resource utilization.</p>
</li>
<li><p><strong>Flexibility</strong> - Since microservices can scale based on the need and are loosely coupled, the architecture allows for more flexibility in making updates and rolling out upgrades while the system continues to operate as usual.</p>
</li>
</ul>
<h2 id="heading-amazon-eventbridge">Amazon EventBridge</h2>
<p>Amazon EventBridge enables Event-Driven Architecture (EDA) in the AWS Serverless ecosystem, with the EventBridge Bus making it easier to route events between a source and a destination. EventBridge can scale up and down to route millions of events in a production-grade system.</p>
<p>Here is a blog about why EventBridge is a missing piece to your Serverless application -</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.theserverlessterminal.com/amazon-eventbridge-the-missing-piece-to-your-app">https://blog.theserverlessterminal.com/amazon-eventbridge-the-missing-piece-to-your-app</a></div>
<p> </p>
<h3 id="heading-choreographing-with-eventbridge">Choreographing with EventBridge</h3>
<ul>
<li><p><strong>Asynchronous messaging</strong> - EventBridge decouples services by allowing them to post events to a shared event bus. This bus then directs the events to specific destinations, enabling the source to post the message and proceed with the rest of the execution.</p>
</li>
<li><p><strong>Event rules and filters</strong> - EventBridge works with event buses that use event rules to determine which events should be sent to which destination. When creating these intelligent rules, filters can be applied to the event payloads for more detailed routing of specific events.</p>
</li>
<li><p><strong>Archive and replay</strong> - EventBridge Bus also supports archiving and replaying events. This feature allows events to be stored for later reproduction, aiding in debugging processes.</p>
</li>
<li><p><strong>SaaS integrations</strong> - EventBridge supports SaaS integrations with AWS partners where the events from the AWS partner's SaaS products can be targeted to a specific SaaS Bus to process the events.</p>
</li>
</ul>
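<p>A minimal sketch of such an event rule's pattern (the <code>source</code>, <code>detail-type</code>, and field names are hypothetical) shows how filters narrow routing down to specific event payloads:</p>
<pre><code class="lang-json">{
    "source": ["orders.service"],
    "detail-type": ["OrderPlaced"],
    "detail": {
        "paymentStatus": ["CONFIRMED"],
        "amount": [{ "numeric": ["&gt;", 100] }]
    }
}
</code></pre>
<p>Only events matching every field in the pattern are delivered to the rule's targets; everything else passes through to other rules on the bus.</p>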
<h1 id="heading-orchestration-vs-choreography">Orchestration v/s Choreography</h1>
<div class="hn-table">
<table>
<thead>
<tr>
<td></td><td><strong>Orchestration</strong></td><td><strong>Choreography</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Control</strong></td><td>Centralized control of workflow execution, but its dependencies make it coupled</td><td>No central control; an event bus carries the flow of events, and services stay loosely coupled</td></tr>
<tr>
<td><strong>Complexity</strong></td><td>Works for both simple and complex workflows</td><td>Simple event-driven process</td></tr>
<tr>
<td><strong>Error Handling and Retry</strong></td><td>Supports native error handling and retry</td><td>Distributed error handling at the microservice level</td></tr>
<tr>
<td><strong>Scalability</strong></td><td>Highly scalable but needs to adhere to different limitations during the workflow</td><td>Highly scalable as each service can scale independently</td></tr>
<tr>
<td><strong>Latency</strong></td><td>Possible latency because of dependencies</td><td>Low latency with direct event routing and EventBridge Pipes point-to-point integrations</td></tr>
<tr>
<td><strong>Cost Efficiency</strong></td><td>Depends on the workflow and type of workflow</td><td>Pay only for the usage of resources</td></tr>
</tbody>
</table>
</div><h1 id="heading-best-of-orchestration-and-choreography">Best of orchestration and choreography</h1>
<p>The choice between <code>Orchestration</code> and <code>Choreography</code> is a hard one, as it depends on the use case and each pattern's architectural pros and trade-offs!</p>
<p>One such example is the EventBridge SaaS event bus integrating with AWS partners such as Freshworks: the partner's events are choreographed into the AWS account, where they are routed to trigger a <code>State Machine</code> execution that processes the events from Freshdesk, a Freshworks product.</p>
<p><img src="https://lh7-us.googleusercontent.com/TeHgSTW-QljRHAjj8IVserPaqTahmHiOtcQtf2SjnzdOY1QB56UDn3SAf3Zn4KFrTBMrcZVmjQ0euWh3X2cEEQPW2QEu-npxVYLnBRaWOTeTR04iP_0ONc8gBUo_G18mKBY9PcIXOmKwerU=s2048" alt /></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://medium.com/freshworks-developer-blog/a-guide-to-understanding-freshworks-integrations-with-amazon-eventbridge-9782e54f1076">https://medium.com/freshworks-developer-blog/a-guide-to-understanding-freshworks-integrations-with-amazon-eventbridge-9782e54f1076</a></div>
<p> </p>
<h1 id="heading-wrap-up">Wrap up!</h1>
<p>Serverless architecture on AWS includes orchestration and choreography patterns for structured execution. Orchestration offers centralized control, scalability, and monitoring using AWS Step Functions, while choreography enables loosely coupled, event-driven coordination with Amazon EventBridge. Each pattern has its advantages and trade-offs, so the decision between them depends on the specific use case.</p>
<p>In the "Build on Weekly" episode 12 by <em>Darko Mesaroš</em> and <em>Rohini Gaonkar</em>, <a target="_blank" href="https://community.aws/content/2exdWp50hig2lCN8sJFi2ZFuJyt/orchestration-dancing-or-big-blocks-of-code-s03-e12-build-on-weekly">Orchestration, Dancing, or big blocks of code</a> discusses these patterns of orchestration, choreography, and monoliths, highlighting the benefits of each pattern.</p>
]]></content:encoded></item><item><title><![CDATA[No one size fits all: True also for selecting IaC tool]]></title><description><![CDATA[Serverless applications are architected using different components and using Infrastructure as Code (IaC) helps with provisioning the resources used by the Serverless applications using a developer workflow. This improves the developer journey from g...]]></description><link>https://blog.theserverlessterminal.com/no-one-size-fits-all-true-also-for-selecting-iac-tool</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/no-one-size-fits-all-true-also-for-selecting-iac-tool</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[serverless]]></category><category><![CDATA[tools]]></category><category><![CDATA[devtools]]></category><category><![CDATA[#IaC]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Tue, 05 Mar 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1709478644660/4d4b9f56-8fcd-4429-9a95-96a3311f5d88.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Serverless applications are architected using different components and using Infrastructure as Code (IaC) helps with provisioning the resources used by the Serverless applications using a developer workflow. This improves the developer journey from getting started on a Serverless <code>hello world</code> application to a complex production grade Serverless application.</p>
<p>Previously, I wrote about why IaC should be the direction for Serverless applications.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.theserverlessterminal.com/serverless-apps-why-iac-should-be-the-direction">https://blog.theserverlessterminal.com/serverless-apps-why-iac-should-be-the-direction</a></div>
<p> </p>
<h2 id="heading-iac-and-ifc-tools">IaC and IfC tools</h2>
<p>When we talk about Infrastructure as Code (IaC) or Infrastructure from Code (IfC), today there is an overwhelming number of options of tools that focus on either IaC or IfC for Cloud and specifically Serverless.</p>
<p><img src="https://media1.tenor.com/m/h7I0bssoBksAAAAd/greys-anatomy-amelia-shepherd.gif" alt="Curious about making a choice of infrastructure tool" class="image--center mx-auto" /></p>
<p>Infrastructure tools have been at the core of different developer workflows and of how development teams leverage them. The different pillars that make the case for a tool stronger are -</p>
<h3 id="heading-support-for-services">Support for services</h3>
<p>The coverage of services a tool provides determines whether you, as a developer, can build applications that use the different services of a Cloud provider. Staying in sync with the Cloud providers poses an ongoing challenge for the tool.</p>
<h3 id="heading-integration-with-cicd-pipelines">Integration with CI/CD pipelines</h3>
<p>A tool is the ideal choice when it integrates well into CI/CD pipelines, either natively or via plugins, to automate infrastructure provisioning in different environments.</p>
<h3 id="heading-developer-experience">Developer experience</h3>
<p>The Developer Experience (DX) of the tool makes it easier for developers to adopt it and provides a smooth flow for defining, testing, and deploying the resources for your Serverless application. This also relates to the tool's positive impact on developer productivity.</p>
<h3 id="heading-learning-curve">Learning curve</h3>
<p>A steep learning curve around the tool and its feature usage makes it harder at one or more stages of the developer journey to build Serverless, or cloud applications in general.</p>
<h3 id="heading-readily-available-patterns">Readily available patterns</h3>
<p>Today, GenAI has taken over different segments of the development process. A tool can leverage this to suggest frequently used patterns or complete the default properties of resources, either through the tool's native capability or through a GenAI capability, making development easier and reducing the time to deploy.</p>
<h3 id="heading-community-and-support">Community and support</h3>
<p>DevTools are driven by a community that can help unblock a developer and keep them up to date with the latest happenings and best practices. A tool is an easier choice for developers to try and adopt when there is support and there are peer contributions to improve it.</p>
<h2 id="heading-your-voice-about-i-af-c-tools">Your voice about I a/f C tools</h2>
<p>I've been evaluating several IaC and IfC tools in the last few months, and to understand the love for these tools, I ran polls on <a target="_blank" href="https://www.linkedin.com/posts/jones-zachariah-noel-n_infrastructure-as-code-iac-and-infrastructure-activity-7167903144717246465-_Usk/?utm_source=share&amp;utm_medium=member_desktop">LinkedIn</a>, <a target="_blank" href="https://twitter.com/theslsterminal/status/1762137267962011715">Twitter</a>, and also my Newsletter.</p>
<p>Look at the poll results -</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709906397854/7d7d3d48-8945-4e19-b963-cf81ea1d3054.png" alt="Serverless Infrastructure tools poll on Twitter." class="image--center mx-auto" /></p>
<p>Terraform is widely adopted given that it is used for cloud applications in general; its swift support for declarative IaC across Serverless services, and the fact that Terraform is already cloud agnostic, make adoption much easier.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709906430473/774053d0-6fd4-4f09-ad6b-ca793e9e06cd.png" alt="Serverless Infrastructure tools poll on LinkedIn." class="image--center mx-auto" /></p>
<p>One thing that stood out was how Serverless Framework and CDK are the popular choices for Serverless application infrastructure. Some of the key callouts were the declarative YAML of Serverless Framework and the programming constructs of CDK.</p>
<p>In this blog series, I will look into different IaC and IfC tools that enable you to build Serverless applications with a much better experience than a few years back, so tune in!</p>
<h2 id="heading-learn-about-serverless-i-af-c-devtools">Learn about Serverless I a/f C DevTools</h2>
<p>Personally, I have been a big AWS SAM user, so here are a few blogs about SAM -</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.theserverlessterminal.com/building-serverless-with-sam">https://blog.theserverlessterminal.com/building-serverless-with-sam</a></div>
<p> </p>
<p>And about how Serverless workloads also work with more configuration rather than complex application code.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.theserverlessterminal.com/building-serverless-apps-with-more-configurations">https://blog.theserverlessterminal.com/building-serverless-apps-with-more-configurations</a></div>
]]></content:encoded></item><item><title><![CDATA[Serverless workflow design and development using Application Composer and Step Functions]]></title><description><![CDATA[AWS Step Functions has become one of the crucial architectural components for prompting Serverless orchestrations on AWS. Since Step Functions's launch, defining the state machine using JSON has been a pain point.
In this blog, we will look at the im...]]></description><link>https://blog.theserverlessterminal.com/serverless-workflow-design-and-development-using-application-composer-and-step-functions</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/serverless-workflow-design-and-development-using-application-composer-and-step-functions</guid><category><![CDATA[Application Composer]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Step Functions]]></category><category><![CDATA[developer experience]]></category><category><![CDATA[Amazon Bedrock]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Sat, 10 Feb 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707890193839/f8a98d19-15b7-4f7d-9a5e-abb54577532f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AWS Step Functions has become one of the crucial architectural components for prompting Serverless orchestrations on AWS. Since Step Functions's launch, defining the state machine using JSON has been a pain point.</p>
<p>In this blog, we will look at the importance of IaC and how tools like Application Composer and Workflow Studio enable it with the low-code approach of drag-and-drop of components with the comfort of VS Code.</p>
<h1 id="heading-infrastructure-as-code-iac">Infrastructure as Code (IaC)</h1>
<p>When building applications on the cloud, irrespective of Serverless, Containers, or Virtual Machines, it is recommended to use the Infrastructure as Code (IaC) approach so that provisioning and deploying resources to the cloud is automated via workflows or IaC tools, with a single source of truth where the configurations of all the needed resources are defined.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707844629719/410bae99-f882-4bcd-aba7-ab91ff9d6aff.png" alt="Using IaC as part of developer workflows" class="image--center mx-auto" /></p>
<p>You can read more about <a target="_blank" href="https://blog.theserverlessterminal.com/serverless-apps-why-iac-should-be-the-direction">why IaC should be the direction for building applications</a>.</p>
<h1 id="heading-workflow-studio">Workflow Studio</h1>
<p><a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2021/announcing-workflow-studio-a-new-low-code-visual-workflow-designer-foraws-step-functions/">AWS Step Functions launched Workflow Studio in 2021</a>, a low-code tool that has been revolutionary, offering drag and drop for over 1000 API actions across multiple AWS Services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707829499925/6177060b-50e9-480e-b1fc-5dd171cec475.gif" alt="Workflow studio demonstration of creating a state machine with drag and drop" class="image--center mx-auto" /></p>
<p>Workflow Studio enables visually building and understanding the workflow execution, with different AWS Services' API actions invoked via optimized or SDK integrations. As part of the integration, you can define the API parameters and share data across different States.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707829998108/3e2e2fbb-92e1-4b15-b7da-e367ae736815.gif" alt="Using the Workflow Studio to generate ASL" class="image--center mx-auto" /></p>
<p>Workflow Studio generates the Amazon States Language (ASL) in real-time based on the States defined visually, with all their parameters. Workflow Studio also facilitates configurations such as error handling and input/output processing for different states, and lets you define the flow with Parallel, Map for loops, and Choice for branching.</p>
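<p>For instance, branching with a <code>Choice</code> state looks like this in the generated ASL (the state and field names here are illustrative):</p>
<pre><code class="lang-json">"Check Order Total": {
    "Type": "Choice",
    "Choices": [
        {
            "Variable": "$.total",
            "NumericGreaterThan": 100,
            "Next": "Apply Discount"
        }
    ],
    "Default": "Process Order"
}
</code></pre>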
<h1 id="heading-application-composer">Application Composer</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707842329475/f7ecc5ae-7343-4855-9f01-9d6d0ab8b952.png" alt="Application Composer adds to improved developer productivity" class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://aws.amazon.com/application-composer">AWS Application Composer</a> is a visual IaC builder for drag-and-drop designs of different AWS Services on a canvas, generating an equivalent Infrastructure as Code (IaC) template as a SAM (Serverless Application Model) template in YAML. Designing applications in the Google Chrome browser, with the generated SAM template synchronized in real-time to the local file system, was the first step in this IaC generation experience.</p>
<p>Application Composer initially supported a few AWS Serverless services and now it <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/09/aws-application-composer-1000-cloudformation-resources/">supports over 1000 CloudFormation supported resources</a>. During AWS re:Invent 2023, <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/ide-extension-aws-application-composer/">Application Composer announced the IDE (VS Code) experience of building IaC as part of AWS Toolkit</a> where you can use Application Composer to design your architectures while the SAM template would be generated in your VS Code directory.</p>
<p>Now you are also able to use <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/aws-application-composer-step-functions-workflow-studio/">AWS Step Functions' Workflow Studio in the Application Composer VS Code experience</a> as the integration enables using Workflow Studio that generates Amazon States Language (ASL) in real-time which synchronizes either with SAM template or as a <code>asl.json</code> or <code>asl.yaml</code> files in the local project directory.</p>
<h1 id="heading-application-composer-workflow-studio-on-vs-code">Application Composer + Workflow Studio on VS Code</h1>
<p>Install the latest <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.aws-toolkit-vscode">VS Code extension of AWS Toolkit</a> that includes Application Composer with Workflow Studio.</p>
<p>In an empty project directory, create a <code>template.yaml</code> file and click on the "Application Composer" icon on the top right, which launches Application Composer in VS Code.</p>
<h2 id="heading-creating-a-state-machine-with-an-external-asl-file">Creating a State Machine with an external ASL file</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707845515709/a3657a54-5bc9-42f3-8f27-aacdf24d922b.gif" alt="Application Composer with Step Function resource creation with boiler plate State Machine" class="image--center mx-auto" /></p>
<p>When you drag and drop the <code>StateMachine</code> resource, it generates a boilerplate State Machine with a Lambda task, and the equivalent changes are made to <code>template.yaml</code> with the State Machine resource. Since the option of an external file for the State Machine ASL was selected, a <code>statemachine.asl.json</code> file is generated with the ASL definition of the Lambda task.</p>
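<p>The corresponding entry in <code>template.yaml</code> looks roughly like this (the logical IDs are illustrative); <code>DefinitionUri</code> points at the external ASL file, while <code>DefinitionSubstitutions</code> maps placeholders in the ASL (such as <code>${MyDynamoDBTable}</code>) to values from the template:</p>
<pre><code class="lang-yaml">Resources:
  StateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionUri: statemachine.asl.json
      DefinitionSubstitutions:
        MyDynamoDBTable: !Ref Table
</code></pre>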
<h2 id="heading-defining-state-machine-using-workflow-studio">Defining State Machine using Workflow Studio</h2>
<p>In Application Composer, the <code>StateMachine</code> resource has an option to launch Workflow Studio locally in VS Code.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707846308891/0aa4b6c5-cb0e-4a23-99cc-8281d0b33dc4.png" alt="Launching Workflow Studio from Application Composer" class="image--center mx-auto" /></p>
<h3 id="heading-bedrocks-invokemodel-task">Bedrock's <code>InvokeModel</code> task</h3>
<p>At AWS re:Invent 2023, <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/aws-step-functions-optimized-integration-bedrock/">Step Functions launched the support for optimized integration for Amazon Bedrock</a> that enables the state machines to invoke Bedrock APIs.</p>
<p>From the list of supported resources on Workflow Studio, choose Bedrock's <code>InvokeModel</code> API action, which requires you to define the LLM model used on Bedrock along with the input.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707846209713/9be56c02-2b90-4995-981a-4a502518e894.gif" alt="Adding Bedrock InvokeModel API via Workflow Studio" class="image--center mx-auto" /></p>
<p>Below is the generated ASL from Workflow Studio.</p>
<pre><code class="lang-json"><span class="hljs-string">"Bedrock InvokeModel"</span>: {
    <span class="hljs-attr">"Type"</span>: <span class="hljs-string">"Task"</span>,
    <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:states:::bedrock:invokeModel"</span>,
    <span class="hljs-attr">"Parameters"</span>: {
        <span class="hljs-attr">"ModelId"</span>: <span class="hljs-string">"arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1"</span>,
            <span class="hljs-attr">"Body"</span>: {
                <span class="hljs-attr">"prompt"</span>: <span class="hljs-string">"Human:Tell me a fun story about MARVEL characters using Infrastructure as Code to build Serverless Applications\nAssistant:"</span>,
                <span class="hljs-attr">"max_tokens_to_sample"</span>: <span class="hljs-number">2000</span>
            }
        },
        <span class="hljs-attr">"ResultPath"</span>: <span class="hljs-string">"$.response"</span>
}
</code></pre>
<h3 id="heading-adding-a-dynamodb-putitem-task">Adding a DynamoDB <code>PutItem</code> task</h3>
<p>Now that Bedrock generates a story, it has to be stored in DynamoDB. To do so, in Workflow Studio add the <code>PutItem</code> API action to the state machine definition, where the table name is passed with a CloudFormation substitution and the story comes from the previous task's response. The <code>States.UUID()</code> intrinsic function is also used to auto-generate UUIDs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707847003215/e3c64f5b-1cb9-48b3-8338-04b2abfb4937.gif" alt class="image--center mx-auto" /></p>
<pre><code class="lang-json"><span class="hljs-string">"DynamoDB PutItem"</span>: {
    <span class="hljs-attr">"Type"</span>: <span class="hljs-string">"Task"</span>,
    <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:states:::aws-sdk:dynamodb:putItem"</span>,
    <span class="hljs-attr">"Parameters"</span>: {
        <span class="hljs-attr">"TableName"</span>: <span class="hljs-string">"${MyDynamoDBTable}"</span>,
            <span class="hljs-attr">"Item"</span>: {
                <span class="hljs-attr">"id"</span>: {
                    <span class="hljs-attr">"S.$"</span>: <span class="hljs-string">"States.UUID()"</span>
                },
                <span class="hljs-attr">"story"</span>: {
                    <span class="hljs-attr">"S.$"</span>: <span class="hljs-string">"$.response.Body.completion"</span>
                }
            }
        }
}
</code></pre>
<h2 id="heading-defining-dynamodb-resource">Defining DynamoDB resource</h2>
<p>In Workflow Studio, DynamoDB's <code>PutItem</code> action needs the table to exist; to create it, drag and drop DynamoDB onto the Application Composer canvas and define the table structure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707847423829/581f0f32-18e6-4976-a469-81e9c82b2166.gif" alt="Defining DynamoDB using Application Composer" class="image--center mx-auto" /></p>
<p>Once the DynamoDB table is defined, update the CloudFormation references for the State Machine to use the table that is defined in the SAM template.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707847660337/4a2d6a15-a8d8-48a9-9d42-411abf76bfe9.gif" alt="Using CloudFormation reference for DDB table name on State Machine" class="image--center mx-auto" /></p>
<p>Application Composer can detect the different resources used in the SAM project and provides all the resource options when using <code>!Ref</code>.</p>
<h2 id="heading-updating-iam-execution-role-for-state-machine">Updating IAM execution role for State Machine</h2>
<p>When the <code>StateMachine</code> is created, Application Composer adds the <code>AWSXrayWriteOnlyAccess</code> policy and CloudWatch Logs permissions by default. However, even though the State Machine is defined, Step Functions doesn't auto-generate the needed IAM execution role permissions, so we have to update the IAM policy with access to the DynamoDB <code>PutItem</code> and Bedrock <code>InvokeModel</code> APIs.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Policies:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">AWSXrayWriteOnlyAccess</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">Statement:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
              <span class="hljs-attr">Action:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:CreateLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:GetLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:UpdateLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:DeleteLogDelivery</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:ListLogDeliveries</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:PutResourcePolicy</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:DescribeResourcePolicies</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">logs:DescribeLogGroups</span>
              <span class="hljs-attr">Resource:</span> <span class="hljs-string">'*'</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">Statement:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
              <span class="hljs-attr">Action:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">dynamodb:PutItem</span>
              <span class="hljs-attr">Resource:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">Table.Arn</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">Statement:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
              <span class="hljs-attr">Action:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">bedrock:InvokeModel</span>
              <span class="hljs-attr">Resource:</span> <span class="hljs-string">arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1</span>
</code></pre>
<h1 id="heading-deploy-using-sam-sync">Deploy using SAM sync</h1>
<p>Now that the SAM template is complete with Application Composer and Workflow Studio, use <code>sam sync --stack-name "&lt;your-stack-name&gt;"</code> to deploy the resources to your AWS Account.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707848254096/01ec3fa9-6ea2-488f-85a3-58f9969a4ffe.gif" alt="Deploying the stack to cloud via sam sync" class="image--center mx-auto" /></p>
<h1 id="heading-its-deployed">It's deployed!</h1>
<p>Now that the State Machine is deployed, time to start the execution to see what story about "Marvel heroes using IaC for Serverless" is generated.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707848519520/4250418c-3fe1-4a19-91fc-e31b39a30550.png" alt="Fun story generated with Gen AI" class="image--center mx-auto" /></p>
<p>Check out Arshad Zackeriya and me talking about Application Composer on <a target="_blank" href="https://www.youtube.com/channel/UCr8ggoShNiCvMvyMEKlpxnA">The Zacs' Show Talking AWS</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Cloudy with a Chance of Mind-Blown: AWS re:Invent 2023]]></title><description><![CDATA[Last week I experienced AWS re:Invent 2023 in person, it was a great time to connect with fellow Serverless Minds (a.k.a like-minded folks) and have casual chats and serious discussions about Serverless.
Amazon PartyRock was launched a few days befor...]]></description><link>https://blog.theserverlessterminal.com/cloudy-with-a-chance-of-mind-blown-aws-reinvent-2023</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/cloudy-with-a-chance-of-mind-blown-aws-reinvent-2023</guid><category><![CDATA[AWS]]></category><category><![CDATA[community]]></category><category><![CDATA[reInvent2023]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Thu, 07 Dec 2023 08:14:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1701884806337/6d892154-dd32-4cf9-b322-a52cc8b4af21.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week I experienced AWS re:Invent 2023 in person, it was a great time to connect with fellow Serverless Minds (a.k.a like-minded folks) and have casual chats and serious discussions about Serverless.</p>
<p><a target="_blank" href="https://partyrock.aws/">Amazon PartyRock</a> was launched a few days before re:Invent and guess what? I created an app to suggest a blog title with a few descriptions and the title of this blog is generated with PartyRock!</p>
<p>The theme of re:Invent was around Generative AI, along with Dr. Werner Vogels' keynote, which emphasized cost and sustainability while being Frugal Architects. I remember how re:Invent 2022 was about making everything "Serverless"; just a year later, we are still on track to lose the meaning of "Serverless" and make it all about "GenAI".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701874243955/88ad1d5c-44f0-48e0-ba91-03b828b97620.png" alt="Everyone talking about &quot;Generative AI&quot; - Image generated with Amazon PartyRock and Amazon Bedrock with Stability AI." class="image--center mx-auto" /></p>
<p><em>Everyone talking about "Generative AI" - Image generated using Amazon PartyRock and Amazon Bedrock with Stability AI.</em></p>
<h2 id="heading-keynotes">Keynote(s)</h2>
<p>Adam Selipsky's keynote included the announcement of a new AWS service - <a target="_blank" href="https://aws.amazon.com/q/">Amazon Q</a> - which helps developers during development and offers a companion-like developer experience when using AWS services.</p>
<p>Dr. Werner Vogels' keynote, as I mentioned, emphasized being "The Frugal Architect": designing and architecting our applications and workloads on AWS with the known pillars of the Well-Architected Framework, but also as cost-centric and sustainability-centric architects, which goes a long way!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701875018299/720e2bd5-ded1-4226-b1fc-4eec48fd0a47.png" alt="Werner's The Frugal Architect laws" class="image--center mx-auto" /></p>
<p>Werner's The Frugal Architect states laws that enable architects to be cost-efficient through the lens of design, measure, and optimize. As developers, architects, and techies building on the cloud, we must have a sense of cost and sustainability. It's time we take a step back and evaluate our architectures so that we are building for tomorrow, not just for today.</p>
<p>Werner's keynote announced the new <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/myapplications-view-manage-applications-aws/">myApplications</a>, a new Console experience with consolidated costs, health, and performance of our applications. <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/ide-extension-aws-application-composer/">Application Composer gets a makeover with an IDE extension</a> that not only brings the drag-and-drop experience but also adds the <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/aws-application-composer-step-functions-workflow-studio/">Step Functions workflow studio experience</a> (hopefully I'll get to try things out soon, with more blogs around it coming up). And also, Amazon Q is now part of the IDE.</p>
<p>As an AWS Hero, getting front-row seats to witness the keynotes was one of the best experiences. The shoutout to Heroes by Werner just before he started the keynote was a special moment!</p>
<h2 id="heading-aws-service-updates-and-announcements">AWS Service updates and announcements</h2>
<p>If you were actively following updates and announcements during the pre:Invent season, there were major ones aimed at improving the operational excellence of Serverless applications and bringing better monitoring and observability to the world of distributed systems.</p>
<p>ICYMI, <a target="_blank" href="https://www.theserverlessterminal.com/p/aws-lambda-9-years-of-serverless">The Serverless Terminal issue - AWS Lambda 9 years of Serverless AWSomeness #43</a> covers the pre:Invent announcements, with Lambda updates making the troubleshooting experience better and CloudWatch Logs supporting regex filter patterns.</p>
<blockquote>
<p>Dude, I've literally searched logs for specific keywords/patterns and the search on console/CLI has got on my nerves.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701883393690/7408a4f2-aeb5-439a-b4f6-35f67a2be69b.png" alt="Troubleshooting and debugging with searching logs like needle in a haystack- Image generated using Amazon PartyRock and Amazon Bedrock with Stability AI" class="image--center mx-auto" /></p>
<p><em>Troubleshooting and debugging with searching logs like a needle in a haystack - Image generated using Amazon PartyRock and Amazon Bedrock with Stability AI</em></p>
<p>I remember, years ago, my first-ever interaction with <em>Eric Johnson (Serverless DA, AWS)</em> was about <strong><em>how to migrate my rookie Serverless application that I built on the console to IaC</em></strong>; there was no better way than exporting it as a SAM template, but now you can export it into Application Composer, which is much smoother and simpler. After that poor console-management experience, I advocated for the IaC approach to building Serverless applications. Good to see that it's all falling into place.</p>
<p>While the week of re:Invent was hectic enough, I also challenged myself to publish <a target="_blank" href="https://www.theserverlessterminal.com/p/weve-always-done-it-this-way-44">The Serverless Terminal issue - We've always done it this way #44</a>, which calls out the different updates, from Lambda's new runtime support to Step Functions gaining HTTPS endpoint invocation, making integration with third-party APIs much easier. And how can I forget the Bedrock integration in Step Functions!? With all that integration, if the state machine fails, there is a way to restart the workflow (no, not the entire workflow; rather, from the point of failure).</p>
<p>I hold a strong opinion about what Serverless is, because many server-full services have termed some of their offerings "Serverless" even when you pay a flat price for non-usage. Are we losing out on the real Serverless features and services in this journey?</p>
<h2 id="heading-connecting-with-cloudserverless-minds">Connecting with Cloud/Serverless Minds</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701886093367/10d74a47-11fe-45bd-9a48-96a3b8fae220.png" alt="Connecting with Serverless Minds, running out of space to fit all photos :P" class="image--center mx-auto" /></p>
<p>This was my first re:Invent, and I was meeting the majority of these folks for the first time. You ask, whom did I meet? AWS Heroes, AWS Community Builders, AWS UG Leaders, and the AWS folks who listened to me whine and patiently gave an ear to my queries and helped - the AWS Developer Relations team and the AWS service teams (PMs and engineers); the list does not end.</p>
<p>The conversations, from what they are building with Serverless to unlocking the potential of Serverless, left me with the thought - "<em>wow, that is possible!!?</em>" I've seen so many of them building content around cloud and Serverless, but the in-person connection is hard to put into words.</p>
<p>Imposter syndrome is real. Although many of these great conversations lasted only a few minutes, and we had known each other only over the internet, everyone was so welcoming; for a first-timer, I bet most of them gave me tips on how to survive "the week".</p>
<p>I finally met a fellow Bangalorean - <em>Praneeta Prakash</em> (<em>AWS Modern App Experience PM</em>), whose Twitter bio says Bengaluru hudugi - and we conversed in my native language, <em>Kannada</em>. Check out the <a target="_blank" href="https://x.com/praneetaprakash/status/1730058181198610435">tweet</a>.</p>
<p>Beyond just meeting Serverless minds and having conversations, I did something different with <em>Luciano Mammino (AWS Serverless Hero)</em>, who co-runs the AWS Bites podcast: we took our conversation from an in-person chat to a recording, with all the noise of the Expo in the background. Check out the <a target="_blank" href="https://awsbites.com/107-expert-opinions-from-re-invent-2023/">podcast</a> featuring some brilliant folks!</p>
<h2 id="heading-aws-sessions">AWS Sessions</h2>
<p>Something I'm not worthy to talk about 'cause I didn't attend them! Sessions of different categories and levels happened all across the Vegas Strip in multiple hotels and conference areas; although I was in the Venetian, I was not able to make it to the sessions. However, I did catch up with many speakers whose sessions I had booked but didn't make it to, and we spoke mostly about things around their session topics.</p>
<p>Guess what? All recorded sessions during re:Invent are now available on demand and I created a <a target="_blank" href="https://x.com/zachjonesnoel/status/1731660115798045141?s=20">Twitter thread</a> to keep a check on what sessions to watch before 2023 concludes.</p>
<h2 id="heading-community-panel-discussion">Community Panel discussion</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701934213412/4090a255-2138-4976-8804-3f6cc30d3ac5.jpeg" alt="Speaking at Community Panel Discussion" class="image--center mx-auto" /></p>
<p>Although this was my first re:Invent, I was able to represent AWS UG Bengaluru as an AWS Hero at the community panel discussion that happened in the DevLounge area of the Expo, along with <em>Kaushik, Lena, and Shafraz</em>.</p>
<p>We all shared our views of being part of the AWS Community and programs - AWS User Groups, AWS Community Builders, and AWS Heroes.</p>
<blockquote>
<p>I guess I can say - "I did speak at re:Invent"</p>
</blockquote>
<h2 id="heading-serverlessland-video">ServerlessLand Video</h2>
<p><em>Eric Johnson (Serverless DA, AWS)</em> and I had a chat on ServerlessLand Video talking about my Serverless journey and my role as a DA at Freshworks.<br />Check out the <a target="_blank" href="https://video.serverlessland.com/video/st-1EKVZfWqOUHYlX8x1qw0MTY">video</a>.</p>
<p><img src="https://pbs.twimg.com/media/GANCp2ha8AEruaA?format=jpg&amp;name=large" alt="Interview on ServerlessLand Video" class="image--center mx-auto" /></p>
<h2 id="heading-glorious-moment-for-freshworks">Glorious moment for Freshworks</h2>
<p>Freshworks was one of the sponsors of AWS re:Invent 2023, and we had our own wins, from sessions about how Freshworks is innovating with Generative AI to having <em>Adam Selipsky (CEO, AWS)</em> visit us at the booth and give us a shoutout in his social media posts.</p>
<p><img src="https://media.licdn.com/dms/image/D5622AQGKwyiKMHzBMQ/feedshare-shrink_2048_1536/0/1701261739547?e=1704931200&amp;v=beta&amp;t=NVkIp5FrY0D0k0sFimDbYL1e8xL4atY4ctgY_lDChbM" alt="Adam Selipsky at Freshworks booth" class="image--center mx-auto" /></p>
<p>Check out <a target="_blank" href="https://www.linkedin.com/posts/adamselipsky_awsreinvent-aws-activity-7135833454587383808-6uMH">Adam's LinkedIn post</a> for more details.</p>
<h2 id="heading-whats-next">What's next?</h2>
<p>I'm still re:Covering from re:Invent, but I loved being able to connect with community folks not only as people over the internet but now with a human connection. I look forward to collaborating with many of them (I wish it could be all of them) on many things with AWS UG Bengaluru, The Serverless Terminal, and The Zacs' Show Talking AWS.</p>
<p>To wrap up, I'd quote and agree with Werner -</p>
<blockquote>
<p>There has never been a better time to be a BUILDER</p>
</blockquote>
<p>With that, my first-ever in-person re:Invent is in the books!! Big cheers to <em>Farrah Campbell (Head of Modern Compute Community, AWS)</em>, <em>Taylor Jacobson (AWS Heroes PM, AWS),</em> AWS Developer Relations team, AWS Community and Freshworks for making re:Invent memorable.</p>
<p>P.S. I planned to make this a tech blog, but while jotting down my initial thoughts, I felt that my experience of re:Invent was worth sharing, rather than just a tech / cloud / Serverless takeaway from re:Invent.</p>
]]></content:encoded></item><item><title><![CDATA[Improved developer experience of GraphQL APIs with JavaScript resolvers for AWS AppSync APIs]]></title><description><![CDATA[AWS AppSync is a fully-managed serverless GraphQL API that enables serverless developers to build, and deploy and also a robust API layer for integrating seamlessly with other AWS services. GraphQL is destined to be the future of APIs and learn about...]]></description><link>https://blog.theserverlessterminal.com/improved-developer-experience-of-graphql-apis-with-javascript-resolvers-for-aws-appsync-apis</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/improved-developer-experience-of-graphql-apis-with-javascript-resolvers-for-aws-appsync-apis</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[AppSync]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[developer experience]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Thu, 02 Nov 2023 18:14:57 GMT</pubDate><content:encoded><![CDATA[<p><a target="_blank" href="https://aws.amazon.com/appsync/">AWS AppSync</a> is a fully managed serverless GraphQL service that enables serverless developers to build and deploy a robust API layer that integrates seamlessly with other AWS services. <a target="_blank" href="https://community.aws/posts/four-reasons-why-graphql-is-the-future">GraphQL is destined to be the future of APIs</a>, and you can learn how to get started with <a target="_blank" href="https://blog.theserverlessterminal.com/aws-appsync-101">AppSync</a>. In this blog, we will look into how <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/08/aws-appsync-javascript-all-resolvers-graphql-apis/">AWS AppSync's support for JavaScript resolvers</a> has improved the developer experience.</p>
<h2 id="heading-appsync-resolvers"><strong>AppSync resolvers</strong></h2>
<p><a target="_blank" href="https://docs.aws.amazon.com/appsync/latest/devguide/resolver-components.html">AppSync resolvers</a> are the components responsible for integrating with the different supported AppSync data sources, such as AWS Lambda functions, Amazon Aurora, Amazon DynamoDB, HTTP endpoints, and many others. These resolvers have mapping templates for the GraphQL <code>type</code>, <code>query</code>, <code>mutation</code> and <code>subscription</code>. Resolvers can be written in the Velocity Template Language (VTL) or in the JavaScript runtime.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698410992182/d4a0eea0-72cb-420c-a655-a0d2b4f58e68.png" alt="AppSync components with AppSync resolvers" class="image--center mx-auto" /></p>
<h3 id="heading-why-are-resolvers-important"><strong>Why are resolvers important?</strong></h3>
<p>Whenever integrating with AWS services, resolvers (VTL or JavaScript based) help with data manipulation: constructing the request mapping that the data source requires, along with the response mapping, where the response is shaped to the needed GraphQL schema. Additionally, resolvers support sharing data between different resolver functions in a <code>pipeline resolver</code> through the <code>context</code> and <code>arguments</code>. In both <code>unit resolver</code> and <code>pipeline resolver</code> types, AppSync provides different utility functions for transforming JSON, Arrays, and Lists, and for generating UUIDs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698411038725/d49acfc4-680c-4e84-abc9-54ba432f1a1f.png" alt="Screenshot of a AppSync Pipeline resolver" class="image--center mx-auto" /></p>
<p>The above screenshot from the AppSync console shows how a pipeline resolver can have multiple functions, each integrating with its respective data source and sharing data among the different resolver functions using <code>context.stash</code>.</p>
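<p>To make the stash mechanics concrete, here is a minimal plain-Node.js simulation of that data sharing. This is not the AppSync runtime, and the function names and values are made up; it only sketches how one pipeline function stashes a value that a later function reads.</p>

```javascript
// Minimal local simulation of AppSync pipeline-resolver data sharing.
// NOT the AppSync runtime: it only illustrates how an earlier pipeline
// function can stash a value that a later function in the pipeline reads.

function lookupOwner(ctx) {
  // First pipeline function: stash a value for the functions that follow.
  ctx.stash.owner = 'user-123'; // hypothetical identity-lookup result
  return ctx;
}

function buildPutItem(ctx) {
  // Second pipeline function: read what the earlier function stashed.
  return { owner: ctx.stash.owner, content: ctx.args.input.content };
}

const ctx = { args: { input: { content: 'my first note' } }, stash: {} };
const result = buildPutItem(lookupOwner(ctx));
console.log(JSON.stringify(result));
// → {"owner":"user-123","content":"my first note"}
```

<p>In a real pipeline resolver, AppSync runs the functions in order and carries the same <code>ctx.stash</code> object through each of them, so the shape above is the mental model, not the wiring.</p>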
<h2 id="heading-vtl-resolvers"><strong>VTL resolvers</strong></h2>
<p>Velocity Template Language (VTL) is used to generate dynamic JSON content and is based on the Java environment.</p>
<p>For instance, consider the AppSync model <code>PrivateNote</code>:</p>
<pre><code class="lang-javascript">type PrivateNote {
    <span class="hljs-attr">id</span>: ID!
    content: <span class="hljs-built_in">String</span>!
    createdAt: AWSDateTime!
    updatedAt: AWSDateTime!
    owner: <span class="hljs-built_in">String</span>
}
</code></pre>
<p>Consider a mutation <code>createPrivateNote</code> for creating new notes. A typical VTL resolver sets the different default values and validates the user's authentication with Amazon Cognito to identify the owner of the note, using the <code>util</code> functions on AppSync - <code>$util.defaultIfNull()</code>, <code>$util.dynamodb.toDynamoDBJson()</code> and <code>$util.dynamodb.toMapValues()</code> - to make the DynamoDB <code>PutItem</code> operation.</p>
<pre><code class="lang-JSON">## [Start] Initialization default values. **
$util.qr($ctx.stash.put(<span class="hljs-string">"defaultValues"</span>, $util.defaultIfNull($ctx.stash.defaultValues, {})))
#set( $createdAt = $util.time.nowISO8601() )
$util.qr($ctx.stash.defaultValues.put(<span class="hljs-string">"id"</span>, $util.autoId()))
$util.qr($ctx.stash.defaultValues.put(<span class="hljs-string">"createdAt"</span>, $createdAt))
$util.qr($ctx.stash.defaultValues.put(<span class="hljs-string">"updatedAt"</span>, $createdAt))
$util.toJson({
  <span class="hljs-attr">"version"</span>: <span class="hljs-string">"2018-05-29"</span>,
  <span class="hljs-attr">"payload"</span>: {}
})
## Check for Cognito authorization and set the owner
#if( $util.authType() == <span class="hljs-string">"User Pool Authorization"</span> )
  #if( !$isAuthorized )
    #set( $ownerEntity0 = $util.defaultIfNull($ctx.args.input.owner, <span class="hljs-literal">null</span>) )
    #set( $ownerClaim0 = $util.defaultIfNull($ctx.identity.claims.get(<span class="hljs-string">"sub"</span>), <span class="hljs-string">"___xamznone____"</span>) )
    #set( $currentClaim1 = $util.defaultIfNull($ctx.identity.claims.get(<span class="hljs-string">"username"</span>), $util.defaultIfNull($ctx.identity.claims.get(<span class="hljs-string">"cognito:username"</span>), <span class="hljs-string">"___xamznone____"</span>)) )
    #set( $ownerClaim0 = <span class="hljs-string">"$ownerClaim0::$currentClaim1"</span> )
    #set( $ownerClaimsList0 = [] )
    $util.qr($ownerClaimsList0.add($util.defaultIfNull($ctx.identity.claims.get(<span class="hljs-string">"sub"</span>), <span class="hljs-string">"___xamznone____"</span>)))
    $util.qr($ownerClaimsList0.add($util.defaultIfNull($ctx.identity.claims.get(<span class="hljs-string">"username"</span>), $util.defaultIfNull($ctx.identity.claims.get(<span class="hljs-string">"cognito:username"</span>), <span class="hljs-string">"___xamznone____"</span>))))
    #set( $ownerAllowedFields0 = [<span class="hljs-string">"id"</span>,<span class="hljs-string">"content"</span>] )
    #set( $isAuthorizedOnAllFields0 = <span class="hljs-literal">true</span> )
    #if( $ownerClaim0 == $ownerEntity0 || $ownerClaimsList0.contains($ownerEntity0) )
      #if( $isAuthorizedOnAllFields0 )
        #set( $isAuthorized = <span class="hljs-literal">true</span> )
      #else
        $util.qr($allowedFields.addAll($ownerAllowedFields0))
      #end
    #end
    #if( $util.isNull($ownerEntity0) &amp;&amp; !$ctx.args.input.containsKey(<span class="hljs-string">"owner"</span>) )
      $util.qr($ctx.args.input.put(<span class="hljs-string">"owner"</span>, $ownerClaim0))
      #if( $isAuthorizedOnAllFields0 )
        #set( $isAuthorized = <span class="hljs-literal">true</span> )
      #else
        $util.qr($allowedFields.addAll($ownerAllowedFields0))
      #end
    #end
  #end
#end
## [Start] Create Request template. **
#set( $args = $util.defaultIfNull($ctx.stash.transformedArgs, $ctx.args) )
## Set the default values to put request **
#set( $mergedValues = $util.defaultIfNull($ctx.stash.defaultValues, {}) )
## copy the values from input **
$util.qr($mergedValues.putAll($util.defaultIfNull($args.input, {})))
## set the typename **
$util.qr($mergedValues.put(<span class="hljs-string">"__typename"</span>, <span class="hljs-string">"PrivateNote"</span>))
#set( $PutObject = {
  <span class="hljs-attr">"version"</span>: <span class="hljs-string">"2018-05-29"</span>,
  <span class="hljs-attr">"operation"</span>: <span class="hljs-string">"PutItem"</span>,
  <span class="hljs-attr">"attributeValues"</span>:   $util.dynamodb.toMapValues($mergedValues),
  <span class="hljs-attr">"condition"</span>: $condition
} )
#if( $args.condition )
  $util.qr($ctx.stash.conditions.add($args.condition))
#end
## Begin - key condition **
#if( $ctx.stash.metadata.modelObjectKey )
  #set( $keyConditionExpr = {} )
  #set( $keyConditionExprNames = {} )
  #foreach( $entry in $ctx.stash.metadata.modelObjectKey.entrySet() )
    $util.qr($keyConditionExpr.put(<span class="hljs-string">"keyCondition$velocityCount"</span>, {
  <span class="hljs-attr">"attributeExists"</span>: <span class="hljs-literal">false</span>
}))
    $util.qr($keyConditionExprNames.put(<span class="hljs-string">"#keyCondition$velocityCount"</span>, <span class="hljs-string">"$entry.key"</span>))
  #end
  $util.qr($ctx.stash.conditions.add($keyConditionExpr))
#else
  $util.qr($ctx.stash.conditions.add({
  <span class="hljs-attr">"id"</span>: {
      <span class="hljs-attr">"attributeExists"</span>: <span class="hljs-literal">false</span>
  }
}))
#end
## End - key condition **
## Start condition block **
#if( $ctx.stash.conditions &amp;&amp; $ctx.stash.conditions.size() != <span class="hljs-number">0</span> )
  #set( $mergedConditions = {
  <span class="hljs-attr">"and"</span>: $ctx.stash.conditions
} )
  #set( $Conditions = $util.parseJson($util.transform.toDynamoDBConditionExpression($mergedConditions)) )
  #if( $Conditions.expressionValues &amp;&amp; $Conditions.expressionValues.size() == <span class="hljs-number">0</span> )
    #set( $Conditions = {
  <span class="hljs-attr">"expression"</span>: $Conditions.expression,
  <span class="hljs-attr">"expressionNames"</span>: $Conditions.expressionNames
} )
  #end
  ## End condition block **
#end
#if( $Conditions )
  #if( $keyConditionExprNames )
    $util.qr($Conditions.expressionNames.putAll($keyConditionExprNames))
  #end
  $util.qr($PutObject.put(<span class="hljs-string">"condition"</span>, $Conditions))
#end
#if( $ctx.stash.metadata.modelObjectKey )
  $util.qr($PutObject.put(<span class="hljs-string">"key"</span>, $ctx.stash.metadata.modelObjectKey))
#else
  #set( $Key = {
  <span class="hljs-attr">"id"</span>:   $util.dynamodb.toDynamoDB($mergedValues.id)
} )
  $util.qr($PutObject.put(<span class="hljs-string">"key"</span>, $Key))
#end
$util.toJson($PutObject)
## [End] Create Request template. **
</code></pre>
<p>VTL can be powerful, but in certain scenarios it just isn't <em>developer friendly</em> for a developer building a Serverless GraphQL API.</p>
<h3 id="heading-debugging-vtl-resolvers"><strong>Debugging VTL resolvers</strong></h3>
<p>In a complex <code>unit resolver</code> or <code>pipeline resolver</code>, debugging is tedious as the VTL code is executed at runtime and having debugging breakpoints or ways to track the flow of execution is hard.</p>
<p>It's possible to add debug logs using the util functions <code>$util.log.info()</code> and <code>$util.log.error()</code>, whose output is available in CloudWatch Logs as the execution happens. However, this is not an efficient way of debugging during development.</p>
<p>A good way to catch errors is in the response mapping template:</p>
<pre><code class="lang-JSON">#if( $ctx.error )
$util.error($ctx.error.message, $ctx.error.type)
#else
$util.toJson($ctx.result)
#end
</code></pre>
<h3 id="heading-vtl-verbose-syntax"><strong>VTL verbose syntax</strong></h3>
<p>When working with <code>if-else</code> statements or <code>foreach</code> looping statements in VTL, the syntax can be verbose.</p>
<pre><code class="lang-JSON">#set($start = <span class="hljs-number">0</span>)
#set($end = <span class="hljs-number">5</span>)
#set($range = [$start..$end])

#foreach($i in $range)
   $util.qr($myMap.put($i, <span class="hljs-string">"${i}foo"</span>))
#end

#if($condition)
  #set($result = <span class="hljs-string">"True"</span>)
#else
  #set($result = <span class="hljs-string">"False"</span>)
#end
$util.toJson($result)
</code></pre>
<h3 id="heading-complexity-of-the-resolver"><strong>Complexity of the resolver</strong></h3>
<p>With this syntax, complex resolvers that integrate directly with a database or an HTTP data source can become hard to handle given the limited utility handler functions, and in the case of a <code>pipeline resolver</code>, there can be a huge learning curve in how to use the different functions and map the request/response for all the functions and the parent resolver.</p>
<p>For some use cases and scenarios, using VTL resolvers may turn out to be overwhelming, which results in developers moving towards Lambda function resolvers.</p>
<h2 id="heading-javascript-resolvers"><strong>JavaScript resolvers</strong></h2>
<p>AppSync's JavaScript resolvers enable developers to use JavaScript runtime for all the resolvers instead of VTL.</p>
<p>Using the same example of <code>createPrivateNote</code> mutation, the resolver for it would be something like -</p>
<pre><code class="lang-JavaScript"><span class="hljs-keyword">import</span> { util } <span class="hljs-keyword">from</span> <span class="hljs-string">'@aws-appsync/utils'</span>;

<span class="hljs-comment">/**
 * Puts an item into the DynamoDB table.
 * @param {import('@aws-appsync/utils').Context&lt;{input: any}&gt;} ctx the context
 * @returns {import('@aws-appsync/utils').DynamoDBPutItemRequest} the request
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">request</span>(<span class="hljs-params">ctx</span>) </span>{
    <span class="hljs-keyword">const</span> { id, ...values } = ctx.args.input;
    <span class="hljs-keyword">const</span> key = { id };
    <span class="hljs-keyword">const</span> condition = { <span class="hljs-attr">and</span>: [] };
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> k <span class="hljs-keyword">in</span> key) {
        condition.and.push({ [k]: { <span class="hljs-attr">attributeExists</span>: <span class="hljs-literal">false</span> } });
    }
    <span class="hljs-keyword">return</span> dynamodbPutRequest({ key, values, condition });
}

<span class="hljs-comment">/**
 * Returns the item or throws an error if the operation failed.
 * <span class="hljs-doctag">@param <span class="hljs-type">{import('@aws-appsync/utils').Context}</span> </span>ctx the context
 * <span class="hljs-doctag">@returns <span class="hljs-type">{*}</span> </span>the result
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">response</span>(<span class="hljs-params">ctx</span>) </span>{
    <span class="hljs-keyword">const</span> { error, result } = ctx;
    <span class="hljs-keyword">if</span> (error) {
        <span class="hljs-keyword">return</span> util.appendError(error.message, error.type, result);
    }
    <span class="hljs-keyword">return</span> result;
}

<span class="hljs-comment">/**
 * Helper function to create a new item
 * <span class="hljs-doctag">@returns <span class="hljs-type">{*}</span> </span>the request
 */</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">dynamodbPutRequest</span>(<span class="hljs-params">params</span>) </span>{
    <span class="hljs-keyword">const</span> { key, values, <span class="hljs-attr">condition</span>: inCondObj } = params;

    <span class="hljs-keyword">let</span> condition;
    <span class="hljs-keyword">if</span> (inCondObj) {
        condition = <span class="hljs-built_in">JSON</span>.parse(util.transform.toDynamoDBConditionExpression(inCondObj));
        <span class="hljs-keyword">if</span> (condition &amp;&amp; condition.expressionValues &amp;&amp; !<span class="hljs-built_in">Object</span>.keys(condition.expressionValues).length) {
            <span class="hljs-keyword">delete</span> condition.expressionValues;
        }
    }
    <span class="hljs-keyword">return</span> {
        <span class="hljs-attr">operation</span>: <span class="hljs-string">'PutItem'</span>,
        <span class="hljs-attr">key</span>: util.dynamodb.toMapValues(key),
        <span class="hljs-attr">attributeValues</span>: util.dynamodb.toMapValues(values),
        condition,
    }
}
</code></pre>
<p>This defines the <code>request()</code> and <code>response()</code> methods for mapping with AppSync's mutation. With the familiarity of JavaScript, it's easy to modularize your resolvers into different JavaScript functions: as you see in the example above, <code>dynamodbPutRequest()</code> is invoked from the <code>request()</code> method, and when it returns, the <code>response()</code> method handles the response mapping against the defined schema.</p>
<p>In JavaScript resolvers, you can import <code>@aws-appsync/utils</code> for all the util helper methods such as <code>util.transform.toDynamoDBConditionExpression()</code> and <code>util.dynamodb.toMapValues()</code>.</p>
<h2 id="heading-a-closer-look-at-the-developer-experience"><strong>A closer look at the developer experience</strong></h2>
<p>Given the stats and popularity of Node.js for Serverless, developers are already familiar with JavaScript runtimes from their existing workloads with AWS Lambda functions and from their Infrastructure as Code (IaC), where Node.js and TypeScript have been widely adopted. Now that resolvers can also be built with JavaScript, this enhancement is a celebration for Serverless developers.</p>
<h3 id="heading-comfort-and-familiarity-with-the-tech-stack"><strong>Comfort and familiarity with the tech stack</strong></h3>
<p>Everyone loves to build with something they are familiar with. As a developer who has used VTL and has now played around with JavaScript resolvers, choosing JavaScript resolvers for building AppSync APIs is the faster and easier choice. Since developers are used to JavaScript in their backend and also their frontend, the learning curve is massively reduced compared to adopting VTL.</p>
<h3 id="heading-utilities-and-functions"><strong>Utilities and functions</strong></h3>
<p>JavaScript resolvers also come with utility helper functions via the <code>@aws-appsync/utils</code> package.</p>
<ul>
<li><p><code>@aws-appsync/eslint-plugin</code> is an ESLint plugin that detects issues in resolver code during development.</p>
</li>
<li><p>Using <code>util.transform</code> helper functions for easier transformations with DynamoDB and other data sources, with different filters on Maps and Lists.</p>
</li>
<li><p>Built-in modules for different data sources let developers use familiar functions and operations in a programming language they already know.</p>
</li>
<li><p>Apart from native JavaScript's flexibility in working with <code>datetime</code> values, there are <code>util.time</code> helper functions available.</p>
</li>
<li><p>Working with different type classes - <code>Array</code>, <code>String</code>, <code>Object</code> from JavaScript.</p>
</li>
</ul>
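<p>As a rough illustration of the kind of transformation these helpers perform, here is a hypothetical plain-JavaScript stand-in for <code>util.dynamodb.toMapValues()</code> that maps an object to DynamoDB's attribute-value format. The real helper lives in the AppSync runtime and covers many more types; this local sketch handles only strings, numbers, and booleans.</p>

```javascript
// Hypothetical local sketch of util.dynamodb.toMapValues(): converts a
// plain object into DynamoDB attribute-value JSON. Handles only strings,
// numbers, and booleans; the real AppSync helper covers many more types.
function toMapValues(obj) {
  const toAttr = (v) => {
    if (typeof v === 'string') return { S: v };
    if (typeof v === 'number') return { N: String(v) }; // DynamoDB numbers are strings
    if (typeof v === 'boolean') return { BOOL: v };
    throw new Error(`unsupported type: ${typeof v}`);
  };
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [k, toAttr(v)])
  );
}

console.log(JSON.stringify(toMapValues({ id: 'note-1', content: 'hello' })));
// → {"id":{"S":"note-1"},"content":{"S":"hello"}}
```

<p>Having this shape done for you is exactly why the utility functions matter: hand-writing attribute-value JSON (as the VTL examples earlier do via <code>$util.dynamodb</code>) is tedious and error-prone.</p>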
<h3 id="heading-going-type-safe-with-typescript"><strong>Going Type safe with TypeScript</strong></h3>
<p>Since JavaScript is supported, there are workarounds to author resolvers in TypeScript for stricter type safety.</p>
<pre><code class="lang-TypeScript"><span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> ddb <span class="hljs-keyword">from</span> <span class="hljs-string">'@aws-appsync/utils/dynamodb'</span>
<span class="hljs-keyword">import</span> { Context, util } <span class="hljs-keyword">from</span> <span class="hljs-string">'@aws-appsync/utils'</span>
<span class="hljs-keyword">import</span> { PrivateNote, CreatePrivateNoteVariables } <span class="hljs-keyword">from</span> <span class="hljs-string">'../src/API'</span>

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">request</span>(<span class="hljs-params">ctx: Context&lt;CreatePrivateNoteVariables&gt;</span>) </span>{
    <span class="hljs-keyword">return</span> ddb.put({
        key: { __typename: <span class="hljs-string">'PrivateNote'</span>, id: util.autoId() },
        item: ctx.args.input
    })
}

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">response</span>(<span class="hljs-params">ctx: Context</span>) </span>{
    <span class="hljs-keyword">return</span> ctx.result <span class="hljs-keyword">as</span> PrivateNote
}
</code></pre>
<p>The catch here is that AppSync doesn't directly support TypeScript, so it's possible to author AppSync resolvers in TypeScript only from your IDE, given that you have the configuration set up to build them into JavaScript code.</p>
<h3 id="heading-code-bundling-with-esbuild"><strong>Code bundling with esbuild</strong></h3>
<p>JavaScript resolvers support the usage of custom and external libraries, but there are some catches with the JS runtime requirements on AppSync. While bundling with <code>esbuild</code>, it's possible to include external libraries. Keep in mind that the <code>@aws-appsync/*</code> libraries are already available in the JS runtime and should not be bundled with the code.</p>
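<p>As a sketch, such a bundling command might look like this (file paths are illustrative; the flags are standard esbuild CLI options, with <code>--external</code> keeping the runtime-provided package out of the bundle):</p>

```shell
# Bundle a TypeScript resolver into a single ESM JavaScript file,
# excluding @aws-appsync/utils since the AppSync JS runtime provides it.
esbuild src/resolver.ts \
  --bundle \
  --external:@aws-appsync/utils \
  --format=esm \
  --target=esnext \
  --outfile=build/resolver.js
```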
<h3 id="heading-codebase-with-developer-tools"><strong>Codebase with developer tools</strong></h3>
<p>AppSync is deployed with AWS CDK and AWS Amplify which are developer-friendly tools for building and deploying AppSync to the cloud, these already support JavaScript and TypeScript. Building AppSync resolvers will make the codebase more organized with the same tech stack. While developing, IDE supports various extensions and tools for JavaScript making the code authoring process smoother for development and debugging.</p>
<h3 id="heading-whats-best-for-you">What's best for you</h3>
<p>Choosing what's best for you is really important. JavaScript, without a doubt, offers a better experience than VTL when building GraphQL APIs on AppSync. One pattern I've seen is that for simpler resolvers, such as when a Lambda function is the data source, it still makes sense to use VTL, since in that case VTL only forwards the context arguments to the Lambda function.</p>
]]></content:encoded></item><item><title><![CDATA[The Jitter Strategy for Step Functions Error Retries on the New Workflow Studio]]></title><description><![CDATA[AWS Step Function just announced a new enhanced error handling and retry mechanism in State Machine executions that enables more fine-grain control of error retry rules. And also uses the enchanced Workflow Studio authoring experience to build the wo...]]></description><link>https://blog.theserverlessterminal.com/the-jitter-strategy-for-step-functions-error-retries-on-the-new-workflow-studio</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/the-jitter-strategy-for-step-functions-error-retries-on-the-new-workflow-studio</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[AWS Step Functions]]></category><category><![CDATA[error handling]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Wed, 13 Sep 2023 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694716507673/3fc23da8-4ecb-40e8-af65-ae6ea0de604b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AWS Step Function just announced a <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/09/aws-step-functions-enhanced-error-handling/">new enhanced error handling and retry mechanism</a> in State Machine executions that enables more fine-grain control of error retry rules. And also uses the <a target="_blank" href="https://aws.amazon.com/blogs/compute/enhancing-workflow-studio-with-new-features-for-streamlined-authoring">enchanced Workflow Studio authoring experience</a> to build the workflow.</p>
<p>Read more about <a target="_blank" href="https://blog.theserverlessterminal.com/handling-errors-with-stepfunctions-sns-sdk-integration">how error handling works on Step Functions</a>; in this blog, we will focus on the new error-handling parameters for catch and retries on Step Functions, such as <code>MaxDelaySeconds</code> and <code>JitterStrategy</code>.</p>
<h2 id="heading-errors-in-step-functions">Errors in Step Functions</h2>
<p>During the execution of a State Machine, the execution can be interrupted by various errors, such as <code>States.Timeout</code> when a task has taken longer than the defined <code>TimeoutSeconds</code>, or when it failed to send a heartbeat within the defined <code>HeartbeatSeconds</code>.</p>
<p>Some of the possible errors are listed below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694626388997/c6e99ff7-01c7-46cc-bd77-fb7d3590070f.png" alt class="image--center mx-auto" /></p>
<p><code>States.All</code> wildcard is available on Step Functions to work with all the errors encountered during the execution.</p>
<h2 id="heading-deploying-a-state-machine">Deploying a State Machine</h2>
<p>Navigate to AWS Step Functions <em>Create state machine</em> and then select the <em>Orchestrate Lambda Functions</em> template with the new console experience.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694710086638/3666c21a-59c6-4516-9e8c-5714bc17a414.png" alt="Choosing a template to create State Machine on Workflow Studio" class="image--center mx-auto" /></p>
<p>Once the template is selected, Workflow Studio will give you more details of the template.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694710186979/5df8cc84-c725-4cfc-bf2e-55509aed6b13.png" alt="Choosing Orchestrate Lambda Function template and preview of the template before deploying to your AWS Account" class="image--center mx-auto" /></p>
<p>The <em>Orchestrate Lambda Functions</em> template showcases the stock buy/sell recommendation based on the stock price. Choose <em>Run a demo</em> option to deploy the state machine and other resources such as Lambda Functions with SNS and SQS to your AWS Account.</p>
<p>To test out the state machine, you could run a sample execution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694710569774/3aa57dfb-400f-4360-81b7-5f6754587e16.png" alt="A sample execution of the workflow which was successful." class="image--center mx-auto" /></p>
<h2 id="heading-updating-the-state-machine-with-error-retries">Updating the State Machine with error retries</h2>
<h3 id="heading-enabling-retry">Enabling retry</h3>
<p>Let's update the <em>Check Stock Price</em> state which invokes a Lambda function with an error retry with a few retry options</p>
<pre><code class="lang-javascript">{
    <span class="hljs-string">"Retry"</span>: [
        {
          <span class="hljs-string">"ErrorEquals"</span>: [
            <span class="hljs-string">"States.ALL"</span>
          ],
          <span class="hljs-string">"BackoffRate"</span>: <span class="hljs-number">2</span>,
          <span class="hljs-string">"IntervalSeconds"</span>: <span class="hljs-number">1</span>,
          <span class="hljs-string">"MaxAttempts"</span>: <span class="hljs-number">3</span>,
          <span class="hljs-string">"Comment"</span>: <span class="hljs-string">"Check Stock Price Lambda error"</span>,
          <span class="hljs-string">"MaxDelaySeconds"</span>: <span class="hljs-number">2</span>
        }
    ]
}
</code></pre>
<p>In this error retry snippet, the <code>States.ALL</code> wildcard matches all errors in this state and triggers a retry. This retry has a few other options -</p>
<ul>
<li><p><code>IntervalSeconds</code> is an integer that specifies the number of seconds before the first retry.</p>
</li>
<li><p><code>BackoffRate</code> is the multiplier applied to the previous retry interval to determine when the next retry occurs.</p>
</li>
<li><p><code>MaxAttempts</code> defines the maximum number of retries possible.</p>
</li>
<li><p><code>MaxDelaySeconds</code> defines the maximum time in seconds that the retry interval can increase.</p>
</li>
</ul>
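<p>The interplay of these options can be sketched in plain JavaScript (a simplified model of the documented behaviour, ignoring execution overhead):</p>

```javascript
// Compute the delay before each retry attempt: start at IntervalSeconds,
// multiply by BackoffRate after every attempt, cap at MaxDelaySeconds.
function retryDelays({ intervalSeconds, backoffRate, maxAttempts, maxDelaySeconds }) {
  const delays = [];
  let delay = intervalSeconds;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    delays.push(Math.min(delay, maxDelaySeconds ?? Infinity));
    delay *= backoffRate; // next delay grows by BackoffRate
  }
  return delays;
}

// With the retry snippet above: 1s, then 2s, then 4s capped to 2s.
console.log(retryDelays({ intervalSeconds: 1, backoffRate: 2, maxAttempts: 3, maxDelaySeconds: 2 }));
// → [ 1, 2, 2 ]
```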
<p>When the state machine is executed with an error retry in the first state,</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694713246550/49571c9e-b97b-4b71-9dcf-f67a26197f76.png" alt="Error retry for Check Stock Price state" class="image--center mx-auto" /></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Retry</td><td>Retry attempted after</td></tr>
</thead>
<tbody>
<tr>
<td>1st retry</td><td>1s (<code>IntervalSeconds</code>)</td></tr>
<tr>
<td>2nd retry</td><td>3s</td></tr>
<tr>
<td>3rd retry</td><td>6s</td></tr>
</tbody>
</table>
</div><h3 id="heading-enabling-retry-with-jitterstrategy">Enabling retry with JitterStrategy</h3>
<p>In the second <em>Generate Buy/Sell recommendation</em> state, enabling retry for all <code>States.ALL</code> wildcard with the below retry options -</p>
<pre><code class="lang-javascript">{
    <span class="hljs-string">"Retry"</span>: [
        {
          <span class="hljs-string">"ErrorEquals"</span>: [
            <span class="hljs-string">"States.ALL"</span>
          ],
          <span class="hljs-string">"JitterStrategy"</span>: <span class="hljs-string">"FULL"</span>,
          <span class="hljs-string">"Comment"</span>: <span class="hljs-string">"Buy/Sell recommendation error"</span>,
          <span class="hljs-string">"MaxAttempts"</span>: <span class="hljs-number">5</span>
        }
      ]
}
</code></pre>
<p>Along with the previously enabled options, the additional property set is <code>JitterStrategy</code> with the value <code>FULL</code>. When enabled, it randomizes the delay intervals so that retries aren't all fired at the same moment, which is very powerful when Lambda is invoked concurrently.</p>
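<p>A minimal sketch of what FULL jitter means, modelled on AWS's published "exponential backoff and jitter" approach (not Step Functions internals):</p>

```javascript
// FULL jitter: draw each delay uniformly between 0 and the exponential
// backoff value, so concurrent executions don't retry in lockstep.
function fullJitterDelay(intervalSeconds, backoffRate, attempt, random = Math.random) {
  const backoff = intervalSeconds * Math.pow(backoffRate, attempt - 1);
  return random() * backoff; // anywhere in [0, backoff)
}

// Third attempt with IntervalSeconds=1, BackoffRate=2: somewhere in [0, 4)
const delay = fullJitterDelay(1, 2, 3);
console.log(delay >= 0 && delay < 4); // true
```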
<p>When the error occurs in the <em>Generate</em> <em>Buy/Sell recommendation</em> state,</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694713962178/6407e7cb-ad75-4d58-9e5f-69165c6c4b9e.png" alt="Error retry for Generate Buy/Sell recommendation state with JitterStrategy" class="image--center mx-auto" /></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Retry</td><td>Retry attempt started after</td></tr>
</thead>
<tbody>
<tr>
<td>1st retry</td><td>00:00:01.298</td></tr>
<tr>
<td>2nd retry</td><td>00:00:01.479</td></tr>
<tr>
<td>3rd retry</td><td>00:00:01.752</td></tr>
<tr>
<td>4th retry</td><td>00:00:02.986</td></tr>
<tr>
<td>5th retry</td><td>00:00:04.213</td></tr>
</tbody>
</table>
</div><p>Notice the retry attempts start at randomized intervals, compared to the steadily increasing delays in the previous case without <code>JitterStrategy</code>.</p>
<h3 id="heading-enabling-fail-flow">Enabling Fail flow</h3>
<p>A Fail state ends the state machine with an <code>error</code> and a <code>cause</code> that can be customized. For this flow, we enable a custom error message, defined in the state machine, for the default branch of the choice state.</p>
<pre><code class="lang-javascript">{
    <span class="hljs-string">"Fail"</span>: {
      <span class="hljs-string">"Type"</span>: <span class="hljs-string">"Fail"</span>,
      <span class="hljs-string">"Error"</span>: <span class="hljs-string">"Stock prediction failed"</span>,
      <span class="hljs-string">"Cause"</span>: <span class="hljs-string">"Unable to proceed as stock prediction to buy or sell stock failed"</span>
    }
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694714651241/218d57d2-c823-427f-86ff-c44e040f069f.png" alt="Fail flow with error message and cause defined for the choice state" class="image--center mx-auto" /></p>
<p>When the choice state goes through a default state, the state machine fails execution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694714766637/6a9d7e21-43a9-4906-97f2-00674b5adb04.png" alt class="image--center mx-auto" /></p>
<p>When the error occurs, the <code>fail</code> flow captures the error as <em>Stock prediction failed</em> with the cause <em>Unable to proceed as stock prediction to buy or sell stock failed</em>.</p>
<h2 id="heading-keep-a-watch-on-retries">Keep a watch on retries</h2>
<p>When working with state machines that have multiple states with retries defined in all or many of them, the retry mechanism would retry until <code>MaxAttempts</code>.</p>
<h4 id="heading-q-its-good-to-retry-multiple-times">Q: It's good to retry multiple times</h4>
<p>Well, not if the compute or the state would reproduce the same error on every attempt. However, it is definitely good when retrying would result in success.</p>
<h4 id="heading-q-multiple-retries-shouldnt-cause-interruptions-in-my-state-machine-execution">Q: Multiple retries shouldn't cause interruptions in my state machine execution</h4>
<p>If the retries result in a success, it won't interrupt the state machine execution. But if the same error keeps occurring, it's better to <code>fail</code> the execution and pass the event to EventBridge or a DLQ.</p>
<h4 id="heading-q-would-this-be-expensive">Q: Would this be expensive?</h4>
<p>In a <em>Standard workflow</em>, state machine execution is priced per state transition, and when retry is configured there is a state transition for each retry attempt, which can get expensive.</p>
<p>Also, the retry attempt is invoking another AWS resource, that execution would also be billed. In this workflow, the Lambda function was invoked multiple times which also accounts for the total workflow cost.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694715301741/aa4bc989-5b15-41b1-983e-141a42fbe5b0.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-wrap-up">Wrap up!</h3>
<p>The retry mechanism helps when working with resource-based errors such as <code>Lambda.ServiceException</code> or any other AWS service error raised by the task. Enabling <code>JitterStrategy</code> additionally addresses concurrent retries.</p>
<p>Note, this example was to showcase the features with error retry and fail flow.</p>
]]></content:encoded></item><item><title><![CDATA[Buses and queues: Head-on]]></title><description><![CDATA[Amazon EventBridge is a serverless managed service for event-driven applications to build loosely coupled applications and route events smartly across different services.You can read more about how Amazon EventBridge is the missing piece to your app....]]></description><link>https://blog.theserverlessterminal.com/buses-and-queues-head-on</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/buses-and-queues-head-on</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[AWS EventBridge]]></category><category><![CDATA[SQS]]></category><category><![CDATA[messaging]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Wed, 28 Jun 2023 17:25:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688057371280/cd33ed2b-e7f5-4e81-b78c-955a110e6318.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://aws.amazon.com/eventbridge/">Amazon EventBridge</a> is a serverless managed service for event-driven applications to build loosely coupled applications and route events smartly across different services.<br />You can read more about how <a target="_blank" href="https://blog.theserverlessterminal.com/amazon-eventbridge-the-missing-piece-to-your-app">Amazon EventBridge is the missing piece to your app</a>.</p>
<p>Also, if you are wondering about <a target="_blank" href="https://aws.amazon.com/sqs/">Amazon SQS</a>, learn about <a target="_blank" href="https://blog.theserverlessterminal.com/getting-started-with-sns-and-sqs">getting started with SQS and SNS</a> and <a target="_blank" href="https://blog.theserverlessterminal.com/standard-vs-fifo-sns-and-sqs">understand Standard and FIFO queues (in SQS) and topics (in SNS)</a> with the understanding of <a target="_blank" href="https://blog.theserverlessterminal.com/when-to-sns-or-sqs">when to use SQS and SNS</a> in your Serverless architectures.</p>
<p>In this blog, we will look into Amazon EventBridge buses and Amazon SQS queues and how and where they fit right into your Serverless architectures.</p>
<h2 id="heading-catch-the-bus">Catch the bus!</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688051996038/d156b0f4-50d9-416c-9858-438630d55d2b.gif" alt="Take a bus meme" class="image--center mx-auto" /></p>
<h3 id="heading-building-event-driven-architectures">Building event-driven architectures</h3>
<p>When building EDA applications, Amazon EventBridge is best suited, as it provides different capabilities for service-to-service integrations across various sources in the Serverless space, while also ensuring the architecture stays loosely coupled.</p>
<h3 id="heading-event-routing-and-filtering">Event routing and filtering</h3>
<p>EventBridge is known for its smart routing with event rules, along with support for filtering based on different filter patterns. For service-to-service integrations, this routing delivers designated events to their intended destinations.</p>
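<p>For illustration, a rule's event pattern might match only high-value order events before routing them to a target (the <code>source</code> and field names below are hypothetical; the <code>numeric</code> operator is EventBridge's content-filtering syntax):</p>

```json
{
  "source": ["orders.service"],
  "detail-type": ["OrderPlaced"],
  "detail": {
    "amount": [{ "numeric": [">", 100] }]
  }
}
```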
<h3 id="heading-integrations">Integrations</h3>
<p>EventBridge supports different service integrations such as AWS Lambda, AWS Step Functions, SQS, SNS, and many more. While these are natively supported service integrations, EventBridge also supports SaaS Partner integrations and third-party HTTP end-points with API destinations.</p>
<h2 id="heading-get-in-the-queue">Get in the queue!</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688053575975/63ed572b-367a-4da7-aac6-3c591eacf23a.gif" alt="In a queue" class="image--center mx-auto" /></p>
<h3 id="heading-building-message-queues">Building message queues</h3>
<p>When building applications, you may have to build messaging queues that need reliable and ordered delivery. SQS ensures this with FIFO queues and also enables load balancing with horizontal scaling.</p>
<h3 id="heading-fan-out-patterns">Fan out patterns</h3>
<p>When messages have to be distributed across multiple consumers, with each consumer processing its own copy of each message, fan-out patterns work well with Amazon SQS, as each consumer gets its own queue to balance its load.</p>
<h3 id="heading-time-bound-delivery">Time-bound delivery</h3>
<p>Amazon SQS can delay delivery of a message to the consumer using the <code>DelaySeconds</code> parameter. Messages also have a retention period (a time-to-live) beyond which the message is removed from the queue. SQS supports visibility timeouts as well, during which a message is invisible to other consumers and becomes available again for consumption later.</p>
<h2 id="heading-eventbridge-vs-queue">EventBridge v/s Queue</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Amazon EventBridge</td><td>Amazon SQS</td></tr>
</thead>
<tbody>
<tr>
<td>Complex processing</td><td>✅ EventBridge supports transformation and filtering which makes it easy to process complex events.</td><td>🚫 SQS can trigger a Lambda function to process messages but cannot process them on its own.</td></tr>
<tr>
<td>Multiple destinations</td><td>✅ EventBridge supports multiple AWS services as the destination and also API destinations for external HTTP endpoints</td><td>🚫 Would need a Lambda function or EventBridge pipe to consume the message and programmatically route it to the destination.</td></tr>
<tr>
<td>Scheduled messages/events</td><td>✅ EventBridge scheduler helps to build event scheduling with CRON patterns.</td><td>🚫 SQS cannot trigger or push messages at scheduled times.</td></tr>
<tr>
<td>Ordering</td><td>🚫 EventBridge doesn't ensure strict ordering.</td><td>✅ FIFO queues could be used to ensure FIFO ordering.</td></tr>
<tr>
<td>Delay messages</td><td>🚫 EventBridge immediately delivers the event as it occurs.</td><td>✅ SQS supports delays in messages with <code>DelaySeconds</code></td></tr>
<tr>
<td>Message retention</td><td>✅ Events are retained for future use only when archival is enabled.</td><td>✅ SQS supports retention periods and DLQs for handling failed messages.</td></tr>
<tr>
<td>Throughput</td><td>10,000,000 events per second in a bus.</td><td>10,000 messages per second in a queue.</td></tr>
<tr>
<td>Payload limits</td><td>256 KB max supported.</td><td>256 KB max supported.</td></tr>
</tbody>
</table>
</div><h2 id="heading-co-existence-of-eventbridge-and-sqs">Co-existence of EventBridge and SQS</h2>
<p>While individually EventBridge and SQS can add a lot of value to the Serverless architecture, the combination of the two in certain patterns is possible.</p>
<p><strong>Messaging queues with event-driven patterns</strong> is one place where both SQS and EventBridge can be used in combination. EventBridge in this pattern would broadcast events to multiple subscribers, and SQS would enforce correct message ordering.</p>
<p>Messages in SQS could be consumed by EventBridge pipes for better event enrichment and transformation of events to the needed destination without having Lambda functions to be performing a transformational workload on messages.</p>
<p>Ultimately, it is about what the workload is and how each of the services - EventBridge or SQS either individually or in combination can elevate your Serverless architecture.</p>
]]></content:encoded></item><item><title><![CDATA[Maintaining Lambda Function Performance During Peak Traffic]]></title><description><![CDATA[AWS Lambda functions are the popular computing option for modern applications, specifically if you use Serverless or event-driven architectures on AWS. And these Lambda functions are invoked billions of times in a production environment.

AI-generate...]]></description><link>https://blog.theserverlessterminal.com/maintaining-lambda-function-performance-during-peak-traffic</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/maintaining-lambda-function-performance-during-peak-traffic</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[performance]]></category><category><![CDATA[scalability]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Fri, 26 May 2023 19:10:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685127496536/b8ceece8-1f05-487c-aa66-05c0be322f2d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://aws.amazon.com/lambda/">AWS Lambda functions</a> are the popular computing option for modern applications, specifically if you use Serverless or event-driven architectures on AWS. And these Lambda functions are invoked billions of times in a production environment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685097843995/6759b6f9-ada9-47c6-9695-63ef7f5c9bdd.jpeg" alt="A heavy traffic Serverless cloud, AI generated by Adobe Firefly" class="image--center mx-auto" /></p>
<p><em>AI-generated image by</em> <a target="_blank" href="https://firefly.adobe.com/"><em>Adobe Firefly</em></a> <em>- "Heavy traffic in Serverless cloud"</em></p>
<p>In this blog, we will understand what are the different sources for Lambda triggers and also understand the factors to ensure performance during peak traffic.</p>
<h2 id="heading-lambda-trigger-sources">Lambda trigger sources</h2>
<p>AWS Lambda function is well-knit with different sources triggering them either on demand or event-based. Understanding the source of Lambda function triggers would help in addressing performance during high traffic.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684932877703/08fec44e-1c90-43b5-9f30-3d4ce99cb3e5.png" alt="Add trigger to Lambda function via AWS Console" class="image--center mx-auto" /></p>
<h3 id="heading-invoked-via-http-api">Invoked via HTTP API</h3>
<p>Lambda functions would be the core compute of the backend systems which are served over API endpoints generated from <a target="_blank" href="https://aws.amazon.com/api-gateway/">Amazon API Gateway</a> or <a target="_blank" href="https://aws.amazon.com/appsync/">AWS AppSync</a> or Lambda Function URLs. These are often triggered by applications programmatically either from a front-end application or backend systems.</p>
<h3 id="heading-aws-services-invoking-lambda">AWS Services invoking Lambda</h3>
<p>In event-driven architectures, another AWS Service usually triggers the Lambda function whenever certain events occur.</p>
<p>A typical scenario would be: whenever a new object is uploaded into an S3 bucket, a Lambda function can be triggered with the details of the event payload so that Lambda can process it further.</p>
<h2 id="heading-performance-with-heavy-traffic">Performance with heavy traffic</h2>
<p>When using Lambda functions, being mindful of what it offers and how best it fits into your solutions is essential.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685120400554/804ef2ee-9fc4-49fd-9e36-f1ad85cae926.jpeg" alt="Busy day in a traffic cop life managing traffic, AI generated by Adobe Firefly" class="image--center mx-auto" /></p>
<p><em>AI-generated image by</em> <a target="_blank" href="https://firefly.adobe.com/"><em>Adobe Firefly</em></a> <em>- "busy day in a traffic cop life managing traffic".</em></p>
<h3 id="heading-load-balancers">Load balancers</h3>
<p>Load balancers, even in traditional systems, are the piece of architecture that enables smooth traffic distribution. In a serverless system too, load balancers help distribute traffic and ensure there isn't a burst of invocations of Lambda functions.</p>
<h3 id="heading-enable-caching">Enable caching</h3>
<p>For frequently accessed data, enabling caching in your architecture at different levels would improve performance which on a high-traffic day pays off!</p>
<p>Allen and I (Jones) authored a blog with Momento about <a target="_blank" href="https://www.gomomento.com/blog/improve-app-performance-by-caching-at-every-layer">how caching at every layer improves app performance</a>.</p>
<h3 id="heading-provision-and-reserved-concurrency"><strong>Provisioned and reserved concurrency</strong></h3>
<p>With Lambda's provisioned and reserved concurrency, you can allocate the resources needed when you anticipate high traffic. This ensures the architecture can perform better on a peak traffic day.</p>
<h3 id="heading-weighted-routing">Weighted routing</h3>
<p>Each Lambda function version is assigned a weight, and weighted routing uses these weights to distribute traffic amongst different versions; with Lambda aliases, this helps shift traffic between different versions of the Lambda function.</p>
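<p>As a sketch, weighted routing can be configured on an alias with the AWS CLI (the function name, alias, and version numbers below are hypothetical), shifting 10% of traffic to a newer version:</p>

```shell
# Point the "live" alias mainly at version 5, routing 10% of
# invocations to version 6 for a gradual rollout.
aws lambda update-alias \
  --function-name checkout-handler \
  --name live \
  --function-version 5 \
  --routing-config 'AdditionalVersionWeights={"6"=0.10}'
```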
<h3 id="heading-triggers-with-batch">Triggers with batch</h3>
<p>When defining event triggers for Lambda functions, services like DynamoDB, SQS, and other messaging services support batch triggers, where the service collects records and triggers the Lambda function once per batch, as per the <code>BatchSize</code> defined.</p>
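<p>A minimal sketch of such a batch consumer for SQS, assuming the event source mapping has <code>ReportBatchItemFailures</code> enabled so only failed records are retried (<code>processMessage</code> is a hypothetical stand-in for your business logic):</p>

```javascript
// Hypothetical business logic: throws to signal a failed message.
async function processMessage(message) {
  if (message.invalid) throw new Error('unprocessable message');
  // ... handle the message ...
}

// One invocation receives up to BatchSize records; returning
// batchItemFailures tells Lambda which messages to retry.
async function handler(event) {
  const batchItemFailures = [];
  for (const record of event.Records) {
    try {
      await processMessage(JSON.parse(record.body));
    } catch (err) {
      // Only this record goes back to the queue, not the whole batch.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
}
```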
<h2 id="heading-options-for-serverless-architecture">Options for Serverless Architecture</h2>
<h3 id="heading-application-load-balancers-albs">Application Load Balancers (ALBs)</h3>
<ul>
<li><p>ALBs support Lambda functions as targets, which helps in distributing traffic in weighted, geographical, and round-robin ways.</p>
</li>
<li><p>ALBs provide endpoints available over the public internet, which helps in building APIs backed by ALBs and Lambda functions.</p>
</li>
<li><p>ALBs are complex to configure when compared with other options available.</p>
</li>
<li><p>With the management and configurations, this is not truly a serverless way!</p>
</li>
</ul>
<h3 id="heading-amazon-api-gateway">Amazon API Gateway</h3>
<ul>
<li><p>API Gateway integration with Lambda functions is available by design, and it is more seamless to work with than ALBs.</p>
</li>
<li><p>API Gateway enables API key-based usage quotas and throttling controls.</p>
</li>
<li><p>Supports multiple authentication and authorization options.</p>
</li>
<li><p>Supports payload validation with models; in a peak traffic scenario, API Gateway ensures that only validated payloads invoke the Lambda function, thus eliminating junk requests.</p>
</li>
<li><p>Supports WAF and CloudFront-based endpoints, which add security against DDoS attacks.</p>
</li>
</ul>
<h3 id="heading-lambda-aliases">Lambda aliases</h3>
<ul>
<li><p>Traffic distribution at the Lambda level when using it with different AWS Services as triggers.</p>
</li>
<li><p>Configuration involves creating multiple versions of the same Lambda function and pointing an alias at one version, or at most two versions with weighted routing.</p>
</li>
<li><p>Supports weighted routing to each alias in a Lambda function.</p>
</li>
<li><p>Not the best fit for Lambda functions which need to be invoked via the public internet as it lacks authentication and throttling.</p>
</li>
</ul>
<h2 id="heading-wrap-up">Wrap up!</h2>
<p>Lambda functions have proven scalability and perform well even during peak traffic. Keeping in mind different aspects of Lambda functions or other AWS Services that enrich your traffic management is the key when architecting solutions that are prone to high traffic.</p>
<p>With the different options of ALBs, API Gateway, or even Lambda aliases, each of them has an upside and downside depending on the use case and how they are placed in the serverless architecture.</p>
]]></content:encoded></item><item><title><![CDATA[Re-think and Re-architect Your Application]]></title><description><![CDATA[Recently, Amazon Prime Video revealed about moving their architecture from serverless to monolithic container-based architecture. Read more about Scaling up the Prime Video audio/video monitoring service and reducing costs by 90% by the Amazon Prime ...]]></description><link>https://blog.theserverlessterminal.com/re-think-and-re-architect-your-application</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/re-think-and-re-architect-your-application</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[architecture]]></category><category><![CDATA[scalability]]></category><category><![CDATA[cost-optimisation]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Fri, 12 May 2023 16:17:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1683994613015/af72e6f8-0ad6-4bc0-8870-5d3f87201f16.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, Amazon Prime Video revealed about moving their architecture from serverless to monolithic container-based architecture. Read more about <a target="_blank" href="https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90">Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%</a> by the Amazon Prime Video tech team.</p>
<h2 id="heading-initial-architecture-of-prime-video">Initial architecture of Prime Video</h2>
<p>Few pointers from the tech blog about the initial architecture,</p>
<ul>
<li><p>Distributed and microservices approach in their initial architecture.</p>
</li>
<li><p>A serverless architecture primarily built on AWS Lambda functions and AWS Step Functions.</p>
</li>
<li><p>Designed with the intent that Serverless would be cost-effective and scalable, since Lambda functions scale up or down on demand, which kept costs intact.</p>
</li>
</ul>
<p><img src="https://cdn.primevideotech.com/dims4/default/bf61eca/2147483647/strip/true/crop/1011x803+0+0/resize/1011x803!/quality/90/?url=https%3A%2F%2Famazon-k1-prod-entertainment.s3.amazonaws.com%2Fbrightspot%2Fb2%2Fd2%2Fdde535c6478d9c4b8c9891c4d93b%2F98191782.png" alt="Initial architecture of Prime Video defect detection system with Serverless" /></p>
<p><em>Image courtesy: Prime Video blog post -</em> <a target="_blank" href="https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90"><em>https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90</em></a></p>
<h2 id="heading-reasons-why-serverless-was-a-challenge">Reasons why Serverless was a challenge</h2>
<p>Prime Video is one of the most popular OTT platforms, and the assumption here is that they would have peak traffic, and those peak-traffic periods are what stress-test the architecture in terms of -</p>
<h3 id="heading-scalability">Scalability</h3>
<p>Yes, Serverless applications can scale well. But the main concern of scalability came in with orchestrating the Serverless workload with AWS Step Functions.</p>
<blockquote>
<p>The main scaling bottleneck in the architecture was the orchestration management that was implemented using AWS Step Functions. Our service performed multiple state transitions for every second of the stream, so we quickly reached account limits.</p>
</blockquote>
<p>As quoted in the blog post. Often, when Serverless applications hit a peak moment, there are challenges -</p>
<ul>
<li><p>Managing the distributed architecture, because you have many microservices doing dedicated tasks even though they are defined within a single State Machine.</p>
</li>
<li><p>AWS Step Functions imposes limits on the payload passed across states and on the number of state machine executions, so workarounds must be in place to ensure the system doesn't breach those limits.</p>
</li>
</ul>
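<p>A common workaround for the payload limit (this sketch is mine, not from the Prime Video post) is to store an oversized payload in S3 and pass only a reference between states. The 256 KB figure is the documented Step Functions input/output size limit; the bucket and key are illustrative:</p>

```javascript
// Hypothetical helper: pass a payload inline between Step Functions states
// only if it fits under the 256 KB input/output limit; otherwise substitute
// an S3 reference that the next state can resolve.
const LIMIT_BYTES = 256 * 1024;

// Check whether the JSON-serialized payload fits within the limit.
function fitsInline(payload) {
  return Buffer.byteLength(JSON.stringify(payload), "utf8") <= LIMIT_BYTES;
}

// Return the payload as-is, or an S3 pointer for oversized payloads.
// The actual S3 PutObject call is omitted from this sketch.
function toStatePayload(payload, bucket, key) {
  if (fitsInline(payload)) {
    return payload;
  }
  return { s3Ref: { bucket, key } };
}
```

<p>A small payload such as <code>{ frame: 1 }</code> is passed through unchanged, while anything larger than 256 KB is swapped for an <code>s3Ref</code> pointer that the next state downloads.</p>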
<h3 id="heading-cost-effectiveness">Cost-effectiveness</h3>
<p>Serverless applications are cost-effective because they run on demand, only when certain events or requests are made.</p>
<blockquote>
<p>Besides that, AWS Step Functions charges users per state transition.</p>
</blockquote>
<p>AWS Step Functions has a per-state-transition pricing model; for instance, the <code>us-east-1</code> region charges $0.025 per 1,000 state transitions. Prime Video would have a very large number of state transitions, as this state machine itself is used to detect whether there is any defect in the video frames.</p>
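<p>To put that pricing in perspective, a quick back-of-the-envelope calculation helps. The $0.025 per 1,000 state transitions is the <code>us-east-1</code> list price; the transition rate, stream count and hours below are illustrative assumptions, not Prime Video's actual numbers:</p>

```javascript
// Back-of-the-envelope Step Functions cost estimate.
// Price: $0.025 per 1,000 state transitions (us-east-1 list price).
const PRICE_PER_TRANSITION = 0.025 / 1000;

// Total cost for a given per-stream transition rate, stream count and hours.
function stepFunctionsCost(transitionsPerSecond, streams, hours) {
  const totalTransitions = transitionsPerSecond * streams * hours * 3600;
  return totalTransitions * PRICE_PER_TRANSITION;
}

// Example: 5 transitions/second per stream, 100 streams, ~1 month (720 h)
console.log(stepFunctionsCost(5, 100, 720));
```

<p>Even with these modest assumed numbers, the orchestration alone comes to roughly $32,400 a month, which illustrates why per-transition pricing dominated at Prime Video's scale.</p>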
<blockquote>
<p>The second cost problem we discovered was about the way we were passing video frames (images) around different components. To reduce computationally expensive video conversion jobs, we built a microservice that splits videos into frames and temporarily uploads images to an <a target="_blank" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html">Amazon Simple Storage Service (Amazon S3)</a> bucket. Defect detectors (where each of them also runs as a separate microservice) then download images and processed it concurrently using AWS Lambda. However, the high number of Tier-1 calls to the S3 bucket was expensive.</p>
</blockquote>
<p>The microservice that splits videos into frames uploads the images to an S3 bucket, and those images are then processed to detect defects. With thousands of frames in a single video, the S3 costs climb quickly in terms of storage, requests and data transfer, on top of the cost of the high-performance Lambda functions running in parallel.</p>
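<p>A rough request-cost estimate shows why the per-frame S3 access was expensive. The request prices are <code>us-east-1</code> list prices (PUT $0.005 and GET $0.0004 per 1,000 requests); the frame rate, stream length and detector count are illustrative assumptions:</p>

```javascript
// Rough S3 request cost for frame-by-frame processing.
// us-east-1 list prices: PUT $0.005 / 1,000 requests, GET $0.0004 / 1,000.
const PUT_PRICE = 0.005 / 1000;
const GET_PRICE = 0.0004 / 1000;

// One upload per frame by the splitter, one download per frame per detector.
function frameRequestCost(fps, seconds, detectors) {
  const frames = fps * seconds;
  return frames * PUT_PRICE + frames * detectors * GET_PRICE;
}

// Example: 30 fps, a 1-hour stream, 3 defect detectors
console.log(frameRequestCost(30, 3600, 3).toFixed(2));
```

<p>That is about $0.67 in requests alone for a single stream-hour, before storage, data transfer and the parallel Lambda compute; multiplied across thousands of concurrent streams, it adds up fast.</p>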
<h2 id="heading-reaction-to-the-challenges">Reaction to the challenges</h2>
<p>The reasons mentioned are very valid, and the Prime Video team has taken steps to address them -</p>
<h3 id="heading-re-think">Re-think</h3>
<p>It is important, when you have scalability and cost-related concerns with your architectures, that you take a step back and think "<em>what could have been the possible best solution?</em>" or "<em>what kind of components suit this use case?</em>". Sometimes, it begins with the basic question of "why do we need XYZ service" when the challenges stand out.</p>
<p>Adopting Serverless as the first approach is good, but when the architecture doesn't meet the needs of the use case, the willingness to <em>re-think</em> shows how flexible we have to be with our architecture.</p>
<h3 id="heading-re-architect">Re-architect</h3>
<p>When we <em>re-think</em>, we get more clarity on the use case and on which architectural approach and components would be the <em>best fit</em>. Once we identify the <em>best fit architectural approach or components</em>, re-architecting the complete workload is the way to go.</p>
<p>In Prime Video's case, there was a shift from Serverless to a monolith because the system would always be processing images, and data transfer between the different services made it an expensive workload. With the new architecture, the ECS task performs the orchestration of detecting defects using the video buffer directly from the Media Converter.</p>
<h2 id="heading-resonating-my-thoughts">Resonating my thoughts</h2>
<p>When the internet broke out with the news of Prime Video moving away from Serverless, I, being a Serverless advocate, was shocked: "<em>why</em>"!</p>
<p><a target="_blank" href="https://www.twitter.com/theslsterminal/status/1654787999484506114"><img src="https://pbs.twimg.com/media/Fvb86n3acAAj3Ln?format=jpg&amp;name=medium" alt="Image" /></a></p>
<p>But after reading and understanding the blog post, I appreciate the architectural shift, even when it means that a Serverless architecture is revamped into a monolith.</p>
<h3 id="heading-serverless-first-serverless-not-always">Serverless first, Serverless not always</h3>
<p>It's good to approach with a Serverless mindset and build with Serverless, but if that doesn't work for your workload, then "<em>revert! revert! revert</em>!". Sometimes Serverless may never be the right approach, and investing in it would turn out to be expensive in the long run.</p>
<h3 id="heading-one-size-doesnt-fit-all">One size doesn't fit all</h3>
<p>We assume that if an approach works for someone out there, it will work for us as well. The oft-repeated claim that "<em>Serverless scales and is cost-effective</em>" is true, but we also need to understand whether the Serverless approach, or any other approach in general, is the best fit for the use case.</p>
<h3 id="heading-iterative-building">Iterative building</h3>
<p>Oftentimes the use case or its usage evolves, which means the architecture should be adaptive, and getting there always takes different iterations. Take software development as an instance: we go through different revisions and iterations to get to production, and the architecture of the software being developed would have gone through revisions too. These revisions are where the learning happens, to identify whether the architecture is well-architected or not.</p>
]]></content:encoded></item><item><title><![CDATA[Workarounds for AppSync Subscriptions triggers via Lambda functions]]></title><description><![CDATA[AWS AppSync is a serverless GraphQL offering by AWS, previously I authored a blog about AWS AppSync 101 which gets you up to speed with the capabilities of AppSync and how you can leverage them in your serverless applications.
One query that stands o...]]></description><link>https://blog.theserverlessterminal.com/workarounds-for-appsync-subscriptions-triggers-via-lambda-functions</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/workarounds-for-appsync-subscriptions-triggers-via-lambda-functions</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[AppSync]]></category><category><![CDATA[lambda]]></category><category><![CDATA[chatgpt]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Sat, 22 Apr 2023 13:40:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1682170423309/d4c147c9-e48a-47c2-82c1-8654232e6a58.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://aws.amazon.com/appsync/">AWS AppSync</a> is a serverless GraphQL offering from AWS. Previously, I authored a blog, <a target="_blank" href="https://blog.theserverlessterminal.com/aws-appsync-101">AWS AppSync 101</a>, which gets you up to speed with the capabilities of AppSync and how you can leverage them in your serverless applications.</p>
<p>One query that stands out is that -</p>
<blockquote>
<p>AppSync subscriptions work amazing when client makes mutations but I have some downstreaming patterns in my application where the backend uses subscriptions to send data from back-end to front-end, how do I achieve this with AppSync?</p>
</blockquote>
<h2 id="heading-ask-chatgpt">Ask ChatGPT</h2>
<p>Well, this does look simple so why not ask ChatGPT?</p>
<blockquote>
<p>What would be the better way to handle AppSync Subscriptions which are triggered with Lambda functions?</p>
</blockquote>
<h3 id="heading-chatgpts-solution">ChatGPT's solution</h3>
<blockquote>
<p>Use the appropriate trigger:</p>
<p>To use the appropriate trigger, you can create a trigger on the event that is relevant to your use case. For example, if you want to trigger a subscription when a record is added to a database, you can use a database trigger to invoke your Lambda function.</p>
<p>Here's an example using AWS Lambda and Amazon DynamoDB:</p>
<ul>
<li><p>Create a DynamoDB table with a stream enabled.</p>
</li>
<li><p>Create a Lambda function that is triggered by the DynamoDB stream.</p>
</li>
<li><p>In the Lambda function, write code that parses the stream event and invokes the AppSync subscription.</p>
</li>
</ul>
</blockquote>
<p>And also, ChatGPT generated the code snippet for this.</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Sample code for a Lambda function that is triggered by a DynamoDB stream</span>

<span class="hljs-built_in">exports</span>.handler = <span class="hljs-keyword">async</span> (event) =&gt; {
  <span class="hljs-comment">// Parse the DynamoDB stream event</span>
  <span class="hljs-keyword">const</span> records = event.Records.map(<span class="hljs-function">(<span class="hljs-params">record</span>) =&gt;</span> {
    <span class="hljs-keyword">return</span> {
      <span class="hljs-attr">eventName</span>: record.eventName,
      <span class="hljs-attr">record</span>: record.dynamodb.NewImage,
    };
  });

  <span class="hljs-comment">// Invoke the AppSync subscription</span>
  <span class="hljs-keyword">const</span> appSyncClient = <span class="hljs-keyword">new</span> AWSAppSyncClient({
    <span class="hljs-attr">url</span>: <span class="hljs-string">'https://my-appsync-endpoint.aws.com/graphql'</span>,
    <span class="hljs-attr">region</span>: <span class="hljs-string">'us-east-1'</span>,
    <span class="hljs-attr">auth</span>: {
      <span class="hljs-attr">type</span>: <span class="hljs-string">'AWS_IAM'</span>,
      <span class="hljs-attr">credentials</span>: AWS.config.credentials,
    },
  });

  <span class="hljs-keyword">const</span> mutation = gql<span class="hljs-string">`
    subscription OnRecordAdded {
      onRecordAdded {
        id
        name
      }
    }
  `</span>;

  <span class="hljs-keyword">const</span> observable = appSyncClient.subscribe({ <span class="hljs-attr">query</span>: mutation });
  observable.subscribe(<span class="hljs-function">(<span class="hljs-params">data</span>) =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Subscription data'</span>, data));
};
</code></pre>
<p>And the GraphQL schema for this was also generated by ChatGPT.</p>
<pre><code class="lang-javascript">type Record {
  <span class="hljs-attr">id</span>: ID!
  name: <span class="hljs-built_in">String</span>!
}

type Subscription {
  <span class="hljs-attr">onRecordAdded</span>: Record!
}

type Mutation {
  addRecord(name: <span class="hljs-built_in">String</span>!): Record!
}

type Query {
  getRecord(id: ID!): Record
}
</code></pre>
<h2 id="heading-lets-validate-chatgpt-s-response">Let's validate ChatGPT's response</h2>
<p>Do you notice something from ChatGPT's response? Now let's put on the <em>Serverless architect</em> lens and see how this would be implemented.</p>
<h3 id="heading-graphql-schema">GraphQL schema</h3>
<p>The schema generated by ChatGPT is a generic one that looks good at first glance, but from the application's perspective, all the operations (<code>query</code>, <code>mutation</code> and <code>subscription</code>) are accessible based on the default authorization set on AppSync.</p>
<p>Ensure the subscription is defined for a specific mutation; this is especially important when you have multiple mutations. Although the generated schema works for this single-mutation case, it's good practice to bind subscriptions to their mutations with the <code>@aws_subscribe</code> directive.</p>
<pre><code class="lang-javascript">type Subscription {
  <span class="hljs-attr">onRecordAdded</span>: Record
  @aws_subscribe(mutations: [<span class="hljs-string">"addRecord"</span>])
}
</code></pre>
<p>Irrespective of what authorization your application's clients use (be it <code>Cognito User Pool</code>, <code>API Key</code>, <code>Lambda authorizers</code> or <code>AWS IAM role</code>), mark the mutation that will be invoked from the Lambda function for <code>AWS IAM</code> authorization.</p>
<pre><code class="lang-javascript">type Mutation {
  addRecord(name: <span class="hljs-built_in">String</span>!): Record!
  @aws_iam
}
</code></pre>
<h3 id="heading-graphql-resolvers">GraphQL resolvers</h3>
<p>The conversation with ChatGPT didn't initially give me a resolver for the mutation <code>addRecord</code>, but a follow-up query did generate a VTL resolver (understandably, since JS resolvers are very new, and it's a fair guess the model doesn't know about them at this point).</p>
<blockquote>
<p>This VTL resolver defines the <code>PutItem</code> operation for the <code>Mutation</code> type in the AppSync schema. It uses the <code>$ctx.args</code> variable to extract the <code>name</code> argument from the mutation, and then generates a new <code>id</code> for the item using the <code>$util.autoId()</code> function. The resolver constructs a DynamoDB <code>PutItem</code> request using these values, and then returns the request in JSON format.</p>
</blockquote>
<pre><code class="lang-javascript">#set($name = $ctx.args.name)
#set($id = $util.autoId())
{
    <span class="hljs-string">"version"</span>: <span class="hljs-string">"2018-05-29"</span>,
    <span class="hljs-string">"operation"</span>: <span class="hljs-string">"PutItem"</span>,
    <span class="hljs-string">"tableName"</span>: <span class="hljs-string">"my-table"</span>,
    <span class="hljs-string">"item"</span>: {
        <span class="hljs-string">"id"</span>: { <span class="hljs-string">"S"</span>: <span class="hljs-string">"$id"</span> },
        <span class="hljs-string">"name"</span>: { <span class="hljs-string">"S"</span>: <span class="hljs-string">"$name"</span> }
    }
}
</code></pre>
<p>You may notice that the generated VTL resolver performs a DynamoDB <code>PutItem</code> action, even though ChatGPT's response neither treated the <code>Record</code> type as a <code>model</code> in the schema nor gave the steps to create a DynamoDB data source.</p>
<p>For the simplicity of things, let's create a simple VTL resolver that generates an ID and returns the name with the data source being a <code>None</code> type.</p>
<p>Request mapping template</p>
<pre><code class="lang-javascript">{
    <span class="hljs-string">"version"</span>: <span class="hljs-string">"2017-02-28"</span>,
    <span class="hljs-string">"payload"</span>: {
        <span class="hljs-string">"name"</span>: <span class="hljs-string">"$context.arguments.name"</span>,
        <span class="hljs-string">"id"</span>: <span class="hljs-string">"$util.autoId()"</span>
    }
}
</code></pre>
<p>Response mapping template</p>
<pre><code class="lang-javascript">$util.toJson($context.result)
</code></pre>
<p>And the mutation <code>addRecord</code> is ready to be tested!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682155748549/eeddfd62-9008-417c-8656-d055c5ed93f3.png" alt="Testing GraphQL mutation addRecord in AppSync console" class="image--center mx-auto" /></p>
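<p>For instance, in the AppSync console's Queries editor, a mutation like the following can be run to verify the resolver (the name value is just sample input):</p>

```graphql
mutation MyMutation {
  addRecord(name: "Testing the None data source") {
    id
    name
  }
}
```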
<h3 id="heading-lambda-function-to-trigger-subscriptions-on-client-end">Lambda function to trigger subscriptions on client-end</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682156671644/bb75a564-6ca9-4c1c-b0c5-2aa2c30b828d.png" alt="Lambda function uses AppSync mutation to trigger a AppSync subscription on client-end" class="image--center mx-auto" /></p>
<p>The solution that a <em>Serverless architect</em> would come up with is the one above: any event source can trigger the Lambda function, the Lambda function in turn executes an AppSync mutation, and on the successful execution of that mutation, the subscription is internally invoked by AppSync.</p>
<p>The Lambda function generated by ChatGPT uses the AppSync client SDK, but it subscribes to an AppSync subscription rather than executing a mutation. 🤔 Why! 😵‍💫</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> observable = appSyncClient.subscribe({ <span class="hljs-attr">query</span>: mutation });
observable.subscribe(<span class="hljs-function">(<span class="hljs-params">data</span>) =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Subscription data'</span>, data));
</code></pre>
<p>Firstly, create a Lambda layer with the npm dependencies -</p>
<pre><code class="lang-powershell">npm install aws<span class="hljs-literal">-sdk</span> aws<span class="hljs-literal">-appsync</span> graphql<span class="hljs-literal">-tag</span> isomorphic<span class="hljs-literal">-fetch</span> axios
</code></pre>
<p>Create your Lambda function with the environment variable set with the AppSync API endpoint.</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">require</span>(<span class="hljs-string">'isomorphic-fetch'</span>);
<span class="hljs-keyword">const</span> AWS = <span class="hljs-built_in">require</span>(<span class="hljs-string">'aws-sdk/global'</span>);
<span class="hljs-keyword">const</span> AUTH_TYPE = <span class="hljs-built_in">require</span>(<span class="hljs-string">'aws-appsync'</span>).AUTH_TYPE;
<span class="hljs-keyword">const</span> AWSAppSyncClient = <span class="hljs-built_in">require</span>(<span class="hljs-string">'aws-appsync'</span>).default;
<span class="hljs-keyword">const</span> gql = <span class="hljs-built_in">require</span>(<span class="hljs-string">'graphql-tag'</span>);

<span class="hljs-keyword">const</span> config = {
    <span class="hljs-attr">url</span>: process.env.APPSYNC_ENDPOINT,
    <span class="hljs-attr">region</span>: process.env.AWS_REGION,
    <span class="hljs-attr">auth</span>: {
        <span class="hljs-attr">type</span>: AUTH_TYPE.AWS_IAM,
        <span class="hljs-attr">credentials</span>: AWS.config.credentials,
    },
    <span class="hljs-attr">disableOffline</span>: <span class="hljs-literal">true</span>
};

<span class="hljs-keyword">const</span> createTodo = gql<span class="hljs-string">`
    mutation MyMutation($name: String!) {
      addRecord(name: $name) {
        id
        name
      }
    }
`</span>;

<span class="hljs-keyword">const</span> client = <span class="hljs-keyword">new</span> AWSAppSyncClient(config);

<span class="hljs-built_in">exports</span>.handler = <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">event</span>) </span>{
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"event "</span>, event);
    <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> client.mutate({
            <span class="hljs-attr">mutation</span>: createTodo,
            <span class="hljs-attr">variables</span>: {
                <span class="hljs-string">"name"</span>:event.name
            }
        });
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"result "</span>, result);
        <span class="hljs-keyword">return</span> result
    } <span class="hljs-keyword">catch</span> (error) {
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"error "</span>, error);
        <span class="hljs-keyword">return</span> error
    }
};
</code></pre>
<p><code>credentials: AWS.config.credentials</code> uses the IAM credentials from the Lambda runtime, and for this to work, you would have to grant the function's execution role access to the AppSync API.</p>
<p>Update your Lambda's execution role with the policy -</p>
<pre><code class="lang-javascript">{
    <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-string">"Statement"</span>: [
        {
            <span class="hljs-string">"Sid"</span>: <span class="hljs-string">"VisualEditor0"</span>,
            <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-string">"Action"</span>: <span class="hljs-string">"appsync:GraphQL"</span>,
            <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"&lt;Your AppSync ARN&gt;"</span>
        }
    ]
}
</code></pre>
<p>Back to your Lambda console, create an Event JSON.</p>
<pre><code class="lang-javascript">{
  <span class="hljs-string">"name"</span>: <span class="hljs-string">"This is a record from Lambda fn"</span>
}
</code></pre>
<p>When testing the subscription from the AppSync console, the API by default uses <code>API Key</code> authorization.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682169786774/f8231be5-f973-4640-b92e-e513d2d6e59a.gif" alt="Using AppSync console for testing subscriptions" class="image--center mx-auto" /></p>
<p>Behind the scenes, the event was tested from the Lambda function console, and here the mutation is authorized by <code>AWS IAM</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682169646933/1f8c3ebb-3e38-4115-8e4f-3b9a23d1491a.png" alt="Invoking the Lambda from Lambda console" class="image--center mx-auto" /></p>
<h2 id="heading-wrap-up">Wrap up</h2>
<p>ChatGPT gave us a high-level solution in terms of what needs to be done, but the how-to and the finer details were missed; this is exactly where Serverless architects come into the picture, to start building on top of what AI suggests.</p>
<p>The TL;DR of this is that you can start to get solutions from ChatGPT, but humans still have to validate how well those solutions actually work. And building Lambda functions that invoke an AppSync mutation to trigger subscriptions is helpful when you rely on real-time sync from sources other than the data sources AppSync supports.</p>
]]></content:encoded></item><item><title><![CDATA[Serverless development with Amplify Studio]]></title><description><![CDATA[As full-stack developers, we often love to develop applications out of the box with tools that can visually design your application's UI and define your data and API layers that can seamlessly integrate with the application's UI.
AWS Amplify Studio a...]]></description><link>https://blog.theserverlessterminal.com/serverless-development-with-amplify-studio</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/serverless-development-with-amplify-studio</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Amplify]]></category><category><![CDATA[Amazon S3]]></category><category><![CDATA[AppSync]]></category><category><![CDATA[DynamoDB]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Fri, 14 Apr 2023 14:20:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681568058171/327e92d7-f765-4827-ae75-697fb9736fb7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As full-stack developers, we often love to develop applications out of the box with tools that can visually design your application's UI and define your data and API layers that can seamlessly integrate with the application's UI.</p>
<p><a target="_blank" href="https://aws.amazon.com/amplify/studio/">AWS Amplify Studio</a> amplifies both front-end and back-end development by provisioning the right AWS resources under the hood for the right purposes, and by making it easy to integrate them into your design.</p>
<h2 id="heading-working-with-data">Working with data</h2>
<p>Amplify Studio makes it really easy to define and manage your data irrespective of the environment it's used in.</p>
<h3 id="heading-defining-your-data">Defining your data</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681407800875/165527da-53ce-466b-9519-5a8db803871a.png" alt class="image--center mx-auto" /></p>
<p>Amplify Studio's Data allows you to define <a target="_blank" href="https://aws.amazon.com/dynamodb/">Amazon DynamoDB</a> data models with properties of the right type, powered by <a target="_blank" href="https://aws.amazon.com/appsync/">AWS AppSync</a>, which generates a GraphQL schema under the hood.</p>
<pre><code class="lang-json">enum TopicLevelEnum {
  INTRODUCTORY
  INTERMEDIATE
  ADVANCE
}

type InfoCard @model @auth(rules: [{allow: public}]) {
  id: ID!
  title: String!
  short_description: String!
  long_description: String!
  thumbnail_url: AWSURL!
  categories: [String!]
  services: [String!]
  topic_level: TopicLevelEnum
  aws_docs_url: String!
  aws_workshow_url: String
  community_url: String
  aws_samples_url: String
  dislikes: Int
  url: String
  s3_thumbnail: String
  likes: Int
}

type Service @model @auth(rules: [{allow: public}]) {
  id: ID!
  service_name: String!
  category_name: String!
  logo_url: AWSURL!
  url: String
  s3_logo: String
}
</code></pre>
<p>Now that the data is defined, the AWS resources for it (the AppSync API and the DynamoDB tables) are created from the <code>@model</code> directives by AppSync. Since the API uses public access, <code>@auth(rules: [{allow: public}])</code> is part of both <code>InfoCard</code> and <code>Service</code>, which ensures that the auto-generated CRUD APIs for them allow public access.</p>
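<p>With the <code>@model</code> directive in place, AppSync auto-generates CRUD queries and mutations for each type. For example, a list query for <code>InfoCard</code> would look roughly like this (the filter and field selection here are illustrative):</p>

```graphql
query ListInfoCards {
  listInfoCards(filter: { topic_level: { eq: INTRODUCTORY } }, limit: 10) {
    items {
      id
      title
      short_description
      topic_level
    }
    nextToken
  }
}
```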
<h3 id="heading-generating-data">Generating data</h3>
<p>The content management feature of Amplify Studio is elegant for letting an admin or content-creator role create and update the content that the front-end application consumes. It uses your data definition, based on the GraphQL schema and DynamoDB schema, to provide a form-based input.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681561327503/45a632c7-01bb-499d-a3ce-6b1a83473dcf.png" alt="Create Service UI from Amplify Studio" class="image--center mx-auto" /></p>
<p>Amplify Studio also has an <em>underrated feature</em> that developers will love during development: auto-generated data. This is handy in the development phase, where developers need not worry that "there isn't enough data to display on the front-end"; instead, they can generate up to 100 records, with the option of setting constraints so that the data is valid.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681561809526/24daed44-e17e-4041-a359-353f113fbae1.png" alt="Amplify Studio with the feature of auto-generate data" class="image--center mx-auto" /></p>
<h3 id="heading-test-your-data">Test your data</h3>
<p>With the data populated, you would need to ensure the GraphQL APIs with AppSync are working as expected. Amplify Studio also features a GraphQL editor to simulate and test the APIs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681562332807/9507fda4-173f-4f3d-abcc-e6b2c3d42c51.png" alt="GraphQL simulator in Amplify Studio for testing APIs" class="image--center mx-auto" /></p>
<p>The simulator provides the API docs, where you can navigate the different <code>queries</code>, <code>mutations</code> or <code>subscriptions</code>.</p>
<h2 id="heading-amplify-storage">Amplify Storage</h2>
<p>Amplify Storage lets you manage the media files and assets stored on Amazon S3. Amplify Storage first creates the Amazon S3 bucket, where you can set the authorization rules for users to <code>upload</code>, <code>view</code> or <code>delete</code> the objects (files in this case) in the S3 bucket.</p>
<h3 id="heading-managing-files-on-storage">Managing files on Storage</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681567644277/87632fb3-017f-404e-8756-dd8dee43c671.png" alt class="image--center mx-auto" /></p>
<p>From Amplify Storage, you can navigate across different levels of folders and files, upload files, or create folders. Amplify Studio also provides you with a snippet of code that uses the Storage APIs to get a file.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> file = <span class="hljs-keyword">await</span> Storage.get(<span class="hljs-string">"aws-amplify/auth/aws-amplify-auth.md"</span>, {
    <span class="hljs-attr">level</span>: <span class="hljs-string">"public"</span>
});
</code></pre>
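<p>Behind the scenes, the <code>level</code> option maps to a key prefix inside the bucket: <code>public/</code>, <code>protected/{identityId}/</code> or <code>private/{identityId}/</code>. A small sketch of that mapping (the helper itself is illustrative; the Amplify library applies the prefix for you):</p>

```javascript
// Illustrative mapping from an Amplify Storage access level to the S3
// object key, mirroring the documented prefixes: public/,
// protected/{identityId}/ and private/{identityId}/.
function s3ObjectKey(key, level, identityId) {
  switch (level) {
    case "public":
      return `public/${key}`;
    case "protected":
      return `protected/${identityId}/${key}`;
    case "private":
      return `private/${identityId}/${key}`;
    default:
      throw new Error(`Unknown access level: ${level}`);
  }
}

console.log(s3ObjectKey("aws-amplify/auth/aws-amplify-auth.md", "public"));
// prints public/aws-amplify/auth/aws-amplify-auth.md
```

<p>This is why the <code>level: "public"</code> call above resolves to an object stored under the <code>public/</code> prefix of the Amplify-managed bucket.</p>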
<h2 id="heading-figma-design-to-code"><strong>Figma design to code</strong></h2>
<p>The <a target="_blank" href="https://www.figma.com/community/file/1047600760128127424"><strong>getting started Figma design</strong></a> from the community provides components that follow the design guidelines of the <a target="_blank" href="https://ui.docs.amplify.aws/"><strong>AWS Amplify UI Kit</strong></a>, a collection of UI components for different web and mobile UI frameworks.</p>
<p>With my limited knowledge of Figma designing and using the pre-defined components and modifying the components as per my requirement, I've successfully created three components - <code>ServiceCard</code>, <code>FeatureCard</code> and <code>InfoCardDetails</code> .</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681405674242/eea9dfb6-fe52-4f16-8636-598ccb2a1212.png?auto=compress,format&amp;format=webp" alt="Figma design with the components created using pre-defined Amplify components" /></p>
<p>On successfully authorizing AWS Amplify Studio to access Figma designs from my Figma account, the UI Library on Amplify Studio lists all the components ready to be configured and imported into your Amplify project.</p>
<h3 id="heading-component-configuration"><strong>Component configuration</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681406439696/0160b36e-c998-4d82-8f3a-2ba6b079f6e1.png?auto=compress,format&amp;format=webp" alt="Detected ServiceCard component on Amplify Studio" /></p>
<p><code>FeatureCard</code> is now ready to be configured with the data that is defined. Amplify Studio provides a tree of all the elements created in this component, for configuring the different component properties.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681563346972/eab105f1-6407-47de-af77-6475c6f0deb3.png" alt="Component to Data model mapping" class="image--center mx-auto" /></p>
<p>Firstly, map the component to the data model that is used in the component.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681564052897/91b8966d-7995-479b-b31a-61b30fc92d1e.png" alt="Setting child properties of the component from Amplify Studio" class="image--center mx-auto" /></p>
<p>Set each of the child properties with the data from the defined data model. In the above screenshot, you can see the title of the card is set with <code>infoCard.Title</code> to the <code>label</code> property of the child component.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681564433437/08d32684-5437-4d57-9495-4123726c6f1d.png" alt="Configuring properties for event handlers" class="image--center mx-auto" /></p>
<p>In the case of the <code>InfoCardDetails</code> component, the data model defines URLs for documentation, samples, and a workshop. To make these functional as button <code>onClick</code> events, Amplify Studio lets you specify the target by selecting <code>Open URL in new tab</code> and map the equivalent data from the <code>InfoCard</code> data model.</p>
<h3 id="heading-collection-components">Collection components</h3>
<p>On the component configuration page, click the <em>Create collection</em> button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681564780015/3064cb6c-358a-4948-8d48-71e373cc7643.png" alt="Collections configuration on Amplify Studio" class="image--center mx-auto" /></p>
<p>Collections can be configured with properties that make the UI highly customizable: how the items appear (<code>List</code> or <code>Grid</code>), ordering, spacing, alignment, enabling <code>search</code> functionality within the collection, and <code>pagination</code> with a page size.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681565407298/4b9dbe47-9f5f-4d53-b920-8688c5c59cbf.png" alt="Configuring collection properties in Amplify Studio" class="image--center mx-auto" /></p>
<p>Collections can also be configured with <code>filters</code> that are applied to the items, and you can define <code>sort</code> conditions as well.</p>
<h2 id="heading-sync-amplify-studio-with-local-ide">Sync Amplify Studio with local IDE</h2>
<p>With the data and UI configurations done on Amplify Studio, these changes can be synced to your local IDE with Amplify CLI.</p>
<pre><code class="lang-powershell">amplify pull -<span class="hljs-literal">-appId</span> your<span class="hljs-literal">-amplify</span><span class="hljs-literal">-project</span><span class="hljs-literal">-id</span> -<span class="hljs-literal">-envName</span> your<span class="hljs-literal">-env</span>
</code></pre>
<p>This will create all the <code>amplify</code>-related files in your local project folder, with the backend and resource configurations under the <code>/amplify</code> folder. In your <code>src</code> folder, the UI component and data model files are generated in the <code>/ui-components</code> and <code>/models</code> folders respectively.</p>
<h3 id="heading-importing-and-using-components">Importing and using components</h3>
<p>Install the needed npm dependencies.</p>
<pre><code class="lang-powershell">npm install <span class="hljs-literal">-g</span> @aws<span class="hljs-literal">-amplify</span>/<span class="hljs-built_in">cli</span>
npm install aws<span class="hljs-literal">-amplify</span> @aws<span class="hljs-literal">-amplify</span>/ui<span class="hljs-literal">-react</span>
</code></pre>
<p>In the <code>index.js</code> file, configure Amplify.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { ThemeProvider } <span class="hljs-keyword">from</span> <span class="hljs-string">"@aws-amplify/ui-react"</span>;
<span class="hljs-keyword">import</span> { Amplify } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-amplify'</span>;

<span class="hljs-keyword">import</span> awsconfig <span class="hljs-keyword">from</span> <span class="hljs-string">'./aws-exports'</span>;

<span class="hljs-keyword">import</span> <span class="hljs-string">"@aws-amplify/ui-react/styles.css"</span>;
<span class="hljs-keyword">import</span> { studioTheme } <span class="hljs-keyword">from</span> <span class="hljs-string">"./ui-components"</span>;

Amplify.configure(awsconfig);
</code></pre>
<p>Using a theme from Amplify with <code>ThemeProvider</code>.</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">ThemeProvider</span> <span class="hljs-attr">theme</span>=<span class="hljs-string">{studioTheme}</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">App</span> /&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">ThemeProvider</span>&gt;</span>
</code></pre>
<p>Import the UI component created and configured from Amplify Studio.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> {
 FeatureCardCollection 
} <span class="hljs-keyword">from</span> <span class="hljs-string">'./ui-components'</span>;
</code></pre>
<h3 id="heading-modifying-the-components-locally">Modifying the components locally</h3>
<p>From the above screenshots, you may have noticed that the images appear broken when previewing in Amplify Studio. Let's modify the <code>InfoCardDetails</code> component to retrieve data from Amplify Storage.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> React, { useEffect, useState } <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;
<span class="hljs-keyword">import</span> {
  getOverrideProps,
  useNavigateAction,
} <span class="hljs-keyword">from</span> <span class="hljs-string">"@aws-amplify/ui-react/internal"</span>;
<span class="hljs-keyword">import</span> { Button, Divider, Flex, Image, Text } <span class="hljs-keyword">from</span> <span class="hljs-string">"@aws-amplify/ui-react"</span>;
<span class="hljs-keyword">import</span> { Storage } <span class="hljs-keyword">from</span> <span class="hljs-string">"aws-amplify"</span>
<span class="hljs-keyword">import</span> { useHistory } <span class="hljs-keyword">from</span> <span class="hljs-string">"react-router-dom"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">InfoCardDetails</span>(<span class="hljs-params">props</span>) </span>{
  <span class="hljs-keyword">const</span> { infoCard, overrides, ...rest } = props;
<span class="hljs-comment">//Set the states</span>
  <span class="hljs-keyword">const</span> [imageThumbnail, setImageThumbnail] = useState(<span class="hljs-string">''</span>)
  <span class="hljs-keyword">const</span> [mdDoc, setMDDoc] = useState(<span class="hljs-string">''</span>)
  useEffect(<span class="hljs-function">() =&gt;</span> {
    fetchImage()
    fetchMD()
  }, [])

<span class="hljs-comment">//Using Amplify Storage to download images </span>
  <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fetchImage</span>(<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> image = <span class="hljs-keyword">await</span> Storage.get(infoCard.s3_thumbnail, {
        <span class="hljs-attr">level</span>: <span class="hljs-string">"public"</span>
      })
      setImageThumbnail(image)
    } <span class="hljs-keyword">catch</span> (err) { <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'error fetching images'</span>) }
  }
....
<span class="hljs-comment">//Setting image to Image component</span>
            &lt;Image
              width=<span class="hljs-string">"unset"</span>
              height=<span class="hljs-string">"408px"</span>
              display=<span class="hljs-string">"block"</span>
              gap=<span class="hljs-string">"unset"</span>
              alignItems=<span class="hljs-string">"unset"</span>
              justifyContent=<span class="hljs-string">"unset"</span>
              shrink=<span class="hljs-string">"0"</span>
              alignSelf=<span class="hljs-string">"stretch"</span>
              position=<span class="hljs-string">"relative"</span>
              padding=<span class="hljs-string">"0px 0px 0px 0px"</span>
              objectFit=<span class="hljs-string">"cover"</span>
              src={imageThumbnail}
              {...getOverrideProps(overrides, <span class="hljs-string">"image"</span>)}
            &gt;&lt;/Image&gt;
....
}
</code></pre>
<p>After making the same modification in <code>FeatureCard</code> and <code>ServiceCard</code>, run the app locally with <code>npm start</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681567088777/a23b3f22-5960-45b0-88de-fa797d15f6cd.png" alt="FeatureCard collection rendering on the front-end" class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681567227116/0f01193b-1c17-4464-8ea2-460f94dade82.gif" alt="InfoCardDetails rendering with images and data from InfoCard model" class="image--center mx-auto" /></p>
<p>The app renders all the images after they are downloaded from Amplify Storage.</p>
<h2 id="heading-wrap-up">Wrap-up</h2>
<p>Building applications with Amplify Studio has certainly amplified the developer experience of defining, provisioning, and managing different AWS services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681568244210/278e905b-24eb-4126-b97e-5b330a8b3c12.png" alt="Managing AWS resources with Amplify Studio" class="image--center mx-auto" /></p>
<p>Doing all of this manually would require a full-stack developer to have in-depth knowledge of each of those underlying AWS services.</p>
]]></content:encoded></item><item><title><![CDATA[Amazon EventBridge: The missing piece to your app]]></title><description><![CDATA[Amazon EventBridge is a Serverless AWS Service that enables building event-driven applications with seamless integrations with multiple AWS Services and AWS Partner SaaS applications.
Launched in 2019, it was focused on building pathways for SaaS Pro...]]></description><link>https://blog.theserverlessterminal.com/amazon-eventbridge-the-missing-piece-to-your-app</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/amazon-eventbridge-the-missing-piece-to-your-app</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[event-driven-architecture]]></category><category><![CDATA[AWS EventBridge]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Sun, 26 Feb 2023 17:33:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1677422408715/4989e056-0845-4411-9ccc-6747e38f30b1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://aws.amazon.com/eventbridge/">Amazon EventBridge</a> is a Serverless AWS Service that enables building event-driven applications with seamless integrations with multiple AWS Services and AWS Partner SaaS applications.</p>
<p>Launched in 2019, it initially focused on building pathways for SaaS product events to be consumed, routing those events to AWS Lambda functions. It started with event buses and CloudWatch Events, which were previously part of Amazon CloudWatch. Since then, a lot of powerful features have been added to EventBridge for building event-driven Serverless apps.</p>
<h2 id="heading-eventbridge-buses">EventBridge Buses</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677423344321/1cc97c6a-a534-43e6-92d6-808167e8f540.png" alt="EventBridge buses in an architecture" class="image--center mx-auto" /></p>
<p><em>Event buses</em> are the communication channels that receive events from different <em>event sources</em>. A source could be an AWS service such as Amazon S3 posting events whenever an action happens in an S3 bucket, your own code using the <em>EventBridge APIs</em> to <code>send</code> events to the <em>default</em> or a <em>custom bus</em>, or a partner SaaS product posting events to a defined <em>SaaS event bus</em>. Once events are on the bus, <em>event rules</em> route each event to the right destination; these rules have powerful filtering capabilities that are applied to the event payloads.</p>
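<p>For the programmatic path, an event sent via the <code>PutEvents</code> API is just a small structured entry. The sketch below builds one such entry in JavaScript; the bus name, source, and detail payload are hypothetical examples.</p>

```javascript
// Sketch: shape of a single entry for the EventBridge PutEvents API.
// Bus name, source, and detail payload are hypothetical examples.
function buildOrderEvent(orderId) {
  return {
    EventBusName: "custom-app-bus",      // custom bus receiving the event
    Source: "app.orders",                // used by event rules to match events
    DetailType: "OrderCreated",
    Detail: JSON.stringify({ orderId }), // Detail must be a JSON string
  };
}
```

<p>An entry like this would be passed in the <code>Entries</code> array of a <code>PutEvents</code> call.</p>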
<p>Some scenarios where you would use it in your application:</p>
<ul>
<li><p>An S3 <code>putObject</code> event could be routed to an AWS Lambda function for further computation or to AWS Step Functions to kick-start a state machine execution.</p>
</li>
<li><p>An S3 <code>deleteObject</code> event could invoke an API destination, using the <em>Input Transformer</em> to restructure the event to the API's specifications.</p>
</li>
</ul>
<p>Note: these rules are specific to event buses, and there can be multiple rules in a bus.</p>
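<p>The first scenario above could be matched by a rule with an event pattern along these lines (a sketch; the bucket name is a placeholder, and the exact <code>detail-type</code> depends on how the S3 events are delivered to EventBridge):</p>

```json
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["my-sample-bucket"]
    }
  }
}
```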
<h2 id="heading-eventbridge-pipes">EventBridge Pipes</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677427885345/1cea506c-d246-4a84-a26a-8e193a30c866.png" alt="Sample EventBridge Pipe" class="image--center mx-auto" /></p>
<p><em>EventBridge Pipes</em> enables direct point-to-point integrations from defined event sources such as Amazon DynamoDB Streams, Amazon Kinesis, Amazon SQS, and more, with advanced EventBridge filtering applied to the event payloads.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"eventName"</span>: [{
    <span class="hljs-attr">"prefix"</span>: <span class="hljs-string">"INSERT"</span>
  }]
}
</code></pre>
<p>The filtered events can then be enriched before being delivered to the target AWS service. An <em>Input Transformer</em> is available at the <em>enrichment</em> and <em>target</em> stages, which can transform the event into the payload structure the AWS service expects and also define the output path.</p>
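<p>As a sketch, an <em>Input Transformer</em> configuration has two parts: paths plucked from the incoming event and a template that rebuilds the payload. The attribute names below are hypothetical:</p>

```json
{
  "InputPathsMap": {
    "id": "$.dynamodb.NewImage.id.S",
    "status": "$.dynamodb.NewImage.status.S"
  },
  "InputTemplate": "{\"itemId\": \"<id>\", \"itemStatus\": \"<status>\"}"
}
```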
<p>EventBridge Pipes opens up direct service integrations with advanced filtering and enrichment:</p>
<ul>
<li><p>An EventBridge Pipe can consume DynamoDB Streams events for any action on a DynamoDB table, use filtering to process only <code>eventName: INSERT</code> events for newly created items, and invoke a state machine execution.</p>
</li>
<li><p>New messages from Amazon MQ could be used to start a state machine execution.</p>
</li>
</ul>
<h2 id="heading-eventbridge-scheduler">EventBridge Scheduler</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677429115319/87f73498-7470-4991-9d3a-6c85b34a1bc9.png" alt="A sample EventBridge Scheduler with cron job" class="image--center mx-auto" /></p>
<p><em>EventBridge Scheduler</em> is a way to schedule tasks that need to happen from time to time: recurring schedules with cron expressions, time windows, or one-time schedules at a specific date and time. The events from the scheduler are delivered to a target AWS service with the event payload set in the schedule.</p>
<ul>
<li><p>Tasks that have to be performed every day at a specific time, like 1 am (<code>cron(0 1 * * ? *)</code>).</p>
</li>
<li><p>Workloads that generate billing every month where the scheduler can invoke a Lambda function to generate bills.</p>
</li>
</ul>
<h2 id="heading-in-a-nutshell">In a nutshell</h2>
<p>Amazon EventBridge can be a crucial component of your Serverless applications, designed to deliver the right events effectively to their destinations. For event-driven architectures that consume a multitude of events, EventBridge scales easily, and events can be narrowed down with its advanced filtering capabilities. EventBridge eliminates the glue-code Lambda functions that would otherwise carry additional logic to validate events against the patterns the application needs for further processing. EventBridge Pipes enhances your Serverless workload with direct service integrations, which benefits the application's performance and scalability.</p>
<p>In any application, the design might use multiple EventBridge features: specific tasks can be scheduled with the <em>EventBridge Scheduler</em>, events can be sent to an <em>event bus</em> whose <em>event rules</em> route them to the designated targets, or point-to-point integrations with AWS services as producer and consumer can be built for smooth orchestration.</p>
]]></content:encoded></item><item><title><![CDATA[Building Serverless apps with more configurations]]></title><description><![CDATA[Applications built on the Serverless stack have existed for nearly a decade with AWS Lambda functions being launched in 2014. And ever since, the association of a Serverless application is always with a Lambda function.

In this blog post, we will lo...]]></description><link>https://blog.theserverlessterminal.com/building-serverless-apps-with-more-configurations</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/building-serverless-apps-with-more-configurations</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Applications]]></category><category><![CDATA[stepfunction]]></category><category><![CDATA[AWS EventBridge]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Sat, 11 Feb 2023 18:19:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676106557912/6f1811b4-23db-4acd-bdd2-fdcb1b9c07cf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Applications built on the Serverless stack have existed for nearly a decade with <a target="_blank" href="https://aws.amazon.com/lambda/">AWS Lambda functions</a> being launched in 2014. And ever since, the association of a Serverless application is always with a Lambda function.</p>
<p><img src="https://lh4.googleusercontent.com/9gm9orKCYaUaeXPo1LwGtHd6gVWmvpLgmmzf4BG4BYaJDcWSocIWpUlgB_D_ygfwhPDswDMhv6slA2pgKb_hozhfsdoITEpqStW3vgdCtlvXErZ4B_0upnKkocNayhoodg-jNZjX9zKS1fA=s2048" alt class="image--center mx-auto" /></p>
<p>In this blog post, we will look into how Serverless apps can be built with just configurations and leveraging the features of AWS Services which can integrate with different services.</p>
<h2 id="heading-why-do-we-need-direct-integrations">Why do we need direct integrations?</h2>
<p>The AWS Services with direct integrations capabilities improve the architectures by removing the <em>glue code</em> needed.</p>
<p>Think about building a REST API with <a target="_blank" href="https://aws.amazon.com/api-gateway/">AWS API Gateway</a> that returns a particular set of data. Typically, we would have an API Gateway stage in which a <code>GET</code> method invokes a Lambda function, which in turn uses <code>query</code> or <code>scan</code> to search the items in DynamoDB. The response from the DynamoDB SDK is then returned to API Gateway by the Lambda function.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676110093802/d6bab7fc-c0f7-4bf7-ab8e-d3117c0c88e6.png" alt="Architecture of API Gateway + Lambda Fn + DynamoDB table" class="image--center mx-auto" /></p>
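<p>The glue code in that Lambda function typically does little more than assemble the SDK call. A minimal sketch of the <code>query</code> parameters it would build (table, index, and attribute names are illustrative):</p>

```javascript
// Sketch: DynamoDB Query parameters a glue-code Lambda would pass to the SDK.
// Table, index, and attribute names are illustrative.
function buildQueryParams(artist) {
  return {
    TableName: "Music",
    IndexName: "Artist-Index",
    KeyConditionExpression: "artist = :v1",
    ExpressionAttributeValues: { ":v1": { S: artist } },
  };
}
```

<p>Direct integration moves exactly this kind of request-shaping out of a function and into configuration.</p>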
<p>Now, let's understand how direct integrations with this would help.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676113860200/4223d6b3-e74b-42d1-818f-d68e5d757624.png" alt="Upside of direct integrations" class="image--center mx-auto" /></p>
<h3 id="heading-cold-starts">Cold Starts</h3>
<p>A well-known downside of having a Lambda function in the path is cold starts, which we have to be mindful of and work around. With a direct integration between AWS services, there is no Lambda function acting as the middle-man or glue code, so we no longer have to worry about cold starts.</p>
<h3 id="heading-latency">Latency</h3>
<p>By removing an additional component from the architecture, we reduce the latency of routing requests through a Lambda function and returning the results. Although this is measured in milliseconds, it adds up in a production environment handling hundreds or thousands of requests at any given time.</p>
<h3 id="heading-scalability">Scalability</h3>
<p>Would it be highly scalable when concurrent requests come in? With Lambda, the other concern is how to handle concurrency and ensure the application remains scalable. With a direct integration, this responsibility shifts to the cloud provider, and the AWS service ensures your application can scale better.</p>
<h2 id="heading-direct-integrations-with-aws-services">Direct integrations with AWS Services</h2>
<h3 id="heading-aws-step-functions">AWS Step Functions</h3>
<p>For Serverless applications that need orchestration, a dedicated workflow that integrates with multiple AWS services is a huge win! Since AWS re:Invent 2021, when Step Functions announced support for AWS SDK-based integrations, over 300 API actions have been supported.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676118266752/fa681c17-e632-444b-a7e0-1390f9a5b5e1.png" alt="AWS Step Functions with SDK support" class="image--center mx-auto" /></p>
<p>SDK-based integrations open up a lot of possibilities with AWS Step Functions; there is a <a target="_blank" href="https://blog.theserverlessterminal.com/series/aws-step-functions">blog post series</a> about them.</p>
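<p>As an illustration, an SDK-based <em>Task</em> state in Amazon States Language calls a service action directly through an <code>aws-sdk</code> resource ARN. The table and key names below are hypothetical:</p>

```json
{
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:dynamodb:getItem",
  "Parameters": {
    "TableName": "Music",
    "Key": {
      "id": { "S.$": "$.id" }
    }
  },
  "End": true
}
```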
<h3 id="heading-amazon-eventbridge">Amazon EventBridge</h3>
<p>Amazon EventBridge plays a vital role in event-driven, choreography-based Serverless architectures. EventBridge supports integrations with different <code>event sources</code> and <code>destinations</code>, and also lets you configure HTTP endpoints as <code>API destinations</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676119765020/b8fe7778-2c76-4825-a155-9ffdd3df2a07.png" alt="Amazon EventBridge direct service integrations" class="image--center mx-auto" /></p>
<p>Not only can AWS services post events to an <code>event bus</code>, but with the new <a target="_blank" href="https://aws.amazon.com/eventbridge/pipes/">EventBridge Pipes</a>, it is also possible to build configurations that integrate one service directly with another.</p>
<p>For instance, DynamoDB Streams could previously only trigger a Lambda function, but now with EventBridge Pipes you can build integrations where an event originating from DynamoDB Streams is targeted to invoke a Step Functions state machine execution.</p>
<h3 id="heading-amazon-api-gateway">Amazon API Gateway</h3>
<p>API Gateway can use the Velocity Template Language (VTL) for request and response mapping templates and integrate directly with services like DynamoDB, Step Functions, Kinesis, and more.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676131485450/0f9a31eb-41e8-4ac6-9a3b-4ea3258deb99.png" alt="AWS API Gateway with direct integrations" class="image--center mx-auto" /></p>
<p>Like the example mentioned above, API Gateway can directly integrate with DynamoDB. Take a look at the <a target="_blank" href="https://serverlessland.com/patterns/apigw-dynamodb">pattern from ServerlessLand</a>. The snippet below shows how you can integrate API Gateway to <code>query</code> DynamoDB.</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">MusicArtistMethodGet:</span>
    <span class="hljs-attr">Type:</span> <span class="hljs-string">'AWS::ApiGateway::Method'</span>
    <span class="hljs-attr">Properties:</span>
      <span class="hljs-attr">RestApiId:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">Api</span>
      <span class="hljs-attr">ResourceId:</span> <span class="hljs-type">!Ref</span> <span class="hljs-string">MusicArtistResource</span>
      <span class="hljs-attr">HttpMethod:</span> <span class="hljs-string">GET</span>
      <span class="hljs-attr">ApiKeyRequired:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">AuthorizationType:</span> <span class="hljs-string">NONE</span>
      <span class="hljs-attr">RequestParameters:</span>
        <span class="hljs-attr">method.request.path.artist:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">Integration:</span>
        <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS</span>
        <span class="hljs-attr">Credentials:</span> <span class="hljs-type">!GetAtt</span> <span class="hljs-string">APIGatewayRole.Arn</span>
        <span class="hljs-attr">IntegrationHttpMethod:</span> <span class="hljs-string">POST</span>
        <span class="hljs-attr">Uri:</span> <span class="hljs-type">!Sub</span> <span class="hljs-string">'arn:aws:apigateway:${AWS::Region}:dynamodb:action/Query'</span>
        <span class="hljs-attr">PassthroughBehavior:</span> <span class="hljs-string">WHEN_NO_TEMPLATES</span>
        <span class="hljs-attr">RequestParameters:</span>
          <span class="hljs-attr">integration.request.path.artist:</span> <span class="hljs-string">method.request.path.artist</span>
        <span class="hljs-attr">RequestTemplates:</span>
          <span class="hljs-attr">application/json:</span> <span class="hljs-string">"{\"TableName\":\"Music\",\"IndexName\":\"Artist-Index\",\"KeyConditionExpression\":\"artist=:v1\",\"ExpressionAttributeValues\":{\":v1\":{\"S\":\"$util.urlDecode($input.params('artist'))\"}}}"</span>
        <span class="hljs-attr">IntegrationResponses:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">StatusCode:</span> <span class="hljs-string">'200'</span>
            <span class="hljs-attr">ResponseTemplates:</span>
              <span class="hljs-attr">application/json:</span> <span class="hljs-string">"#set($inputRoot = $input.path('$'))\n{\n\t\"music\": [\n\t\t#foreach($field in $inputRoot.Items) {\n\t\t\t\"id\": \"$field.id.S\",\n\t\t\t\"artist\": \"$field.artist.S\",\n\t\t\t\"album\": \"$field.album.S\"\n\t\t}#if($foreach.hasNext),#end\n\t\t#end\n\t]\n}"</span>
      <span class="hljs-attr">MethodResponses:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">StatusCode:</span> <span class="hljs-string">'200'</span>
</code></pre>
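<p>The VTL response template above is compact but dense. For illustration only, the same transformation expressed in plain JavaScript looks like this:</p>

```javascript
// For illustration: the mapping the VTL response template performs,
// expressed in plain JavaScript. Input is a DynamoDB Query result.
function mapQueryResult(result) {
  return {
    music: result.Items.map((field) => ({
      id: field.id.S,
      artist: field.artist.S,
      album: field.album.S,
    })),
  };
}
```

<p>With the VTL template in place, this logic runs inside API Gateway itself, with no Lambda function required.</p>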
<h3 id="heading-aws-appsync">AWS AppSync</h3>
<p>AppSync being a Serverless GraphQL offering, its <code>queries</code>, <code>mutations</code>, and <code>subscriptions</code> can use data sources such as DynamoDB, Aurora, OpenSearch, or HTTP endpoints, in addition to the expected AWS Lambda functions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676132514833/f42bd252-c681-400c-a258-64ad9285dbe9.png" alt="AppSync integrations with VTL resolvers" class="image--center mx-auto" /></p>
<p>These direct service integrations can leverage Velocity Template Language (VTL) or JavaScript resolvers. In addition to single-function resolvers, you can create pipeline resolvers that integrate not just one but multiple AWS services.</p>
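<p>A JavaScript resolver for a DynamoDB data source is a pair of <code>request</code>/<code>response</code> handlers. The sketch below shows the general shape; in a deployed resolver both functions are exported, and the field and argument names here are hypothetical:</p>

```javascript
// Sketch of an AppSync JavaScript resolver pair for a DynamoDB data source.
// In a deployed resolver these functions are exported; names are hypothetical.
function request(ctx) {
  // Build a DynamoDB GetItem request from the GraphQL arguments
  return {
    operation: "GetItem",
    key: { id: { S: ctx.args.id } },
  };
}

function response(ctx) {
  // Return the item fetched by the data source as the field result
  return ctx.result;
}
```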
<h2 id="heading-where-to-configure">Where to configure</h2>
<p>These direct service configurations make the most sense when you work with these services through Infrastructure as Code (IaC) or Infrastructure from Code (IfC) approaches. Like the API Gateway example showcasing a <code>YAML</code> template, you can use AWS SAM to define the resources and then the integrations with the resolver info.</p>
<p>Another, simpler way to integrate services with AWS Step Functions is the Workflow Studio in the AWS Console, where you can define a task along with its SDK integration by specifying the parameters and results.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676133110258/8821b068-880a-44ad-bd63-8b20ba877456.png" alt class="image--center mx-auto" /></p>
<p>This generates a JSON structure for the state machine, which can be imported into your IaC project.</p>
<p>Similar to Workflow Studio, you can design Serverless apps in AWS Application Composer, which generates a SAM template. It currently supports EventBridge integrations with different services.</p>
<h2 id="heading-sometimes-you-may-need-more-than-a-configuration">Sometimes you may need more than a configuration</h2>
<p>Before wrapping up: yes! Sometimes you may need more than a configuration, where you define your own Lambda functions or glue code to process the data from these services efficiently. The AWS services above support direct integration with many other AWS services, and there are some, like Amazon SNS and Amazon SQS, that can use configurations to trigger a Lambda function.</p>
]]></content:encoded></item><item><title><![CDATA[Amazon Inspector can now scan AWS Lambda Functions]]></title><description><![CDATA[Amazon Inspector, a service focused on automated vulnerability scanner that continuously scans AWS workloads for vulnerabilities is now supporting scanning for AWS Lambda functions.
Amazon Inspector supports scanning of AWS Lambda functions and Lambd...]]></description><link>https://blog.theserverlessterminal.com/amazon-inspector-can-now-scan-aws-lambda-functions</link><guid isPermaLink="true">https://blog.theserverlessterminal.com/amazon-inspector-can-now-scan-aws-lambda-functions</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[serverless]]></category><category><![CDATA[aws-inspector]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Jones Zachariah Noel N]]></dc:creator><pubDate>Wed, 30 Nov 2022 12:21:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176410019/d0a16da0-7518-4550-ad62-de760e84b444.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://aws.amazon.com/inspector/">Amazon Inspector</a>, a service focused on automated vulnerability scanner that continuously scans AWS workloads for vulnerabilities is <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2022/11/aws-amazon-inspector-support-aws-lambda-functions/">now supporting scanning for AWS Lambda functions</a>.</p>
<p>Amazon Inspector supports scanning of AWS Lambda functions and Lambda layers with Java, NodeJS and Python runtimes.</p>
<h3 id="heading-need-for-vulnerability-checks">Need for vulnerability checks</h3>
<p>Oftentimes, our code depends on many packages installed via different package managers, and these are prone to security leaks. Although updating to a new version can resolve an issue, you might still have dependencies that are prone to vulnerabilities. The best way to address this is regular scanning of your codebase to ensure there are no serious issues.</p>
<p>For Serverless specifically, until now we had to depend on third-party tools to scan, but now it's possible with Amazon Inspector.</p>
<h3 id="heading-enabling-inspector">Enabling Inspector</h3>
<p>First off, you would have to enable Inspector for your AWS Account.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176135036/73472bed-cfcf-4d26-9ee1-ce0ff195e774.png" alt="Enabling Inspector for your account" /></p>
<h3 id="heading-your-first-scan">Your first scan</h3>
<p>Once enabled, <a target="_blank" href="https://aws.amazon.com/inspector/">Amazon Inspector</a> needs a few minutes to scan your resources across Amazon EC2 instances, Amazon ECR images, and now AWS Lambda functions and Lambda layers.</p>
<p>After <a target="_blank" href="https://aws.amazon.com/inspector/">Amazon Inspector</a> has finished scanning, you can view the report on the Inspector dashboard.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176137518/f9215346-9315-44bd-ab46-201446ddf91e.png" alt="Amazon Inspector dashboard of all vulnerabilities" /></p>
<p>[Fun Fact] <em>As you can see, I don't have a single EC2 instance running on this AWS Account</em>.</p>
<h3 id="heading-scanned-findings">Scanned findings</h3>
<p>Inspector found that <em>9 of my Lambda functions</em> had vulnerabilities at critical, high, and medium severity levels.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176139769/e06c64f1-9e07-4e03-b632-868d0ebf695b.png" alt="Inspector findings by Lambda functions" /></p>
<p>If you click on one of the functions, you can find the summary of vulnerabilities in that specific AWS Lambda function, or of vulnerabilities introduced by an AWS Lambda layer it uses.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176142188/8ec82027-013e-4ae7-ac8f-a54d5035758b.png" alt="Summary for a Lambda function" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176143629/8e094ce9-2264-495c-b7b9-5532b4b80267.png" alt="Findings for a Lambda function" /></p>
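<p>The same findings can be pulled programmatically. A sketch using the <code>inspector2</code> boto3 client's <code>list_findings</code> paginator, filtered down to Lambda function findings:</p>
<pre><code class="lang-python">def lambda_findings_filter() -> dict:
    """FilterCriteria restricting results to Lambda function findings."""
    return {
        "resourceType": [
            {"comparison": "EQUALS", "value": "AWS_LAMBDA_FUNCTION"}
        ]
    }

def list_lambda_findings(client) -> list:
    """Collect all Lambda findings, paginating through the results."""
    findings = []
    paginator = client.get_paginator("list_findings")
    for page in paginator.paginate(filterCriteria=lambda_findings_filter()):
        findings.extend(page["findings"])
    return findings

# Usage (requires AWS credentials with inspector2:ListFindings):
# import boto3
# client = boto3.client("inspector2")
# for finding in list_lambda_findings(client):
#     print(finding["severity"], finding["title"])
</code></pre>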
<h3 id="heading-lets-dive-into-the-finding">Let's dive into the finding</h3>
<p>One of the vulnerabilities is with the <a target="_blank" href="https://www.npmjs.com/package/axios">Axios NPM package</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176145769/0b6c8153-4fdf-4cef-852a-cee148048768.png" alt="Vulnerability with Axios" /></p>
<p>This also gives details about the axios package, along with the affected and fixed versions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176148127/546c6084-0fea-4bc8-a01c-153b5f030d54.png" alt="Affected packages" /></p>
<p>Inspector provides the complete details of the vulnerability along with a link to the National Vulnerability Database (NVD) report.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176150260/d558e57d-b8f8-4839-a573-d7d55bf19844.png" alt="Vulnerability details" /></p>
<p>Along with the details, you can also find how to fix it with the available remediation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176151757/6cb0b36c-b7b2-458e-b860-7741cf30df65.png" alt="Remedy to fix the vulnerability" /></p>
<p>In this case, it's updating the axios version.</p>
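<p>The affected/fixed version pair from the finding boils down to a simple semver comparison: if your installed version is below the fixed one, you need to update. A small sketch (the version numbers here are placeholders for illustration; take the real ones from the finding):</p>
<pre><code class="lang-python">def is_vulnerable(installed: str, fixed: str) -> bool:
    """True if the installed semver version is below the fixed version."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) < parse(fixed)

# Placeholder versions -- check the Inspector finding for the real ones
print(is_vulnerable("0.21.1", "0.21.4"))  # True: still vulnerable, update needed
print(is_vulnerable("1.3.2", "0.21.4"))   # False: already at or above the fix
</code></pre>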
<p>To help you judge the severity of the vulnerability, both the National Vulnerability Database (NVD) score and the Inspector score are available.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176154101/68173513-c984-47a0-a8ed-c93f7d4cf8d2.png" alt="Inspector score" /></p>
<h3 id="heading-pricing">Pricing</h3>
<p>Amazon Inspector comes with a 15-day free trial. For Lambda scans alone, there is a monthly charge based on the average number of Lambda functions scanned per month, and the price is prorated based on the total Inspector coverage hours for the month.</p>
<p>More details on <a target="_blank" href="https://aws.amazon.com/inspector/pricing">Amazon Inspector Pricing</a>.</p>
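<p>Proration works out to a simple fraction-of-the-month calculation. A sketch with a made-up per-function rate (the real rate is on the pricing page linked above):</p>
<pre><code class="lang-python"># Placeholder rate for illustration only -- see the Amazon Inspector pricing page
HYPOTHETICAL_RATE_PER_FUNCTION = 0.30  # USD per Lambda function per month
HOURS_IN_MONTH = 730                   # approximate average

def monthly_lambda_scan_cost(avg_functions: float, coverage_hours: float) -> float:
    """Prorate the monthly charge by the fraction of the month covered."""
    return avg_functions * HYPOTHETICAL_RATE_PER_FUNCTION * (coverage_hours / HOURS_IN_MONTH)

# 10 functions covered for half the month costs half the full monthly charge
print(round(monthly_lambda_scan_cost(10, 365), 2))
</code></pre>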
<h3 id="heading-action-time">Action time!</h3>
<p>Now it's time to scan your Lambda functions and layers with Amazon Inspector.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675176155497/f4141230-a4d1-4879-a92a-3122d46dd543.gif" alt="Time for scans now" class="image--center mx-auto" /></p>
]]></content:encoded></item></channel></rss>