<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Cloudnloud Tech Community]]></title><description><![CDATA[The CloudnLoud community is a non-profit open source tech community, volunteer-run event presented by members of the CloudnLoud Community.]]></description><link>https://blog.cloudnloud.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1664741694105/byc2SFBB8.png</url><title>Cloudnloud Tech Community</title><link>https://blog.cloudnloud.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 10:59:57 GMT</lastBuildDate><atom:link href="https://blog.cloudnloud.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[API Security]]></title><description><![CDATA[API (Application Programming Interface) security is critical for protecting sensitive data and maintaining the integrity of systems and applications. APIs are used to connect different systems and applications, and as a result, they can provide a gat...]]></description><link>https://blog.cloudnloud.com/api-security</link><guid isPermaLink="true">https://blog.cloudnloud.com/api-security</guid><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Security]]></category><category><![CDATA[#cybersecurity]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Karthikeyan Sadayandi]]></dc:creator><pubDate>Wed, 01 Feb 2023 07:25:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1675233581504/bc2fdc9e-5342-41ea-b906-a337ee6e57ee.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://thehackernews.com/new-images/img/b/R29vZ2xl/AVvXsEjvNiCRORHFOKA1jju16HMN8q9BABEjLs3v8LckkI_0tKNGG5ZdsTDfVua9Ze7RChqKnb9ahd1KSPMSG-h82T4ocry5BPJl0d9oXi0CmW0LDFxTx1g5Q6VOsLSB-6m7z-5LqxYLABdTH5NChvBJ9NK-yoBUyPpX5aw2cmqIF9QGqvOoKnCFUoyMJQsF/w0/api.jpg" alt="Top 5 API Security Myths That Are Crushing Your Business" /></p>
<p>API (Application Programming Interface) security is critical for protecting sensitive data and maintaining the integrity of systems and applications. APIs are used to connect different systems and applications, and as a result, they can provide a gateway for attackers to gain access to sensitive information.</p>
<p>API security involves implementing measures to protect the integrity and confidentiality of data that is transmitted through APIs. This includes securing the API endpoint, authenticating and authorizing API requests, and encrypting data in transit.</p>
<p><img src="https://www.indusface.com/wp-content/uploads/2021/12/Application-and-API-Attack-Patterns.png" alt="Top 6 API Security Best Practices | Indusface Blog" /></p>
<p>API security is important for several reasons:</p>
<ol>
<li><p>Protection of sensitive data: APIs are often used to access and transmit sensitive information, such as financial data, personal information, and confidential business information. Without proper security measures in place, this data can be vulnerable to theft, alteration, or unauthorized access.</p>
</li>
<li><p>Compliance: Many industries have strict regulations and compliance requirements, such as HIPAA and PCI-DSS, that require organizations to implement robust security measures to protect sensitive data.</p>
</li>
<li><p>Maintaining the integrity of systems and applications: Unauthorized access to APIs can allow attackers to modify or disrupt the functionality of systems and applications. This can result in service disruption, data loss, and reputational damage.</p>
</li>
<li><p>Preventing identity theft: APIs are also used to authenticate users and authorize access to resources. Insecure APIs can allow attackers to steal the identities of legitimate users and gain unauthorized access to sensitive information.</p>
</li>
</ol>
<p>To ensure API security, organizations should implement a comprehensive security strategy that includes:</p>
<ul>
<li><p>Strong authentication and authorization mechanisms</p>
</li>
<li><p>Encryption of data in transit</p>
</li>
<li><p>Regularly testing and monitoring APIs for vulnerabilities</p>
</li>
<li><p>Use of API gateways, firewalls, and web application firewalls</p>
</li>
<li><p>Regularly updating and patching systems and applications</p>
</li>
<li><p>Regular training for employees on API security best practices and on how to identify and report suspicious activity</p>
</li>
</ul>
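<p>As a minimal sketch of two of these measures, encryption in transit and token-based authorization, the <code>curl</code> calls below are illustrative only; the endpoint and token are hypothetical placeholders:</p>
<pre><code class="lang-bash"># Enforce TLS and send a bearer token (api.example.com and the token
# environment variable are hypothetical placeholders)
curl --cacert /etc/ssl/certs/ca-bundle.crt \
     -H "Authorization: Bearer $API_TOKEN" \
     https://api.example.com/v1/orders

# Refuse any fallback to plain HTTP or old TLS versions during testing
curl --proto '=https' --tlsv1.2 https://api.example.com/v1/health
</code></pre>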
<p>In conclusion, API security is a critical component of an organization's overall security strategy. It is important to implement robust security measures to protect sensitive information and maintain the integrity of systems and applications. By staying informed and taking proactive measures, organizations can reduce the risk of falling victim to API-based attacks.</p>
<p><a target="_blank" href="https://www.linkedin.com/in/herbie36/">Karthikeyan S</a></p>
<p><em>Community and Social Footprints :</em></p>
<p>- <a target="_blank" href="https://github.com/cloudnloud"><em>GitHub</em></a></p>
<p>- <a target="_blank" href="https://twitter.com/cloudnloud"><em>Twitter</em></a></p>
<p>- <a target="_blank" href="https://www.youtube.com/c/CloudnLoud"><em>YouTube Cloud DevOps Free Trainings</em></a></p>
<p>- <a target="_blank" href="https://www.linkedin.com/company/cloudnloud/"><em>Linkedin Page</em></a></p>
<p>- <a target="_blank" href="https://www.linkedin.com/groups/9124202/"><em>Linkedin Group</em></a></p>
<p>- <a target="_blank" href="https://discord.com/invite/vbjRQGVhuF"><em>Discord Channel</em></a></p>
<p>- <a target="_blank" href="https://dev.to/cloudnloud"><em>Dev</em></a></p>
]]></content:encoded></item><item><title><![CDATA[Well spent weekend @ AWS Reinvent 2022 Recap]]></title><description><![CDATA[Well after months of deep slumber, away from social media networks, I had to return to Pune and adapt to hybrid working.
I was browsing through meetup.com and was intrigued by one event from AWS user group Pune. It is AWS Reinvent 2022 -Recap on 21st...]]></description><link>https://blog.cloudnloud.com/well-spent-weekend-aws-reinvent-2022-recap</link><guid isPermaLink="true">https://blog.cloudnloud.com/well-spent-weekend-aws-reinvent-2022-recap</guid><category><![CDATA[AWS]]></category><category><![CDATA[Meetup]]></category><category><![CDATA[aws reinvent 2022]]></category><category><![CDATA[networking]]></category><category><![CDATA[pune]]></category><dc:creator><![CDATA[Padmini Tadikonda]]></dc:creator><pubDate>Mon, 23 Jan 2023 21:20:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1674506410894/eb97bc55-c070-4ffd-8151-390f53f4a76b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Well after months of deep slumber, away from social media networks, I had to return to Pune and adapt to hybrid working.</p>
<p>I was browsing through meetup.com and was intrigued by one event from the AWS User Group Pune: AWS Reinvent 2022 Recap on 21st Jan. Having missed the actual event in December 2022, I felt this was an opportunity to get a sneak peek into what else AWS has introduced to us. I immediately RSVP'd, and here is how it went!!</p>
<p>EPAM Systems Pune hosted the event with 200+ participants and organized it very well. I got to meet and share a conversation with Sr. Solutions Architect Mayur Bhagia and with AWS community builders and professionals from the city.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674506717731/d83e0967-dae0-4e96-94e0-f14a59f95a66.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://www.linkedin.com/in/dheeraj-choudhary/">Dheeraj Chaudhary</a> kickstarted the session with the <strong>AWS application composer</strong> which is still in preview. this service is a visual builder and makes it easier to design a  serverless application architecture with options like dragging, grouping and connecting various AWS services. It is like a Canvas where you can drag and drop, group and configure various AWS services.</p>
<p>Then came a question <strong>why did AWS introduce application composer when  AWS cloud formation is already in place?</strong></p>
<p>AWS cloud formation is difficult and complex for beginners to use. Hence AWS introduced the service to simplify IaaS. An attempt towards ‘No-Code’ IaaS.</p>
<p><strong>Amazon CodeWhisperer (in preview):</strong></p>
<p>This is gonna be a developer's best friend. This service recommends inbuilt functions and gives suggestions based on a developer's style of coding, for which AWS uses AI/ML in the backend. It supports Java, Python, JavaScript, C#, and TypeScript. The service is now available in VS Code, Cloud9, and JetBrains IDEs. One needs to install the AWS Toolkit, integrate it with AWS IAM Identity Center, and log in to be able to use this feature.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674501816578/a2e47fd9-9547-4580-b503-b89c7b4f6fb9.jpeg" alt class="image--center mx-auto" /></p>
<p>Then, Mayur Bhagia introduced us to many other services from AWS.</p>
<p><strong>Amazon ECS Service Connect (GA)</strong> simplifies service discovery, connectivity, and traffic observability for ECS. This service is in general availability now and can be used in production applications.</p>
<p><strong>Amazon RDS Blue/Green Deployments (GA):</strong> This feature lets us make DB-related changes like upgrades or modifying parameters or schema in a staging environment without impacting production, and then cut the staging environment over into production with minimal downtime (usually less than 60s).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674501895003/013c3c49-95f9-4261-bc57-c31cc0075774.jpeg" alt class="image--center mx-auto" /></p>
<p>Mayur elaborately discussed AWS Elastic Disaster Recovery Automated Failback (GA), Amazon CloudWatch Internet Monitor (preview), Amazon CloudWatch Logs data protection (GA), S3 Multi-Region Access Points failover controls (GA), and Amazon Route 53 ARC zonal shift (preview).</p>
<p>He also explained in detail:</p>
<ul>
<li><p>The difference between blue/green deployment and canary deployment</p>
</li>
<li><p>How an on-prem application/data is migrated onto the cloud</p>
</li>
<li><p>The difference between a snapshot and an AMI</p>
</li>
<li><p>The difference between CloudWatch and CloudTrail</p>
</li>
<li><p>HAR: the HTTP Archive format</p>
</li>
<li><p>RUM : Real User Monitoring (Lol..I knew only one RUM.. good to know another meaning to it :P)</p>
</li>
<li><p>Synthetics monitoring</p>
</li>
</ul>
<p><a target="_blank" href="https://www.linkedin.com/in/somesh-srivastava-5a746113/">Somesh Srivastava</a>, spoke on networking related topics like Amazon VPC Lattice(Preview) and AWS Verified Access( Preview)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674503620612/d82c4fb0-d218-4144-812d-0056bf0bd4a9.jpeg" alt class="image--center mx-auto" /></p>
<p>AWS VPC Lattice is not an altogether new service. It is a construct within the VPC service and is integrated with IAM. He gave a demo on how to make use of this service.</p>
<p>Somesh also explained how cross-zone load balancing works and its different use cases, and addressed queries from the group.</p>
<p>As the session came to an end, it was quiz time to check how much we had grasped from the event, and guess what! I stood second and was given some nice cute goodies...Lol!!!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674510067162/00b871dc-5db0-453f-8043-c0816357f3da.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674507255834/adae9ef7-fadc-477e-b8d1-1fa2da07eeb2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674508315304/4ab66120-1d76-4ea6-b316-fa53bc774385.png" alt class="image--center mx-auto" /></p>
<p>All in all, it was a great event where you get to interact with professionals, students, and IT aspirants. I got some key advice from seniors on my next steps, and I advised some freshers and entry-level professionals on their career paths. Sometimes you counsel others, and sometimes you get counseled!! Good either way!</p>
<p>The sequel to this event, Part 2, will be conducted on 4th Feb. I will keep you all posted about that as well, so follow me on <a target="_blank" href="https://www.linkedin.com/in/padministack/">my LinkedIn</a>. Happy learning until then!! Bub-byee!!</p>
]]></content:encoded></item><item><title><![CDATA[Cyber Security Series]]></title><description><![CDATA[📌 Cyber Security
Cybersecurity is a defensive method to defend devices and services from cyber-attacks/cyber criminals and ensure the data integrity/confidentiality/availability is maintained. Today the complete World is having data stored across th...]]></description><link>https://blog.cloudnloud.com/cyber-security-series-1</link><guid isPermaLink="true">https://blog.cloudnloud.com/cyber-security-series-1</guid><category><![CDATA[#cybersecurity]]></category><category><![CDATA[Security]]></category><category><![CDATA[CyberSec]]></category><dc:creator><![CDATA[Kannammal G]]></dc:creator><pubDate>Tue, 17 Jan 2023 10:21:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673950359796/ee337f41-3d10-4a38-ba54-fdcaedca33b4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>📌 <strong>Cyber Security</strong></p>
<p>Cybersecurity is a defensive method to defend devices and services from cyber-attacks and cyber criminals and to ensure that data integrity, confidentiality, and availability are maintained. Today, data is stored across the globe (cloud/data centers), and with the emerging trend of cyber-attacks around the world, every organization must maintain data security and privacy through a strong defensive security model.</p>
<p>📌 <strong>How does cyber security help?</strong></p>
<p>• Security to safeguard the computing environment</p>
<p>• Ensuring strong controls and security are implemented across the environment</p>
<p>• Protection of computer resources and data from theft/damage/attacks from external or unauthorized sources</p>
<p>• Protecting the network from digital attacks that aim to destroy the system and its information</p>
<p>📌 <strong>How Does Cyber Security Work? The Challenges of Cyber Security</strong></p>
<p>Cyber security works across a series of subdomains within security – like application security, cloud security/identity management, data security, mobile security, network security, and disaster recovery/business continuity planning. Each of these is segregated to maintain the security measures within its limit and to ensure strong and robust security is implemented, making it impossible for attackers to initiate an attack.</p>
<p>📌 <strong>Types of Cyber Threats (Listed some but not limited to)</strong></p>
<p> •   Malware attack</p>
<p>•   Trojan</p>
<p>•   Botnets</p>
<p>•   Adware</p>
<p>•   SQL injection</p>
<p>•   Phishing</p>
<p>•   Man in The Middle (MITM) attack</p>
<p>•   Denial Of Service (DOS) attack</p>
<p>•   Ransomware</p>
<p>•   Crypto jacking</p>
<p>•   Social engineering</p>
<h3 id="heading-different-domains-in-cyber-security">📌Different Domains in Cyber security?</h3>
<ul>
<li><p><strong>Access control systems &amp; methodologies</strong> – dealing with the protection of systems and their resources from unauthorized access, e.g., MFA, SSO, etc.</p>
</li>
<li><p><strong>Telecommunication and Network Security</strong> – dealing with network communications, protocols, services, and threats/vulnerabilities associated with those services.</p>
</li>
<li><p><strong>Security Management practices</strong> – managing systems against system failures, cyber-attacks, natural disasters, and other interruptions to services.</p>
</li>
<li><p><strong>Security Architecture and Engineering</strong> – policies and procedures to facilitate security controls, involving policy planning for every type of security issue.</p>
</li>
<li><p><strong>Law, Investigation and Ethics</strong> – legal issues associated with the security of the system, such as dealing with cyber-attacks, are handled in this domain.</p>
</li>
<li><p><strong>Application and system development security</strong> – security during application development, database security models, and the implementation of security during the testing and development phases, including code security.</p>
</li>
<li><p><strong>Cryptography</strong> – helps to understand how, when, and which encryption to use, and covers the various types of encryption and the logic behind them (see the sketch after this list).</p>
</li>
<li><p><strong>Computer operations security</strong> – day-to-day security operations on computers, such as preventing malware attacks and handling reported incidents.</p>
</li>
<li><p><strong>Physical security</strong> – deals with physical access to computer resources: servers, workstations, etc.</p>
</li>
</ul>
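<p>As a small illustration of the cryptography domain, the <code>openssl</code> commands below encrypt and then decrypt a file with a symmetric cipher. This is only a sketch; the file names are hypothetical placeholders:</p>
<pre><code class="lang-bash"># Encrypt a file with AES-256-CBC, deriving the key from a passphrase
# (secret.txt / secret.enc are placeholder file names)
openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.enc

# Decrypt with the same passphrase
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.enc -out secret.out
</code></pre>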
<h3 id="heading-how-does-cyber-security-work-the-challenges-of-cyber-security">📌How Does Cyber Security Work? The Challenges of Cyber Security</h3>
<p>Cyber security works in a series of subdomains within Security – like Application Security, Cloud Security /Identity Management Data Security/Mobile Security/Network Security/Disaster Recovery and Business Continuity Planning. Each of these is segregated to maintain the security measures within its limit and ensure robust security implemented to make attackers impossible to initiate the attack.</p>
<p>Happy learning folks 🙂</p>
<h1 id="heading-community-and-social-footprints"><em>Community</em> and <em>Social</em> Footprints :</h1>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/KannamGCyber/">Kannammal G</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[New ML Governance Tools for Amazon SageMaker]]></title><description><![CDATA[Introduction
As companies increasingly adopt machine learning (ML) for their business applications, they are looking for ways to improve governance of their ML projects with simplified access control and enhanced visibility across the ML lifecycle. A...]]></description><link>https://blog.cloudnloud.com/new-ml-governance-tools-for-amazon-sagemaker</link><guid isPermaLink="true">https://blog.cloudnloud.com/new-ml-governance-tools-for-amazon-sagemaker</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Governance]]></category><category><![CDATA[sagemaker ]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[access control]]></category><dc:creator><![CDATA[Sampath Kumar Basa]]></dc:creator><pubDate>Tue, 17 Jan 2023 09:34:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673947677910/3d411af4-3399-40bf-a418-a947cd6251ae.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction"><strong>Introduction</strong></h1>
<p>As companies increasingly adopt machine learning (ML) for their business applications, they are looking for ways to improve governance of their ML projects with simplified access control and enhanced visibility across the ML lifecycle. A common challenge in that effort is managing the right set of user permissions across different groups and ML activities. For example, a data scientist in your team that builds and trains models usually requires different permissions than an MLOps engineer that manages ML pipelines. Another challenge is improving visibility over ML projects. For example, model information, such as intended use, out-of-scope use cases, risk rating, and evaluation results, is often captured and shared via emails or documents. In addition, there is often no simple mechanism to monitor and report on your deployed model behavior.</p>
<p>That’s why I’m excited to announce a <a target="_blank" href="https://aws.amazon.com/sagemaker/ml-governance"><strong>new set of ML governance tools for Amazon SageMaker</strong></a>.</p>
<p>As an ML system or platform administrator, you can now use <strong>Amazon SageMaker Role Manager</strong> to define custom permissions for SageMaker users in minutes, so you can onboard users faster. As an ML practitioner, business owner, or model risk and compliance officer, you can now use <strong>Amazon SageMaker Model Cards</strong> to document model information from conception to deployment and <strong>Amazon SageMaker Model Dashboard</strong> to monitor all your deployed models through a unified dashboard.</p>
<p>Let’s dive deeper into each tool, and I’ll show you how to get started.</p>
<p><strong>Amazon SageMaker Role Manager  
</strong>SageMaker Role Manager lets you define custom permissions for SageMaker users in minutes. It comes with a set of predefined policy templates for different personas and ML activities. Personas represent the different types of users that need permissions to perform ML activities in SageMaker, such as data scientists or MLOps engineers. ML activities are a set of permissions to accomplish a common ML task, such as running SageMaker Studio applications or managing experiments, models, or pipelines. You can also define additional personas, add ML activities, and attach your own managed policies to match your specific needs. Once you have selected the persona type and the set of ML activities, SageMaker Role Manager automatically creates the required <a target="_blank" href="https://aws.amazon.com/iam/">AWS Identity and Access Management</a> (IAM) role and policies that you can assign to SageMaker users.</p>
<p><strong>A Primer on SageMaker and IAM Roles  
</strong>A role is an IAM identity that has permissions to perform actions with AWS services. Besides user roles that are assumed by a user via federation from an Identity Provider (IdP) or the AWS Console, Amazon SageMaker requires service roles (also known as execution roles) to perform actions on behalf of the user. SageMaker Role Manager helps you create these service roles:</p>
<ul>
<li><p><strong>SageMaker Compute Role</strong> – Gives SageMaker compute resources the ability to perform tasks such as training and inference, typically used via <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html">PassRole</a>. You can select the <code>SageMaker Compute Role</code> persona in SageMaker Role Manager to create this role. Depending on the ML activities you select in your SageMaker service roles, you will need to create this compute role first.</p>
</li>
<li><p><strong>SageMaker Service Role</strong> – Some AWS services, including SageMaker, require a service role to perform actions on your behalf. You can select the <code>Data Scientist</code>, <code>MLOps</code>, or <code>Custom</code> persona in SageMaker Role Manager to start creating service roles with custom permissions for your ML practitioners.</p>
</li>
</ul>
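<p>Under the hood, these are ordinary IAM roles with a SageMaker trust policy. As a hedged CLI sketch of the building block that Role Manager automates (the role name and attached policy are placeholder choices, not what Role Manager itself generates):</p>
<pre><code class="lang-bash"># Create a service role that SageMaker can assume (names are placeholders)
aws iam create-role \
  --role-name example-sagemaker-service-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "sagemaker.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach a managed policy carrying the permissions your persona needs
aws iam attach-role-policy \
  --role-name example-sagemaker-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
</code></pre>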
<p>Now, let me show you how this works in practice.</p>
<p>There are two ways to get to SageMaker Role Manager, either through <strong>Getting started</strong> in the <a target="_blank" href="https://console.aws.amazon.com/sagemaker">SageMaker console</a> or when you select <strong>Add user</strong> in the SageMaker Studio Domain control panel.</p>
<p>I start in the SageMaker console. Under <strong>Configure role</strong>, select <strong>Create a role</strong>. This opens a workflow that guides you through all required steps.</p>
<p><a target="_blank" href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/01/sm-admin-hub-01.png"><img src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/01/sm-admin-hub-01.png" alt="Amazon SageMaker Admin Hub - Getting Started" /></a></p>
<p>Let’s assume I want to create a SageMaker service role with a specific set of permissions for my team of data scientists. In Step 1, I select the predefined policy template for the <strong>Data Scientist</strong> persona.</p>
<p><a target="_blank" href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/07/sm-role-manager-1.png"><img src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/07/sm-role-manager-1.png" alt="Amazon SageMaker Role Manager - Select persona" /></a></p>
<p>I can also define the network and encryption settings in this step by selecting <a target="_blank" href="https://aws.amazon.com/vpc/">Amazon Virtual Private Cloud</a> (Amazon VPC) subnets, security groups, and encryption keys.</p>
<p>In Step 2, I select what ML activities data scientists in my team need to perform.</p>
<p><a target="_blank" href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/01/sm-admin-hub-03.png"><img src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/01/sm-admin-hub-03.png" alt="Amazon SageMaker Admin Hub - Configure ML activities" /></a></p>
<p>Some of the selected ML activities might require you to specify the <a target="_blank" href="https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html">Amazon Resource Name</a> (ARN) of the SageMaker Compute Role so SageMaker compute resources have the ability to perform the tasks.</p>
<p>In Step 3, you can attach additional IAM policies and add tags to the role if needed. Tags help you identify and organize your AWS resources. You can use tags to add attributes such as project name, cost center, or location information to a role. After a final review of the settings in Step 4, select <strong>Submit</strong>, and the role is created.</p>
<p>In just a few minutes, I set up a SageMaker service role, and I’m now ready to onboard data scientists in SageMaker with custom permissions in place.</p>
<p><strong>Amazon SageMaker Model Cards</strong><br />SageMaker Model Cards helps you streamline model documentation throughout the ML lifecycle by creating a single source of truth for model information. For models trained on SageMaker, SageMaker Model Cards discovers and autopopulates details such as training jobs, training datasets, model artifacts, and inference environment. You can also record model details such as the model’s intended use, risk rating, and evaluation results. For compliance documentation and model evidence reporting, you can export your model cards to a PDF file and easily share them with your customers or regulators.</p>
<p>To start creating SageMaker Model Cards, go to the <a target="_blank" href="https://console.aws.amazon.com/sagemaker">SageMaker console</a>, select <strong>Governance</strong> in the left navigation menu, and select <strong>Model cards</strong>.</p>
<p><a target="_blank" href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/11/sm-modelcards.png"><img src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/11/sm-modelcards.png" alt="Amazon SageMaker Model Cards" /></a></p>
<p>Select <strong>Create model card</strong> to document your model information.</p>
<p><a target="_blank" href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/11/sm-modelcards-2.png"><img src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/11/sm-modelcards-2.png" alt="Amazon SageMaker Model Card" /></a></p>
<p><a target="_blank" href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/11/sm-modelcards-6-1.png"><img src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/11/sm-modelcards-6-1.png" alt="Amazon SageMaker Model Cards" /></a></p>
<p><strong>Amazon SageMaker Model Dashboard  
</strong>SageMaker Model Dashboard lets you monitor all your models in one place. With this bird’s-eye view, you can now see which models are used in production, view model cards, visualize model lineage, track resources, and monitor model behavior through an integration with <a target="_blank" href="https://aws.amazon.com/sagemaker/model-monitor/">SageMaker Model Monitor</a> and <a target="_blank" href="https://aws.amazon.com/sagemaker/clarify">SageMaker Clarify</a>. The dashboard automatically alerts you when models are not being monitored or deviate from expected behavior. You can also drill deeper into individual models to troubleshoot issues.</p>
<p>To access SageMaker Model Dashboard, go to the <a target="_blank" href="https://console.aws.amazon.com/sagemaker">SageMaker console</a>, select <strong>Governance</strong> in the left navigation menu, and select <strong>Model dashboard</strong>.</p>
<p><a target="_blank" href="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/17/sm-model-dashboard.png"><img src="https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/11/17/sm-model-dashboard.png" alt="Amazon SageMaker Model Dashboard" /></a></p>
<p>Note: The risk rating shown above is for illustrative purposes only and may vary based on input provided by you.</p>
<h1 id="heading-community-and-social-footprints"><strong>Community and Social Footprints :</strong></h1>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/samtechno/"><strong>Sampath Kumar Basa</strong></a></p>
</li>
<li><p><a target="_blank" href="https://github.com/samtechlab"><strong>GitHub</strong></a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud"><strong>Twitter</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1"><strong>YouTube Cloud DevOps Free Trainings</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/80359681/"><strong>Linkedin Page</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/"><strong>Linkedin Group</strong></a></p>
</li>
<li><p><a target="_blank" href="https://discord.gg/vbjRQGVhuF"><strong>Discord Channel</strong></a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud"><strong>Dev</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Cyber Security Series]]></title><description><![CDATA[Public Key Infrastructure
What is PKI?

Set of processes/rules/procedures/policies which create/manage/store/revoke the digital signature and manage public key encryption.

It uses two sets of keys - Public key - shared with anyone you connect  and P...]]></description><link>https://blog.cloudnloud.com/cyber-security-series</link><guid isPermaLink="true">https://blog.cloudnloud.com/cyber-security-series</guid><category><![CDATA[#cybersecurity]]></category><category><![CDATA[CybersecurityAwareness]]></category><category><![CDATA[encryption]]></category><category><![CDATA[pki]]></category><dc:creator><![CDATA[Kannammal G]]></dc:creator><pubDate>Tue, 17 Jan 2023 08:03:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673941515413/645fab53-5575-4020-8fe6-e32321586721.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-public-key-infrastructure">Public Key Infrastructure</h1>
<h2 id="heading-what-is-pki">What is PKI?</h2>
<ul>
<li><p>Set of processes/rules/procedures/policies which create/manage/store/revoke the digital signature and manage public key encryption.</p>
</li>
<li><p>It uses two kinds of keys: a public key, which can be shared with anyone you connect to, and a private key, which is kept secret by the owner and never shared.</p>
</li>
<li><p>It is the most common method of encrypting data between web servers and browsers.</p>
</li>
</ul>
<h2 id="heading-components-of-pki"><strong>Components of PKI</strong></h2>
<p><strong>Certificate:</strong> A digital document, signed by a CA, used to prove the ownership of a public key within a PKI. The certificate has several attributes, such as the usage of the key (client authentication, server authentication, or digital signature) and the public key itself. The certificate also contains the subject name, which is information identifying the owner. This could be, for example, a DNS name or IP address.</p>
<p><strong>Certificate Authority (CA) -</strong> an authority in a network that issues and manages security credentials and public keys for message encryption. A CA checks with a registration authority (RA) to verify information provided by the requestor of a digital certificate. If the RA verifies the requestor's information, the CA can then issue a certificate. The CA maintains a directory of digital certificates for the reference of those receiving them, and it manages the certificate life cycle, including certificate directory maintenance and certificate revocation list maintenance and publication.</p>
<p><strong>Registration Authority (RA) -</strong> a person or organization responsible for the identification and authentication of an applicant for a digital certificate. An RA does not issue or sign certificates; it verifies the information supplied by the subject requesting a certificate. An RA is an entity trusted by the certificate authority (CA) to register or vouch for the identity of users, and it is a component of the PKI.</p>
<p><strong>Certificate Revocation List (CRL):</strong> Used to check the continued validity of the certificates for which the CA has responsibility. The CRL details digital certificates that are no longer valid because they were revoked by the CA.</p>
<p><strong>Certificate Repository:</strong> A location where all certificates are stored, along with their public keys, validity details, revocation lists, and root certificates. These locations are accessible through LDAP, FTP, or web servers.</p>
<p><strong>Certification Practice Statement (CPS) -</strong> a PKI document in which a CA describes its certificate-management practices in detail, including how a compromised private key is dealt with.</p>
<p><strong>Validation Authority:</strong> A VA allows an entity to check that a certificate has not been revoked. The VA role is often carried out by an online facility hosted by an organization that operates the PKI. A validation authority will often use OCSP or CRL to advertise revoked certificates.</p>
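<p>To see several of these components on a real certificate, <code>openssl</code> can print the issuer (the CA), the validity window, and the revocation endpoints a validation authority would consult. A minimal sketch, assuming a certificate saved as <code>cert.pem</code>:</p>
<pre><code class="lang-bash"># Show who issued the certificate (the CA), its subject, and validity dates
openssl x509 -in cert.pem -noout -subject -issuer -dates

# Full dump; look for the "CRL Distribution Points" and
# "Authority Information Access" (OCSP) extensions
openssl x509 -in cert.pem -noout -text
</code></pre>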
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673942434511/c161ac66-9f20-4697-8c47-4711abcbb0a3.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-benefits"><strong>Benefits</strong></h2>
<p>Confidentiality (only an authorized person can read an encrypted message), authenticity (senders sign the message, which assures the recipient that the message was not altered in transit), and non-repudiation (senders can't deny the message/contents they sent).</p>
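<p>Authenticity and non-repudiation come from digital signatures: a message is signed with the sender's private key and verified with the matching public key. A hedged <code>openssl</code> sketch with placeholder file names:</p>
<pre><code class="lang-bash"># Sign a message digest with the sender's private key (placeholder files)
openssl dgst -sha256 -sign private.pem -out message.sig message.txt

# Anyone holding the matching public key can verify the signature
openssl dgst -sha256 -verify public.pem -signature message.sig message.txt
</code></pre>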
<h1 id="heading-community-and-social-footprints"><em>Community</em> and <em>Social</em> Footprints :</h1>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/KannamGCyber/">Kannammal G</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[AWS Developer Tools - Part 2]]></title><description><![CDATA[It's a continuation of AWS Developer Tools - Part 1 . Previous blog link is attached here AWS Developer Tools - part 1


In this blog, We are going to discuss the following services

AWS CodeStar

AWS Fault Injection Simulator

AWS X-Ray




AWS Code...]]></description><link>https://blog.cloudnloud.com/aws-developer-tools-part-2</link><guid isPermaLink="true">https://blog.cloudnloud.com/aws-developer-tools-part-2</guid><category><![CDATA[AWS]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Full Stack Development]]></category><dc:creator><![CDATA[Veera solaiyappan]]></dc:creator><pubDate>Sun, 15 Jan 2023 11:57:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673783779634/cc3d0434-fc41-45e7-bdf9-24678e67d2f5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It's a continuation of AWS Developer Tools - Part 1 . Previous blog link is attached here <a target="_blank" href="https://blog.cloudnloud.com/aws-developer-tools-part-1">AWS Developer Tools - part 1</a></p>
<iframe src="https://giphy.com/embed/mgqefqwSbToPe" width="480" height="360" class="giphy-embed"></iframe>

<p>In this blog, We are going to discuss the following services</p>
<ul>
<li><p>AWS CodeStar</p>
</li>
<li><p>AWS Fault Injection Simulator</p>
</li>
<li><p>AWS X-Ray</p>
</li>
</ul>
<iframe src="https://giphy.com/embed/AgWQwLTByaABsBQ9Zf" width="480" height="480" class="giphy-embed"></iframe>

<h2 id="heading-aws-codestar"><strong>AWS CodeStar</strong></h2>
<p>AWS CodeStar is a fully managed service that makes it easy to develop, build, and deploy applications on AWS. It provides a unified user interface to easily manage the full application development life cycle, including source control, build and test, and deployment.</p>
<p>CodeStar provides a variety of project templates for popular languages and frameworks, such as Java, Python, Ruby, and more, that include preconfigured build and deployment settings for AWS services such as Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda.</p>
<p>CodeStar also provides an integrated development environment through AWS Cloud9, a cloud-based IDE that makes it easy to write, run, and debug code.</p>
<p>CodeStar integrates with other AWS services such as CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, which enables you to create a complete end-to-end continuous integration and continuous delivery (CI/CD) pipeline.</p>
<p>Additionally, CodeStar provides access to a variety of tools for monitoring and troubleshooting your application, such as AWS CloudWatch, AWS X-Ray, and AWS CloudTrail.</p>
<p>CodeStar is designed to be highly scalable and can handle multiple projects, developers and teams. It also supports role-based access control (RBAC) to give developers access to the resources they need, while keeping your applications secure.</p>
<p>AWS CLI</p>
<pre><code class="lang-plaintext">aws codestar create-project --name sample-project --project-template-id aws-java-maven --region us-west-2

aws codestar list-projects
</code></pre>
<h2 id="heading-aws-fault-injection-simulator"><strong>AWS Fault Injection Simulator</strong></h2>
<p>AWS Fault Injection Simulator (FIS) is a fully managed service that allows you to simulate various types of failures in your applications, such as network failures, latency, and throttling. This helps you test and validate the resiliency of your applications, without affecting production environments or incurring additional costs.</p>
<p>AWS FIS enables you to create a set of failure scenarios that you can apply to your applications, and then run those scenarios in a controlled environment. You can use the service to test various failure scenarios in your applications, such as network failures, service failures, and resource failures.</p>
<p>AWS FIS supports the ability to simulate failures across different layers of your application, including the network, application, and infrastructure layers. This allows you to test the resiliency of your applications across different components, and identify potential issues before they affect production.</p>
<p>AWS FIS supports a variety of configurations, including custom failure scenarios, targeted failure scenarios, and scheduled failure scenarios. This allows you to simulate different types of failures, such as network latency and packet loss, and to schedule those failures at specific times.</p>
<p>AWS FIS integrates with AWS CloudWatch and CloudTrail, allowing you to monitor and troubleshoot the results of your failure scenarios. This helps you identify any issues and make necessary adjustments to improve the resiliency of your applications.</p>
<p>AWS CLI</p>
<pre><code class="lang-plaintext">aws fis create-failure-injection --type network --failure-rate 50 --target-type ec2 --target-id i-1234567890abcdef0

aws fis list-failure-injections
</code></pre>
<h2 id="heading-aws-x-ray"><strong>AWS X-Ray</strong></h2>
<p>AWS X-Ray is a fully-managed service that allows you to analyze and debug production, distributed applications, such as microservices or serverless applications. It allows you to trace requests and data flows through your application, and identify performance bottlenecks and errors.</p>
<p>AWS X-Ray provides an interactive console that allows you to view and analyze traces, create service maps, and view performance metrics. It also provides an SDK that can be integrated with a variety of languages, such as Java, .NET, Node.js, and more, to instrument your application and automatically generate traces.</p>
<p>AWS X-Ray also provides a variety of features to help you analyze and troubleshoot issues in your applications, such as:</p>
<ul>
<li><p>Distributed Tracing: Allows you to trace requests and data flows through your application, and identify performance bottlenecks and errors.</p>
</li>
<li><p>Service Maps: Allows you to view a map of all the services in your application, and how they interact with each other.</p>
</li>
<li><p>Anomaly Detection: Allows you to identify and troubleshoot performance issues in your application.</p>
</li>
<li><p>Error reporting: Allows you to view and analyze errors and exceptions in your application.</p>
</li>
<li><p>Search and filter: Allows you to search and filter traces and service maps.</p>
</li>
</ul>
<p>AWS X-Ray integrates with other AWS services, such as AWS Lambda, Amazon Elastic Container Service (ECS), and Amazon Elastic Container Service for Kubernetes (EKS), to provide deeper visibility into your applications.</p>
<p>AWS X-Ray is designed to be highly scalable and can handle high volumes of requests and data. It also supports role-based access control (RBAC) to give developers access to the resources they need, while keeping your applications secure.</p>
<p>Terraform code:</p>
<p>An example of Terraform code that enables X-Ray tracing for an Elastic Beanstalk environment:</p>
<pre><code class="lang-plaintext">resource "aws_elastic_beanstalk_environment" "x-ray" {
  name = "example-environment"
  application = "example-application"
  solution_stack_name = "64bit Amazon Linux 2 v3.3.3 running Multi-container Docker 2.3.5"
  xray_tracing = true
}
</code></pre>
<p>AWS CLI</p>
<pre><code class="lang-plaintext">aws xray get-sampling-rules --sampling-rule-name example-sampling-rule
</code></pre>
<p>That's it guys. We completed our AWS developer tools series. Let's celebrate.</p>
<iframe src="https://giphy.com/embed/6oMKugqovQnjW" width="480" height="360" class="giphy-embed"></iframe>

<p><strong>Keep Learning Keep Growing !!!</strong></p>
<h3 id="heading-community-and-social-footprints">Community and Social Footprints :</h3>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/veera26/">Veerasolaiyappan</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud">YouTube Cloud DevOps Free Training</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/cloudnloud/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.com/invite/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[AWS Developer Tools - Part 1]]></title><description><![CDATA[AWS Developer Tools are a set of fully managed services that help developers build, deploy, and debug applications in the cloud. These tools provide a range of services to help developers at every stage of the development process, from building and t...]]></description><link>https://blog.cloudnloud.com/aws-developer-tools-part-1</link><guid isPermaLink="true">https://blog.cloudnloud.com/aws-developer-tools-part-1</guid><category><![CDATA[AWS]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Veera solaiyappan]]></dc:creator><pubDate>Sun, 15 Jan 2023 11:17:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673780408927/23944d57-ea78-4d9b-89d2-e718a2ea278f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AWS Developer Tools are a set of fully managed services that help developers build, deploy, and debug applications in the cloud. These tools provide a range of services to help developers at every stage of the development process, from building and testing to deploying and monitoring applications.</p>
<iframe src="https://giphy.com/embed/FUncLC0uG07Dw1upZ4" width="480" height="480" class="giphy-embed"></iframe>

<p><strong>List of Developer Tools</strong></p>
<ul>
<li><p>Amazon Corretto</p>
</li>
<li><p>AWS Cloud9</p>
</li>
<li><p>AWS CloudShell</p>
</li>
<li><p>AWS CodeArtifact</p>
</li>
<li><p>AWS CodeBuild</p>
</li>
<li><p>AWS CodeCommit</p>
</li>
<li><p>AWS CodeDeploy</p>
</li>
<li><p>AWS CodePipeline</p>
</li>
<li><p>AWS CodeStar</p>
</li>
<li><p>AWS Fault Injection Simulator</p>
</li>
<li><p>AWS X-Ray</p>
</li>
</ul>
<p>Let's discuss each service in detail</p>
<iframe src="https://giphy.com/embed/FRT9eogpwgTou7LEF6" width="480" height="480" class="giphy-embed"></iframe>

<h2 id="heading-amazon-corretto"><strong>Amazon Corretto</strong></h2>
<p>Amazon Corretto is a free, open-source, and production-ready distribution of the Open Java Development Kit (OpenJDK). It is developed and maintained by Amazon Web Services (AWS) and is designed to provide a consistent and reliable runtime environment for Java applications.</p>
<p>One of the key features of Amazon Corretto is that it provides long-term support for both Java 8 and 11 versions, with security patches and updates provided at no additional cost. This allows developers to run their applications on a stable and supported version of Java for extended periods of time.</p>
<p>Amazon Corretto also includes performance enhancements, such as the ability to use the Aarch64 (Arm64) architecture and support for the latest version of the Linux kernel, which can improve the performance of Java applications running on AWS.</p>
<p>Another important feature of Amazon Corretto is that it is fully compatible with the Java SE standard and has passed the Java SE TCK. This means that Java applications written to run on the standard OpenJDK will run unchanged on Amazon Corretto.</p>
<p>Amazon Corretto is also easy to use, developers can simply download the Amazon Corretto distribution and use it as a drop-in replacement for their current JDK.</p>
<p>Terraform code:</p>
<pre><code class="lang-plaintext">
resource "aws_instance" "correto" {
  ami           = "ami-0ff8a91507f77f867"
  instance_type = "t2.micro"

  user_data = &lt;&lt;-EOF
    #!/bin/bash
    yum install -y java-1.8.0-amazon-corretto-devel
    EOF
}
</code></pre>
<p>AWS CLI code</p>
<pre><code class="lang-bash">aws ssm send-command --document-name <span class="hljs-string">"AWS-RunShellScript"</span> --instance-ids <span class="hljs-string">"i-1234567890abcdef0"</span> --parameters commands=<span class="hljs-string">"yum install -y java-1.8.0-amazon-corretto-devel"</span>
</code></pre>
<p>This command uses the AWS Systems Manager (SSM) to run a shell script on the specified EC2 instance (i-1234567890abcdef0) that installs the Amazon Corretto JDK</p>
<h2 id="heading-aws-cloud9"><strong>AWS Cloud9</strong></h2>
<p>AWS Cloud9 is a cloud-based integrated development environment (IDE) that makes it easy to write, run, and debug code. It provides a web-based development environment that can be accessed from anywhere and supports a wide range of languages and frameworks, including JavaScript, Python, Ruby, and C++.</p>
<p>One of the key features of Cloud9 is its built-in collaboration capabilities. Developers can share their development environments with others and work together in real-time, regardless of their location. This makes it easy for teams to collaborate and work together on the same codebase.</p>
<p>Cloud9 also includes a wide range of development tools and features, such as code completion, debugging, and version control integration. This makes it easy for developers to write, test, and debug their code all within the same environment.</p>
<p>Cloud9 also integrates with other AWS services, such as AWS Lambda and Elastic Beanstalk, making it easy to develop, test, and deploy applications in the cloud. This allows developers to easily build and deploy their applications without having to leave the Cloud9 environment.</p>
<p>Terraform code</p>
<pre><code class="lang-plaintext">resource "aws_cloud9_environment_ec2" "example" {
  name = "example-environment"
  instance_type = "t2.micro"
}
</code></pre>
<p>AWS CLI</p>
<pre><code class="lang-plaintext">aws cloud9 create-environment-ec2 --name "example-environment" --instance-type "t2.micro"
</code></pre>
<h2 id="heading-aws-cloudshell"><strong>AWS CloudShell</strong></h2>
<p>AWS CloudShell is a browser-based shell that allows developers to easily interact with AWS services and resources. It is a fully managed service that provides a temporary, secure environment with the necessary tools and permissions to interact with AWS resources.</p>
<p>One of the key features of AWS CloudShell is its built-in access to the AWS Management Console, AWS CLI, and other development tools. This allows developers to quickly and easily perform common actions, such as creating and managing resources, without having to install or configure any tools on their local machine.</p>
<p>AWS CloudShell also includes pre-configured environments for popular programming languages and frameworks, such as Python, Node.js, and .NET Core. This allows developers to quickly set up a development environment and start coding, without having to spend time installing dependencies or configuring the environment.</p>
<p>AWS CloudShell also integrates with other AWS services, such as AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy, making it easy to automate the development process. This allows developers to easily build, test, and deploy their applications on AWS, without having to leave the CloudShell environment.</p>
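<p>Because CloudShell comes with the AWS CLI preinstalled and inherits credentials from your console session, you can start exploring immediately. For example, inside a CloudShell session:</p>
<pre><code class="lang-bash"># Confirm which identity the shell is running as
aws sts get-caller-identity

# Interact with resources without configuring credentials first
aws s3 ls
</code></pre>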
<h2 id="heading-aws-codeartifact"><strong>AWS CodeArtifact</strong></h2>
<p>AWS CodeArtifact is a fully managed artifact repository service for storing, publishing, and sharing software packages. It makes it easy for teams to manage, share, and use software packages across their organization.</p>
<p>One of the key features of AWS CodeArtifact is its support for multiple package formats, such as npm, Maven, and PyPI, and it can store multiple versions of a package. This allows teams to use the package manager and format that they are already familiar with and also to use the same package across different projects and applications.</p>
<p>AWS CodeArtifact also provides fine-grained access control, allowing teams to specify who can access and consume packages, and also to configure permissions at the package, repository, and domain level.</p>
<p>AWS CodeArtifact also integrates with other AWS services such as AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline, making it easy to automate the software release process. This allows teams to easily build, test, and deploy their applications using the packages stored in CodeArtifact</p>
<p>Terraform Code</p>
<pre><code class="lang-plaintext">
resource "aws_codeartifact_repository" "example" {
  domain = "cloudnloud.com"
  repository = "cloudnloud-repo"
}
</code></pre>
<p>AWS CLI</p>
<pre><code class="lang-plaintext">aws codeartifact create-repository --domain cloudnloud.com --repository cloudnloud-repo
</code></pre>
<h2 id="heading-aws-codebuild"><strong>AWS CodeBuild</strong></h2>
<p>AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. It supports multiple programming languages and build environments, and can be easily integrated with other AWS services such as CodeCommit, CodeDeploy, and CodePipeline. CodeBuild scales automatically to meet the needs of your builds, and it can be used to build and test code in a continuous integration and continuous delivery (CI/CD) pipeline. Additionally, CodeBuild provides a variety of features to help you optimize your build process, such as caching, environment variables, and build logs.</p>
<p>Terraform Code</p>
<pre><code class="lang-plaintext">resource "aws_codebuild_project" "example" {
  name            = "example-project"
  source {
    type     = "GITHUB"
    location = "https://github.com/user/repo.git"
  }
  artifacts {
    type = "S3"
    location = "example-bucket"
  }
  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image = "aws/codebuild/standard:2.0"
    type = "LINUX_CONTAINER"
  }
  service_role = aws_iam_role.codebuild_role.arn
}
</code></pre>
<p>AWS CLI</p>
<pre><code class="lang-plaintext">aws codebuild create-project --name example-project --source https://github.com/user/repo.git --artifacts-type S3 --artifacts-location example-bucket --environment-type LINUX_CONTAINER --environment-image aws/codebuild/standard:2.0 --service-role arn:aws:iam::&lt;ACCOUNT_ID&gt;:role/example-codebuild-role
</code></pre>
<h2 id="heading-aws-codecommit"><strong>AWS CodeCommit</strong></h2>
<p>AWS CodeCommit is a fully-managed, source control service that hosts private Git repositories. It is a native Git service that makes it easy for developers to store, manage, and track code changes. CodeCommit is integrated with other AWS services such as CodeBuild, CodeDeploy, and CodePipeline, which enables you to create a complete end-to-end continuous integration and continuous delivery (CI/CD) pipeline.</p>
<p>CodeCommit supports standard Git functionality and provides additional features such as Git-based authentication, repository access control, and an API for programmatic access. It also integrates with IAM, allowing you to control access to your repositories at a granular level.</p>
<p>One of the advantages of CodeCommit is that it allows you to store your code in a centralized and secure location that is backed by AWS's highly available and scalable infrastructure. Also, it is cost-effective, as you only pay for what you use and there are no upfront costs or long-term commitments</p>
<p>Terraform Code</p>
<pre><code class="lang-plaintext">
resource "aws_codecommit_repository" "example" {
  repository_name = "cloudnloud-repo"
}
</code></pre>
<p>AWS CLI</p>
<pre><code class="lang-plaintext">aws codecommit create-repository --repository-name cloudnloud-repo
</code></pre>
<h2 id="heading-aws-codedeploy"><strong>AWS CodeDeploy</strong></h2>
<p>AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and on-premises servers. It helps you rapidly release new features, handle network and application outages, and roll back when necessary, with minimal downtime.</p>
<p>CodeDeploy supports both in-place and blue/green deployment options. In-place deployments update the application on the existing instances, while blue/green deployments provision a parallel environment and then switch traffic to the new instances. This allows for a more controlled and predictable deployment process, and it makes it easy to roll back to a previous version if necessary.</p>
<p>CodeDeploy can be integrated with other AWS services such as CodeBuild, CodeCommit, and CodePipeline, which enables you to create a complete end-to-end continuous integration and continuous delivery (CI/CD) pipeline. It also supports integration with third-party tools such as Jenkins, GitHub, and Bitbucket.</p>
<p>CodeDeploy supports multiple platforms including Windows and Linux, making it easy to deploy applications written in any language, including .NET, Java, Ruby, and more. It also provides built-in support for deploying applications to Amazon EC2 instances, AWS Lambda functions, and on-premises servers.</p>
<p>AWS CodeDeploy provides detailed tracking and visibility into the deployment process, and it can be integrated with AWS CloudWatch, AWS Config, and AWS CloudTrail for monitoring and auditing purposes.</p>
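<p>CodeDeploy can also be driven from the AWS CLI. Below is an illustrative sketch that creates an application and an EC2 deployment group (the application name, deployment group name, EC2 tag filter, and role ARN are placeholders):</p>
<pre><code class="lang-plaintext">aws deploy create-application --application-name example-app --compute-platform Server
aws deploy create-deployment-group --application-name example-app --deployment-group-name example-dg --ec2-tag-filters Key=Name,Value=example,Type=KEY_AND_VALUE --service-role-arn arn:aws:iam::&lt;ACCOUNT_ID&gt;:role/example-codedeploy-role
</code></pre>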
<h2 id="heading-aws-codepipeline"><strong>AWS CodePipeline</strong></h2>
<p>AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. It enables you to rapidly and reliably deliver features and updates, while minimizing the chance of failed deployments.</p>
<p>CodePipeline is a highly customizable service that allows you to integrate with a variety of tools and services such as AWS CodeCommit, GitHub, CodeBuild, CodeDeploy, Jenkins, and more. You can use it to create a pipeline that builds, tests, and deploys your code every time there is a change in your source repository.</p>
<p>CodePipeline lets you easily visualize your pipeline's stages, actions, and the status of each stage. It also provides detailed tracking and visibility into the pipeline process, and it can be integrated with AWS CloudWatch, AWS Config, and AWS CloudTrail for monitoring and auditing purposes.</p>
<p>CodePipeline also supports manual approvals, which let you add a human approval step to your pipeline; for example, you can require a manual approval before production deployments. A CLI sketch for recording an approval is shown below.</p>
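<p>An illustrative sketch (the pipeline, stage, and action names are placeholders, and the token comes from a prior <code>get-pipeline-state</code> call):</p>
<pre><code class="lang-plaintext">aws codepipeline put-approval-result --pipeline-name example-pipeline --stage-name Production --action-name ManualApproval --result summary="Reviewed and approved",status=Approved --token &lt;token-from-get-pipeline-state&gt;
</code></pre>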
<p>AWS CodePipeline is designed to be highly scalable: it can handle multiple pipelines with thousands of actions, many of them running in parallel. It also integrates with other AWS services, such as AWS CodeStar, AWS CloudFormation, and AWS Elastic Beanstalk, making it easy to build and deploy applications on AWS.</p>
<p>Terraform Code</p>
<pre><code class="lang-plaintext">resource "aws_codepipeline" "codepipeline" {
  name     = "example-pipeline"
  role_arn = aws_iam_role.codepipeline_role.arn

  artifact_store {
    location = "code-bucket"
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name            = "Source"
      category        = "Source"
      owner           = "ThirdParty"
      provider        = "GitHub"
      version         = "1"
      output_artifacts = ["example"]

      configuration = {
        Repo = "example-repo"
        Branch = "main"
        OAuthToken = "example-token"
      }
    }
  }
}

resource "aws_iam_role" "codepipeline_role" {
  name = "example-codepipeline-role"

  assume_role_policy = &lt;&lt;EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codepipeline.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
</code></pre>
<p>AWS CLI. The <code>create-pipeline</code> command does not accept per-stage flags; the full pipeline definition (name, role ARN, artifact store, and stages) is supplied as a JSON document:</p>
<pre><code class="lang-plaintext">aws codepipeline create-pipeline --cli-input-json file://pipeline.json
</code></pre>
<p>Okay, that's it for Part 1. We will discuss the remaining Developer Tools services in Part 2. Stay tuned!</p>
<iframe src="https://giphy.com/embed/SPMQbGOW6K0MM" width="480" height="270" class="giphy-embed"></iframe>

<p><strong>Keep Learning Keep Growing !!!</strong></p>
<h3 id="heading-community-and-social-footprints">Community and Social Footprints :</h3>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/veera26/">Veerasolaiyappan</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud">YouTube Cloud DevOps Free Training</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/cloudnloud/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.com/invite/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Why Your Business Should Consider AWS Managed Blockchain]]></title><description><![CDATA[Hi everyone,
Do you have any idea about AWS Managed blockchain services and why you should consider them for your business?


Don't have any idea. No issues. Let's discuss on further,


Blockchain technology has the potential to revolutionize many in...]]></description><link>https://blog.cloudnloud.com/why-your-business-should-consider-aws-managed-blockchain</link><guid isPermaLink="true">https://blog.cloudnloud.com/why-your-business-should-consider-aws-managed-blockchain</guid><category><![CDATA[AWS]]></category><category><![CDATA[Blockchain]]></category><category><![CDATA[Ethereum]]></category><category><![CDATA[HYPERLEDGER 2.0]]></category><category><![CDATA[dapps]]></category><dc:creator><![CDATA[Veera solaiyappan]]></dc:creator><pubDate>Tue, 10 Jan 2023 07:52:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673336776425/7ecd2bcf-ab5e-4707-a621-70e3d7d99548.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi everyone,</p>
<p>Do you have any idea about AWS Managed blockchain services and why you should consider them for your business?</p>
<iframe src="https://giphy.com/embed/woqo2w8RoxQbdAWC0l" width="480" height="480" class="giphy-embed"></iframe>

<p>No idea? No problem. Let's discuss it further.</p>
<iframe src="https://giphy.com/embed/yidUzkciDTniZ7OHte" width="480" height="418" class="giphy-embed"></iframe>

<p>Blockchain technology has the potential to revolutionize many industries, but setting up and managing a blockchain network can be a complex and time-consuming process. That's where Amazon Web Services (AWS) Managed Blockchain comes in.</p>
<p>AWS Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks using popular open-source frameworks like Ethereum and Hyperledger Fabric. With AWS Managed Blockchain, you can set up and deploy a blockchain network in a matter of minutes and easily scale it as your needs change.</p>
<p>One of the key benefits of AWS Managed Blockchain is that it takes care of the underlying infrastructure, including the network nodes, storage, and networking. This means you can focus on building your applications and not worry about the maintenance and management of the network.</p>
<p>AWS Managed Blockchain also offers built-in security measures to protect your network, including encryption of data in transit and at rest, secure access controls, and monitoring and logging of network activity.</p>
<p>In addition to its ease of use and security features, AWS Managed Blockchain can be easily integrated with other AWS services, such as Amazon S3 and Amazon Kinesis, to build and deploy decentralized applications. This makes it a powerful and flexible solution for businesses of all sizes.</p>
<p>Another advantage of AWS Managed Blockchain is its flexible pricing model. As a fully managed service, you only pay for the resources you use, such as the number of nodes and the amount of storage you need. This makes it an affordable and scalable solution that can grow with your business.</p>
<h3 id="heading-features">Features:</h3>
<ol>
<li><p>Quick setup</p>
</li>
<li><p>Scalability</p>
</li>
<li><p>Network management</p>
</li>
<li><p>Security</p>
</li>
<li><p>Integration with other AWS services</p>
</li>
<li><p>Flexible pricing</p>
</li>
</ol>
<p>I hope you now understand the benefits of AWS Managed Blockchain services.</p>
<iframe src="https://giphy.com/embed/GCvktC0KFy9l6" width="450" height="480" class="giphy-embed"></iframe>

<p>In any case, we should know the basics of blockchain. Without knowing the fundamentals, we can't go deeper into any technology.</p>
<iframe src="https://giphy.com/embed/I16U5AfBWqgJYJum6i" width="480" height="480" class="giphy-embed"></iframe>

<p>So, let's learn some blockchain basics.</p>
<h3 id="heading-what-is-the-blockchain">What is the blockchain</h3>
<p>A blockchain is a decentralized, distributed database that consists of a series of interconnected blocks of data. It is a digital ledger of transactions that are secured and validated by a network of computers, rather than a central authority.</p>
<p>There are several different types of blockchain networks, including public and private blockchains. Public blockchains, such as Bitcoin, are open to anyone and are secured by a network of miners who compete to validate transactions and create new blocks. Private blockchains, on the other hand, are restricted to a specific group of users and are often used by organizations to manage internal processes and data.</p>
<h3 id="heading-blockchain-components">Blockchain components</h3>
<p>There are several key components of a blockchain:</p>
<ol>
<li><p><strong>Blocks</strong>: As mentioned earlier, a block is a unit of data that contains a list of transactions and a unique hash code that links it to the previous block.</p>
</li>
<li><p><strong>Nodes</strong>: A node is a computer or device that participates in the blockchain network and helps to validate and process transactions.</p>
</li>
<li><p><strong>Miners</strong>: Miners are nodes that perform the computationally intensive task of creating new blocks and adding them to the blockchain. They are typically rewarded with cryptocurrency for their efforts.</p>
</li>
<li><p><strong>Consensus algorithm:</strong> A consensus algorithm is a set of rules that allows the nodes in a blockchain network to agree on the state of the blockchain. This ensures the integrity and security of the blockchain.</p>
</li>
<li><p><strong>Smart contracts:</strong> A smart contract is a self-executing contract with the terms of the agreement between buyer and seller being directly written into lines of code. Smart contracts are used to automate complex processes and can be implemented on a blockchain platform.</p>
</li>
<li><p><strong>Cryptocurrency:</strong> Cryptocurrency is a digital or virtual currency that uses cryptography for secure financial transactions. It is often used as a form of payment or reward for participating in a blockchain network.</p>
</li>
</ol>
<p>That's it for the basics. Now let's get into the hands-on part.</p>
<iframe src="https://giphy.com/embed/KffdTQfewxdbKTGEJY" width="480" height="354" class="giphy-embed"></iframe>

<h3 id="heading-how-to-create-hyperledger-fabric-private-blockchain">How to create Hyperledger Fabric private blockchain</h3>
<p><strong>Do you know what Hyperledger Fabric is?</strong></p>
<iframe src="https://giphy.com/embed/LRHDKHs1GtKETJVKm7" width="480" height="266" class="giphy-embed"></iframe>

<p>Hyperledger Fabric is a permissioned blockchain platform that is well-suited for enterprise use cases. It allows businesses to build and deploy decentralized applications, or "smart contracts," in a private and secure environment. Hyperledger Fabric uses a modular architecture that allows businesses to choose the components they need and easily customize their blockchain network to meet their specific requirements.</p>
<p>You need an AWS account. Go to the AWS Management Console and search for AWS Managed Blockchain.</p>
<p>Select the <strong>Create private network</strong> option.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186033949/9ba056e0-6a0b-45a8-9060-e9cadd50594a.png" alt class="image--center mx-auto" /></p>
<p>Select the Hyperledger Fabric framework, its version, and the network edition.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186045601/a17f1a26-8dc4-45f6-a35a-2ac82a7de1ca.png" alt class="image--center mx-auto" /></p>
<p>Add a name, description, voting policy (default), and tags.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186072528/a19f87ee-6c46-41a7-9b6a-d9c1bfaa43fc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186094321/e7e69499-30b9-45ee-a923-ba488bc15646.png" alt class="image--center mx-auto" /></p>
<p>Now your chain is created successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186102953/86133690-608f-40ca-8960-ae81a429c75c.png" alt class="image--center mx-auto" /></p>
<p>You can add an admin member and invite other members.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186107221/bdde1715-4da5-4b00-b24a-1647e035e0b2.png" alt class="image--center mx-auto" /></p>
<p>Here, you can create a new VPC endpoint so your application can interact with the blockchain.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186141147/b203ee9c-76b5-4a77-8054-45c7d0bae0de.png" alt class="image--center mx-auto" /></p>
<p>That's it. We have created our private blockchain using AWS. Using this VPC endpoint, we can interact with the blockchain to deploy our smart contracts and query data. The same kind of network can also be created from the CLI, as sketched below.</p>
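<p>An illustrative AWS CLI sketch (the network name, framework version, member name, and admin credentials are placeholders; check the current Managed Blockchain CLI reference for the exact shorthand syntax):</p>
<pre><code class="lang-plaintext">aws managedblockchain create-network --name cloudnloud-network --framework HYPERLEDGER_FABRIC --framework-version 2.2 --voting-policy 'ApprovalThresholdPolicy={ThresholdPercentage=50,ProposalDurationInHours=24,ThresholdComparator=GREATER_THAN}' --member-configuration 'Name=admin-member,FrameworkConfiguration={Fabric={AdminUsername=admin,AdminPassword=Password123!}}'
</code></pre>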
<p>Hope you understood.</p>
<iframe src="https://giphy.com/embed/XMBJ0l20sNWEM" width="480" height="320" class="giphy-embed"></iframe>

<p>Okay, let's proceed to create a public blockchain node.</p>
<h3 id="heading-how-to-create-ethereum-public-blockchain">How to create Ethereum Public Blockchain?</h3>
<p><strong>What is the Ethereum public blockchain?</strong></p>
<iframe src="https://giphy.com/embed/jTZUCcQpoPyikI77Bj" width="480" height="480" class="giphy-embed"></iframe>

<p>Ethereum is a decentralized blockchain platform that establishes a peer-to-peer network that securely executes and verifies application code, called smart contracts. Smart contracts allow participants to transact with each other without a trusted central authority. Transaction records are immutable, verifiable, and securely distributed across the network, giving participants full ownership and visibility into transaction data. Transactions are sent from and received by user-created Ethereum accounts. A sender must sign transactions and spend Ether, Ethereum's native cryptocurrency, as a cost of processing transactions on the network.</p>
<p>Select the <strong>Join public network</strong> option.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186094321/e7e69499-30b9-45ee-a923-ba488bc15646.png" alt class="image--center mx-auto" /></p>
<p>Select the Ethereum Testnet Rinkeby network for testing purposes; go with the Ethereum Mainnet for production.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673185975708/a0cfd4f1-d203-4f97-bd90-4d5c006fe348.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673186003246/a5c71869-3b93-4d9b-b823-279b9f31d3fa.png" alt class="image--center mx-auto" /></p>
<p>That's it. We have created our public blockchain node as well. We can deploy our smart contracts and interact with the chain using the node ID; the equivalent CLI call is sketched below.</p>
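<p>An illustrative CLI sketch of the same step (the Rinkeby network ID, instance type, and Availability Zone are example values):</p>
<pre><code class="lang-plaintext">aws managedblockchain create-node --network-id n-ethereum-rinkeby --node-configuration 'InstanceType=bc.t3.large,AvailabilityZone=us-east-1a'
</code></pre>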
<p>Hope you gained some insightful knowledge from this blog.</p>
<iframe src="https://giphy.com/embed/11ZAUfeJHojWlW" width="470" height="480" class="giphy-embed"></iframe>

<p><strong>Keep Learning Keep Growing !!!</strong></p>
<h3 id="heading-community-and-social-footprints">Community and Social Footprints :</h3>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/veera26/">Veerasolaiyappan</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud">YouTube Cloud DevOps Free Trainings</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/cloudnloud/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.com/invite/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[RSA Encryption]]></title><description><![CDATA[🎯What is RSA encryption?
👉RSA (Rivest-Shamir-Adleman) is an algorithm used for secure data transmission. It is an asymmetric encryption that is widely used for secure data transmission.👉In RSA, each person has a pair of keys: a public key and a pr...]]></description><link>https://blog.cloudnloud.com/rsa-encryption</link><guid isPermaLink="true">https://blog.cloudnloud.com/rsa-encryption</guid><category><![CDATA[RSA]]></category><category><![CDATA[encryption]]></category><category><![CDATA[Cryptography]]></category><category><![CDATA[decryption]]></category><category><![CDATA[public key]]></category><dc:creator><![CDATA[iamswetha7]]></dc:creator><pubDate>Sat, 07 Jan 2023 21:13:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673124090841/d9708ab8-0704-4fcf-b029-93f05e59b4a4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🎯<strong>What is RSA encryption?</strong></p>
<p>👉RSA (Rivest-Shamir-Adleman) is an asymmetric encryption algorithm that is widely used for secure data transmission.<br />👉In RSA, each person has a pair of keys: a public key and a private key.<br />👉The public key can be shared with anyone, while the private key must be kept secret.<br />👉When a message is sent using RSA, the sender encrypts the message using the recipient's public key.<br />👉The recipient then decrypts the message using their private key.<br />👉This ensures that only the intended recipient can read the message, as only they have the private key needed to decrypt it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672664604949/c29cdad8-ce93-420d-9fab-7b1747a9f4d7.jpeg" alt class="image--center mx-auto" /></p>
<p>📌<strong>A simple demonstration of how the RSA algorithm works</strong></p>
<p>🌲Suppose Swetha wants to send a secure message to Vijay.<br />🌲Vijay generates a pair of keys using the RSA algorithm and sends his public key to Swetha.<br />🌲Swetha uses Vijay's public key to encrypt her message and sends the encrypted message (ciphertext) to Vijay.<br />🌲Vijay then uses his private key to decrypt the ciphertext and read the original message (plaintext).  </p>
<p>📌<strong>Key generation:</strong></p>
<p>🌲Vijay selects two prime numbers, p and q, and calculates n = p × q. In this example, p = 5 and q = 11, so n = 5 × 11 = 55.<br />🌲Vijay calculates a value called the totient, denoted φ(n), which is the number of positive integers less than n that are relatively prime to n. In this case, φ(55) = (p − 1) × (q − 1) = 4 × 10 = 40.<br />🌲The prime factorization of 40 is 2 × 2 × 2 × 5.<br />🌲Vijay selects a public key, e, that is relatively prime to φ(n), i.e., shares none of its prime factors 2 and 5. In this example, let's say e = 7.<br />🌲Vijay calculates his private key, d, such that e × d ≡ 1 (mod φ(n)); equivalently, d = (1 + x × φ(n)) / e for some integer x = 0, 1, 2, 3, etc.<br />After some calculation (done here with Excel), d = (1 + 4 × 40) / 7 = 23.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672664730999/1b2313c8-cf7c-4620-ad8e-e9fc6e264d04.jpeg" alt class="image--center mx-auto" /></p>
<p>🌲We now have n=55, e=7, d=23<br />🌲Vijay's public key is the pair (e, n), and his private key is the pair (d, n). Vijay sends his public key to Swetha.  </p>
<p>📌<strong>Encryption:</strong></p>
<p>🌲Swetha wants to send the message "HELLO" to Vijay.<br />🌲She converts the message to numbers using a predetermined scheme (e.g., A=1, B=2, …, Z=26), giving m = 8 5 12 12 15.<br />🌲Swetha calculates the ciphertext, c, for each letter using the formula c ≡ m^e (mod n), where m is the plaintext value.<br />🌲In this case, c = 8^7 (mod 55), 5^7 (mod 55), 12^7 (mod 55), 12^7 (mod 55), 15^7 (mod 55).<br />🌲c = 2 25 23 23 5. Swetha sends the ciphertext to Vijay.</p>
<p>📌<strong>Decryption:</strong></p>
<p>🌲Vijay receives Swetha's ciphertext, 2 25 23 23 5.<br />🌲Vijay recovers the plaintext, m, using the formula m = c^d (mod n).<br />🌲In this case, m = 2^23 (mod 55), 25^23 (mod 55), 23^23 (mod 55), 23^23 (mod 55), 5^23 (mod 55) = 8 5 12 12 15.<br />🌲Vijay converts the numbers back to the message "HELLO" using the predetermined scheme.<br />👉The big-number modular arithmetic can be checked with this <a target="_blank" href="https://www.calculator.net/big-number-calculator.html?cx=128&amp;cy=437&amp;cp=20&amp;co=mod">calculator</a>.</p>
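<p>To make the arithmetic above concrete, here is a minimal Python sketch of the same toy example (educational only; Python 3.8+ is assumed for the modular-inverse form of <code>pow</code>, and real systems should use a vetted library such as pycryptodome):</p>
<pre><code class="lang-plaintext"># Toy RSA walkthrough with the numbers from the example above.
p, q = 5, 11
n = p * q                    # 55
phi = (p - 1) * (q - 1)      # 40
e = 7                        # public exponent, coprime to phi
d = pow(e, -1, phi)          # modular inverse of e mod phi -> 23

message = "HELLO"
plain = [ord(ch) - ord("A") + 1 for ch in message]  # A=1 ... Z=26
cipher = [pow(m, e, n) for m in plain]              # [2, 25, 23, 23, 5]
decoded = [pow(c, d, n) for c in cipher]            # [8, 5, 12, 12, 15]
print("".join(chr(m + ord("A") - 1) for m in decoded))  # HELLO
</code></pre>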
<p>🎯<strong>Below are the screenshots of the Python code used to create the RSA public, private, encryption, and decryption algorithms.</strong></p>
<p>👉Please refer to the link to generate RSA keys using python <a target="_blank" href="https://pycryptodome.readthedocs.io/en/latest/src/examples.html#generate-an-rsa-key">RSA Keygen</a></p>
<p>🎯<strong>The Public and Private key Generation</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673121979707/02a585bd-cd78-4e03-8f24-533406e729c4.jpeg" alt class="image--center mx-auto" /></p>
<p>🎯<strong>Encryption Algorithm</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673121946539/09cc06e0-169b-4dc0-a44e-7875893cfea3.jpeg" alt class="image--center mx-auto" /></p>
<p>🎯<strong>Decryption Algorithm</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673122113282/626bc72e-67ff-4eaf-b538-4432d9bfd388.jpeg" alt class="image--center mx-auto" /></p>
<p>🌲<strong>Strengths</strong><br />👉Compared to symmetric encryption, there is no need to exchange a secret key ahead of time.<br />👉"Non-repudiation" is supported because data cannot be altered undetected during communication.<br />👉It relies on a one-way function: knowing the public key does not reveal the private key, since that would require factoring n into its primes.</p>
<p>🌲<strong>Weakness</strong><br />👉Key generation is comparatively difficult and slow.<br />👉The RSA algorithm is relatively slow when compared to symmetric algorithms.</p>
<ul>
<li><p><strong>Community and Social Footprints:</strong></p>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/swethamudunuri/">Swetha Mudunuri</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/swethamudunuri07">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud">YouTube Cloud DevOps Free Trainings</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/cloudnloud/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.com/invite/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Managing Docker Containers]]></title><description><![CDATA[1. Container Commands
1. Create a new container using the below command
sudo docker run -it ubuntu /bin/bash


The "docker run" command provides all launching capabilities for docker to create a container. We use docker run to create new containers.
...]]></description><link>https://blog.cloudnloud.com/managing-docker-containers</link><guid isPermaLink="true">https://blog.cloudnloud.com/managing-docker-containers</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Testing]]></category><dc:creator><![CDATA[Rajiv C R]]></dc:creator><pubDate>Thu, 05 Jan 2023 08:23:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672836461462/bd845908-36ef-4ba3-9895-12b4e87e16fa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-container-commands">1. Container Commands</h2>
<h3 id="heading-1-create-a-new-container-using-the-below-command">1. Create a new container using the below command</h3>
<pre><code class="lang-plaintext">sudo docker run -it ubuntu /bin/bash
</code></pre>
<ul>
<li><p>The "docker run" command provides all launching capabilities for docker to create a container. We use docker run to create new containers.</p>
</li>
<li><p>-i: keeps STDIN open on the container</p>
</li>
<li><p>-t: tells Docker to assign a pseudo-TTY to the container</p>
</li>
<li><p>-it: together, these give you an interactive shell</p>
</li>
<li><p>ubuntu: an image, also called a "stock image" or "base image". This image is downloaded from Docker Hub the first time we run the 'docker run' command</p>
</li>
<li><p>/bin/bash: the shell program to run inside the container</p>
</li>
</ul>
<h3 id="heading-2-inspect-the-new-container">2. Inspect the new container</h3>
<p>Confirm that the container looks like a separate machine by running the commands below:</p>
<ol>
<li><p>hostname</p>
</li>
<li><p>cat /etc/hosts</p>
</li>
<li><p>hostname -i</p>
</li>
<li><p>ps -ef</p>
</li>
</ol>
<h3 id="heading-3-ssh-setup-for-containers">3. SSH setup for containers</h3>
<p>Setup SSH in the containers so that they can communicate with each other</p>
<ol>
<li><p>Create two containers having IP addresses - 172.17.0.2, 172.17.0.3</p>
</li>
<li><p>Try to connect to 172.17.0.3 from 172.17.0.2 using the below command. You will get an error.</p>
<p> <code>$ ssh demo@172.17.0.3</code> (you won't be able to connect by default)</p>
</li>
<li><p>Install ssh server</p>
<p> <code>$ apt-get update</code></p>
<p> <code>$ apt-get install openssh-server</code></p>
</li>
<li><p>Start the server</p>
<p> <code>$ service ssh start</code> (status/stop/restart)</p>
</li>
<li><p>Create a user and set up a password</p>
<p> <code>$ useradd -m -d /home/demo -s /bin/bash demo</code></p>
<p> <code>$ passwd demo</code></p>
</li>
<li><p>Connect to the container using ssh from 172.17.0.2 or any other machine.</p>
<p> <code>$ ssh demo@172.17.0.3</code></p>
</li>
<li><p>Enable root user over ssh</p>
<p> Add the below line under "# Authentication:" in "/etc/ssh/sshd_config" <code>PermitRootLogin yes</code></p>
</li>
</ol>
<h3 id="heading-4-shutdown-a-container">4. Shutdown a container</h3>
<p>"exit" to stop the container</p>
<h3 id="heading-5-log-in-to-a-stopped-container">5. Log in to a stopped container</h3>
<p><code>$ docker start container_name</code></p>
<p><code>$ docker attach container_name</code></p>
<h3 id="heading-6-list-all-containersstopped-and-running">6. List all containers(stopped and running)</h3>
<p><code>$ docker container ls -a</code></p>
<p><code>$ docker ps -a</code></p>
<h3 id="heading-7-list-given-no-of-containers">7. List given no. of containers</h3>
<p><code>$ docker ps -a -n1</code></p>
<h3 id="heading-8-list-running-containers-only">8. List running containers only</h3>
<p><code>$ docker container ls</code></p>
<p><code>$ docker ps</code></p>
<h3 id="heading-9-list-stopped-containers-only">9. List Stopped containers only</h3>
<p><code>$ docker container ls -f status=exited</code> (Where Status can be exited/running)</p>
<p>The "docker container ls" command output shows:</p>
<p>- Image name from which container is created</p>
<p>- ID - the container can be identified using short UUID, longer UUID Or name.</p>
<p>- Status of the container (Up / Exited)</p>
<p>- Name of the container</p>
<h3 id="heading-10-show-the-last-container-which-you-have-created-stoppedrunning">10. Show the last container which you have created (stopped/running)</h3>
<p><code>$ docker container ls -l</code></p>
<h3 id="heading-11-naming-the-container">11. Naming the container</h3>
<p><code>$ docker run --name demo -it ubuntu /bin/bash</code></p>
<p>Note: Two containers can't have the same name.</p>
<h3 id="heading-12-rename-a-container">12. Rename a container</h3>
<p><code>$ docker rename container_name_1 container_name_2</code></p>
<h3 id="heading-13-deleting-a-container-by-giving-its-name-or-id">13. Deleting a container by giving its name or ID</h3>
<p><code>$ docker rm ID/name</code></p>
<h3 id="heading-14-delete-all-runningstopped-containers-at-once">14. Delete all (running/stopped) containers at once</h3>
<p><code>$ docker rm -f $(docker container ls -a -q)</code></p>
<p><code>$ docker rm -f $(docker ps -a -q)</code></p>
<h3 id="heading-15-delete-running-containers-only">15. Delete running containers only</h3>
<p><code>$ docker rm -f $(docker container ls -q)</code></p>
<p><code>$ docker rm -f $(docker ps -q)</code></p>
<h3 id="heading-16-list-stopped-containers-only">16. List stopped containers only</h3>
<p><code>$ docker container ls -a -f status=exited</code></p>
<h3 id="heading-17-starting-a-stopped-container">17. Starting a stopped container</h3>
<p><code>$ docker start container_name</code></p>
<h3 id="heading-18-attaching-to-a-running-container">18. Attaching to a running container</h3>
<p><code>$ docker attach container_name</code> (OR)</p>
<p><code>$ docker attach b1b1c8dc1939</code></p>
<h3 id="heading-19-run-a-linux-command-remotely-in-a-container-or-get-an-independent-terminal-from-a-container-remotely-from-the-host">19. Run a Linux command remotely in a container Or Get an independent terminal from a container remotely (from the Host)</h3>
<p><code>$ docker exec -it tomcat-server ps -ef</code></p>
<h3 id="heading-20-stopping-a-container-from-the-host-machine">20. Stopping a container from the 'host machine'</h3>
<p><code>$ docker stop container_name</code>(Gracefully stop the container)</p>
<p><code>$ docker kill container_name</code>(Forcibly stop the container)</p>
<h3 id="heading-21-inspecting-the-containers-processes-from-the-host-machine">21. Inspecting the container's processes from the host machine</h3>
<p><code>$ docker top container_name</code></p>
<h3 id="heading-22-show-the-last-4-containers-stoppedrunning">22. Show the last 4 containers (stopped/running)</h3>
<p><code>$ docker ps -n4</code></p>
<h3 id="heading-23-create-a-container-in-a-background-mode-without-terminal-access">23. Create a container in a background mode ( without terminal access )</h3>
<p><code>$ docker run -it -d ubuntu /bin/bash</code></p>
<h3 id="heading-24-find-more-about-the-container">24. Find More About The Container</h3>
<p>The 'docker inspect' command interrogates the container and returns complete information about it.</p>
<p>e.g., image name, IP address, memory details, hostname, etc.</p>
<p>Examples:</p>
<p><code>$ docker inspect container_name</code></p>
<p><code>$ docker inspect -f '{{.Config.Hostname}}' container_name</code></p>
<p><code>$ docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' container_name</code></p>
<p>Note: use --format (OR) -f</p>
<h3 id="heading-25-list-all-container-names">25. List all container names</h3>
<p><code>$ docker inspect --format "{{.Name}}" $(docker ps -a -q) | tr -d '/'</code></p>
<h2 id="heading-2-stats">2. STATS:</h2>
<h3 id="heading-1-display-usage-statistics-of-a-container">1. Display usage statistics of a container</h3>
<p><code>$ docker stats --no-stream container_name</code></p>
<p><code>$ docker stats --no-stream --all</code></p>
<p><code>$ docker stats --no-stream --format {{.MemUsage}} container_name</code></p>
<p><code>$ docker stats --no-stream --format {{.CPUPerc}} container_name</code></p>
<h3 id="heading-2-allocating-memory-for-a-container-below-command-allocates-1-gb-ram">2. Allocating memory for a container (below command allocates 1 GB RAM)</h3>
<p><code>$ docker run -it --name container_name -m 1g ubuntu /bin/bash</code></p>
<p><code>$ docker run -it --name container_name -m 1024m ubuntu /bin/bash</code></p>
<h3 id="heading-3-updating-memory-of-an-existing-container">3. Updating memory of an existing container</h3>
<p><code>$ docker update -m 2048m container_name</code></p>
<h3 id="heading-4-cpu-allocation">4. CPU Allocation</h3>
<p><code>$ docker run -it --cpus="2" --name container_name ubuntu /bin/bash</code></p>
<p><code>$ docker update --cpus="2" conatiner_name</code></p>
<h1 id="heading-community-and-social-footprints">Community and Social Footprints :</h1>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/rajivtech">Rajiv Ravi</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Installation of Prometheus]]></title><description><![CDATA[In this blog, we will cover how to install Prometheus.
Following are the three ways to run the Prometheus

Script.

Systemd service.

Docker container.
 This blog will cover only two methods run as a script and systemd service.


Environment used:
RA...]]></description><link>https://blog.cloudnloud.com/installation-of-prometheus</link><guid isPermaLink="true">https://blog.cloudnloud.com/installation-of-prometheus</guid><category><![CDATA[#prometheus]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Sanjay Surwase]]></dc:creator><pubDate>Sun, 01 Jan 2023 05:46:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672414288798/ed1b3947-48a4-4c7a-ae58-b28daa60a8fb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog, we will cover how to install Prometheus.</p>
<h3 id="heading-following-are-the-three-ways-to-run-the-prometheus">Following are the three ways to run the Prometheus</h3>
<ol>
<li><p>Script.</p>
</li>
<li><p>Systemd service.</p>
</li>
<li><p>Docker container.</p>
<p> This blog will cover only the first two methods: running as a script and as a systemd service.</p>
</li>
</ol>
<h3 id="heading-environment-used">Environment used:</h3>
<p>RAM=2GB</p>
<p>CPU=2cores</p>
<p>Disk=20GB free</p>
<p>OS= RHEL9</p>
<h3 id="heading-1-run-as-a-script">1. Run as a script</h3>
<p>Prometheus supports multiple operating systems, so we can download the Prometheus files for our OS and architecture.</p>
<ol>
<li><p>Download Prometheus using the wget command shown below.</p>
<p> <a target="_blank" href="https://prometheus.io/">Download Prometheus</a></p>
<p> <a target="_blank" href="https://github.com/prometheus/prometheus/releases/download/v2.40.5/prometheus-2.40.5.linux-amd64.tar.gz">https://github.com/prometheus/prometheus/releases/download/v2.40.5/prometheus-2.40.5.linux-amd64.tar.gz</a></p>
</li>
<li><p>Extract the TAR file.</p>
<pre><code class="lang-plaintext"> tar -xvf prometheus-2.40.5.linux-amd64.tar.gz
</code></pre>
</li>
<li><p>Run Prometheus from the extracted directory.</p>
<pre><code class="lang-plaintext"> cd prometheus-2.40.5.linux-amd64
 ./prometheus
</code></pre>
</li>
<li><p>Access the Prometheus UI at http://IPaddress:9090/ or http://hostname:9090/</p>
<p> example: <a target="_blank" href="http://192.168.0.102:9090/">http://192.168.0.102:9090/</a></p>
</li>
</ol>
<h3 id="heading-2-run-as-a-systemd-service">2. Run as a systemd service.</h3>
<ol>
<li><p>Download Prometheus using the wget command (as in the previous section).</p>
<p> <a target="_blank" href="https://github.com/prometheus/prometheus/releases/download/v2.40.5/prometheus-2.40.5.linux-amd64.tar.gz">https://github.com/prometheus/prometheus/releases/download/v2.40.5/prometheus-2.40.5.linux-amd64.tar.gz</a></p>
</li>
<li><p>Extract the TAR file.</p>
<pre><code class="lang-plaintext"> tar -xvf prometheus-2.40.5.linux-amd64.tar.gz
</code></pre>
</li>
<li><p>Create a user and directories, and change ownership.</p>
<pre><code class="lang-plaintext"> useradd --no-create-home -s /bin/false prometheus
 mkdir /etc/prometheus       ## configuration file
 mkdir /var/lib/prometheus   ## libraries 
 chown prometheus:prometheus /etc/prometheus
 chown prometheus:prometheus /var/lib/prometheus
</code></pre>
</li>
<li><p>Copy the extracted files to /var/lib/prometheus</p>
<pre><code class="lang-plaintext"> cp -r prometheus-2.40.5.linux-amd64/* /var/lib/prometheus/
</code></pre>
</li>
<li><p>Change ownership.</p>
<pre><code class="lang-plaintext"> chown -R prometheus:prometheus /var/lib/prometheus
</code></pre>
</li>
<li><p>Move config file to /etc/prometheus</p>
<pre><code class="lang-plaintext"> mv /var/lib/prometheus/prometheus.yml /etc/prometheus/
</code></pre>
</li>
<li><p>Check configuration file</p>
<pre><code class="lang-plaintext"> grep -v '#' /etc/prometheus/prometheus.yml
</code></pre>
</li>
<li><p>Create symbolic links for prometheus and promtool in the /usr/bin directory to make them executable from any path.</p>
<pre><code class="lang-plaintext">  cp -s /var/lib/prometheus/prometheus /usr/bin
  cp -s /var/lib/prometheus/promtool /usr/bin
</code></pre>
</li>
<li><p>Create the systemd service unit:</p>
<pre><code class="lang-plaintext"> vim /usr/lib/systemd/system/prometheus.service

 [Unit]
 Description=Prometheus
 Wants=network-online.target
 After=network-online.target

 [Service]
 User=prometheus
 Group=prometheus
 Type=simple
 ExecStart=/usr/bin/prometheus \
 --config.file /etc/prometheus/prometheus.yml \
 --storage.tsdb.path /var/lib/prometheus/ \
 --web.console.templates=/var/lib/prometheus/consoles \
 --web.console.libraries=/var/lib/prometheus/console_libraries

 [Install]
 WantedBy=multi-user.target
</code></pre>
</li>
<li><p>Configure the firewall if you have enabled it.</p>
<pre><code class="lang-plaintext">firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --reload
</code></pre>
</li>
<li><p>Enable and start service.</p>
<pre><code class="lang-plaintext">systemctl enable --now prometheus.service
</code></pre>
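<p>Optionally, verify that the service is running:</p>
<pre><code class="lang-plaintext">systemctl status prometheus.service
</code></pre>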
</li>
<li><p>Access the Prometheus UI at http://IPaddress:9090/ or http://hostname:9090/</p>
<p>example: <a target="_blank" href="http://192.168.0.102:9090/">http://192.168.0.102:9090/</a></p>
</li>
</ol>
<p>Follow each step carefully for a successful installation.</p>
]]></content:encoded></item><item><title><![CDATA[AWS SageMaker MLOps Project Walkthrough]]></title><description><![CDATA[This walkthrough uses the template MLOps template for model building, training, and deployment to demonstrate using MLOps projects to create a CI/CD system to build, train, and deploy models.
Prerequisites
To complete this walkthrough, you need:

An ...]]></description><link>https://blog.cloudnloud.com/aws-sagemaker-mlops-project-walkthrough</link><guid isPermaLink="true">https://blog.cloudnloud.com/aws-sagemaker-mlops-project-walkthrough</guid><category><![CDATA[mlops]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Sampath Kumar Basa]]></dc:creator><pubDate>Sun, 18 Dec 2022 11:41:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671363361378/erQ3rP8l1.PNG" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This walkthrough uses the template <strong>MLOps template for model building, training, and deployment</strong> to demonstrate using MLOps projects to create a CI/CD system to build, train, and deploy models.</p>
<p><strong>Prerequisites</strong></p>
<p>To complete this walkthrough, you need:</p>
<ul>
<li><p>An IAM account or IAM Identity Center to sign in to Studio. For information, see <a target="_blank" href="https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html">Onboard to Amazon SageMaker Domain</a>.</p>
</li>
<li><p>Permission to use SageMaker-provided project templates. For information, see <a target="_blank" href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-studio-updates.html">SageMaker Studio Permissions Required to Use Projects</a>.</p>
</li>
<li><p>Basic familiarity with the Studio user interface. For information, see <a target="_blank" href="https://docs.aws.amazon.com/sagemaker/latest/dg/studio-ui.html">Amazon SageMaker Studio UI Overview</a>.</p>
</li>
</ul>
<p><strong>Topics</strong></p>
<ul>
<li><p>Step 1: Create the Project</p>
</li>
<li><p>Step 2: Clone the Code Repository</p>
</li>
<li><p>Step 3: Make a Change in the Code</p>
</li>
<li><p>Step 4: Approve the Model</p>
</li>
<li><p>(Optional) Step 5: Deploy the Model Version to Production</p>
</li>
<li><p>Step 6: Clean Up Resources</p>
</li>
</ul>
<h2 id="heading-step-1-create-the-project"><strong>Step 1: Create the Project</strong></h2>
<p>In this step, you create a SageMaker MLOps project by using a SageMaker-provided project template to build, train, and deploy models.</p>
<p><strong>To create the SageMaker MLOps project</strong></p>
<ol>
<li><p>Sign in to Studio. For more information, see <a target="_blank" href="https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html">Onboard to Amazon SageMaker Domain</a>.</p>
</li>
<li><p>In the Studio sidebar, choose the <strong>Home</strong> icon.</p>
</li>
<li><p>Select <strong>Deployments</strong> from the menu, and then select <strong>Projects</strong>.</p>
</li>
<li><p>Choose <strong>Create project</strong>.</p>
<p>The <strong>Create project</strong> tab appears.</p>
</li>
<li><p>If not selected already, choose <strong>SageMaker templates</strong>, then choose <strong>MLOps template for model building, training, and deployment</strong>.</p>
</li>
<li><p>For <strong>Project details</strong>, enter a name and description for your project.</p>
</li>
</ol>
<p>When the project appears in the <strong>Projects</strong> list with a <strong>Status</strong> of <strong>Create completed</strong>, move on to the next step.</p>
<h2 id="heading-step-2-clone-the-code-repository"><strong>Step 2: Clone the Code Repository</strong></h2>
<p>After you create the project, two CodeCommit repositories are created in the project. One of the repositories contains code to build and train a model, and one contains code to deploy the model. In this step, you clone the repository to your local SageMaker project that contains the code to build and train the model to the local Studio environment so that you can work with the code.</p>
<p><strong>To clone the code repository</strong></p>
<ol>
<li><p>In the Studio sidebar, choose the <strong>Home</strong> icon.</p>
</li>
<li><p>Select <strong>Deployments</strong> from the menu, and then select <strong>Projects</strong>.</p>
</li>
<li><p>Select the project you created in the previous step to open the project tab for your project.</p>
</li>
<li><p>In the project tab, choose <strong>Repositories</strong>, and in the <strong>Local path</strong> column for the repository that ends with <strong>modelbuild</strong>, choose <strong>clone repo...</strong>.</p>
</li>
<li><p>In the dialog box that appears, accept the defaults and choose <strong>Clone repository</strong>.</p>
<p><img src="https://docs.aws.amazon.com/images/sagemaker/latest/dg/images/projects/projects-walkthrough-clone-details.png" alt /></p>
<p>When clone of the repository is complete, the local path appears in the <strong>Local path</strong> column. Choose the path to open the local folder that contains the repository code in Studio.</p>
</li>
</ol>
<h2 id="heading-step-3-make-a-change-in-the-code"><strong>Step 3: Make a Change in the Code</strong></h2>
<p>Now make a change to the pipeline code that builds the model and check in the change to initiate a new pipeline run. The pipeline run registers a new model version.</p>
<p><strong>To make a code change</strong></p>
<ol>
<li><p>In Studio, choose the file browser icon and navigate to the <code>pipelines/abalone</code> folder. Double-click <code>pipeline.py</code> to open the code file.</p>
</li>
<li><p>In the <code>pipeline.py</code> file, find the line that sets the training instance type.</p>
<pre><code class="lang-plaintext">training_instance_type = ParameterString(
        name="TrainingInstanceType", default_value="ml.m5.xlarge"
</code></pre>
<p>Change <code>ml.m5.xlarge</code> to <code>ml.m5.large</code>, then type <code>Ctrl+S</code> to save the change.</p>
</li>
<li><p>Choose the <strong>Git</strong> icon. Stage, commit, and push the change in <code>pipeline.py</code>. Also, enter a summary in the <strong>Summary</strong> field and an optional description in the <strong>Description</strong> field.</p>
<p><img src="https://docs.aws.amazon.com/images/sagemaker/latest/dg/images/projects/projects-walkthrough-commit.png" alt /></p>
</li>
</ol>
<p>After pushing your code change, the MLOps system initiates a run of the pipeline that creates a new model version. In the next step, you approve the new model version to deploy it to production.</p>
<h2 id="heading-step-4-approve-the-model"><strong>Step 4: Approve the Model</strong></h2>
<p>Now you approve the new model version that was created in the previous step to initiate a deployment of the model version to a SageMaker endpoint.</p>
<p><strong>To approve the model version</strong></p>
<ol>
<li><p>In the Studio sidebar, choose the <strong>Home</strong> icon.</p>
</li>
<li><p>Select <strong>Deployments</strong> from the menu, and then select <strong>Projects</strong>.</p>
</li>
<li><p>Select the name of the project you created in the first step to open the project tab for your project.</p>
</li>
<li><p>In the project tab, choose <strong>Model groups</strong>, then double-click the name of the model group that appears.</p>
<p>The model group tab appears.</p>
</li>
<li><p>In the model group tab, double-click <strong>Version 1</strong>. The <strong>Version 1</strong> tab opens. Choose <strong>Update status</strong>.</p>
</li>
<li><p>In the model <strong>Update model version status</strong> dialog box, in the <strong>Status</strong> dropdown list, select <strong>Approve</strong>, then choose <strong>Update status</strong>.</p>
<p>Approving the model version causes the MLOps system to deploy the model to staging. To view the endpoint, choose the <strong>Endpoints</strong> tab on the project tab.</p>
</li>
</ol>
<h2 id="heading-optional-step-5-deploy-the-model-version-to-production"><strong>(Optional) Step 5: Deploy the Model Version to Production</strong></h2>
<p>Now you can deploy the model version to the production environment.</p>
<p><strong>Note</strong></p>
<p>To complete this step, you need to be an administrator in your Studio domain. If you are not an administrator, skip this step.</p>
<p><strong>To deploy the model version to the production environment</strong></p>
<ol>
<li><p>Log in to the CodePipeline console at <a target="_blank" href="https://console.aws.amazon.com/codepipeline/">https://console.aws.amazon.com/codepipeline/</a></p>
</li>
<li><p>Choose <strong>Pipelines</strong>, then choose the pipeline with the name <strong>sagemaker-</strong><code>projectname</code>-<code>projectid</code>-modeldeploy, where <code>projectname</code> is the name of your project, and <code>projectid</code> is the ID of your project.</p>
</li>
<li><p>In the <strong>DeployStaging</strong> stage, choose <strong>Review</strong>.</p>
</li>
<li><p>In the <strong>Review</strong> dialog box, choose <strong>Approve</strong>.</p>
<p>Approving the <strong>DeployStaging</strong> stage causes the MLOps system to deploy the model to production. To view the endpoint, choose the <strong>Endpoints</strong> tab on the project tab in Studio.</p>
</li>
</ol>
<h2 id="heading-step-6-clean-up-resources"><strong>Step 6: Clean Up Resources</strong></h2>
<p>To stop incurring charges, clean up the resources that were created in this walkthrough. To do this, complete the following steps.</p>
<p><strong>Note</strong></p>
<p>To delete the AWS CloudFormation stack and the Amazon S3 bucket, you need to be an administrator in Studio. If you are not an administrator, ask your administrator to complete those steps.</p>
<ol>
<li><p>In the Studio sidebar, choose the <strong>Home</strong> icon.</p>
</li>
<li><p>Select <strong>Deployments</strong> from the menu, and then select <strong>Projects</strong>.</p>
</li>
<li><p>Select the target project from the dropdown list. If you don’t see your project, type the project name and apply the filter to find your project.</p>
</li>
<li><p><strong>You can delete a Studio project in one of the following ways:</strong></p>
<ol>
<li><p><strong>You can delete the project from the projects list.</strong></p>
<p>Right-click the target project and choose <strong>Delete</strong> from the dropdown list.</p>
</li>
<li><p><strong>You can delete a project from the Project details section.</strong></p>
<ol>
<li><p>When you've found your project, double-click it to view its details in the main panel.</p>
</li>
<li><p>Choose <strong>Delete</strong> from the <strong>Actions</strong> menu.</p>
</li>
</ol>
</li>
</ol>
</li>
<li><p>Confirm your choice by choosing <strong>Delete</strong> from the <strong>Delete Project</strong> window.</p>
<p>This deletes the AWS Service Catalog provisioned product that the project created. This includes the CodeCommit, CodePipeline, and CodeBuild resources created for the project.</p>
</li>
<li><p>Delete the AWS CloudFormation stacks that the project created. There are two stacks, one for staging and one for production. The names of the stacks are <strong>sagemaker-</strong><code>projectname</code>-<code>project-id</code>-deploy-staging and <strong>sagemaker-</strong><code>projectname</code>-<code>project-id</code>-deploy-prod, where <code>projectname</code> is the name of your project, and <code>project-id</code> is the ID of your project.</p>
<p>For information about how to delete an AWS CloudFormation stack, see <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html">Deleting a stack on the AWS CloudFormation console</a> in the <em>AWS CloudFormation User Guide</em>. A CLI sketch covering this and the next step follows this list.</p>
</li>
<li><p>Delete the Amazon S3 bucket that the project created. The name of the bucket is <strong>sagemaker-project-</strong><code>project-id</code>, where <code>project-id</code> is the ID of your project.</p>
</li>
</ol>
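<p>If you are comfortable with the CLI, the stack and bucket cleanup in the last two steps can also be scripted (an illustrative sketch; substitute your actual project name and project ID):</p>
<pre><code class="lang-plaintext">aws cloudformation delete-stack --stack-name sagemaker-projectname-project-id-deploy-staging
aws cloudformation delete-stack --stack-name sagemaker-projectname-project-id-deploy-prod
aws s3 rb s3://sagemaker-project-project-id --force
</code></pre>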
<h1 id="heading-conclusion">Conclusion</h1>
<p>I hope this blog has been helpful. I'll see you in the next blog.</p>
<h1 id="heading-community-and-social-footprints">Community and Social Footprints :</h1>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/samtechno/">Sampath Kumar Basa</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/samtechlab">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Retail Banking - Real time Use cases]]></title><description><![CDATA[This is Episode 3 of data engineering where I will be covering real-time use case in the retail banking sector. This forms the core topic of this series, something that I have worked extensively in my past.
In this episode, I would like to define the...]]></description><link>https://blog.cloudnloud.com/retail-banking-real-time-use-cases</link><guid isPermaLink="true">https://blog.cloudnloud.com/retail-banking-real-time-use-cases</guid><category><![CDATA[dataengineering]]></category><category><![CDATA[Retail Banking]]></category><category><![CDATA[agile]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[Azure Databricks]]></category><dc:creator><![CDATA[Srinath Babu Kunka Suburam Dwaraganath]]></dc:creator><pubDate>Sat, 17 Dec 2022 04:19:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671245325876/uKEPYgJeY.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is Episode 3 of data engineering where I will be covering real-time use case in the retail banking sector. This forms the core topic of this series, something that I have worked extensively in my past.</p>
<p>In this episode, I would like to define the problem statements and give an Agile overview. The next episode will focus on solution architecture and extensive story-building methodologies to achieve the desired outcome.</p>
<p>This one in particular is my favorite episode, as it sets the tone for the entire series.</p>
<p>Before I commence, I would like to thank the product owners, business analysts, Agile coaches, and scrum masters with whom I have worked throughout my career, along with the many professionals with similar interests from whom I have gathered knowledge during coffee catch-ups and external meet-ups over the years.</p>
<p>From now on, the key focus is more on problem statements and solutions. To keep it more interactive, I would request the readers to kindly comment and ask questions wherever they see fit.</p>
<h3 id="heading-overview-on-agile"><strong>Overview on Agile</strong></h3>
<p>Before getting into more details, I would like to give a brief introduction to some of the agile jargons</p>
<p>The hierarchy of Agile works like this: Epic -&gt; Features -&gt; User Stories/Backlog Items -&gt; Tasks</p>
<p>An <strong>Epic</strong> is a large chunk of requirements that the business wants to accomplish.</p>
<p>A <strong>Feature</strong> is a subdivision of an Epic that defines the <strong><em>MVP</em></strong> the business wants to achieve. An MVP is a minimum viable product that can be shipped or used by the business.</p>
<p>A <strong>User story</strong> or <strong>Backlog item</strong> is a smaller piece of a feature breakdown that helps shape or complete the feature.</p>
<p>A real world example can be:</p>
<p>Epic - Business wants to build a car.</p>
<ol>
<li><p>Feature#1 – Build the Interior Components</p>
<ul>
<li><p>Story#1 – Design the car seat layout</p>
</li>
<li><p>Story#2 – Design the car seat belt mechanism</p>
</li>
<li><p>Story#3 – Design the color of the car seat</p>
</li>
<li><p>Story#4 – Design the floor mat</p>
</li>
</ul>
</li>
<li><p>Feature#2 – Build the exterior Components</p>
<ul>
<li><p>Story#1 – Design the car locking mechanism</p>
</li>
<li><p>Story#2 – Design the car’s wiper system</p>
</li>
<li><p>Story#3 – Design the color of the car</p>
</li>
<li><p>Story#4 – Design the roof antenna system</p>
</li>
</ul>
</li>
<li><p>Feature#3 – Build the Electrical system/components</p>
</li>
<li><p>Feature#4 – Build the Engine Components</p>
</li>
</ol>
<p>Features can be worked on in parallel and integrated. Better control, flexibility, superior quality, continuous improvement, efficiency, and high team morale are some of the key benefits of working on an Agile-driven project.</p>
<p>Now that we have familiarized ourselves with the Agile jargon, it is time to see some examples from a risk, compliance &amp; KYC perspective.</p>
<h3 id="heading-real-time-use-case"><strong>Real time use case</strong></h3>
<p>In the below section(s), I have defined a few Epics from a business point of view, with the assumption that you will be having discussions with your respective AML/CTF, Marketing and Campaign, and KYC teams.</p>
<ul>
<li><strong><mark>Anti-Money Laundering and Counter-Terrorism Financing</mark></strong></li>
</ul>
<p>As the Transaction Due Diligence manager,</p>
<p>I want to track, in real time, transactions which meet either of the following two criteria:</p>
<p>a. Cash withdrawal using a credit card</p>
<p>b. Money transfer to ultra-high- or high-risk countries as per FATF (Financial Action Task Force)</p>
<p>So that we can mitigate fraud, AML and CTF risks.</p>
<ul>
<li><strong><mark>Marketing and Campaign</mark></strong></li>
</ul>
<p>a. As the Marketing and Campaigns manager,</p>
<p>I want to identify customers who were on-boarded in the last 3 months, who are potential candidates and are open to discussions about new products and services through phone calls or emails,</p>
<p>So that these customers can benefit from new product offers and recommendations, thus improving the overall customer experience.</p>
<p>b. As the Marketing and Campaigns manager,</p>
<p>I want to list customers who closed their accounts in the last 3 months,</p>
<p>So that we can reach out to seek their feedback on the reason for closure.</p>
<ul>
<li><strong><mark>Know Your Customer</mark></strong></li>
</ul>
<p>As the KYC Country lead,</p>
<p>I want a summary count and a detailed list of customers missing residential, mailing, and digital (mobile number and email ID) contact details, which are critical for any customer due diligence process,</p>
<p>So that a DQ remediation plan can be worked out by an appropriate team to help improve the DQ and thus the E2E process itself.</p>
<p>We now have a very good understanding of the source system (from Episode 2), and the problem statements are well defined in the current episode.</p>
<p>The original plan was to have the problem statement and solution architecture covered in a single episode.</p>
<p>However, I realized it would become cumbersome to cover them both in one go.</p>
<p>Hope you enjoyed this episode and got a better understanding of how Agile works by going through some real-time use cases.</p>
<p>In the next episode we will focus on the solution design architecture and write the corresponding user stories to fulfil the business requirements.</p>
<p>In case you have missed out on my previous episodes refer to <a target="_blank" href="https://lnkd.in/giq5YPku">https://lnkd.in/giq5YPku</a></p>
<h1 id="heading-community-and-social-footprints"><strong><em>Community</em> and <em>Social</em> Footprints :</strong></h1>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/srinathksd/"><strong>Srinath Babu Kunka Suburam Dwaraganath</strong></a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudnloud"><strong>GitHub</strong></a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud"><strong>Twitter</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1"><strong>YouTube Cloud DevOps Free Trainings</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/80359681/"><strong>Linkedin Page</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/"><strong>Linkedin Group</strong></a></p>
</li>
<li><p><a target="_blank" href="https://discord.gg/vbjRQGVhuF"><strong>Discord Channel</strong></a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud"><strong>Dev</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Service - NodePort, ClusterIP, LoadBalancer]]></title><description><![CDATA[What is meant by Service in Kubernetes ?
By default your application running in the pods are not available for outside world in order to make your application available to outside services are being used which routes the traffic to container into the...]]></description><link>https://blog.cloudnloud.com/kubernetes-service-nodeport-clusterip-loadbalancer</link><guid isPermaLink="true">https://blog.cloudnloud.com/kubernetes-service-nodeport-clusterip-loadbalancer</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AWS Certified Solutions Architect Associate]]></category><dc:creator><![CDATA[Deactivated User]]></dc:creator><pubDate>Mon, 05 Dec 2022 11:39:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670239002025/26-UFbvsm.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-meant-by-service-in-kubernetes">What is meant by Service in Kubernetes ?</h1>
<p>By default, applications running in Pods are not reachable from the outside world. To make an application available externally, <strong>Services</strong> are used, which route traffic to the containers inside the Pods. A Service is a mechanism that exposes your Pods on a network. There are three Service types commonly used:</p>
<ul>
<li><p>NodePort</p>
</li>
<li><p>ClusterIP</p>
</li>
<li><p>LoadBalancer</p>
</li>
</ul>
<h1 id="heading-nodeport">NodePort</h1>
<p>In this type of Service, you allow external traffic in by opening a TCP port on your worker nodes; kube-proxy, which runs on every node, proxies requests from that port to the Pods behind the Service. Behind the scenes a ClusterIP is also created, which routes the traffic from the node port onwards. Load balancing is performed internally to spread traffic across the different Pods backing the Service.</p>
<p><img src="https://user-images.githubusercontent.com/69069614/205437185-296cf220-fd10-43e6-a778-36b08aa23292.png" alt="image" /></p>
<p><strong>NodePort Type:</strong></p>
<ul>
<li><p>Tied to the host, e.g., an EC2 instance.</p>
</li>
<li><p>Depends on the host; if the host is not available, the Service cannot be reached through it.</p>
</li>
<li><p>The node port is opened on every worker node, so the Pods can be reached via any node's IP.</p>
</li>
<li><p>NodePort allocates the port from the range 30000-32767 by default.</p>
</li>
</ul>
<p><strong>NodePort Manifest File:</strong></p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - port: 80
    nodePort: 30123
  selector:
    app: myapp
  type: NodePort
</code></pre>
<p>Now apply the above YAML file to create the Service (e.g., <code>kubectl apply -f mysvc.yaml</code>, assuming you saved it under that name). To check the Service:</p>
<pre><code class="lang-plaintext">kubectl get svc mysvc
</code></pre>
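<p>Once the Service is up, you can reach it from outside the cluster through any node's address on the node port. A quick sketch, assuming 192.0.2.10 is one of your node IPs (substitute your own):</p>
<pre><code class="lang-plaintext">kubectl get nodes -o wide      # look up a node's IP
curl http://192.0.2.10:30123   # reach the Service via the node port
</code></pre>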
<h1 id="heading-clusterip">ClusterIP</h1>
<p>ClusterIP is the default Service type: if you create a Service and don't specify any type, a ClusterIP is allocated by default. It is only available internally, for Pods to communicate with each other. Applications can communicate within the cluster without any access from the outside world.</p>
<p><img src="https://user-images.githubusercontent.com/69069614/205437234-86c5d379-422c-4296-b0e0-d6ec6b6f0f5b.png" alt="image" /></p>
<p>It uses an IP from the cluster's Service IP pool and is accessible via a DNS name within the cluster's scope (e.g., <code>mysvc.default.svc.cluster.local</code> for a Service named <code>mysvc</code> in the <code>default</code> namespace).</p>
<p><strong>ClusterIP Manifest file:</strong></p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - port: 80
  selector:
    app: myapp
  type: ClusterIP
</code></pre>
<p>Here the type is optional: even if you don't specify it, a ClusterIP Service is created by default.</p>
<p>You can apply the above YAML file and check:</p>
<pre><code class="lang-plaintext">kubectl get svc mysvc
</code></pre>
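<p>Since a ClusterIP Service is only reachable from inside the cluster, a quick way to test it is from a temporary Pod. A minimal sketch, assuming the Service lives in the <code>default</code> namespace:</p>
<pre><code class="lang-plaintext">kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://mysvc.default.svc.cluster.local
</code></pre>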
<h1 id="heading-loadbalancer">LoadBalancer</h1>
<p>This is the most commonly used Service type for exposing applications, but it only works out of the box with managed Kubernetes offerings like GKE, AKS, and EKS. In the case of AWS, it creates a Classic Load Balancer that routes traffic to the EC2 nodes and, via a NodePort Service, on to the Pods. There is no automatic filtering or routing: traffic to the external IP and port is sent straight to your Service, which means LoadBalancer Services are suitable for all traffic types.</p>
<p><img src="https://user-images.githubusercontent.com/69069614/205437209-44a2dcb9-ae8f-4a07-8c00-15318387b861.png" alt="image" /></p>
<p><strong>LoadBalancer Type:</strong></p>
<ul>
<li><p>Provides external access to the Pods.</p>
</li>
<li><p>Load-balances traffic across the nodes.</p>
</li>
</ul>
<p><strong>LoadBalancer Manifest File:</strong></p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - port: 80
  selector:
    app: myapp
  type: LoadBalancer
</code></pre>
<p>If you are not using a managed Kubernetes offering and create a LoadBalancer Service anyway, the Service is created without any error, but its external IP stays pending and it cannot be used.</p>
<p>By applying the above file you will get an external IP from the cloud provider's IP pool, which you can access. To check the created Service:</p>
<pre><code class="lang-plaintext">kubectl get svc mysvc
</code></pre>
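<p>The cloud provider assigns the external address asynchronously, so it may show as <code>&lt;pending&gt;</code> for a minute or two. A quick sketch (the address is a placeholder for whatever EXTERNAL-IP your provider assigns):</p>
<pre><code class="lang-plaintext">kubectl get svc mysvc        # wait until EXTERNAL-IP is populated
curl http://&lt;external-ip&gt;    # reach the Service through the load balancer
</code></pre>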
<h1 id="heading-faqs">FAQs</h1>
<ul>
<li><strong>What is the difference between NodePort and LoadBalancer?</strong></li>
</ul>
<div class="hn-table">
<table>
<thead>
<tr>
<td>NodePort</td><td>LoadBalancer</td></tr>
</thead>
<tbody>
<tr>
<td>By creating a NodePort Service, you ask Kubernetes to reserve a port on all of its nodes and forward incoming connections to the Pods that are part of the Service.</td><td>With a LoadBalancer, clients don't deal with any reserved node port; they only use the load balancer's address.</td></tr>
<tr>
<td>A NodePort Service can be accessed not only through the Service’s internal cluster IP, but also through any node’s IP and the reserved node port.</td><td>Externally accessible only through the load balancer's public IP.</td></tr>
<tr>
<td>Specifying the port isn’t mandatory; Kubernetes will choose a random port if you omit it (default range 30000-32767).</td><td>The load balancer has its own unique, publicly accessible IP address and redirects all connections to your Service.</td></tr>
<tr>
<td>If you point your clients at only one node and that node fails, your clients can’t access the Service anymore.</td><td>A load balancer in front of the nodes spreads requests across all healthy nodes and never sends them to a node that is offline at that moment.</td></tr>
</tbody>
</table>
</div><ul>
<li><strong>Does ClusterIP load-balance?</strong></li>
</ul>
<p>Yes. The ClusterIP provides a load-balanced IP address: traffic sent to it is forwarded across the Pods that match the label selector. The ClusterIP Service must define one or more ports to listen on, with target ports to forward TCP/UDP traffic to the containers.</p>
<ul>
<li><strong>Does LoadBalancer use NodePort?</strong></li>
</ul>
<p>Yes. The Service can be accessed through the IP address provided by the cloud load balancer, which routes the request to a NodePort and from there forwards it to a ClusterIP. So, LoadBalancer builds upon NodePort and ClusterIP.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>ClusterIP, NodePort, and LoadBalancer Services route traffic to the Pods in your cluster. Each one has its own use cases. They enable network access to your applications and, where required, make them publicly accessible.</p>
<h1 id="heading-community-and-social-footprints"><em>Community</em> and <em>Social</em> Footprints :</h1>
<ul>
<li><p><a target="_blank" href="https://www.linkedin.com/in/shubhcloud/">Shubh Dadhich</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/sdshubhcom">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></p>
</li>
<li><p><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Azure Storage]]></title><description><![CDATA[Introduction
The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage offers highly available, scalable and secure storage for a variety of data objects in the cloud. Azure Storage data objects...]]></description><link>https://blog.cloudnloud.com/azure-storage</link><guid isPermaLink="true">https://blog.cloudnloud.com/azure-storage</guid><category><![CDATA[storage]]></category><category><![CDATA[azure-storage]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[cloud-storage]]></category><dc:creator><![CDATA[Sathish Jayabalan]]></dc:creator><pubDate>Mon, 28 Nov 2022 15:06:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1669646878316/4h7mQWYgb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage offers highly available, scalable, and secure storage for a variety of data objects in the cloud. Azure Storage data objects are accessible from anywhere in the world over HTTP or HTTPS via a REST API. Azure Storage Explorer provides a user-interface tool for interacting with Azure Storage.</p>
<h1 id="heading-azure-storage-data-services">Azure Storage data services:</h1>
<ul>
<li>Azure Blobs: A scalable object store for text and binary data.</li>
<li>Azure Files: Managed file shares for cloud or on-premises deployments.</li>
<li>Azure Queues: A messaging store for reliable messaging between application components.</li>
<li>Azure Tables: A NoSQL store for schemaless storage of structured data.</li>
<li>Azure Disks: Block-level storage volumes for Azure VMs.</li>
</ul>
<p>Each service is accessed through a storage account.</p>
<h2 id="heading-storage-types-available-in-azure">Storage Types Available in Azure:</h2>
<h1 id="heading-blob-storage">Blob storage:</h1>
<p>Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing unstructured data, such as text or binary data.
Blob storage is ideal for:</p>
<ul>
<li>Serving images or documents directly to a browser.</li>
<li>Storing files for distributed access.</li>
<li>Streaming video and audio.</li>
<li>Storing data for backup and restore, disaster recovery, and archiving.</li>
</ul>
<h1 id="heading-blob-types">Blob Types:</h1>
<ul>
<li>Block blobs: store text and binary data. Block blobs are made up of blocks of data that can be managed individually. Block blobs can store up to about 190.7 TiB.</li>
<li>Append blobs: made up of blocks like block blobs, but optimized for append operations. Append blobs are ideal for scenarios such as logging data from virtual machines.</li>
<li>Page blobs: store random-access files up to 8 TiB in size. Page blobs store virtual hard drive (VHD) files and serve as disks for Azure virtual machines.</li>
</ul>
<p>Objects in Blob storage can be accessed from anywhere using HTTP or HTTPS.</p>
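<p>For reference, every blob is addressable by a URL with a predictable shape; a hypothetical example, assuming a storage account named <code>mystorageacct</code>:</p>
<pre><code class="lang-plaintext">https://mystorageacct.blob.core.windows.net/mycontainer/myimage.png
</code></pre>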
<h1 id="heading-azure-files">Azure Files:</h1>
<p>Azure Files provides network file shares that can be accessed using the standard Server Message Block (SMB) protocol. That means multiple VMs can share the same files with both read and write access (see the mount sketch after the list below).
One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere using a URL that points to the file and includes a shared access signature (SAS) token. You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.</p>
<p>File shares can be used for many common scenarios:</p>
<ul>
<li>Many on-premises applications use file shares. This feature makes it easier to migrate those applications that share data to Azure. If you mount the file share to the same drive letter that the on-premises application uses, the part of your application that accesses the file share should work with minimal, if any, changes.</li>
<li>Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by multiple developers in a group can be stored on a file share, ensuring that everybody can find them, and that they use the same version.</li>
<li>Logs, metrics, and crash dumps are just three examples of data that can be written to a file share and processed or analyzed later.</li>
</ul>
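<p>As a sketch of the SMB scenario above, this is roughly how an Azure file share is mounted on a Linux VM; the account name, share name, and key below are placeholders:</p>
<pre><code class="lang-plaintext">sudo mkdir -p /mnt/myshare
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.0,username=mystorageacct,password=&lt;storage-account-key&gt;,serverino
</code></pre>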
<h1 id="heading-queue-storage">Queue storage:</h1>
<p>Azure Queue Storage is a service for storing large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously.</p>
<h1 id="heading-azure-table">Azure Table:</h1>
<p>Azure Table storage stores large amounts of structured data. The service is a NoSQL datastore which accepts authenticated calls from inside and outside the Azure cloud. Azure tables are ideal for storing structured, non-relational data. Common uses of Table storage include:</p>
<ul>
<li>Storing TBs of structured data capable of serving web-scale applications</li>
<li>Storing datasets that don't require complex joins, foreign keys, or stored procedures and can be denormalized for fast access</li>
<li>Quickly querying data using a clustered index</li>
<li>Accessing data using the OData protocol and LINQ queries</li>
</ul>
<p>You can use Table storage to store and query structured, non-relational data, and your tables will scale as demand increases.</p>
<h1 id="heading-disk-storage">Disk storage:</h1>
<p>An Azure managed disk is a virtual hard disk (VHD). You can think of it like a physical disk in an on-premises server but, virtualized. Azure-managed disks are stored as page blobs, which are a random IO storage object in Azure. We call a managed disk 'managed' because it is an abstraction over page blobs, blob containers, and Azure storage accounts. With managed disks, all you have to do is provision the disk, and Azure takes care of the rest.</p>
<h1 id="heading-redundancy">Redundancy:</h1>
<p>To ensure that your data is durable, Azure Storage stores multiple copies of your data. When you set up your storage account, you select a redundancy option. </p>
<h1 id="heading-migration">Migration:</h1>
<p>There are several options for migrating data into or out of Azure Storage. Which option you choose depends on the size of your dataset and your network bandwidth.
Data transfer can be offline or over the network connection. Choose your solution depending on your:</p>
<ul>
<li>Data size: the size of the data intended for transfer,</li>
<li>Transfer frequency: one-time or periodic data ingestion, and</li>
<li>Network: the bandwidth available for data transfer in your environment.</li>
</ul>
<p>There are several native and third-party tools available to migrate the data. The Azure-native migration tools are Data Box, Azure Storage Explorer, Data Box Gateway, and AzCopy; an AzCopy sketch follows below.</p>
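<p>As an illustration, a typical AzCopy upload of a local folder to a Blob container; the local path, account, container, and SAS token are placeholders:</p>
<pre><code class="lang-plaintext">azcopy copy "/data/backup" \
  "https://mystorageacct.blob.core.windows.net/mycontainer?&lt;SAS-token&gt;" --recursive
</code></pre>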
<h1 id="heading-community-and-social-footprints">Community and Social Footprints :</h1>
<ul>
<li><a target="_blank" href="https://www.linkedin.com/in/satjayab/">Sathish Jeyabalan</a></li>
<li><a target="_blank" href="https://github.com/samtechlab">GitHub</a></li>
<li><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></li>
<li><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></li>
<li><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></li>
<li><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></li>
<li><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></li>
<li><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Introduction of Prometheus]]></title><description><![CDATA[What is Prometheus?

Prometheus is an open-source monitoring tool used for event monitoring and alerting. 
Prometheus is written in go language and it is licensed under Apache 2.0 license. 
it is used for infrastructure and customized services monito...]]></description><link>https://blog.cloudnloud.com/introduction-of-prometheus</link><guid isPermaLink="true">https://blog.cloudnloud.com/introduction-of-prometheus</guid><category><![CDATA[#prometheus]]></category><category><![CDATA[Devops]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[Collaboration]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Sanjay Surwase]]></dc:creator><pubDate>Mon, 21 Nov 2022 16:44:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1669048189746/t_rRO7kKJ.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-prometheus">What is Prometheus?</h1>
<ul>
<li>Prometheus is an open-source monitoring tool used for event monitoring and alerting.</li>
<li>Prometheus is written in the Go language and is licensed under the Apache 2.0 license.</li>
<li>It is used for monitoring infrastructure and custom services.</li>
<li>It allows you to analyze how your applications and infrastructure are performing from the metrics they expose.</li>
<li>It uses a multi-dimensional data model, with time-series data identified by metric name and key-value pairs.</li>
<li>It uses a very simple query language, PromQL (see the sample queries after this list).</li>
<li>To monitor custom services, you can add instrumentation to your code via Prometheus client libraries for languages like Go, Python, Java, and Scala.</li>
<li>It is a full-fledged monitoring system with its own alert manager.</li>
<li>It can handle millions of metric ingestions per second.</li>
</ul>
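<p>To give a feel for PromQL, here are two illustrative queries; <code>http_requests_total</code> is a hypothetical counter exposed by an instrumented service:</p>
<pre><code class="lang-plaintext"># per-second request rate, averaged over the last 5 minutes
rate(http_requests_total[5m])

# the same rate, aggregated per job
sum by (job) (rate(http_requests_total[5m]))
</code></pre>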
<h1 id="heading-what-are-the-alternative-monitoring-tools">What are the alternative monitoring tools?</h1>
<p><strong>Below are some alternative monitoring tools to Prometheus:</strong></p>
<p>1. Graphite</p>
<p>2. InfluxDB</p>
<p>3. OpenTSDB</p>
<p>4. Nagios</p>
<p>5. Sensu</p>
<h2 id="heading-so-question-comes-why-we-should-go-for-prometheus">So, question comes why we should go for Prometheus?</h2>
<p>It has more feature than any other tools  </p>
<ol>
<li>Provides a flexible query language (PromQL)</li>
<li>A Pushgateway for collecting metrics from short-lived batch jobs</li>
<li>A wide range of exporters is available.</li>
<li>Provides an API endpoint for third-party tools like Grafana.</li>
</ol>
<p><strong>What can we monitor with Prometheus?</strong></p>
<p>1. Service metrics</p>
<p>2. Host metrics</p>
<p>3. Uptime/status of a website</p>
<p><strong>There are 5 stages of monitoring with Prometheus:</strong></p>
<ol>
<li>Data collection</li>
<li>Data storage </li>
<li>Alerting </li>
<li>Visualization </li>
<li>Analytics and monitoring. </li>
</ol>
<p><strong>What are the components of Prometheus?</strong></p>
<p><strong>1. Monitoring</strong> -
the systematic process of collecting and recording the activities taking place in a target system.</p>
<p><strong>2. Alert/Alerting</strong> -
an alert is the outcome of an alerting rule in Prometheus that is actively firing. Alerts are sent from Prometheus to the Alertmanager.</p>
<p><strong>3. Alertmanager</strong> -
the Prometheus server generates alerts when things fall outside the rules and sends them to the Alertmanager.
The job of the Alertmanager is to group the alerts, apply filters on them, and send them out via email, PagerDuty, or Slack.</p>
<p><strong>4. Target</strong> -
a target is an object whose metrics are to be monitored. A target can be anything: Windows, Linux, or your own application.</p>
<p><strong>5. Instance</strong> -
an endpoint you can scrape is called an instance.</p>
<p>Ex. 3.4.9.5:8790, 8.4.5.9:5678</p>
<p>The part after the colon is the port, so 8790 and 5678 distinguish the two instances.</p>
<p><strong>6. Job</strong> -
a job is a collection of targets/instances.</p>
<p><strong>7. Sample</strong> -
a sample is a single value at a point in time within a time series.</p>
<h1 id="heading-architecture-of-prometheus">Architecture of Prometheus</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669046507809/uD4r2ZOyr.png" alt="Architecture.png" /></p>
<p><strong>Prometheus server</strong></p>
<p>The Prometheus server collects multi-dimensional time-series data and then analyzes and aggregates the collected data; this process of collecting metrics is called scraping.
It pulls the metrics automatically from the targets, hence users don't need to push the metrics for analysis. The only thing we need to do is expose metrics such that Prometheus can access them.
For that, we must provide an HTTP endpoint which returns the complete set of metrics (a minimal scrape configuration sketch follows the list below).</p>
<ol>
<li><p>Retrieval: retrieves the data from the application endpoints.</p>
</li>
<li><p>TSDB: stores the metrics data so that later it can be retrieved and analyzed.</p>
</li>
<li><p>HTTP server: responsible for serving the collected data to the dashboard.</p>
</li>
</ol>
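<p>A minimal scrape configuration sketch, reusing the example instances from earlier; the job name and scrape interval are illustrative:</p>
<pre><code class="lang-plaintext"># prometheus.yml (fragment)
global:
  scrape_interval: 15s        # how often Prometheus scrapes its targets

scrape_configs:
  - job_name: 'my-app'        # a job is a collection of instances
    static_configs:
      - targets: ['3.4.9.5:8790', '8.4.5.9:5678']
</code></pre>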
<p><strong>Pull Metrics</strong></p>
<p>Prometheus pulls the data from targets via the pull method and stores it in the TSDB.</p>
<p><strong>Pushgateway</strong></p>
<p>Prometheus works on a pull-based model, but some component metrics cannot be pulled:
short-lived jobs may finish before they can be scraped. For monitoring these jobs, Prometheus uses the Pushgateway. The jobs push their metrics to the Pushgateway, and the Prometheus server then collects them from the Pushgateway via its usual pull method.</p>
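<p>As a sketch, a short-lived job can push a metric to the Pushgateway with a plain HTTP request; the metric name, value, and Pushgateway address here are illustrative:</p>
<pre><code class="lang-plaintext">echo "backup_duration_seconds 42" | curl --data-binary @- \
  http://pushgateway.example.com:9091/metrics/job/nightly_backup
</code></pre>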
<p><strong>Service Discovery</strong> </p>
<p>Discovers the targets to be scraped.</p>
<p><strong>Alertmanager</strong></p>
<p>The Prometheus server pushes alerts to the Alertmanager; the Alertmanager applies filters on them and sends them out via
email or Slack.</p>
<p><strong>HTTP Server</strong></p>
<p>The HTTP server fetches the metrics from the TSDB and serves the collected metrics to the dashboard.</p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Cluster In Simple Steps]]></title><description><![CDATA[Does that word cluster amaze you? If you are trying to get an Kubernetes cluster running by yourself. Then you are at the right place. The article covers simple to do steps to understand kubernetes by setting up a simple cluster with a master and 2 w...]]></description><link>https://blog.cloudnloud.com/kubernetes-cluster-in-simple-steps</link><guid isPermaLink="true">https://blog.cloudnloud.com/kubernetes-cluster-in-simple-steps</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[cloudnative]]></category><category><![CDATA[technology]]></category><category><![CDATA[AWS Certified Solutions Architect Associate]]></category><dc:creator><![CDATA[Vijayalakshmi Bakthavachalam]]></dc:creator><pubDate>Thu, 17 Nov 2022 09:13:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1668675973602/yUiYvB5jJ.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Does that word cluster amaze you? If you are trying to get a Kubernetes cluster running by yourself, then you are at the right place. This article covers simple steps to understand Kubernetes by setting up a cluster with a master and 2 worker nodes on Google Cloud Platform. The use of GCP is a personal choice; the same procedure can be followed with other vendors, as the steps are run on Ubuntu virtual machines on the cloud platform.</p>
<p>Kubernetes is a widely used container orchestration tool. It has its own pros and cons, but having a solid understanding of how it works will help you stay current with technology trends in the <strong>cloud native space</strong>. Spend some time getting yourself familiarized and you will not regret it.</p>
<p><strong>Prerequisites:</strong></p>
<ol>
<li>Virtual Machine 1 (named MASTER): 2 CPUs and 4 GB RAM, Ubuntu machine</li>
<li>Virtual Machine 2 (named WorkerNode1): 2 CPUs and 4 GB RAM, Ubuntu machine</li>
<li>Virtual Machine 3 (named WorkerNode2): 2 CPUs and 4 GB RAM, Ubuntu machine</li>
</ol>
<p>Once we have the 3 Machines set up, the next step is to install the requisite packages and software on the Machines so they are ready to function as a Kubernetes Cluster.</p>
<h2 id="heading-commands-to-be-run-on-all-3-machines-unless-stated-otehrwise">COMMANDS TO BE RUN ON ALL 3 MACHINES UNLESS STATED OTEHRWISE</h2>
<h3 id="heading-install-docker-engine-on-all-nodes">INSTALL DOCKER ENGINE ON ALL NODES:</h3>
<pre><code>apt-get update &amp;&amp; apt-get install -y apt-transport-https curl
</code></pre><pre><code>curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
</code></pre><pre><code>cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
</code></pre><pre><code>apt-get update
</code></pre><pre><code>apt-get install -y kubelet kubeadm kubectl
</code></pre><p>(TO BE RUN ONLY ON THE WORKER NODES):</p>
<pre><code>apt-mark hold kubelet kubeadm kubectl
</code></pre><h3 id="heading-install-docker-packages-on-all-the-nodes">INSTALL DOCKER PACKAGES ON ALL THE NODES</h3>
<p>As a root user execute the below commands on master and worker nodes</p>
<pre><code>sudo apt-get remove docker docker-engine docker.io containerd runc
</code></pre><pre><code>sudo apt-get update
</code></pre><pre><code>sudo apt-get install ca-certificates curl gnupg lsb-release
</code></pre><pre><code>sudo mkdir -p /etc/apt/keyrings
</code></pre><pre><code>curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre><pre><code>sudo apt-get update
</code></pre><pre><code>sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
</code></pre><pre><code>mkdir -p /etc/systemd/system/docker.service.d
</code></pre><pre><code>systemctl daemon-reload
systemctl restart docker
systemctl enable docker
</code></pre><pre><code>docker -v
</code></pre><pre><code>rm /etc/containerd/config.toml   # the shipped default config disables the CRI plugin; remove it so kubelet can talk to containerd
</code></pre><pre><code>systemctl restart containerd
</code></pre><h3 id="heading-initialization-of-kube-master-to-be-run-only-on-the-master-node">INITIALIZATION OF KUBE MASTER: TO BE RUN ONLY ON THE MASTER NODE</h3>
<p>As a root user on the master node, execute the below commands</p>
<pre><code>kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr=192.168.0.0/16
</code></pre><p>Once we have run the above command, it produces output that contains the join information for the Kubernetes cluster; this information has to be kept safe and secure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668674113888/tYrqznvau.png" alt="kubeadminit_op1.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668674029624/Ez-QkJawX.png" alt="kubeaminit_op2.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668674243444/Hc8MK16S-.png" alt="imagefinal.png" /></p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
<span class="hljs-keyword">export</span> kubever=$(kubectl version | base64 | tr -d <span class="hljs-string">'\n'</span>)
</code></pre><p><strong>(Pod Network Add-on Installation, Weave Net):</strong></p>
<pre><code>kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
</code></pre><pre><code>kubectl get nodes
</code></pre><p>When we run the kubectl get nodes on the Master Machine, the output just displays the master machine.</p>
<p>To add the worker nodes to the Kubernetes cluster, run the join command from the kubeadm init output on each worker node, as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668674545989/vcq169Wt7.png" alt="kubernetesworkernode1.png" /></p>
<p>Now when we run the “kubectl get nodes” on the master node, we will be able to see the master and workernode1 listed as nodes to the kubernetes cluster. The same has to be repeated on worker node 2 to join to the cluster.</p>
<p><em>So why should we try setting this up manually when there are plenty of managed Kubernetes offerings from cloud providers? Setting up the Kubernetes cluster helps us understand the various components that are installed when we run the kubeadm init command, and how a node is joined to the master as a worker. This creates a mind map that stays in memory far longer than reading alone</em>.</p>
<p>If you are someone who would like to fork a GitHub repo and try out the commands, please refer to </p>
<p><a target="_blank" href="https://github.com/cloudnloud/Kubernetes_Admin_Training/blob/main/class3-k8s-installation/installation.md">https://github.com/cloudnloud/Kubernetes_Admin_Training/blob/main/class3-k8s-installation/installation.md</a></p>
<p>If you prefer video format, please refer to </p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=md2BtnJYtt8&amp;list=PLh_VNk4-EHTMhIR-NIgI4tCEHdO9U-A8F&amp;index=3">https://www.youtube.com/watch?v=md2BtnJYtt8&amp;list=PLh_VNk4-EHTMhIR-NIgI4tCEHdO9U-A8F&amp;index=3</a></p>
<p>Happy Learning!!!</p>
<p>Please feel free to post any queries/clarifications.</p>
<h1 id="heading-community-and-social-footprints"><em>Community</em> and <em>Social</em> Footprints :</h1>
<ul>
<li><a target="_blank" href="https://www.linkedin.com/in/vijayatech">Vijayalakshmi Bakthavachalam</a></li>
<li><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></li>
<li><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></li>
<li><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></li>
<li><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></li>
<li><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></li>
<li><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></li>
<li><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Retail Banking - Data Models]]></title><description><![CDATA[Retail Banking services offered by a financial institution for individual consumers and serve the following functions:
Deposits Accounts – Surplus from individual customers as savings and pay interest.
Loan / Credit Accounts – Banks offer loans or cr...]]></description><link>https://blog.cloudnloud.com/retail-banking-data-models</link><guid isPermaLink="true">https://blog.cloudnloud.com/retail-banking-data-models</guid><category><![CDATA[Retail Banking]]></category><category><![CDATA[Data Architecture]]></category><category><![CDATA[Azure Databricks]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[data structures]]></category><dc:creator><![CDATA[Srinath Babu Kunka Suburam Dwaraganath]]></dc:creator><pubDate>Tue, 15 Nov 2022 06:19:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1668431978078/mMjq7BMIa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Retail banking services are offered by financial institutions to individual consumers and serve the following functions:</p>
<p><strong>Deposit Accounts</strong> – Banks take surplus funds from individual customers as savings and pay interest.</p>
<p><strong>Loan / Credit Accounts</strong> – Banks offer loans or credit to individual customers and earn interest.</p>
<p><strong>Cash management</strong> – Banks offer a variety of services for customers to manage and transact their money, for instance ATMs, cards, UPI, online transfers, etc.</p>
<p> <strong>Note</strong> – Retail banking is vast, and in this blog series I am trying to narrow the scope and build a use case so that I can explain in detail the architecture, design principles, data model, data engineering, governance, and visualization concepts E2E.</p>
<h2 id="heading-retail-banking-in-data-perspective"><u>Retail Banking in Data perspective</u></h2>
<h3 id="heading-accounts-details-accountdmpnghttpscdnhashnodecomreshashnodeimageuploadv1668433172233nuavz8anvpng-alignleft"><code>Accounts Details</code> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668433172233/NUAVz8AnV.png" alt="AccountDM.png" /></h3>
<h3 id="heading-customer-details-customerdmpnghttpscdnhashnodecomreshashnodeimageuploadv1668433271308-wn6wevelpng-alignleft"><code>Customer Details</code> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668433271308/-Wn6WeVEl.png" alt="CustomerDM.png" /></h3>
<h3 id="heading-transaction-details-transactiondmpnghttpscdnhashnodecomreshashnodeimageuploadv1668433894801v53oujjggpng-alignleft"><code>Transaction Details</code> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1668433894801/v53OujJGg.png" alt="TransactionDM.png" /></h3>
<p>Assume <b>Mr. and Mrs. Dhoni</b> walk into a nearby bank; below is the step-by-step process:</p>
<p>The bank verifies the customers, requests identity documents, does background checks, and generates a CIF (Customer Information File). It then understands the customers' needs and recommends products accordingly, for instance:</p>
<table>
<thead>
<tr><th>Customer</th><th>Account</th><th>Product</th></tr>
</thead>
<tbody>
<tr><td>Mr. Dhoni</td><td>A1</td><td>Savings Account</td></tr>
<tr><td>Mr. Dhoni</td><td>A2</td><td>Term Deposit</td></tr>
<tr><td>Mrs. Dhoni</td><td>A2</td><td>Term Deposit</td></tr>
<tr><td>Mrs. Dhoni</td><td>A3</td><td>Savings Account</td></tr>
</tbody>
</table>

<p>As per above table,</p>
<ul>
<li><strong>Mr. and Mrs. Dhoni</strong> have each opened a personal savings account for day-to-day transactional activities.</li>
<li><strong>Mr. and Mrs. Dhoni</strong> have opened a joint term deposit account, to lock an amount of money for an agreed length of time (the ‘term’) and get a guaranteed rate of interest for the term selected.</li>
</ul>
<p>From a data perspective, we understand that Customer to Account is a many-to-many relationship.</p>
<p><strong>Mr. Dhoni</strong> now wants to transfer money to his friend <strong>Mr. Sachin</strong>, hence he adds a payee. The payee record is linked to Mr. Dhoni.</p>
<p><strong>Mr Dhoni</strong> performs the following tasks,</p>
<ol>
<li>Deposits cash into his newly created account “A1”</li>
<li>Performs a payment from his account to the newly added payee.</li>
<li>The payee can be identified by account details, mobile number, email ID, etc.</li>
</ol>
<p>The Payment table captures the originating customer, the source account, the payee, and the amount transferred.</p>
<p>The cash deposit and the account-to-account transfer are two separate transactions; an entry is made in the Transaction table for each.</p>
<p><strong>While going through the data model, keep the following points in mind:</strong> </p>
<ul>
<li><p>Please keep in mind retail banking is a vast subject. I have tried my best to make the data model easy to understand and reasonably complete from a data engineer's perspective.</p>
</li>
<li><p>A customer can originate through the mobile app, customer care (telephonic), a branch, the website, or other channels (e.g., a broker).</p>
</li>
<li><p>Data is stored in a database, and data modelling plays a critical part in data management, governance, and intelligence. I have defined a normalized, simple, well-defined, and organized data model.</p>
</li>
<li><p>Relationships between the tables, with well-defined primary and foreign keys.</p>
</li>
<li><p>Audit fields for monitoring activities and compliance.</p>
</li>
<li><p>For CUSTOMER, I am limiting my scope to only “INDIVIDUALS”. Other possible CUSTOMER_PARTY_TYPE values are “ORGANIZATION”, “VISITOR”, etc., and the data model can expand with the addition of new customer party types.</p>
</li>
<li><p>For ACCOUNT, I am limiting my scope to only “SAVINGS”, “PERSONAL LOAN”, “CREDIT ACCOUNT” (credit card account) and “MORTGAGE” (home loan).</p>
</li>
<li><p>Finally, Transaction and Product are the other entities.</p>
</li>
</ul>
<p>I understand the content is theoretical, but please spend some time to understand it. In upcoming episodes, when we start the technical tasks, a good understanding of the source system and requirements will make things easier and the process enjoyable.</p>
<p>In the upcoming episode, we will play the architect role, build a data pipeline, and understand how the data flows from the source data model (OLTP) to the data lake and then into the data warehouse (OLAP). We will also define real-time use cases from the perspective of the KYC, Marketing and Campaign, and Anti-Money Laundering teams, which are common to most banks.</p>
<p>In case you have missed my previous episodes, refer to https://lnkd.in/gk5DuJbT</p>
<h1 id="heading-community-and-social-footprints"><em>Community</em> and <em>Social</em> Footprints :</h1>
<ul>
<li><a target="_blank" href="https://www.linkedin.com/in/srinathksd/">Srinath Babu Kunka Suburam Dwaraganath</a></li>
<li><a target="_blank" href="https://github.com/cloudnloud">GitHub</a></li>
<li><a target="_blank" href="https://twitter.com/cloudnloud">Twitter</a></li>
<li><a target="_blank" href="https://www.youtube.com/c/CloudnLoud?sub_confirmation=1">YouTube Cloud DevOps Free Trainings</a></li>
<li><a target="_blank" href="https://www.linkedin.com/company/80359681/">Linkedin Page</a></li>
<li><a target="_blank" href="https://www.linkedin.com/groups/9124202/">Linkedin Group</a></li>
<li><a target="_blank" href="https://discord.gg/vbjRQGVhuF">Discord Channel</a></li>
<li><a target="_blank" href="https://dev.to/cloudnloud">Dev</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Docker Episode-3: Docker Networking]]></title><link>https://blog.cloudnloud.com/docker-episode-3-docker-networking</link><guid isPermaLink="true">https://blog.cloudnloud.com/docker-episode-3-docker-networking</guid><category><![CDATA[Docker]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[containers]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS Certified Solutions Architect Associate]]></category><dc:creator><![CDATA[Bhanu Prasad]]></dc:creator><pubDate>Sun, 06 Nov 2022 20:07:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1667752102914/J4mZTQO8d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667750896445/GH_x6XY7A.png" alt="image.png" /></p>
]]></content:encoded></item><item><title><![CDATA[Amazon S3]]></title><description><![CDATA[Amazon S3
is easy-to-use object storage with a simple web service interface that you can use to store and retrieve any amount of data from anywhere on the web. Amazon S3 also allows you to pay only for the storage you actually use.
Advantage of Amazo...]]></description><link>https://blog.cloudnloud.com/amazon-s3</link><guid isPermaLink="true">https://blog.cloudnloud.com/amazon-s3</guid><category><![CDATA[simple storage services ]]></category><category><![CDATA[awe ]]></category><category><![CDATA[Amazon S3]]></category><category><![CDATA[S3]]></category><category><![CDATA[AWS s3]]></category><dc:creator><![CDATA[ALa Al-Din Al-Sharabi]]></dc:creator><pubDate>Sun, 06 Nov 2022 20:06:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1667742236145/9GSg_qz17.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-amazon-s3">Amazon S3</h3>
<p>is easy-to-use object storage with a simple web service interface that you can use to store and retrieve any amount of data from anywhere on the web. Amazon S3 also allows you to pay only for the storage you actually use.</p>
<h3 id="heading-advantage-of-amazon-s3">Advantage of Amazon S3</h3>
<p>Create Buckets.
Store data in Buckets.
Download data.
Permissions.
Standard interfaces.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667743028991/RqyZAMZAyZ.jpg" alt="s3 binfet.jpg" /></p>
<h3 id="heading-creating-2-buckets">Creating 2 Buckets</h3>
<p>Services =&gt; Storage =&gt; S3 =&gt; Create Bucket
Bucket Name =&gt; Bucket1 &amp; Bucket2 (the bucket name should be globally unique).</p>
<p>Regions =&gt; Mumbai &amp; Singapore.</p>
<p>Next =&gt; Next =&gt; uncheck Block Public Access =&gt; Next =&gt; Create bucket.</p>
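<p>As a sketch, the same two buckets can also be created from the AWS CLI, assuming the CLI is configured and the names are globally unique:</p>
<pre><code class="lang-plaintext">aws s3 mb s3://bucket1 --region ap-south-1       # Mumbai
aws s3 mb s3://bucket2 --region ap-southeast-1   # Singapore
</code></pre>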
<h3 id="heading-uploading-object-into-bucket">Uploading Object into Bucket</h3>
<p>Click on the test bucket =&gt; Upload =&gt; Add files =&gt; select any image =&gt; Next =&gt; Manage public permissions =&gt; Grant public read access =&gt; Next =&gt; Next =&gt; Upload.</p>
<p><strong>Note:</strong> As the bucket is public, and the object is also public, anyone in the world can access the content.
Click on the object =&gt; Get Object URL.<br />Using the object URL, anyone can access it.</p>
<h3 id="heading-features-of-s3">Features of S3</h3>
<ol>
<li>Versioning</li>
<li>Static website hosting</li>
<li>Storage classes / tiers</li>
<li>Cross-region replication (CRR)</li>
<li>Transfer Acceleration</li>
<li>Encryption</li>
<li>Metadata and tags</li>
<li>ACLs &amp; bucket policies</li>
<li>Lifecycle management</li>
</ol>
<p><strong>1. Versioning</strong> is a means of keeping multiple variants of an object in the same bucket.</p>
<blockquote>
<p>Create a new bucket:
Bucket Name =&gt; (bucket1) =&gt; Region - Mumbai =&gt; Next =&gt; Next =&gt; uncheck Block all public access =&gt; Next =&gt; Create bucket.</p>
</blockquote>
<p>Enable versioning:</p>
<blockquote>
<p>Click on the bucket =&gt; Properties tab (Observation: by default, versioning is disabled)
=&gt; Edit =&gt; Enable =&gt; Save Changes.</p>
</blockquote>
<p>Upload one object:</p>
<blockquote>
<p>Upload the file from Desktop =&gt; Next =&gt; Grant public access =&gt; Next =&gt; Next =&gt; Upload.</p>
</blockquote>
<p><strong>Advantages of versioning</strong></p>
<p><strong>- Recover a deleted object.</strong></p>
<p>Delete the object =&gt; select the checkbox =&gt; Actions =&gt; Delete.<br />Recover the object =&gt; enable "List versions"; we can see the object and its delete marker.
Select the delete marker checkbox =&gt; Actions =&gt; Delete =&gt; Delete =&gt; disable "List versions".</p>
<p><strong>Note:</strong> When we delete, the object is not actually deleted; it is only marked as deleted.
So, if you remove the delete marker, we get the object back.</p>
<p><strong>- We can maintain different versions of the file.</strong></p>
<p>Upload the same file again.
Get the object URL and check it from the browser; we get the latest file.
Even if you delete the file, we can recover both versions.
Select the object =&gt; Actions =&gt; Delete =&gt; Delete.</p>
<p>Select the "Show versions" toggle; we can see both versions of the file.</p>
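<p>The console steps above map to the following CLI calls, as a sketch:</p>
<pre><code class="lang-plaintext">aws s3api put-bucket-versioning --bucket bucket1 \
    --versioning-configuration Status=Enabled
aws s3api list-object-versions --bucket bucket1   # shows object versions and delete markers
</code></pre>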
<p><strong>2. Static website hosting</strong></p>
<p>Bucket name - bucket1 =&gt; Next =&gt; Next =&gt; uncheck Block all public access =&gt; Next =&gt; Create bucket.</p>
<p>Select the bucket =&gt; Properties =&gt; Static website hosting =&gt; Edit =&gt; Enable =&gt; Host a static website =&gt;
index document - index.html
error document - error.html
Save.</p>
<p>Upload index.html and error.html =&gt; Next =&gt; Next =&gt; Next =&gt; Upload.</p>
<p>Now, go to the properties of the bucket =&gt; Static website hosting =&gt; get the URL of the website (endpoint).</p>
<p><strong>Note:</strong>  Individual files should have public access.</p>
<p><strong>What is the use of error.html?</strong></p>
<p>If for any reason index.html is not accessible, the error page is displayed.</p>
<p>Let's make the index.html page private:
select index.html =&gt; ACL =&gt; Edit =&gt; public access =&gt; read =&gt; uncheck =&gt; Save Changes.</p>
<p>Now refresh the URL; we get the error.html page!</p>
<ul>
<li>Delete the files =&gt;  Delete the bucket.</li>
</ul>
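<p>For reference, website hosting can also be enabled from the CLI; a sketch, assuming the bucket and both HTML files already exist:</p>
<pre><code class="lang-plaintext">aws s3 website s3://bucket1/ --index-document index.html --error-document error.html
</code></pre>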
<p><strong>3. Storage classes / tiers</strong></p>
<p>Amazon S3 offers a range of storage classes that you can choose from based on the data access, resiliency, and cost requirements of your workloads.</p>
<p><strong>S3 Storage Classes</strong> can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.</p>
<p><strong>4. Cross-region replication (CRR)</strong></p>
<p>Let's say we have two buckets (the 1st bucket in Mumbai &amp; the 2nd bucket in Sydney).</p>
<p>When we upload an object in Mumbai, the object should also be available in Sydney.</p>
<p>As we are replicating an object into another region, this is called cross-region replication.</p>
<p>Vice versa will not happen. Also:</p>
<p>If we delete an object in Mumbai, it will not be deleted in Sydney.
If we edit an object in Mumbai, it will not be edited in Sydney.</p>
<p>Let's create a bucket:
bucket name - Mumbai-bucket
Region - Mumbai =&gt; Next =&gt; Next =&gt; uncheck Block all public access =&gt;
Next =&gt; Create bucket.</p>
<p>Let's create the 2nd bucket in Sydney:
bucket name - Sydney-bucket
Region - Sydney
Next =&gt; Next =&gt; uncheck Block all public access =&gt; Next =&gt; Create bucket.</p>
<p><strong>Enable cross-region replication in the Mumbai bucket</strong></p>
<p>Select the Mumbai bucket =&gt; Management =&gt; Replication Rules =&gt; Create Replication Rule =&gt; Enable Bucket versioning =&gt; Replication Rule Name - CRR1</p>
<p>Destination bucket =&gt; Sydney bucket =&gt; Enable versioning =&gt; IAM Role =&gt;</p>
<p>(To replicate between two regions, we need an IAM role.)</p>
<p>IAM Role - Create new role =&gt; Save.</p>
<p>Now, let's upload an object in the Mumbai bucket; it will be replicated to the Sydney bucket!</p>
<p><strong>5. Transfer Acceleration</strong></p>
<p>When we enable Transfer Acceleration, data is first transferred to the nearest edge location, and from the edge location it is transferred to the bucket.</p>
<p>Select the Mumbai bucket =&gt; Properties =&gt; Transfer acceleration =&gt; Edit =&gt; Enabled =&gt; Save Changes.</p>
<p><strong>6. Encryption</strong></p>
<p>There are two types of encryption:</p>
<ul>
<li>AES-256 (Advanced Encryption Standard) - single encryption</li>
<li>AWS KMS (Key Management Service) - double encryption (more secure)</li>
</ul>
<p>Select the required encryption.</p>
<p>Select the bucket =&gt; Properties =&gt; Default Encryption =&gt; Edit =&gt; Enable</p>
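<p>The CLI equivalent, as a sketch, setting AES-256 as the default encryption for the bucket:</p>
<pre><code class="lang-plaintext">aws s3api put-bucket-encryption --bucket bucket1 \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
</code></pre>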
<p><strong>7. Metadata and tags</strong></p>
<p><strong>Metadata</strong> =&gt; Provides more information about the object in key-value pairs.
The keys are predefined, e.g., Content-Type, Content-Language, etc.</p>
<p><strong>Tags</strong> =&gt; Also provide more information about the object in key-value pairs.
Here we supply both the keys and the values.</p>
<p>Select the object =&gt; Properties; we can see the metadata and tags.</p>
<p><strong>8. Access Control Lists &amp; bucket policies</strong></p>
<p>Select the bucket =&gt; Permissions tab =&gt; ACL =&gt; Edit =&gt; Add grantee =&gt;
enter the canonical ID =&gt; Save Changes.</p>
<p><strong>Note:</strong> ACLs can be applied at both the bucket level and the object level. Select the object and grant access by entering a canonical ID.</p>
<p><strong>Note:</strong> A bucket policy can be applied only to a bucket.</p>
<p>Select the bucket =&gt; Permissions; we can see the bucket policy.
Bucket policies are written in JSON code; defining them is the job of the AWS administrator.</p>
<p>Select any object =&gt; Permissions tab.
Observe: there is no bucket policy here,
as bucket policies apply at the bucket level only.</p>
<p><strong>9. Lifecycle management</strong></p>
<p>Let's create a new bucket.</p>
<p>Select the bucket =&gt; Management tab =&gt; Create lifecycle rule.</p>
<p>Rule name - Myrule
This rule applies to all objects =&gt; I acknowledge
Transition current versions of objects between storage classes</p>
<p>Standard-IA =&gt; 30 days<br />Add transition</p>
<p>One Zone-IA =&gt; 60 days</p>
<p>Create rule.</p>
<p>From now on, any object uploaded to the bucket will follow the rule for transitions.</p>
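<p>The same rule can be expressed through the CLI; a sketch, with the bucket name and rule ID as placeholders:</p>
<pre><code class="lang-plaintext">aws s3api put-bucket-lifecycle-configuration --bucket bucket1 \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "Myrule",
        "Status": "Enabled",
        "Filter": {},
        "Transitions": [
          {"Days": 30, "StorageClass": "STANDARD_IA"},
          {"Days": 60, "StorageClass": "ONEZONE_IA"}
        ]
      }]
    }'
</code></pre>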
]]></content:encoded></item></channel></rss>