<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Israel Orenuga's blog]]></title><description><![CDATA[Israel Orenuga's blog]]></description><link>https://isrxl.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 21:21:01 GMT</lastBuildDate><atom:link href="https://isrxl.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Troubleshooting XFS Filesystem Issues: Duplicate UUID and Mounting Errors]]></title><description><![CDATA[When working with Linux filesystems, particularly XFS, you may occasionally run into issues when attempting to mount partitions. Recently, I encountered a problem with mounting an XFS filesystem, which provided valuable insights into troubleshooting ...]]></description><link>https://isrxl.com/troubleshooting-linux-xfs-filesystem-duplicate-uuid-and-mounting-errors</link><guid isPermaLink="true">https://isrxl.com/troubleshooting-linux-xfs-filesystem-duplicate-uuid-and-mounting-errors</guid><category><![CDATA[Linux]]></category><category><![CDATA[troubleshooting]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[xfs]]></category><category><![CDATA[file system]]></category><category><![CDATA[volume mount]]></category><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Sun, 08 Sep 2024 10:19:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4Mw7nkQDByk/upload/28f1f488de442fef9b27174edb5ca72e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When working with Linux filesystems, particularly XFS, you may occasionally run into issues when attempting to mount partitions. 
Recently, I encountered a problem with mounting an XFS filesystem, which provided valuable insights into troubleshooting filesystem issues. In this post, I'll walk you through the issue, the troubleshooting process, and the steps I followed to resolve it.</p>
<h4 id="heading-the-problem">The Problem</h4>
<p>I attempted to mount an XFS partition using the following command:</p>
<pre><code class="lang-plaintext">sudo mount -t xfs /dev/sdm1 /mariadb/prddisk
</code></pre>
<p>However, I received the following error message:</p>
<pre><code class="lang-plaintext">mount: /mariadb/prddisk: wrong fs type, bad option, bad superblock on /dev/sdm1, missing codepage or helper program, or other error.
</code></pre>
<p>At first glance, this error could be due to several factors:</p>
<ul>
<li><p>Filesystem corruption</p>
</li>
<li><p>Incorrect filesystem type</p>
</li>
<li><p>Damaged superblock</p>
</li>
<li><p>Missing system utilities for XFS</p>
</li>
</ul>
<p>To investigate further, I used <code>dmesg</code> to check the kernel logs:</p>
<pre><code class="lang-plaintext">dmesg | grep sdm1
</code></pre>
<p>The output provided more specific information:</p>
<pre><code class="lang-plaintext">[9373848.668073] XFS (sdm1): Unmounting Filesystem cbcecfa4-54ad-48c3-9dbf-f7cc49151913
[9603307.038109] XFS (sdm1): Filesystem has duplicate UUID dce22b56-4bc9-48da-927a-72ce29c5dcff - can't mount
</code></pre>
<p>It turned out that the issue was a <strong>duplicate UUID</strong> in the XFS filesystem, which was preventing the system from mounting the partition.</p>
<h3 id="heading-understanding-the-error-duplicate-uuid-in-xfs">Understanding the Error: Duplicate UUID in XFS</h3>
<p>UUIDs (Universally Unique Identifiers) are intended to uniquely identify filesystems, allowing the operating system to reference partitions even when their underlying device names (e.g. <code>/dev/sdm1</code>) change. If two filesystems have the same UUID, this creates a conflict, and Linux refuses to mount the filesystem.</p>
<p>In this case, the <code>dmesg</code> logs clearly indicated that the partition's UUID (<code>dce22b56-4bc9-48da-927a-72ce29c5dcff</code>) was duplicated.</p>
<h3 id="heading-solution-steps">Solution Steps</h3>
<h4 id="heading-1-check-the-filesystem-uuid">1. <strong>Check the Filesystem UUID</strong></h4>
<p>To confirm the UUID, I used the <code>blkid</code> command:</p>
<pre><code class="lang-plaintext">sudo blkid /dev/sdm1
</code></pre>
<p>This confirmed that the filesystem's UUID matched that of another filesystem already present on the system, a conflict that typically arises when a disk is cloned or a snapshot is created without regenerating the UUID.</p>
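<p>Before changing anything, it can help to enumerate the UUIDs of all block devices and flag any duplicates. The pipeline below is a minimal sketch: it feeds sample UUIDs (the ones from the <code>dmesg</code> output above) through <code>sort | uniq -d</code> for illustration; on a real system you would replace the <code>printf</code> with <code>sudo blkid -s UUID -o value</code>.</p>
<pre><code class="lang-plaintext"># Print any UUID that appears more than once.
# On a real system: sudo blkid -s UUID -o value | sort | uniq -d
printf '%s\n' \
  'dce22b56-4bc9-48da-927a-72ce29c5dcff' \
  'cbcecfa4-54ad-48c3-9dbf-f7cc49151913' \
  'dce22b56-4bc9-48da-927a-72ce29c5dcff' \
  | sort | uniq -d
</code></pre>
<p>Any UUID printed by this pipeline is attached to at least two filesystems and will trigger the mount error above.</p>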
<h4 id="heading-2-change-the-filesystem-uuid">2. <strong>Change the Filesystem UUID</strong></h4>
<p>The easiest way to resolve this issue is by generating a new UUID for the filesystem using the <code>xfs_admin</code> tool:</p>
<pre><code class="lang-plaintext">sudo xfs_admin -U generate /dev/sdm1
</code></pre>
<p>This command generates a new UUID for the XFS partition, resolving the duplication issue.</p>
<h4 id="heading-3-check-for-filesystem-corruption">3. <strong>Check for Filesystem Corruption</strong></h4>
<p>Before mounting the filesystem again, I decided to check for any corruption that might have occurred during previous mounting attempts. I used the <code>xfs_repair</code> utility:</p>
<pre><code class="lang-plaintext">sudo xfs_repair /dev/sdm1
</code></pre>
<p>If any corruption is found, <code>xfs_repair</code> will attempt to fix it. In my case, the filesystem was in good health. (If the filesystem has a dirty log, <code>xfs_repair</code> will refuse to run; you may need the <code>-L</code> flag to zero the log first, at the cost of discarding any unreplayed log transactions.)</p>
<h4 id="heading-4-mount-the-filesystem">4. <strong>Mount the Filesystem</strong></h4>
<p>After changing the UUID and ensuring the filesystem's integrity, I retried mounting the partition:</p>
<pre><code class="lang-plaintext">sudo mount -t xfs /dev/sdm1 /mariadb/prddisk
</code></pre>
<p>This time, the filesystem mounted successfully without any errors.</p>
<h4 id="heading-5-updating-etcfstab">5. <strong>Updating /etc/fstab</strong></h4>
<p>If you have an entry for this partition in <code>/etc/fstab</code>, don't forget to update it with the new UUID. You can use <code>blkid</code> again to retrieve the new UUID and update <code>/etc/fstab</code> accordingly:</p>
<pre><code class="lang-plaintext">UUID=&lt;new_uuid&gt; /mariadb/prddisk xfs defaults 0 0
</code></pre>
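<p>To avoid copy-paste mistakes, you can build the new entry from the output of <code>blkid</code> directly. The sketch below uses a made-up placeholder UUID for illustration; on a real system you would populate <code>NEW_UUID</code> from <code>sudo blkid -s UUID -o value /dev/sdm1</code>.</p>
<pre><code class="lang-plaintext"># On a real system: NEW_UUID=$(sudo blkid -s UUID -o value /dev/sdm1)
NEW_UUID='11111111-2222-3333-4444-555555555555'   # placeholder for illustration
echo "UUID=$NEW_UUID /mariadb/prddisk xfs defaults 0 0"
</code></pre>
<p>Append the resulting line to <code>/etc/fstab</code> (replacing the old entry for this partition) and verify it with <code>sudo mount -a</code> before your next reboot.</p>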
<h3 id="heading-handling-bad-superblock-errors">Handling "Bad Superblock" Errors</h3>
<p>In some cases, the error message about a "bad superblock" might indicate that the filesystem's superblock is corrupted. If you suspect this, you can use the <code>xfs_db</code> tool to inspect the superblock:</p>
<pre><code class="lang-plaintext">sudo xfs_db -r -c "sb 0" -c "print" /dev/sdm1
</code></pre>
<p>If the superblock is damaged, you can attempt to repair it using <code>xfs_repair</code>. Always ensure that the filesystem is unmounted before performing these operations.</p>
<h3 id="heading-additional-workarounds">Additional Workarounds</h3>
<p>If you continue to encounter mounting issues due to the duplicate UUID error, you can try mounting the filesystem with the <code>-o nouuid</code> option. This option tells the system to ignore the UUID check temporarily, which might be useful for troubleshooting:</p>
<pre><code class="lang-plaintext">sudo mount -t xfs -o nouuid /dev/sdm1 /mariadb/prddisk
</code></pre>
<p>However, this is only a temporary workaround. After mounting, it's still important to address the underlying UUID conflict.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>XFS is a robust filesystem, but like any system, it can run into issues such as duplicate UUIDs or corruption. Fortunately, tools like <code>xfs_repair</code> and <code>xfs_admin</code> make it relatively easy to resolve these problems. By systematically checking for issues, repairing the filesystem, and generating a new UUID, I was able to resolve the duplicate UUID problem and successfully mount the partition.</p>
<p>If you're facing similar issues, I hope this guide helps you troubleshoot and resolve the problem. Always remember to back up important data before making changes to your filesystem!</p>
]]></content:encoded></item><item><title><![CDATA[Creating Custom Actions in GitHub Actions Using Docker]]></title><description><![CDATA[Introduction
GitHub Actions provides a powerful way to automate workflows in your repositories. By creating custom actions, you can encapsulate reusable workflows, scripts, and automation steps into a single unit.
In this article, we will walk throug...]]></description><link>https://isrxl.com/creating-custom-actions-in-github-actions</link><guid isPermaLink="true">https://isrxl.com/creating-custom-actions-in-github-actions</guid><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[GitHub Actions]]></category><category><![CDATA[github workflow]]></category><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Wed, 04 Sep 2024 15:37:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/wX2L8L-fGeA/upload/246657b7c012fb0b6fc5e71767f21d89.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction"><strong>Introduction</strong></h3>
<p>GitHub Actions provides a powerful way to automate workflows in your repositories. By creating custom actions, you can encapsulate reusable workflows, scripts, and automation steps into a single unit.</p>
<p>In this article, we will walk through the steps of creating a custom GitHub Action (based on Docker), using a practical example of an action that checks the number of public repositories in the Microsoft Azure organization.</p>
<h3 id="heading-step-by-step-guide-to-creating-a-custom-action">Step-by-Step Guide to Creating a Custom Action</h3>
<h4 id="heading-step-1-create-a-new-repository"><strong>Step 1: Create a New Repository</strong></h4>
<p>To get started, either create a new GitHub repository or navigate to an existing one where you want to create your custom action. The repository will contain all the necessary files for your action.</p>
<h4 id="heading-step-2-set-up-the-directory-structure"><strong>Step 2: Set Up the Directory Structure</strong></h4>
<p>Clone the repository for local development and begin to add files. Your repository should contain the following structure for the custom action:</p>
<pre><code class="lang-plaintext">my-github-action/
│
├── action.yml       # Metadata file
├── Dockerfile       # Docker configuration file
└── entrypoint.sh    # Script to execute inside the container
</code></pre>
<h4 id="heading-step-3-define-action-metadata-in-actionyml"><strong>Step 3: Define Action Metadata in</strong> <code>action.yml</code></h4>
<p>The <code>action.yml</code> file defines the inputs, outputs, and execution environment of your custom action. Here’s an example:</p>
<pre><code class="lang-plaintext">name: 'Azure Repo Counter Action'
description: 'Fetches the public repository count for the Microsoft Azure GitHub organization'
inputs:
  my-input:
    description: 'A placeholder input value'
    required: true
    default: 'default value'
outputs:
  my-output:
    description: 'The number of public repositories'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.my-input }}
</code></pre>
<p>This file defines the name of the action, input parameters, and outputs. The <code>runs</code> section specifies that the action will be executed in a Docker container.</p>
<h4 id="heading-step-4-create-the-dockerfile"><strong>Step 4: Create the</strong> <code>Dockerfile</code></h4>
<p>The Dockerfile describes how to build the Docker image that will run the action. Here’s a simple example:</p>
<pre><code class="lang-plaintext">FROM ubuntu:20.04

RUN apt-get update &amp;&amp; apt-get install -y \
  curl \
  jq

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
</code></pre>
<p>In this file:</p>
<ul>
<li><p>We’re using an Ubuntu base image.</p>
</li>
<li><p>Installing <code>curl</code> to make HTTP requests and <code>jq</code> to process JSON data.</p>
</li>
<li><p>Copying and setting permissions for the <code>entrypoint.sh</code> script.</p>
</li>
</ul>
<h4 id="heading-step-5-write-the-action-logic-in-entrypointshhttpentrypointsh"><strong>Step 5: Write the Action Logic in</strong> <code>entrypoint.sh</code></h4>
<p>The <code>entrypoint.sh</code> script will be executed when the action runs. In this case, it fetches the number of public repositories for the Microsoft Azure GitHub organization and sets the value as an output.</p>
<pre><code class="lang-plaintext">#!/bin/bash

# Get the input value
MY_INPUT=$1

# Fetch public repositories from the Microsoft Azure organization using GitHub API
DATA=$(curl -s https://api.github.com/orgs/Azure)

# Use jq to parse the data and get the public repository count
REPO_COUNT=$(echo "$DATA" | jq '.public_repos')

# Output the number of public repositories
echo "Microsoft Azure organization has $REPO_COUNT public repositories"

# Set an output variable using the environment file method (Optional)
echo "my-output=$REPO_COUNT" &gt;&gt; "$GITHUB_ENV"
</code></pre>
<p>This script fetches data from the GitHub API, processes it using <code>jq</code>, and then uses the <code>GITHUB_ENV</code> file to pass the result to the workflow.</p>
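<p>A note on the two output mechanisms: writing to <code>$GITHUB_ENV</code> exposes the value as an environment variable to later steps (read via <code>${{ env.my-output }}</code>), while the <code>my-output</code> declared in <code>action.yml</code> would normally be populated by writing to <code>$GITHUB_OUTPUT</code> instead. The sketch below simulates both runner-provided files with temporary paths so you can see the line format each expects; on an actual runner these variables are pre-set for you, and the repository count comes from the API rather than the placeholder used here.</p>
<pre><code class="lang-plaintext"># Simulate the files a runner provides (temp paths for illustration only)
GITHUB_ENV=$(mktemp)
GITHUB_OUTPUT=$(mktemp)

REPO_COUNT=42   # placeholder; the real script parses this from the API response with jq

# Environment-variable route (what the entrypoint.sh above does)
echo "my-output=$REPO_COUNT" &gt;&gt; "$GITHUB_ENV"

# Declared-output route (matches the outputs section of action.yml)
echo "my-output=$REPO_COUNT" &gt;&gt; "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
</code></pre>
<p>Both files use the same <code>name=value</code> line format. If you use the declared output, the consuming step reads it as <code>${{ steps.&lt;step_id&gt;.outputs.my-output }}</code>, which requires giving the action step an <code>id</code> in the workflow.</p>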
<h4 id="heading-step-6-push-your-action-to-github"><strong>Step 6: Push Your Action to GitHub</strong></h4>
<p>Once everything is set up, push your changes to the repository:</p>
<pre><code class="lang-plaintext">git add .
git commit -m "Add custom Docker GitHub Action"
git push origin main
</code></pre>
<h4 id="heading-step-7-use-the-action-in-a-workflow"><strong>Step 7: Use the Action in a Workflow</strong></h4>
<p>To use your custom action, create a new workflow file in the <code>.github/workflows/</code> directory. Here’s an example workflow:</p>
<pre><code class="lang-plaintext">name: Example Workflow

on: [push]

jobs:
  my-job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Run Azure Repo Counter Action
        uses: ./  # Uses the custom action in the same repository
        with:
          my-input: "Fetching Azure repositories"

      - name: Print output
        run: echo "Public repositories in Microsoft Azure: ${{ env.my-output }}"
</code></pre>
<p>This workflow will:</p>
<ol>
<li><p>Checkout the repository code.</p>
</li>
<li><p>Execute your custom action.</p>
</li>
<li><p>Output the number of public repositories in the Microsoft Azure organization.</p>
</li>
</ol>
<h4 id="heading-step-8-trigger-the-workflow"><strong>Step 8: Trigger the Workflow</strong></h4>
<p>Push changes to your repository, and GitHub Actions will automatically trigger the workflow. You can view the workflow’s execution in the "Actions" tab and see how your custom action performs.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>By following this guide, you’ve created a reusable GitHub Action that runs in a Docker container, fetches data from the GitHub API, and returns a result to your workflow. You can expand this approach to create more sophisticated actions, integrate with other APIs, or automate any part of your DevOps pipelines. Container actions provide great flexibility, as they allow you to run any tool or script in a controlled environment.</p>
]]></content:encoded></item><item><title><![CDATA[How to Choose the Best Resource Allocation Strategy for Performance Testing]]></title><description><![CDATA[When conducting performance testing, resource allocation plays a critical role in ensuring that your application not only meets performance expectations but also operates efficiently. Deciding whether to start with high resource allocations and adjus...]]></description><link>https://isrxl.com/performance-testing-resource-allocation</link><guid isPermaLink="true">https://isrxl.com/performance-testing-resource-allocation</guid><category><![CDATA[Performance Testing]]></category><category><![CDATA[Performance Optimization]]></category><category><![CDATA[performance metrics]]></category><category><![CDATA[resource management]]></category><category><![CDATA[resource allocation]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Thu, 29 Aug 2024 01:25:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725810151156/cd1b2590-406f-468b-b3c1-d94ac46ff69d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When conducting performance testing, resource allocation plays a critical role in ensuring that your application not only meets performance expectations but also operates efficiently. Deciding whether to start with high resource allocations and adjust downwards or begin with low allocations and adjust upwards can significantly impact your testing outcomes. In this article, we'll explore both approaches, weigh their pros and cons, and discuss a hybrid method that may provide the best of both worlds.</p>
<h4 id="heading-starting-with-high-resource-allocations-and-adjusting-downwards"><strong>Starting with High Resource Allocations and Adjusting Downwards</strong></h4>
<p>Starting with higher resource allocations and gradually reducing them can be an effective strategy, particularly for performance-critical applications where you need to guarantee optimal performance from the outset. Here’s why:</p>
<h5 id="heading-pros"><strong>Pros:</strong></h5>
<ol>
<li><p><strong>Immediate Performance Validation:</strong></p>
<ul>
<li>By allocating ample resources from the start, you ensure that your application operates at its peak performance, minimizing the risk of initial performance bottlenecks. This approach allows you to quickly validate whether your application meets its performance goals under ideal conditions.</li>
</ul>
</li>
<li><p><strong>Identifying Over-Provisioning:</strong></p>
<ul>
<li>With high resources allocated, you can identify areas where resources are being over-provisioned. This helps you pinpoint the optimal resource levels that maintain performance while avoiding unnecessary overhead.</li>
</ul>
</li>
<li><p><strong>Avoiding Initial Performance Issues:</strong></p>
<ul>
<li>Under-provisioning can lead to performance degradation or application crashes during testing, which may obscure other issues or skew your testing results. Starting with high resources prevents these problems and provides a clearer picture of your application’s behavior under load.</li>
</ul>
</li>
</ol>
<h5 id="heading-cons"><strong>Cons:</strong></h5>
<ol>
<li><p><strong>Potential Resource Waste:</strong></p>
<ul>
<li>High initial allocations may lead to resource wastage, as you might be allocating more than what’s necessary for your application to perform effectively. This can be misleading when evaluating cost efficiency.</li>
</ul>
</li>
<li><p><strong>Longer Optimization Process:</strong></p>
<ul>
<li>Gradually reducing resources requires careful monitoring and multiple testing iterations to ensure performance remains stable. This can make the optimization process more time-consuming.</li>
</ul>
</li>
</ol>
<h4 id="heading-starting-with-low-resource-allocations-and-adjusting-upwards"><strong>Starting with Low Resource Allocations and Adjusting Upwards</strong></h4>
<p>Alternatively, beginning with lower resource allocations and increasing them as needed can be a more cost-efficient approach, particularly for applications where budget considerations are paramount.</p>
<h5 id="heading-pros-1"><strong>Pros:</strong></h5>
<ol>
<li><p><strong>Cost-Efficiency from the Start:</strong></p>
<ul>
<li>Starting with minimal resources allows you to see the lowest possible cost for running your application. You only scale up resources when necessary, helping you understand the minimum viable configuration for your application.</li>
</ul>
</li>
<li><p><strong>Stress Testing Early:</strong></p>
<ul>
<li>This approach helps you identify the minimum resource requirements needed to maintain acceptable performance. By starting with less, you can stress test your application and determine its lower bounds, providing valuable insights into its resilience and efficiency.</li>
</ul>
</li>
<li><p><strong>Clear Resource Boundaries:</strong></p>
<ul>
<li>As you incrementally increase resources, you can pinpoint the exact threshold where your application begins to perform optimally. This clarity is crucial for fine-tuning and ensuring you’re not over-allocating resources.</li>
</ul>
</li>
</ol>
<h5 id="heading-cons-1"><strong>Cons:</strong></h5>
<ol>
<li><p><strong>Risk of Performance Bottlenecks:</strong></p>
<ul>
<li>Starting with low resources might cause the application to underperform or even crash, which could complicate the initial stages of testing. These issues might be misinterpreted as application faults rather than resource limitations.</li>
</ul>
</li>
<li><p><strong>Slower Performance Validation:</strong></p>
<ul>
<li>It may take longer to reach a configuration that meets your performance needs, as you’ll need to incrementally test and adjust resources, potentially prolonging the testing phase.</li>
</ul>
</li>
</ol>
<h4 id="heading-which-approach-is-better"><strong>Which Approach is Better?</strong></h4>
<p>The choice between starting high and adjusting downwards or starting low and adjusting upwards depends on your application’s specific needs:</p>
<ul>
<li><p><strong>For Performance-Critical Applications:</strong> If ensuring optimal performance is your top priority and you want to avoid any potential disruptions, <strong>starting with high resources and adjusting downwards</strong> is generally the better approach. This strategy ensures that you start with the best possible performance and only scale down to find the most cost-effective allocation that still meets your performance requirements.</p>
</li>
<li><p><strong>For Cost-Sensitive Applications:</strong> If cost efficiency is more critical, and you’re comfortable with the possibility of encountering some initial performance issues, <strong>starting with low resources and adjusting upwards</strong> can be more effective. This approach allows you to identify the minimum resource allocation required to achieve satisfactory performance, thereby optimizing your costs.</p>
</li>
</ul>
<h4 id="heading-a-hybrid-approach-finding-balance"><strong>A Hybrid Approach: Finding Balance</strong></h4>
<p>In practice, a hybrid approach often provides the best balance between performance and cost efficiency:</p>
<ul>
<li><p><strong>Start with a Reasonable Baseline:</strong> Begin with a resource allocation that you estimate is close to what your application might need. This baseline could be informed by previous experience, application profiling, or educated guesses.</p>
</li>
<li><p><strong>Test and Adjust:</strong> Monitor performance carefully and adjust resources in the appropriate direction—upwards if performance is lacking, or downwards if resources seem over-allocated.</p>
</li>
</ul>
<p>This hybrid method allows you to fine-tune your resource allocations efficiently, ensuring your application performs well while maintaining cost-effectiveness.</p>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>When it comes to resource allocation for performance testing, there’s no one-size-fits-all approach. Whether you start with high resources and adjust downwards or begin with low allocations and scale up, the key is to align your strategy with your application’s performance requirements and cost constraints. By considering the pros and cons of each method—and potentially adopting a hybrid approach—you can optimize both performance and resource utilization, ensuring your application runs smoothly and efficiently.</p>
]]></content:encoded></item><item><title><![CDATA[Converting Azure Key Vault from Access Policy to RBAC Permission Model]]></title><description><![CDATA[As businesses evolve and adopt cloud technologies, the management of sensitive data becomes increasingly crucial. Azure Key Vault, a cloud service provided by Microsoft, offers a secure storage solution for cryptographic keys, secrets, and certificat...]]></description><link>https://isrxl.com/converting-azure-key-vault-from-access-policy-to-rbac-permission-model</link><guid isPermaLink="true">https://isrxl.com/converting-azure-key-vault-from-access-policy-to-rbac-permission-model</guid><category><![CDATA[Azure Key Vault]]></category><category><![CDATA[Azure]]></category><category><![CDATA[rbac]]></category><category><![CDATA[azure rbac]]></category><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Sun, 21 May 2023 14:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724876865830/3d2570fb-ec66-4ef5-bf7d-e7a1b71d4503.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As businesses evolve and adopt cloud technologies, the management of sensitive data becomes increasingly crucial. Azure Key Vault, a cloud service provided by Microsoft, offers a secure storage solution for cryptographic keys, secrets, and certificates. With its powerful access control mechanisms, Azure Key Vault provides flexibility in managing permissions to safeguard sensitive information.</p>
<p>Previously, RBAC only granted access to a Key Vault's management plane, and access policies were needed if a user had to access the data plane. While this helped prevent the arbitrary access that RBAC's hierarchical nature could have granted to sensitive data, the approach does not scale well as the number of Key Vaults increases. In reality, most organisations have anywhere from a few tens to hundreds of Key Vaults, and managing access policies across all of them per user whenever circumstances change can be a nightmare. Thankfully, Microsoft subsequently introduced the RBAC permission model, which has greatly improved how Key Vault access is managed.</p>
<p>In this blog post, we will explore the need to convert from the traditional "Access Policy" model to the more recent "Role-Based Access Control" (RBAC) authorization model. We will also outline the essential steps involved in this conversion process.</p>
<h3 id="heading-the-need-for-conversion">The Need for Conversion</h3>
<p>While the Access Policy model has served as a solid foundation for managing access to Azure Key Vault, the RBAC authorization model brings a host of benefits that align with modern cloud security requirements. Here are a few reasons why you should consider migrating from the Access Policy model to RBAC:</p>
<ol>
<li><p>Centralized Access Management: RBAC enables centralized management of permissions across multiple Azure resources, making it easier to maintain consistency and control access more efficiently.</p>
</li>
<li><p>Granular Control: RBAC allows for fine-grained control by assigning roles at the subscription, resource group, or individual resource level. This granular control enables precise access management tailored to the specific needs of different teams or individuals.</p>
</li>
<li><p>Segregation of Duties: With RBAC, you can separate responsibilities by assigning different roles to different individuals or teams. This segregation of duties enhances security by preventing conflicts of interest and reducing the risk of unauthorized access.</p>
</li>
<li><p>Auditing and Compliance: RBAC provides detailed audit logs, allowing you to track and monitor access requests, changes in permissions, and other security-related activities. This feature aids in meeting regulatory requirements and enhancing overall compliance.</p>
</li>
</ol>
<h3 id="heading-steps-for-conversion">Steps for Conversion</h3>
<p>Step 1: Evaluate Access Policies: Review the existing access policies defined within the Azure Key Vault. Identify the roles and permissions assigned to various users or applications. This assessment will help in mapping the current access policies to RBAC roles accurately.</p>
<p>To gather the access policies across several Key Vaults, you can run the following PowerShell script, which collects all the access policies and exports them to a CSV file for analysis and role mapping.</p>
<pre><code class="lang-Powershell"><span class="hljs-variable">$outputFile</span> = <span class="hljs-selector-tag">@</span>()

<span class="hljs-built_in">Get-AzSubscription</span> | <span class="hljs-built_in">Where-Object</span> {<span class="hljs-variable">$_</span>.State <span class="hljs-operator">-eq</span> <span class="hljs-string">"Enabled"</span>} | <span class="hljs-built_in">ForEach-Object</span> {
    <span class="hljs-built_in">Set-AzContext</span> <span class="hljs-variable">$_</span> 

    <span class="hljs-variable">$vaultNames</span> = (<span class="hljs-built_in">Get-AzKeyVault</span>).VaultName

    <span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$vaultName</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$vaultNames</span>) {
        <span class="hljs-variable">$accessPolicies</span> = (<span class="hljs-built_in">Get-AzKeyVault</span> <span class="hljs-literal">-VaultName</span> <span class="hljs-variable">$vaultName</span>).AccessPolicies

        <span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$policy</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$accessPolicies</span>) {
            <span class="hljs-variable">$properties</span> = <span class="hljs-string">""</span> | <span class="hljs-built_in">Select-Object</span> Subscription, vaultName, oID, DisplayName
            <span class="hljs-variable">$properties</span>.oID = <span class="hljs-variable">$policy</span>.ObjectId
            <span class="hljs-variable">$properties</span>.DisplayName = <span class="hljs-variable">$policy</span>.DisplayName
            <span class="hljs-variable">$properties</span>.Subscription = <span class="hljs-variable">$_</span>.Name
            <span class="hljs-variable">$properties</span>.VaultName = <span class="hljs-variable">$vaultName</span>

            <span class="hljs-variable">$outputFile</span> += <span class="hljs-selector-tag">@</span>(<span class="hljs-variable">$properties</span>)
        }
    }
}

<span class="hljs-variable">$outputFile</span> | <span class="hljs-built_in">Select-Object</span> Subscription, vaultName, oID, DisplayName | <span class="hljs-built_in">Export-Csv</span> <span class="hljs-literal">-NoTypeInformation</span> <span class="hljs-string">"kvIDs.csv"</span>
</code></pre>
<p>Step 2: Define RBAC Roles: Determine the RBAC roles that align with the access requirements of your organization. Azure provides several built-in roles for Key Vaults like <em>Key Vault Administrator</em>, <em>Key Vault Reader, Key Vault Secrets User,</em> and many more. Additionally, custom roles can be created to suit specific needs. Assigning roles based on job responsibilities and required access levels ensures a more secure and manageable access control system. The Microsoft docs contain <a target="_blank" href="https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-migration?source=recommendations#access-policies-to-azure-roles-mapping">some recommended access policy to RBAC role mappings</a>.</p>
<p>Step 3: Assign RBAC Roles: Once the RBAC roles have been defined, assign them to the appropriate users or groups. This can be done at the subscription, resource group, or individual Key Vault level, depending on the desired level of control. Again, for organisations with many Key Vaults, a step like this should be automated. The following sample PowerShell script can be used to create RBAC role assignments across multiple Key Vaults:</p>
<pre><code class="lang-Powershell"><span class="hljs-variable">$subs</span> = <span class="hljs-built_in">Get-AzSubscription</span> | <span class="hljs-built_in">Where-Object</span> { <span class="hljs-variable">$_</span>.State <span class="hljs-operator">-eq</span> <span class="hljs-string">"Enabled"</span> } | <span class="hljs-built_in">Sort-Object</span> Name <span class="hljs-comment"># You can add other filters as you wish</span>

<span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$sub</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$subs</span>) {
    <span class="hljs-variable">$subID</span> = <span class="hljs-variable">$sub</span>.Id
    <span class="hljs-variable">$subName</span> = <span class="hljs-variable">$sub</span>.Name
    <span class="hljs-built_in">set-azcontext</span> <span class="hljs-literal">-SubscriptionId</span> <span class="hljs-variable">$subID</span>

    <span class="hljs-built_in">write-host</span> <span class="hljs-string">"Subscription: <span class="hljs-variable">$subName</span>"</span>

    <span class="hljs-variable">$vaultNames</span> = (<span class="hljs-built_in">Get-AzKeyVault</span>).VaultName

    <span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$vaultName</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$vaultNames</span>) {
        <span class="hljs-built_in">write-host</span> <span class="hljs-string">"Getting Access Policies for <span class="hljs-variable">$vaultName</span>"</span>
        <span class="hljs-variable">$accessPolicies</span> = (<span class="hljs-built_in">Get-AzKeyVault</span> <span class="hljs-literal">-VaultName</span> <span class="hljs-variable">$vaultName</span>).AccessPolicies
        <span class="hljs-variable">$kvRG</span> = (<span class="hljs-built_in">Get-AzKeyVault</span> <span class="hljs-literal">-VaultName</span> <span class="hljs-variable">$vaultName</span>).ResourceGroupName

        <span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$policy</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$accessPolicies</span>) {
            <span class="hljs-comment">#Check if the role assignment already exists</span>
            <span class="hljs-variable">$getRBAC</span> = <span class="hljs-built_in">Get-AzRoleAssignment</span> `
                <span class="hljs-literal">-ObjectId</span> <span class="hljs-variable">$policy</span>.ObjectId `
                <span class="hljs-literal">-RoleDefinitionName</span> <span class="hljs-string">"Key Vault Secrets User"</span> `
                <span class="hljs-literal">-Scope</span> <span class="hljs-string">"/subscriptions/<span class="hljs-variable">$subID</span>/resourceGroups/<span class="hljs-variable">$kvRG</span>/providers/Microsoft.KeyVault/vaults/<span class="hljs-variable">$vaultName</span>"</span>
            <span class="hljs-keyword">If</span> (<span class="hljs-variable">$null</span> <span class="hljs-operator">-eq</span> <span class="hljs-variable">$getRBAC</span>) {
                <span class="hljs-built_in">Write-Host</span> <span class="hljs-string">"KV Role does not Exist. Creating Role Assignment"</span>

                <span class="hljs-built_in">New-AzRoleAssignment</span> `
                    <span class="hljs-literal">-ObjectId</span> <span class="hljs-variable">$policy</span>.ObjectId `
                    <span class="hljs-literal">-RoleDefinitionName</span> <span class="hljs-string">"Key Vault Secrets User"</span> `
                    <span class="hljs-literal">-Scope</span> <span class="hljs-string">"/subscriptions/<span class="hljs-variable">$subID</span>/resourceGroups/<span class="hljs-variable">$kvRG</span>/providers/Microsoft.KeyVault/vaults/<span class="hljs-variable">$vaultName</span>"</span>
            }
            <span class="hljs-keyword">else</span> {
                <span class="hljs-built_in">write-host</span> <span class="hljs-string">"KV Role Assignment Already Exists"</span>
            }

        }

    }
}
</code></pre>
<p>To actually convert your Key Vaults to the RBAC permission model, you can run the following PowerShell script:</p>
<pre><code class="lang-Powershell">
<span class="hljs-variable">$subs</span> = <span class="hljs-built_in">Get-AzSubscription</span> | <span class="hljs-built_in">Where-Object</span> { <span class="hljs-variable">$_</span>.State <span class="hljs-operator">-eq</span> <span class="hljs-string">"Enabled"</span> } | <span class="hljs-built_in">Sort-Object</span> Name <span class="hljs-comment"># You can add other filters as you wish</span>

<span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$sub</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$subs</span>) {
    <span class="hljs-built_in">set-azcontext</span> <span class="hljs-literal">-SubscriptionId</span> (<span class="hljs-variable">$sub</span>).Id

    <span class="hljs-variable">$kvNames</span> = (<span class="hljs-built_in">Get-AzKeyVault</span>).VaultName

    <span class="hljs-keyword">foreach</span> (<span class="hljs-variable">$kvName</span> <span class="hljs-keyword">in</span> <span class="hljs-variable">$kvNames</span>) {
        <span class="hljs-built_in">Write-Host</span> <span class="hljs-string">"Converting <span class="hljs-variable">$kvName</span> to RBAC Permission Model"</span> <span class="hljs-literal">-ForegroundColor</span> Green
        <span class="hljs-variable">$kvRG</span> = (<span class="hljs-built_in">Get-AzKeyVault</span> <span class="hljs-literal">-VaultName</span> <span class="hljs-variable">$kvName</span>).ResourceGroupName

        <span class="hljs-built_in">Update-AzKeyVault</span> <span class="hljs-literal">-VaultName</span> <span class="hljs-variable">$kvName</span> <span class="hljs-literal">-ResourceGroupName</span> <span class="hljs-variable">$kvRG</span> <span class="hljs-literal">-EnableRbacAuthorization</span> <span class="hljs-variable">$true</span>        
    }
}
</code></pre>
<p>Step 4: Remove Access Policies: As RBAC roles are assigned, gradually remove the existing access policies from Azure Key Vault.</p>
<blockquote>
<p><strong>Warning:</strong> <strong>Ensure that all necessary permissions have been properly mapped and assigned to the appropriate RBAC roles before removing the access policies to prevent any unintended access gaps!</strong></p>
</blockquote>
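<p>The removal itself can also be scripted. A minimal sketch, where the vault name, resource group, and object ID are placeholders you would replace with your own values:</p>

```powershell
# Remove a single access policy entry from a vault. Once a vault has been
# converted to the RBAC model its access policies are ignored, but removing
# them keeps the configuration tidy and unambiguous.
Remove-AzKeyVaultAccessPolicy `
    -VaultName 'myVault' `
    -ResourceGroupName 'myResourceGroup' `
    -ObjectId '00000000-0000-0000-0000-000000000000'
```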
<p>Step 5: Test and Validate: Thoroughly test the new RBAC-based access control configuration to ensure that all required operations can be performed by the assigned roles. Validate that the access control changes do not disrupt any existing applications or workflows.</p>
<p>Step 6: Monitor and Maintain: Regularly review and update RBAC assignments to accommodate any changes in personnel or access requirements. Monitor access logs and regularly audit permissions to ensure ongoing compliance and security.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In the ever-changing landscape of cloud security, it is essential to adapt and implement advanced access control mechanisms. By transitioning from the traditional Access Policy model to the RBAC authorization model in Azure Key Vault, you can achieve greater control, segregation of duties, and auditing capabilities. With careful planning and execution of the outlined steps, you can ensure a smoother migration and strengthen the security of your sensitive data. The RBAC authorization model provides a robust framework for managing permissions across Azure resources, enabling you to streamline access control and mitigate potential security risks.</p>
<p>Remember, before starting the conversion process, it is crucial to thoroughly assess your organization's access requirements and consult with your security and compliance teams to ensure a seamless transition. With proper planning and execution, converting your Azure Key Vault from the Access Policy model to the RBAC authorization model can enhance your overall security posture and protect your valuable assets effectively.</p>
]]></content:encoded></item><item><title><![CDATA[Process Automation via Azure Automation Accounts]]></title><description><![CDATA[I am tempted to call the Azure Automation Service a "Task Scheduler on steroids". It is of course an oversimplification as the Azure Automation service is used for more than just scheduling tasks but I guess it can serve as a good introduction to wha...]]></description><link>https://isrxl.com/process-automation-via-azure-automation-accounts</link><guid isPermaLink="true">https://isrxl.com/process-automation-via-azure-automation-accounts</guid><category><![CDATA[automation]]></category><category><![CDATA[azure automation]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Tue, 18 Oct 2022 01:00:00 GMT</pubDate><content:encoded><![CDATA[<p>I am tempted to call the Azure Automation Service a "Task Scheduler on steroids". It is of course an oversimplification as the Azure Automation service is used for more than just scheduling tasks but I guess it can serve as a good introduction to what automation accounts are all about.</p>
<blockquote>
<p>According to Microsoft's <a target="_blank" href="https://learn.microsoft.com/en-us/azure/automation/overview">official documentation</a>, "Azure Automation delivers a cloud-based automation, operating system updates, and configuration service that supports consistent management across your Azure and non-Azure environments. It includes <strong>process automation</strong>, <strong>configuration management</strong>, <strong>update management</strong>, shared capabilities, and heterogeneous features."</p>
</blockquote>
<p><img src="https://www.isrxl.com/content/images/2022/10/image.png" alt /></p>
<p>Azure Automation Service Components</p>
<p>To get started working with Azure Automation, one would need to <a target="_blank" href="https://learn.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal">create an Automation Account</a>.</p>
<p>The Azure Automation Service, and specifically its "Process Automation" component, enables you to automate manual, time-consuming, error-prone, repetitive tasks in Azure (and also in hybrid environments), thereby freeing up time, reducing the risk of human error, and boosting efficiency.</p>
<p>Process automation primarily makes use of runbooks. A runbook is where you define the logic that controls how the task(s) you want to perform will be carried out. Call them scripts and you won't be far off. These runbooks can be graphical, PowerShell, or Python runbooks.</p>
<p>Runbooks in Azure Automation can run on either an Azure sandbox or a <a target="_blank" href="https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker">Hybrid Runbook Worker</a>. By default, runbooks run in Azure (or against Azure resources). Another way to put this, in relation to resources like virtual machines, is that runbooks perform actions on the "outside" of a virtual machine. To run runbooks directly on (or "inside") a Windows or Linux virtual machine, or against resources in an on-premises environment or other cloud environment, you can deploy a <a target="_blank" href="https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker">Hybrid Runbook Worker</a>.</p>
<p>Once the runbooks are created, saved, and published, they can either be run manually as one-off jobs or be triggered by a schedule or a webhook.</p>
<p><img src="https://www.isrxl.com/content/images/2022/10/image-1.png" alt /></p>
<p><img src="https://www.isrxl.com/content/images/2022/10/image-4.png" alt /></p>
<p>To run the runbook as a manual/one-off job, from the runbook's page in the portal, click the "Start" icon and then choose whether you want the runbook to run on Azure or via a Hybrid Runbook Worker.</p>
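<p>The same one-off run can be started from PowerShell with the Az.Automation module. A minimal sketch; the account, resource group, and runbook names here are placeholders:</p>

```powershell
# Start a published runbook as a one-off job...
$job = Start-AzAutomationRunbook `
    -AutomationAccountName 'myAutomationAccount' `
    -ResourceGroupName 'myResourceGroup' `
    -Name 'myRunbook'

# ...then check on the job it created.
Get-AzAutomationJob `
    -AutomationAccountName 'myAutomationAccount' `
    -ResourceGroupName 'myResourceGroup' `
    -Id $job.JobId
```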
<p><img src="https://www.isrxl.com/content/images/2022/10/image-5.png" alt /></p>
<p>For runbooks that you would like to run on a recurring basis, create a recurring schedule and link it to the runbook. To create a schedule:</p>
<ul>
<li><p>Go to the "Shared Resources" section of the automation account and click on "Schedules".</p>
</li>
<li><p>Click on "Add a schedule" and fill in the details of the new schedule.</p>
</li>
<li><p>Specify the start time for the schedule.</p>
</li>
<li><p>Choose whether the runbook runs once or on a recurring schedule.</p>
</li>
<li><p>Specify the recurrence frequency.</p>
</li>
<li><p>Specify whether the schedule expires or not.</p>
</li>
<li><p>Click "Create".</p>
</li>
</ul>
<p><img src="https://www.isrxl.com/content/images/2022/10/image-3.png" alt /></p>
<p>Once created, you can link the schedule to a runbook directly from the runbooks page.</p>
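<p>If you prefer scripting over the portal, the schedule can be created and linked with the Az.Automation module. A sketch, with placeholder account, resource group, and runbook names:</p>

```powershell
# Create a schedule that runs daily, starting one hour from now...
$schedule = New-AzAutomationSchedule `
    -AutomationAccountName 'myAutomationAccount' `
    -ResourceGroupName 'myResourceGroup' `
    -Name 'DailyRun' `
    -StartTime (Get-Date).AddHours(1) `
    -DayInterval 1

# ...and link it to the runbook.
Register-AzAutomationScheduledRunbook `
    -AutomationAccountName 'myAutomationAccount' `
    -ResourceGroupName 'myResourceGroup' `
    -RunbookName 'myRunbook' `
    -ScheduleName $schedule.Name
```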
<p><img src="https://www.isrxl.com/content/images/2022/10/image-6.png" alt /></p>
<p>To trigger a runbook using a webhook:</p>
<ul>
<li><p>Click on "Add a webhook" from the Automation Account's overview page.</p>
</li>
<li><p>Click "Create a new webhook".</p>
</li>
<li><p>On the dialogue page, enter the details of your webhook.</p>
</li>
<li><p>Specify whether you want it enabled or disabled.</p>
</li>
<li><p>Specify an expiry date.</p>
</li>
<li><p>The webhook URL is generated automatically; copy it now, as it cannot be retrieved after the webhook is created.</p>
</li>
<li><p>Specify the "Run on" setting to decide whether the runbook runs on Azure or on a Hybrid Runbook Worker.</p>
</li>
<li><p>Click "Create".</p>
</li>
</ul>
<p><img src="https://www.isrxl.com/content/images/2022/10/image-8.png" alt /></p>
<p><img src="https://www.isrxl.com/content/images/2022/10/image-7.png" alt /></p>
<p>Once the webhook is created, the webhook URL can be called to trigger the runbook. A common use case is to add the webhook to the action group of an Azure alert.</p>
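<p>Since the webhook is just an HTTPS endpoint, anything that can make a POST request can trigger the runbook. A sketch in PowerShell; the URL below is a placeholder for the one generated when you created the webhook:</p>

```powershell
# POST to the webhook URL to start the runbook. An optional JSON body can
# carry data for the runbook to read from its WebhookData parameter.
$webhookUrl = 'https://REGION.azure-automation.net/webhooks?token=...'
$body = @{ Message = 'Triggered from PowerShell' } | ConvertTo-Json

$response = Invoke-RestMethod -Method Post -Uri $webhookUrl -Body $body
$response.JobIds  # IDs of the job(s) started by this call
```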
<p>This has hopefully been a good (gentle) introduction to Azure Automation and especially process automation in Azure. I intend to delve a bit more into hybrid runbook workers in my next post(s) so do make sure to be on the lookout for that.</p>
<p>Cheers!</p>
<p>Useful Links:</p>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/automation/overview">Azure Automation overview</a></p>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/automation/automation-runbook-types">Azure Automation runbook types</a></p>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker">Azure Automation Hybrid Runbook Worker overview</a></p>
]]></content:encoded></item><item><title><![CDATA[Managing multiple Azure accounts/subscriptions in PowerShell]]></title><description><![CDATA[As one gets more involved with the Azure cloud environment and moves beyond performing actions through the portal to the realm of automation using different tools, specifically PowerShell, one problem that usually crops up is how to manage the dif...]]></description><link>https://isrxl.com/managing-multiple-azure-accountssubscriptions-in-powershell</link><guid isPermaLink="true">https://isrxl.com/managing-multiple-azure-accountssubscriptions-in-powershell</guid><category><![CDATA[Powershell]]></category><category><![CDATA[Powershell scripting]]></category><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Mon, 15 Aug 2022 02:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724696764870/61feeb8b-172c-4e66-982a-a491ba60d932.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As one gets more involved with the Azure cloud environment and moves beyond performing actions through the portal to the realm of automation using different tools, specifically PowerShell, one problem that usually crops up is how to manage the different accounts/subscriptions you have access to. If you're like me, you will want to keep some subscriptions apart, especially your work and personal subscriptions.</p>
<p>Apart from this, you will also want the ability to run scripts against them quickly, without having to log in and out of accounts each time or move between separate devices.</p>
<p>The context management cmdlets in the Az PowerShell module are a fantastic way to solve this problem and make your life easier.</p>
<p>To connect to one or more Azure accounts, run the following command once per account and provide that account's credentials.</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">Connect-AzAccount</span> <span class="hljs-literal">-TenantId</span> &lt;tenant-id&gt;
</code></pre>
<p>Usually, when the account contains multiple subscriptions, one subscription is chosen as the context at the beginning. You can use <code>get-AzContext</code> to get the current context. You can also use <code>(get-AzContext).Name</code>, which will show you the context's name only. This is useful when the name is long and the display gets truncated in PowerShell.</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">get-AzContext</span>

(<span class="hljs-built_in">get-AzContext</span>).Name <span class="hljs-comment">#Display context name only.</span>
</code></pre>
<p>To use another subscription as the current context, use the following command(s):</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">set-AzContext</span> <span class="hljs-literal">-subscriptionId</span> &lt;subscriptionID <span class="hljs-keyword">for</span> your sub&gt;
</code></pre>
<p>You can give this new context a name directly by providing a value for the name parameter. e.g.</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">Set-AzContext</span> <span class="hljs-literal">-SubscriptionId</span> &lt;subscriptionID <span class="hljs-keyword">for</span> your sub&gt;  <span class="hljs-literal">-Name</span> <span class="hljs-string">'DevSub'</span>
</code></pre>
<pre><code class="lang-powershell"><span class="hljs-comment"># This command will also give the context a memorable name</span>
<span class="hljs-built_in">Get-AzSubscription</span> <span class="hljs-literal">-SubscriptionName</span> <span class="hljs-string">'MySubscriptionName'</span> | <span class="hljs-built_in">Set-AzContext</span> <span class="hljs-literal">-Name</span> <span class="hljs-string">'MyContextName'</span>
</code></pre>
<p>Use the following command to rename the context to something more memorable and easier to type:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">Rename-AzContext</span> <span class="hljs-literal">-SourceName</span> &lt;Name of your subscription&gt; <span class="hljs-literal">-TargetName</span> &lt;memorableName&gt;
</code></pre>
<pre><code class="lang-powershell"><span class="hljs-comment"># Change 'ISRXL' to characters contained in your subscription's name</span>

<span class="hljs-variable">$sourceName</span> = (<span class="hljs-built_in">get-AzContext</span> <span class="hljs-literal">-ListAvailable</span> | <span class="hljs-built_in">where-object</span> {<span class="hljs-variable">$_</span>.Name <span class="hljs-operator">-like</span> <span class="hljs-string">'*ISRXL*'</span>}).name

<span class="hljs-built_in">Rename-AzContext</span> <span class="hljs-literal">-SourceName</span> <span class="hljs-variable">$sourceName</span> <span class="hljs-literal">-TargetName</span> <span class="hljs-string">'PersonalSub'</span>
</code></pre>
<p>After renaming the contexts, use the <code>get-AzContext -ListAvailable</code> command again to see the Azure contexts available to you in powershell.</p>
<p>To use the contexts after they have been renamed to simpler, memorable names:</p>
<p><code>select-AzContext 'PersonalSub'</code></p>
<p>You can also save these context settings to a file:</p>
<pre><code class="lang-powershell"><span class="hljs-comment"># Save-AzContext -Path &lt;path-to-folder-you-choose&gt;</span>
<span class="hljs-built_in">Save-AzContext</span> <span class="hljs-literal">-Path</span> <span class="hljs-string">"C:\Users\Isrxl\Documents\AzureContext.json"</span>
</code></pre>
<p>Once the context settings have been saved to a file, they can be imported into another powershell session as follows:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">Import-AzContext</span> <span class="hljs-literal">-Path</span> <span class="hljs-string">"C:\Users\Isrxl\Documents\AzureContext.json"</span>
</code></pre>
<p>It is important to develop the habit of checking and switching your Azure context before running any commands or scripts. In fact, it is good practice to ensure that the first few lines of your scripts/script blocks check that the right context is being used.</p>
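<p>One way to do this is a small guard at the top of the script. A sketch, where 'PersonalSub' is a placeholder for whatever you named your context:</p>

```powershell
# Fail fast if the session is not using the expected context.
$expectedContext = 'PersonalSub'

if ((Get-AzContext).Name -ne $expectedContext) {
    Select-AzContext -Name $expectedContext -ErrorAction Stop
}

Write-Host "Running against context: $((Get-AzContext).Name)"
```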
<p>Useful Links:</p>
<p><a target="_blank" href="https://docs.microsoft.com/en-us/powershell/azure/context-persistence?view=azps-7.3.2">https://docs.microsoft.com/en-us/powershell/azure/context-persistence?view=azps-7.3.2</a><br /><a target="_blank" href="https://docs.microsoft.com/en-us/powershell/module/az.accounts/clear-azcontext?view=azps-7.3.2">https://docs.microsoft.com/en-us/powershell/module/az.accounts/clear-azcontext?view=azps-7.3.2</a><br /><a target="_blank" href="https://docs.microsoft.com/en-us/powershell/module/az.accounts/disconnect-azaccount?view=azps-7.3.2">https://docs.microsoft.com/en-us/powershell/module/az.accounts/disconnect-azaccount?view=azps-7.3.2</a></p>
]]></content:encoded></item><item><title><![CDATA[Markdown Shenanigans.]]></title><description><![CDATA[One fine weekend, while trying to "create my site from scratch", I got tired of HTML and decided to play around with the markdown language. It gave me a much "cleaner" authoring experience, and I had a lot of fun playing around with it. So I decided ...]]></description><link>https://isrxl.com/markdown-shenanigans</link><guid isPermaLink="true">https://isrxl.com/markdown-shenanigans</guid><category><![CDATA[markdown]]></category><category><![CDATA[markdown cheat sheet]]></category><category><![CDATA[Markdown, How to write markdown file, ]]></category><category><![CDATA[#markdown syntax]]></category><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Tue, 09 Aug 2022 02:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725810768555/ba21fe63-ae21-4435-a991-0a03ef2f3efb.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One fine weekend, while trying to "create my site from scratch", I got tired of HTML and decided to play around with the markdown language. It gave me a much "cleaner" authoring experience, and I had a lot of fun playing around with it. So I decided to share my learnings in the hope that it will help you or at least spark your curiosity.<br />According to our trusty friend, <a target="_blank" href="https://en.wikipedia.org/wiki/Markdown">Wikipedia</a>, "Markdown is a lightweight markup language for creating formatted text using a plain-text editor". Markdown is quite popular with people who spend a lot of time creating technical documentation. Its key selling point is readability without the encumbrance of tags and formatting that you'll get in a language like HTML for example.<br />Here is a summary of the things I learned along the way. For each section, I show a preview of the actual markdown syntax followed by the end result.</p>
<h3 id="heading-headings">Headings</h3>
<p>Headings in markdown are created by starting with one or more hash (#) characters. Depending on the heading type, the number of hashes (#) can range between 1 (for H1 or main headings) and 6 (for smaller headings, a.k.a subheadings). The title (markdown shenanigans) is an example of an H1 (#) heading.<br />To view other headings and see the difference between them, check the list below:</p>
<p>Markdown Syntax:</p>
<pre><code class="lang-plaintext"># Heading 1 (H1) 
## Heading 2 (H2) 
### Heading 3 (H3) 
#### Heading 4 (H4) 
##### Heading 5 (H5) 
###### Heading 6 (H6)
</code></pre>
<p>Result:</p>
<h1 id="heading-heading-1-h1">Heading 1 (H1)</h1>
<h2 id="heading-heading-2-h2">Heading 2 (H2)</h2>
<h3 id="heading-heading-3-h3">Heading 3 (H3)</h3>
<h4 id="heading-heading-4-h4">Heading 4 (H4)</h4>
<h5 id="heading-heading-5-h5">Heading 5 (H5)</h5>
<h6 id="heading-heading-6-h6">Heading 6 (H6)</h6>
<h3 id="heading-making-emphasis-with-italics-or-bold">Making Emphasis with italics or bold…</h3>
<p>Emphasizing a word or sentence can be done by adding at least one asterisk (*) at each end of that word or sentence.</p>
<pre><code class="lang-plaintext">*One asterisk at each end italicizes your words.*
**Two asterisks at each end make your words bold** 
***Three asterisks at each end make your words bold and italicized***
</code></pre>
<p><em>One asterisk at each end italicizes your words.</em><br /><strong>Two asterisks at each end make your words bold</strong><br /><strong><em>Three asterisks at each end make your words bold and italicized</em></strong></p>
<h3 id="heading-creating-block-quotes">Creating (Block) Quotes</h3>
<p>To create block quotes, you need to indent each line using a right-angle bracket (&gt;) or what mathematicians would call a "greater than" symbol. Additional angle brackets create inner block quotes.</p>
<pre><code class="lang-plaintext">&gt; ***This is a block quote in italics and bold***
&gt;&gt; This is an inner block quote\
I did this just because I can\
and also because I like it
&gt;&gt;&gt; I can even try an inner-inner block quote\
That's how we stars do it
&gt;&gt;&gt;&gt; Now keep your complaints to yourself :p
</code></pre>
<blockquote>
<p><strong><em>This is a block quote in italics and bold</em></strong></p>
<blockquote>
<p>This is an inner block quote<br />I did this just because I can<br />and also because I like it</p>
<blockquote>
<p>I can even try an inner-inner block quote<br />That's how we stars do it</p>
<blockquote>
<p>Now keep your complaints to yourself :p</p>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<h3 id="heading-now-let-us-try-lists">Now let us try lists:</h3>
<p>Creating a random unordered list…</p>
<pre><code class="lang-plaintext">* Apples
* Oranges
* Pears
- One
- Two
- Three
+ First
+ Second
+ Third
</code></pre>
<ul>
<li><p>Apples</p>
</li>
<li><p>Oranges</p>
</li>
<li><p>Pears</p>
</li>
</ul>
<ul>
<li><p>One</p>
</li>
<li><p>Two</p>
</li>
<li><p>Three</p>
</li>
</ul>
<ul>
<li><p>First</p>
</li>
<li><p>Second</p>
</li>
<li><p>Third</p>
</li>
</ul>
<p>and now a random ordered list…</p>
<pre><code class="lang-plaintext">1. Number one
2. Number two
3) Number three
4) Number four
</code></pre>
<ol>
<li><p>Number one</p>
</li>
<li><p>Number two</p>
</li>
</ol>
<ol start="3">
<li><p>Number three</p>
</li>
<li><p>Number four</p>
</li>
</ol>
<p>We can also create nested lists like…</p>
<pre><code class="lang-plaintext">* Fruits  
    1. Oranges
    2. Pears
    3. Apples

* Numbers  
    - One
    - Two  
    - Three
</code></pre>
<ul>
<li><p>Fruits</p>
<ol>
<li><p>Oranges</p>
</li>
<li><p>Pears</p>
</li>
<li><p>Apples</p>
</li>
</ol>
</li>
<li><p>Numbers</p>
<ul>
<li><p>One</p>
</li>
<li><p>Two</p>
</li>
<li><p>Three</p>
</li>
</ul>
</li>
</ul>
<p>In both cases, we need to indent at least 2 spaces on the next line to create a nested list.</p>
<p>Let's see what a link looks like:</p>
<pre><code class="lang-plaintext">[My first Blog Post](https://www.isrxl.com/i-came-i-blogged-i-conquered/)  
[Check out isrxl blog][id]

[id]: isrxl.com "title"

&lt;https://isrxl.com&gt;
</code></pre>
<p><a target="_blank" href="https://www.isrxl.com/i-came-i-blogged-i-conquered/">My first Blog Post</a><br /><a target="_blank" href="isrxl.com">Check out isrxl blog</a></p>
<p><a target="_blank" href="https://isrxl.com">https://isrxl.com</a></p>
<h3 id="heading-get-a-picture">Get a picture</h3>
<pre><code class="lang-plaintext">![Nice Picture](https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit)
</code></pre>
<p><img src="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit" alt="Nice Picture" /></p>
<h3 id="heading-how-do-we-write-code-blocks">How do we write code blocks?</h3>
<p>This is probably my favourite use of markdown. When I was still trying to create my web pages from scratch, I was having a hard time displaying code snippets in the way most tech professionals would like when reading a tech article. Once I saw how easy it was in markdown, I quickly dumped HTML/CSS/JS (for now at least).</p>
<p>Code blocks can be created by indenting the line by at least four (4) spaces, or one tab, in markdown before writing the code:</p>
<pre><code class="lang-plaintext">                get-AzResourceGroup
                get-AzContext
</code></pre>
<pre><code class="lang-plaintext">            get-AzResourceGroup
            get-AzContext
</code></pre>
<p>Another way to create code blocks is to start a new line with three (3) backtick characters, optionally followed by the name of the code's language. You then write your line(s) of code starting on the next line. Once you are done, end the code block with three (3) backticks on a new (and final) line, like in the code example below:</p>
<pre><code class="lang-plaintext">```powershell
get-service
```
</code></pre>
<pre><code class="lang-powershell"><span class="hljs-built_in">get-service</span>
</code></pre>
<p>You can also write code in markdown as inline code. For example,</p>
<pre><code class="lang-plaintext">Inline code: `get-AzContext`, another example is `1 + 5 = 6`.
</code></pre>
<p>Inline code: <code>get-AzContext</code>, another example is <code>1 + 5 = 6</code>.</p>
<h3 id="heading-closing-remarks">Closing Remarks</h3>
<p>Once you get the hang of it, the markdown language makes authoring technical documentation quite easy and enjoyable.</p>
<p>To create markdown files, you can simply use a plain text editor like notepad or notepad++. If you like to create markdown files and also enjoy the use of some custom plugins like live visualization, Visual Studio Code should be your go-to tool.</p>
<p>If you would also like to play around with markdown, you can check the following sites for good examples:</p>
<p><a target="_blank" href="https://www.markdownguide.org/getting-started/">Getting started with Markdown</a></p>
<p><a target="_blank" href="https://www.markdowntutorial.com/">Markdown Tutorial</a></p>
<p><a target="_blank" href="https://commonmark.org/help/tutorial">CommonMark Website</a></p>
<p>Happy "markdowning", or is it happy "markingdown"?</p>
<p>Enjoy.</p>
]]></content:encoded></item><item><title><![CDATA[I came, I blogged, I conquered...]]></title><description><![CDATA[Curious title, don't you think? Trust me; I too spent some time wondering how this is a fitting title for a tech blog post. However, there was no letting go once the idea for this title popped into my head. So I decided to use it as the title of my f...]]></description><link>https://isrxl.com/i-came-i-blogged-i-conquered</link><guid isPermaLink="true">https://isrxl.com/i-came-i-blogged-i-conquered</guid><dc:creator><![CDATA[Israel Orenuga]]></dc:creator><pubDate>Sun, 31 Jul 2022 14:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724695727642/76a69a47-6aed-4ff3-b116-f45507fad0ba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Curious title, don't you think? Trust me; I too spent some time wondering how this is a fitting title for a tech blog post. However, there was no letting go once the idea for this title popped into my head. So I decided to use it as the title of my first post. I reckoned I could use it to set the bar high for myself - and also as some sort of prophetic declaration - that some years down the line, I will be able to say "I blogged, I conquered" with a lot more confidence.</p>
<p>I had wanted to start this blog a long time ago (as far back as the beginning of 2022) but I got in my own way by overthinking a lot of things. First, I spent an inordinate amount of time wondering how the blog would be received. Then I agonized for an even greater amount of time about what I would be writing about. Most laughable was how I was determined to do it in a "special way". I started out wanting to "build my site from scratch" and even started taking web development courses, but then I quickly realized that if I was going to wait till I finished the courses before launching the blog, I might have to wait till 2023 at the earliest. Then I played around with "<strong>Hugo</strong>" for a while before finally coming to the realization that all I had been doing was digging myself down a rabbit hole with nothing to show for it (actually I had some help reaching this <em>point of clarity</em>; I got great advice from people like Ayo Oladejo, Jorge Arteiro and Robin Smorenburg). <strong>So I decided to chuck all the distractions and face my actual goal of blogging.</strong></p>
<p>And here I am. Blogging.</p>
<p>So while this is definitely not a technical blog post, it was my way of overcoming the initial resistance and getting the ball rolling. You know how they say, "<strong><em>the first step is always the most important"</em></strong> or that "<strong><em>it doesn't have to be perfect, it just has to get done"</em></strong>, "<strong><em>do it afraid"</em></strong> etc etc...</p>
<p>Now that I have my first post out of the way, I hope that this journey I have started will gather momentum and that this blog will be of immense benefit to me and everyone who reads my posts.</p>
<p>Welcome to my blog!</p>
]]></content:encoded></item></channel></rss>