DEV Community: Raul Piraces Alastuey The latest articles on DEV Community by Raul Piraces Alastuey (@piraces). https://dev.to/piraces https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F84983%2F4b35aad0-264c-4824-aed2-c3df30e20abe.png en Take ownership of your Twitter data, set-up your own Twitter updated archive in GitHub Raul Piraces Alastuey Mon, 02 Jan 2023 15:53:45 +0000 https://dev.to/piraces/take-ownership-of-your-twitter-data-set-up-your-own-twitter-updated-archive-in-github-329g <p>Due to the "recent" <a href="proxy.php?url=https://en.wikipedia.org/wiki/Acquisition_of_Twitter_by_Elon_Musk" rel="noopener noreferrer">acquisition of Twitter</a> and all the changes that came after it, lots of users started considering other alternatives such as <a href="proxy.php?url=https://joinmastodon.org/" rel="noopener noreferrer">Mastodon</a>, <a href="proxy.php?url=https://www.tumblr.com/" rel="noopener noreferrer">Tumblr</a>, <a href="proxy.php?url=https://nostr.com/" rel="noopener noreferrer">Nostr</a> and many others...</p> <p>But what happens to all the data we have generated on this platform over the years? What do we do with it? Discard it? My thought on this is... why not just keep it safe for the future?</p> <p>This is where <strong><a href="proxy.php?url=https://github.com/tweetback/tweetback" rel="noopener noreferrer">Tweetback</a> comes into action</strong>. 
An awesome project by <a href="proxy.php?url=https://zachleat.com/@zachleat" rel="noopener noreferrer">Zach Leatherman</a> to take ownership of your Twitter data in a public archive you can self-host (for example on GitHub).</p> <h2> Overview </h2> <p>Thanks to <a href="proxy.php?url=https://zachleat.com/@zachleat" rel="noopener noreferrer">Zach Leatherman</a> and his awesome project <a href="proxy.php?url=https://github.com/tweetback/tweetback" rel="noopener noreferrer">Tweetback</a>, we can self-host our own Twitter archive and keep it up to date if necessary (<a href="proxy.php?url=https://tweets.piraces.dev/" rel="noopener noreferrer">just like I have done with mine</a>).</p> <p>The process to make your own archive is simple. Just follow the steps below.</p> <h2> Getting your Twitter archive in the first place </h2> <p>This part is important to initialize our archive.<br><br> Twitter allows you to export a backup of all the data you have on the platform.</p> <p>As they explain <a href="proxy.php?url=https://help.twitter.com/en/managing-your-account/accessing-your-twitter-data" rel="noopener noreferrer">in their help center</a>, log in to your Twitter account (Web client) and:</p> <ol> <li>Click More in the main navigation menu to the left of your timeline.</li> <li>Select Settings and privacy.</li> <li>Select "Your Account" under Settings.</li> <li>Click on "Download an archive of your data".</li> </ol> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjalilhxbquyed7bqg3z8.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjalilhxbquyed7bqg3z8.png" alt="Page where the data can be claimed" width="800" height="385"></a></p> <p>After 
clicking there, click the button to request your archive. The status will then change to "Requesting archive".</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikcvsiawsbrrm3kbbkjd.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikcvsiawsbrrm3kbbkjd.png" alt="Twitter settings page after requesting your archive" width="800" height="374"></a></p> <p>After taking this action, Twitter will collect your data and send you an email with a link to a <code>.zip</code> file with all your exported data. This process can take up to 48 hours... so relax and come back when it is ready.</p> <p><em><strong>Note:</strong> don't forget to download the <code>.zip</code> file as soon as you receive the email and save it somewhere (as the link expires shortly).</em></p> <h2> Getting the <a href="proxy.php?url=https://github.com/tweetback/tweetback" rel="noopener noreferrer">Tweetback</a> code and setting it up </h2> <p>In my case <a href="proxy.php?url=https://github.com/piraces/twitter_archive" rel="noopener noreferrer">I just forked the repository</a>, but you can clone it and upload it to another site if you want.</p> <p>Note that you will need to have <a href="proxy.php?url=https://nodejs.org/en/" rel="noopener noreferrer">Node.js</a> installed (and Git, of course).</p> <p>Then follow these steps:</p> <ol> <li>Clone the repo (or your fork): </li> </ol> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code>git clone https://github.com/tweetback/tweetback.git </code></pre> </div> <p><em><strong>Note:</strong> if you chose to create a fork, the process of updating the repository with possible future changes from the 
base repository is easier with <a href="proxy.php?url=https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork" rel="noopener noreferrer">the "sync fork" option that GitHub provides</a>.</em></p> <ol start="2"> <li><p>Open a terminal in the repository's main folder and run <code>npm install</code> (this will generate a <code>package-lock.json</code> file, which you should keep).</p></li> <li><p>Following <a href="proxy.php?url=https://github.com/tweetback/tweetback#usage" rel="noopener noreferrer">the instructions of Tweetback</a>, open the zip file you downloaded from Twitter, and extract the <code>data/tweets.js</code> file from it into the <code>database</code> folder of your Tweetback repository folder.</p></li> <li><p>Edit this file to change the first line <code>window.YTD.tweet.part0</code> to <code>module.exports</code> (a simple replacement), so the first part of the file looks as follows:<br> </p></li> </ol> <div class="highlight js-code-highlight"> <pre class="highlight javascript"><code><span class="nx">module</span><span class="p">.</span><span class="nx">exports</span> <span class="o">=</span> <span class="p">[</span> <span class="p">{</span> <span class="dl">"</span><span class="s2">tweet</span><span class="dl">"</span> <span class="p">:</span> <span class="p">{</span> <span class="p">...</span> </code></pre> </div> <p><em><strong>Note:</strong> if you have lots of tweets, the process can take a while...</em></p> <ol start="5"> <li><p>From the root folder of the repository, run the command <code>npm run import</code>, which performs the initial import of our tweets database (going through the <code>tweets.js</code> file).</p></li> <li><p>At the end, you will see some output regarding counts and other info (but no errors).</p></li> <li><p>After importing the tweets, we will make some changes to the <code>_data/metadata.js</code> file, which customizes some page settings for our 
archive. We will modify the options as follows (leave the others unmodified):</p></li> </ol> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikcvsiawsbrrm3kbbkjd.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikcvsiawsbrrm3kbbkjd.png" alt="Metadata settings and where they reflect the changes" width="800" height="374"></a></p> <ol start="8"> <li><p>Our archive is now ready! Take a look at how it will be by running <code>npm run start</code> in the root folder and accessing the local URL that the process outputs to the terminal.</p></li> <li><p>If everything looks good to you, commit and push your changes to the repository.</p></li> </ol> <h2> (Optional) Set the subpath where your page will reside </h2> <p>You can skip this step if your Twitter archive will be in the root of a domain/subdomain (such as <a href="proxy.php?url=https://tweets.piraces.dev/" rel="noopener noreferrer">tweets.piraces.dev</a>); otherwise, we will have to set the subpath where it will be served (for example, when hosting on GitHub Pages at a URL such as <code>piraces.github.io/tweets</code>).</p> <p>To do this, we will need to make some changes to the <code>eleventy.config.js</code> file as follows (at the end, before the closing <code>}</code>):<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight javascript"><code><span class="p">...</span> <span class="nx">eleventyConfig</span><span class="p">.</span><span class="nf">addPlugin</span><span class="p">(</span><span class="nx">EleventyHtmlBasePlugin</span><span class="p">);</span> <span class="k">return</span> <span class="p">{</span><span class="na">pathPrefix</span><span class="p">:</span> <span 
class="dl">"</span><span class="s2">/twitter/</span><span class="dl">"</span><span class="p">}};</span> </code></pre> </div> <p><em><strong>Note:</strong> change the <code>pathPrefix</code> according to the path you want.</em></p> <h2> (Optional) Configuring a subdomain to point to our GitHub Pages deployment </h2> <p>In this tutorial we will be using GitHub Pages to host our Twitter archive. In my case, I have the domain <code>piraces.dev</code> managed by <a href="proxy.php?url=https://www.cloudflare.com/" rel="noopener noreferrer">Cloudflare</a>, so I will create a subdomain called <code>tweets.piraces.dev</code> pointing to the GitHub Pages deployment.</p> <p>To do this, we only have to create a <code>CNAME</code> DNS record with the value of the subdomain (<code>tweets</code> in this case) pointing to your GitHub domain, which has the format <code>HANDLE.github.io</code>, where <code>HANDLE</code> is your handle/username on GitHub (<code>piraces.github.io</code> in my case).</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2kqcwhtsi6x3utf9sb3.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2kqcwhtsi6x3utf9sb3.png" alt="CNAME DNS record pointing to GitHub pages" width="800" height="246"></a></p> <p>The picture above shows how to set it up for Cloudflare, but it will be very similar with other providers.</p> <h2> Deploying our archive using GitHub pages </h2> <p>Once our repo has all the changes mentioned above, we will have to configure GitHub Pages for our repository.<br><br> To do so, we must follow these steps:</p> <ol> <li>Create an empty branch named <code>gh-pages</code> (where the deployed page will be). 
From your terminal: </li> </ol> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code>git switch <span class="nt">--orphan</span> gh-pages git commit <span class="nt">--allow-empty</span> <span class="nt">-m</span> <span class="s2">"Initial commit on orphan branch"</span> git push <span class="nt">-u</span> origin gh-pages </code></pre> </div> <ol start="2"> <li><p>Go to the repository page on GitHub, click "Settings" and then "Pages" in the left sidebar.</p></li> <li><p>On the "Pages" screen, we will set the options as follows:</p></li> </ol> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6ve140f8dbzbti88x71.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6ve140f8dbzbti88x71.png" alt="GitHub pages settings example" width="800" height="546"></a></p> <ol start="4"> <li>Set up a workflow to publish the page on every push to the main branch, or manually. To do so, we must go to the "Actions" tab on the GitHub repository page and then select "New workflow". From this page, we select "Skip this and set up a workflow yourself -&gt;", then an editor appears where we can set up the workflow. 
You can use the following code (adapting some fields) to complete yours: </li> </ol> <div class="highlight js-code-highlight"> <pre class="highlight yaml"><code><span class="na">name</span><span class="pi">:</span> <span class="s">Publish page</span> <span class="na">on</span><span class="pi">:</span> <span class="na">push</span><span class="pi">:</span> <span class="na">branches</span><span class="pi">:</span> <span class="pi">[</span> <span class="s2">"</span><span class="s">main"</span> <span class="pi">]</span> <span class="c1"># Change this to the name of the main branch of your repository</span> <span class="na">workflow_dispatch</span><span class="pi">:</span> <span class="c1"># Setting this enables the possibility to run the workflow manually</span> <span class="na">jobs</span><span class="pi">:</span> <span class="na">build</span><span class="pi">:</span> <span class="na">runs-on</span><span class="pi">:</span> <span class="s">ubuntu-latest</span> <span class="na">steps</span><span class="pi">:</span> <span class="pi">-</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v3</span> <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Use Node.js 18.x</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/setup-node@v3</span> <span class="na">with</span><span class="pi">:</span> <span class="na">node-version</span><span class="pi">:</span> <span class="s">18.x</span> <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">npm ci</span> <span class="c1"># Note that you must commit the package-lock.json to the repo and quit it from the .gitignore file</span> <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">npm run build</span> <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Deploy to Github Pages</span> <span 
class="na">uses</span><span class="pi">:</span> <span class="s">crazy-max/ghaction-github-pages@v3</span> <span class="na">with</span><span class="pi">:</span> <span class="c1"># Build directory to deploy</span> <span class="na">build_dir</span><span class="pi">:</span> <span class="s">_site</span> <span class="c1"># Write the given domain name to the CNAME file</span> <span class="na">fqdn</span><span class="pi">:</span> <span class="s">tweets.piraces.dev</span> <span class="c1"># Change this to your configured subdomain or completely delete the line if not using it.</span> <span class="na">jekyll</span><span class="pi">:</span> <span class="kc">false</span> <span class="na">keep_history</span><span class="pi">:</span> <span class="kc">true</span> <span class="c1"># If you don't want to preserve a commit history, set to false to perform a git push --force</span> <span class="na">env</span><span class="pi">:</span> <span class="na">GITHUB_TOKEN</span><span class="pi">:</span> <span class="s">${{ secrets.GITHUB_TOKEN }}</span> <span class="c1"># This secret is available by default</span> </code></pre> </div> <ol start="5"> <li><p>When you finish editing, click on "Start commit" and commit the workflow to the main branch. The workflow will start running after the commit; you can check the status from the "Actions" tab on the GitHub repository page.</p></li> <li><p>If the workflow shows green (all steps are OK), go and check your URL to see your Twitter archive šŸŽ‰ (note: if something goes wrong with your configured subdomain, please revisit step 3).</p></li> </ol> <h2> (Optional) Setting a scheduled action to update our archive </h2> <p>If you will still be using Twitter in the future, consider modifying the workflow created in the previous step to fetch all new tweets using the Twitter API.</p> <p>To do so, you will need a Twitter bearer token. 
You will only be allowed to obtain a Twitter bearer token if you are a "Twitter developer" 😢, so you will have to fill in some forms and apply to become one.</p> <p>To start the process of becoming a "Twitter developer", visit <a href="proxy.php?url=https://developer.twitter.com/" rel="noopener noreferrer">developer.twitter.com</a>, sign in and start the process... It may be annoying, with manual reviews.</p> <p>Once you are a "Twitter developer", proceed to create an app and then get its bearer token. Documentation about getting the bearer token can be found in the <a href="proxy.php?url=https://developer.twitter.com/en/docs/authentication/oauth-2-0/bearer-tokens" rel="noopener noreferrer">official developer documentation of Twitter</a>.</p> <p>When you have everything set up correctly, copy the bearer token, which will look something like <code>AAAA...</code> (random characters). Now go to the repository with your terminal and try to perform the update process locally (before modifying the remote workflow) to make sure everything goes smoothly.</p> <p>When you are at the repository base path in your terminal, run the following command (where <code>AAAAAA...</code> is your actual complete bearer token):</p> <p><code>TWITTER_BEARER_TOKEN=AAAAAA... 
npm run fetch-new-data</code></p> <p>This will fetch any tweets that are not in the database because they were posted after your export.<br><br> After fetching new tweets, you will need to rebuild your site with:</p> <p><code>npm run build</code></p> <p>Take a look at the updated archive by running it locally with <code>npm run start</code> and accessing the URL that the process outputs to the terminal.<br><br> If everything looks OK to you, we are ready to modify our GitHub workflow to automate this!<br><br> Then follow these steps:</p> <ol> <li>Go to your repository's GitHub page, select the "Actions" tab, select your workflow in the left navbar, locate the latest execution of the workflow and click on "View workflow file". After this, select the edit icon and start modifying your workflow.</li> </ol> <p><em><strong>Note:</strong> you can also modify the workflow in your local environment by accessing the <code>.github/workflows/</code> folder and editing the workflow file.</em></p> <ol start="2"> <li><p>We will add a new step after <code>- run: npm ci</code> to run the fetching process for new tweets. To do so, you may first need to add the Twitter bearer token as a secret for your actions (since you don't want it to be publicly visible). 
If you don't know how to do so, please follow the steps in <a href="proxy.php?url=https://docs.github.com/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository" rel="noopener noreferrer">the GitHub official documentation</a>.</p></li> <li><p>After declaring the secret, add a step after <code>- run: npm ci</code> like the following (assuming your secret is named <code>TWITTER_BEARER_TOKEN</code>; if not, change it accordingly):<br> </p></li> </ol> <div class="highlight js-code-highlight"> <pre class="highlight yaml"><code><span class="nn">...</span> <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">npm run fetch-new-data</span> <span class="na">env</span><span class="pi">:</span> <span class="na">TWITTER_BEARER_TOKEN</span><span class="pi">:</span> <span class="s">${{ secrets.TWITTER_BEARER_TOKEN }}</span> <span class="c1"># Change the secret name if different</span> </code></pre> </div> <ol start="4"> <li>In order to do this periodically, I decided to make my workflow run every night. 
To do so, you will only have to add a <code>schedule</code> entry under the <code>on</code> entry with a <a href="proxy.php?url=https://crontab.guru/" rel="noopener noreferrer">cron rule</a> (adjust it to your requirements if you want): </li> </ol> <div class="highlight js-code-highlight"> <pre class="highlight yaml"><code><span class="na">name</span><span class="pi">:</span> <span class="s">Publish page</span> <span class="na">on</span><span class="pi">:</span> <span class="na">push</span><span class="pi">:</span> <span class="na">branches</span><span class="pi">:</span> <span class="pi">[</span> <span class="s2">"</span><span class="s">main"</span> <span class="pi">]</span> <span class="c1"># Change this to the name of the main branch of your repository</span> <span class="na">schedule</span><span class="pi">:</span> <span class="pi">-</span> <span class="na">cron</span><span class="pi">:</span> <span class="s2">"</span><span class="s">30</span><span class="nv"> </span><span class="s">0</span><span class="nv"> </span><span class="s">*</span><span class="nv"> </span><span class="s">*</span><span class="nv"> </span><span class="s">*"</span> <span class="c1"># This will trigger the workflow every day at 00:30 UTC</span> <span class="na">workflow_dispatch</span><span class="pi">:</span> <span class="c1"># Setting this enables the possibility to run the workflow manually</span> </code></pre> </div> <ol start="5"> <li>Your workflow will now look like this (with minimal differences): </li> </ol> <div class="highlight js-code-highlight"> <pre class="highlight yaml"><code><span class="na">name</span><span class="pi">:</span> <span class="s">Fetch new data &amp; publish page</span> <span class="na">on</span><span class="pi">:</span> <span class="na">push</span><span class="pi">:</span> <span class="na">branches</span><span class="pi">:</span> <span class="pi">[</span> <span class="s2">"</span><span class="s">main"</span> <span class="pi">]</span> <span class="na">schedule</span><span 
class="pi">:</span> <span class="pi">-</span> <span class="na">cron</span><span class="pi">:</span> <span class="s2">"</span><span class="s">30</span><span class="nv"> </span><span class="s">0</span><span class="nv"> </span><span class="s">*</span><span class="nv"> </span><span class="s">*</span><span class="nv"> </span><span class="s">*"</span> <span class="na">workflow_dispatch</span><span class="pi">:</span> <span class="na">jobs</span><span class="pi">:</span> <span class="na">build</span><span class="pi">:</span> <span class="na">runs-on</span><span class="pi">:</span> <span class="s">ubuntu-latest</span> <span class="na">steps</span><span class="pi">:</span> <span class="pi">-</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v3</span> <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Use Node.js 18.x</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/setup-node@v3</span> <span class="na">with</span><span class="pi">:</span> <span class="na">node-version</span><span class="pi">:</span> <span class="s">18.x</span> <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">npm ci</span> <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">npm run fetch-new-data</span> <span class="na">env</span><span class="pi">:</span> <span class="na">TWITTER_BEARER_TOKEN</span><span class="pi">:</span> <span class="s">${{ secrets.TWITTER_BEARER_TOKEN }}</span> <span class="c1"># Here the secret TWITTER_BEARER_TOKEN</span> <span class="pi">-</span> <span class="na">run</span><span class="pi">:</span> <span class="s">npm run build</span> <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Deploy to Github Pages</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">crazy-max/ghaction-github-pages@v3</span> <span 
class="na">with</span><span class="pi">:</span> <span class="c1"># Build directory to deploy</span> <span class="na">build_dir</span><span class="pi">:</span> <span class="s">_site</span> <span class="c1"># Write the given domain name to the CNAME file</span> <span class="na">fqdn</span><span class="pi">:</span> <span class="s">tweets.piraces.dev</span> <span class="na">jekyll</span><span class="pi">:</span> <span class="kc">false</span> <span class="na">keep_history</span><span class="pi">:</span> <span class="kc">true</span> <span class="na">env</span><span class="pi">:</span> <span class="na">GITHUB_TOKEN</span><span class="pi">:</span> <span class="s">${{ secrets.GITHUB_TOKEN }}</span> <span class="c1"># Here the secret GITHUB_TOKEN</span> </code></pre> </div> <ol start="6"> <li><p>Save your workflow, commit and push it.</p></li> <li><p>Check the execution in the "Actions" tab of your GitHub repository to verify everything goes OK.</p></li> <li><p>You now have an "always updated" Twitter archive, kept completely unattended, there for you and for anyone who may want to check it! 
šŸš€</p></li> </ol> <p><em><strong>Note:</strong> if you have any doubts regarding the code/workflows you can always check my Twitter archive implementation: <a href="proxy.php?url=https://github.com/piraces/twitter_archive/" rel="noopener noreferrer">GitHub - piraces/twitter_archive</a>.</em></p> <h2> (Optional) Adding your archive to <a href="proxy.php?url=https://github.com/tweetback/tweetback-canonical" rel="noopener noreferrer">tweetback-canonical</a> (list of hosted Twitter backups) </h2> <p><a href="proxy.php?url=https://github.com/tweetback/tweetback-canonical" rel="noopener noreferrer">tweetback-canonical</a> is a package to resolve Twitter URLs to new canonically hosted Twitter backups, also made by <a href="proxy.php?url=https://zachleat.com/@zachleat" rel="noopener noreferrer">Zach Leatherman</a>.</p> <p>Simply follow the process in their <a href="proxy.php?url=https://github.com/tweetback/tweetback-canonical#add-your-own-twitter-archive" rel="noopener noreferrer">README.md</a> to add the URL of your Twitter archive to the <code>mapping.js</code> file.</p> <p>The process is very simple:</p> <ol> <li><p>Create a fork of the <a href="proxy.php?url=https://github.com/tweetback/tweetback-canonical" rel="noopener noreferrer">tweetback-canonical</a> repository.</p></li> <li><p>Modify the <code>mapping.js</code> file, adding your Twitter archive's public URL at the end of the object, in the format:</p></li> </ol> <p><code>"TWITTER_HANDLE": "FULL_URL_TO_TWITTER_ARCHIVE",</code></p> <p><em><strong>Note:</strong> your Twitter handle goes without the <code>@</code> symbol and you must end the line with a comma.</em></p> <ol start="3"> <li><p>Commit and push your changes with a commit message that starts with <code>mapping:</code>.</p></li> <li><p>Open a PR to the main branch of the official repository and wait for approval and merge.</p></li> <li><p>You are all done! Congrats! 
šŸŽ‰</p></li> </ol> <h2> Conclusion </h2> <p>We have seen in detail how to use the awesome <a href="proxy.php?url=https://github.com/tweetback/tweetback" rel="noopener noreferrer">Tweetback</a> project to build and self-host our own Twitter archive/backup. Whether or not you want to get off Twitter, I think it's a great approach to take ownership of your data anyway.</p> <p>Doing this, you are not exposed to losing the investment you put into generating those tweets if something bad happens (hopefully not šŸ™).</p> <p>Don't forget to report <a href="proxy.php?url=https://github.com/tweetback/tweetback/issues" rel="noopener noreferrer">any issues you find to the official repository</a> and thank the author (<a href="proxy.php?url=https://zachleat.com/@zachleat" rel="noopener noreferrer">Zach</a>) for the awesome work!</p> <p><strong>Happy deploy!</strong> šŸŽ‰šŸŽ‰</p> twitter github selfhosted data OpenPGP identity proof Raul Piraces Alastuey Thu, 17 Nov 2022 20:04:42 +0000 https://dev.to/piraces/openpgp-identity-proof-838 <p>This is an OpenPGP proof that connects <a href="proxy.php?url=https://keyoxide.org/84BF523F3F3EFA760C3E2C0C0C1A484B87269CD7">my OpenPGP key</a> to <a href="proxy.php?url=https://dev.to/piraces">this dev.to account</a>. 
For details check out <a href="proxy.php?url=https://keyoxide.org/guides/openpgp-proofs">https://keyoxide.org/guides/openpgp-proofs</a></p> <p>[Verifying my OpenPGP key: openpgp4fpr:84BF523F3F3EFA760C3E2C0C0C1A484B87269CD7]</p> How to use the Google One Tap solution in your webapp to authenticate users Raul Piraces Alastuey Thu, 06 Jan 2022 17:18:01 +0000 https://dev.to/piraces/how-to-use-the-google-one-tap-solution-in-your-webapp-to-authenticate-users-c8e <p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i20zgyy86039kslxsvm.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i20zgyy86039kslxsvm.png" alt="Google One Tap components"></a></p> <p>If you are reading this, there is a very high chance that you have already seen one of the components shown in the image above. Some popular sites, like <a href="proxy.php?url=https://quora.com/" rel="noopener noreferrer">Quora</a>, have implemented this <a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/features" rel="noopener noreferrer">"Google One Tap experience"</a> and also the traditional <a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/display-button" rel="noopener noreferrer">"Sign-in with Google" button</a>. 
Maybe you are wondering how it works or how they implemented this experience, so I am going to explain it in this post.</p> <h2> Introduction </h2> <p><a href="proxy.php?url=https://developers.google.com/identity" rel="noopener noreferrer">Google Identity Services</a> is defined by Google as: <em>"our new cross-platform sign-in SDK for Web and Android apps, supporting and streamlining multiple types of credentials"</em>.</p> <p>In our case, focusing on web apps, they provide "seamless sign-in and sign-up flows", such as:</p> <ul> <li>"Sign In With Google" button: a personalized and customizable sign-up or sign-in button for our websites.</li> <li>"One-tap sign-up": allows signing up new users with just one tap of a button, with no interruptions; users can end up with a secure "password-less" account on your site protected by their Google Account.</li> <li>"Automatic sign-in": signs users in automatically when they return to our site on any device or browser ( <strong>even if the session expires</strong> ).</li> </ul> <p><strong>These three points are what I will mainly cover in this post</strong>.</p> <h3> What we can do with this and for what </h3> <p>As shown in <a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/overview" rel="noopener noreferrer">the main overview page of "Sign In with Google"</a>, basically these flows will allow/help us to quickly and easily manage user authentication and sign-in on our website. Users will sign in, provide consent, and we will securely receive their profile information.</p> <h3> What can it be useful for? 
</h3> <ul> <li>Pre-populate new accounts with the received information.</li> <li>SSO with Google Accounts without making the user re-enter passwords or usernames on other sites.</li> <li>Protect comments, voting or forms from abuse.</li> <li>Ease the user experience on your site by automatically signing users in when they return (or letting them do so with a simple click).</li> <li>Many other things that you can imagine...</li> </ul> <p>If you would like to test it or integrate it with any of your websites, then... let's get started!</p> <h2> Getting started </h2> <p>You only need basic HTML and JS knowledge to get started. The whole setup process is done with JS libraries and HTML to place the buttons/modals.</p> <p><strong>Note:</strong> you may want to check <a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/supported-browsers" rel="noopener noreferrer">the supported browsers and platforms</a> before continuing.</p> <p>The first thing to do is to set up a Google API client ID and configure your consent screen. This is a pretty straightforward process and it is very well explained and documented <a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/get-google-api-clientid" rel="noopener noreferrer">here</a>.</p> <p>When you are ready and have the necessary data, you will have to load the client library in your webpage. 
This can be achieved with a little script tag like the following:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;script </span><span class="na">src=</span><span class="s">"https://accounts.google.com/gsi/client"</span> <span class="na">async</span> <span class="na">defer</span><span class="nt">&gt;&lt;/script&gt;</span> </code></pre> </div> <p>This <code>script</code> tag can be included in the <code>head</code> tag of your website or in the <code>body</code> after all your content.</p> <p>Note that we are including the <code>async</code> and <code>defer</code> attributes to optimize the page loading speed. It is also important to review your <a href="proxy.php?url=https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP" rel="noopener noreferrer">CSP</a> to allow this script source and connection. Alternatively, you can download the script and serve it yourself (I do not recommend this and I have not tested this approach).</p> <p>After doing this, we are already loading the client library that supports the "Sign in With Google" experience.</p> <p>To finalize the setup, we will have to <a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/choose-components" rel="noopener noreferrer">choose the components for our pages</a>, which are basically a "Sign in" button or the "Google One Tap UX".</p> <p>Google recommends adding the "Sign in" button to our main login pages and "Google One Tap" to all pages of our sites, enabling its "Automatic sign-in" option.</p> <p>In this case we are including the "Google One Tap UX", but adding the "Sign in" button is a straightforward process and it is also implemented in the demo webapp we will see later. 
If you want to know more about adding the "Sign in" button, check out these links:</p> <ul> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/personalized-button" rel="noopener noreferrer">Understand Personalized Button</a></li> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/display-button" rel="noopener noreferrer">Display the Sign In With Google button</a></li> </ul> <h3> Understanding the Google One Tap experience </h3> <p>When configured, the login widget appears as a popup, and it will prompt users to sign in or sign up with their existing Google account when any of these conditions are met:</p> <ul> <li>They are already logged in to their Google account in the browser.</li> <li>They are already logged in to their Google account in the Chrome browser.</li> </ul> <p>If you want to let users sign in when these conditions are not met, then you should display a simple "Sign In With Google" button.</p> <p>There are some points to know about how and when the Google One Tap experience can change:</p> <ul> <li>The process may show a pop-up window if we are using an unsupported browser or the dialog is covered by other content (as a security measure).</li> <li>Users can opt out of One Tap if they disable the "Google Account sign-in prompts" in their account settings. In this case One Tap will not display.</li> <li>The One Tap dialog has an exponential cooldown when it is closed by the user. The dialog then won't display in the same browser or the last website visited for a period of time. 
The <a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/features#exponential_cooldown" rel="noopener noreferrer">more times the user closes the prompt, the longer the prompt will not show</a>.</li> <li>On mobile devices, One Tap will close automatically after a short time if the user does not interact with it.</li> </ul> <p>After understanding the "flow", let's get Google One Tap to display on our page.</p> <h3> Displaying Google One Tap </h3> <p>In order to display Google One Tap we can choose between JS and HTML. In my case I prefer to use JS and handle the logic there, but choose whatever fits you best.</p> <p>It is also important to choose whether we want to handle the response on the client side or on the server side. I will cover both in this post.</p> <h3> Handling the response on the client side </h3> <p>When handling the response on the client side, a JS function will be triggered when the flow is completed. We can achieve this in HTML+JS or only JS as said above.</p> <p>In JS (see comments inside for more information):<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight javascript"><code><span class="kd">function</span> <span class="nf">handleCredentialResponse</span><span class="p">(</span><span class="nx">response</span><span class="p">)</span> <span class="p">{</span> <span class="c1">// Here we can do whatever processing of the response we want</span> <span class="c1">// Note that response.credential is a JWT ID token</span> <span class="nx">console</span><span class="p">.</span><span class="nf">log</span><span class="p">(</span><span class="dl">"</span><span class="s2">Encoded JWT ID token: </span><span class="dl">"</span> <span class="o">+</span> <span class="nx">response</span><span class="p">.</span><span class="nx">credential</span><span class="p">);</span> <span class="p">}</span> <span class="nb">window</span><span class="p">.</span><span class="nx">onload</span> <span class="o">=</span> <span 
class="nf">function </span><span class="p">()</span> <span class="p">{</span> <span class="nx">google</span><span class="p">.</span><span class="nx">accounts</span><span class="p">.</span><span class="nx">id</span><span class="p">.</span><span class="nf">initialize</span><span class="p">({</span> <span class="na">client_id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">YOUR_GOOGLE_CLIENT_ID</span><span class="dl">"</span><span class="p">,</span> <span class="c1">// Replace with your Google Client ID</span> <span class="na">callback</span><span class="p">:</span> <span class="nx">handleCredentialResponse</span> <span class="c1">// We choose to handle the callback on the client side, so we include a reference to a function that will handle the response</span> <span class="p">});</span> <span class="c1">// You can skip the next instruction if you don't want to show the "Sign-in" button</span> <span class="nx">google</span><span class="p">.</span><span class="nx">accounts</span><span class="p">.</span><span class="nx">id</span><span class="p">.</span><span class="nf">renderButton</span><span class="p">(</span> <span class="nb">document</span><span class="p">.</span><span class="nf">getElementById</span><span class="p">(</span><span class="dl">"</span><span class="s2">buttonDiv</span><span class="dl">"</span><span class="p">),</span> <span class="c1">// Ensure the element exists and is a div so it displays correctly</span> <span class="p">{</span> <span class="na">theme</span><span class="p">:</span> <span class="dl">"</span><span class="s2">outline</span><span class="dl">"</span><span class="p">,</span> <span class="na">size</span><span class="p">:</span> <span class="dl">"</span><span class="s2">large</span><span class="dl">"</span> <span class="p">}</span> <span class="c1">// Customization attributes</span> <span class="p">);</span> <span class="nx">google</span><span class="p">.</span><span class="nx">accounts</span><span class="p">.</span><span 
class="nx">id</span><span class="p">.</span><span class="nf">prompt</span><span class="p">();</span> <span class="c1">// Display the One Tap dialog</span> <span class="p">}</span> </code></pre> </div> <p>In HTML:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;script&gt;</span> <span class="kd">function</span> <span class="nf">handleCredentialResponse</span><span class="p">(</span><span class="nx">response</span><span class="p">)</span> <span class="p">{</span> <span class="c1">// Here we can do whatever processing of the response we want</span> <span class="c1">// Note that response.credential is a JWT ID token</span> <span class="nx">console</span><span class="p">.</span><span class="nf">log</span><span class="p">(</span><span class="dl">"</span><span class="s2">Encoded JWT ID token: </span><span class="dl">"</span> <span class="o">+</span> <span class="nx">response</span><span class="p">.</span><span class="nx">credential</span><span class="p">);</span> <span class="p">}</span> <span class="nt">&lt;/script&gt;</span> <span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">"g_id_onload"</span> <span class="na">data-client_id=</span><span class="s">"YOUR_GOOGLE_CLIENT_ID"</span> <span class="na">data-callback=</span><span class="s">"handleCredentialResponse"</span> <span class="na">data-your_own_param_1_to_login=</span><span class="s">"any_value"</span> <span class="na">data-your_own_param_2_to_login=</span><span class="s">"any_value"</span><span class="nt">&gt;</span> <span class="nt">&lt;/div&gt;</span> </code></pre> </div> <p>Including this HTML code in our pages will make Google One Tap prompt when Google's client library loads. We will have to replace <code>YOUR_GOOGLE_CLIENT_ID</code> with our client ID and the <code>data-callback</code> attribute value with the JS function that Google will trigger as a callback (this is a client-side approach). 
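As a quick aside (not part of the original post), here is a minimal sketch of what you could do inside a callback like <code>handleCredentialResponse</code>: decode the payload of the received <code>response.credential</code> JWT for debugging. The helper <code>decodeJwtPayload</code> and the hand-built token below are illustrative assumptions; this only base64url-decodes the payload and does <strong>not</strong> verify the token's signature, which must always be done server-side before trusting any claim. It uses Node's <code>Buffer</code>; in a browser you would use <code>atob</code> instead.

```javascript
// Decode (NOT verify) the payload segment of a JWT such as response.credential.
function decodeJwtPayload(token) {
  const base64Url = token.split('.')[1]; // JWT layout: header.payload.signature
  const base64 = base64Url.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));
}

// Hand-built fake token, only to show the shape of the payload you would see.
const payload = { iss: 'https://accounts.google.com', email: 'user@example.com' };
const token = [
  Buffer.from(JSON.stringify({ alg: 'RS256', typ: 'JWT' })).toString('base64url'),
  Buffer.from(JSON.stringify(payload)).toString('base64url'),
  'fake-signature', // a real Google token carries a real RS256 signature here
].join('.');

console.log(decodeJwtPayload(token).email); // -> user@example.com
```

In a real Google credential the payload also carries fields such as <code>sub</code>, <code>name</code> and <code>picture</code>; treat all of them as untrusted until the token is verified.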
Additionally, we can specify custom data via the <code>data-your_own_param_1_to_login</code> and <code>data-your_own_param_2_to_login</code> attributes, which will be sent to the callback function.</p> <h3> Handling the response on the server side </h3> <p>When handling the response on the server side, we will receive a call to an endpoint when the flow is completed. We can achieve this in pure HTML or pure JS as said above.</p> <p>In JS (see comments inside for more information):<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight javascript"><code><span class="nb">window</span><span class="p">.</span><span class="nx">onload</span> <span class="o">=</span> <span class="nf">function </span><span class="p">()</span> <span class="p">{</span> <span class="nx">google</span><span class="p">.</span><span class="nx">accounts</span><span class="p">.</span><span class="nx">id</span><span class="p">.</span><span class="nf">initialize</span><span class="p">({</span> <span class="na">client_id</span><span class="p">:</span> <span class="dl">"</span><span class="s2">YOUR_GOOGLE_CLIENT_ID</span><span class="dl">"</span><span class="p">,</span> <span class="c1">// Replace with your Google Client ID</span> <span class="na">login_uri</span><span class="p">:</span> <span class="dl">"</span><span class="s2">https://your.domain/your_login_endpoint</span><span class="dl">"</span> <span class="c1">// We choose to handle the callback on the server side, so we include a reference to an endpoint that will handle the response</span> <span class="p">});</span> <span class="c1">// You can skip the next instruction if you don't want to show the "Sign-in" button</span> <span class="nx">google</span><span class="p">.</span><span class="nx">accounts</span><span class="p">.</span><span class="nx">id</span><span class="p">.</span><span class="nf">renderButton</span><span class="p">(</span> <span class="nb">document</span><span class="p">.</span><span class="nf">getElementById</span><span 
class="p">(</span><span class="dl">"</span><span class="s2">buttonDiv</span><span class="dl">"</span><span class="p">),</span> <span class="c1">// Ensure the element exists and is a div so it displays correctly</span> <span class="p">{</span> <span class="na">theme</span><span class="p">:</span> <span class="dl">"</span><span class="s2">outline</span><span class="dl">"</span><span class="p">,</span> <span class="na">size</span><span class="p">:</span> <span class="dl">"</span><span class="s2">large</span><span class="dl">"</span> <span class="p">}</span> <span class="c1">// Customization attributes</span> <span class="p">);</span> <span class="nx">google</span><span class="p">.</span><span class="nx">accounts</span><span class="p">.</span><span class="nx">id</span><span class="p">.</span><span class="nf">prompt</span><span class="p">();</span> <span class="c1">// Display the One Tap dialog</span> <span class="p">}</span> </code></pre> </div> <p>In HTML:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">"g_id_onload"</span> <span class="na">data-client_id=</span><span class="s">"YOUR_GOOGLE_CLIENT_ID"</span> <span class="na">data-login_uri=</span><span class="s">"https://your.domain/your_login_endpoint"</span> <span class="na">data-your_own_param_1_to_login=</span><span class="s">"any_value"</span> <span class="na">data-your_own_param_2_to_login=</span><span class="s">"any_value"</span><span class="nt">&gt;</span> <span class="nt">&lt;/div&gt;</span> </code></pre> </div> <p>Including this HTML code in our pages will make Google One Tap prompt when Google's client library loads. We will have to replace <code>YOUR_GOOGLE_CLIENT_ID</code> with our client ID and the <code>data-login_uri</code> attribute value with the login endpoint that Google will call back (this is a server-side approach). 
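To make the server-side flow concrete, here is a small sketch (an illustration, not code from the post) of the request the login endpoint receives. Google POSTs an <code>application/x-www-form-urlencoded</code> body containing a <code>credential</code> field (the JWT ID token) and a <code>g_csrf_token</code> field that should match a <code>g_csrf_token</code> cookie (a double-submit CSRF check). The function name <code>extractCredential</code> and the sample values are assumptions for illustration:

```javascript
// Parse the form-encoded body Google POSTs to the login_uri endpoint and
// perform the double-submit CSRF check against the g_csrf_token cookie.
function extractCredential(rawBody, csrfCookieValue) {
  const params = new URLSearchParams(rawBody);
  const credential = params.get('credential');
  if (!credential) throw new Error('No credential in POST body');
  if (!csrfCookieValue || params.get('g_csrf_token') !== csrfCookieValue) {
    throw new Error('Failed to verify double-submit cookie');
  }
  // Next step (not shown here): validate `credential` as a Google ID token
  // on the server before trusting any of its claims.
  return credential;
}

console.log(extractCredential('credential=aaa.bbb.ccc&g_csrf_token=tok123', 'tok123'));
// -> aaa.bbb.ccc
```

Only after the CSRF check and a full ID-token verification should the endpoint create a session for the user.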
Additionally, we can specify custom data via the <code>data-your_own_param_1_to_login</code> and <code>data-your_own_param_2_to_login</code> attributes, which will be sent to the callback endpoint.</p> <h3> Customizing the experience </h3> <p>The Google One Tap experience can be customized in several ways. Here are some things you can do to customize it.</p> <h4> Automatic sign-in and sign-out </h4> <p>To do this, we only have to add the HTML attribute <code>data-auto_select</code> with a value of <code>true</code> to our HTML code, as in the following snippet:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">"g_id_onload"</span> <span class="na">data-client_id=</span><span class="s">"YOUR_GOOGLE_CLIENT_ID"</span> <span class="na">data-auto_select=</span><span class="s">"true"</span> <span class="na">data-login_uri=</span><span class="s">"https://your.domain/your_login_endpoint"</span><span class="nt">&gt;</span> <span class="nt">&lt;/div&gt;</span> </code></pre> </div> <p>Note that for automatic sign-in to work, the user must first be signed in to their Google account and must have previously granted consent to share their account profile with your app.</p> <p><strong>For sign out</strong> , we must take into account that we can end up in a UX dead loop if, after signing out, the user is redirected to a page with Google One Tap activated. To avoid this, put the class <code>g_id_signout</code> on the button that performs the user's sign out (this will prevent auto-selection after a user signs out). 
This can be achieved as follows:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;div</span> <span class="na">class=</span><span class="s">"g_id_signout"</span><span class="nt">&gt;</span>Sign Out<span class="nt">&lt;/div&gt;</span> </code></pre> </div> <p>Or in JS:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight javascript"><code><span class="kd">const</span> <span class="nx">button</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nf">getElementById</span><span class="p">(</span><span class="dl">'</span><span class="s1">signout_button</span><span class="dl">'</span><span class="p">);</span> <span class="nx">button</span><span class="p">.</span><span class="nx">onclick</span> <span class="o">=</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span> <span class="nx">google</span><span class="p">.</span><span class="nx">accounts</span><span class="p">.</span><span class="nx">id</span><span class="p">.</span><span class="nf">disableAutoSelect</span><span class="p">();</span> <span class="p">}</span> </code></pre> </div> <h4> Change the sign-in context </h4> <p>We can change the wording of the prompt by indicating a context, in order to match our use case. We can use the following contexts with their corresponding wording:</p> <div class="table-wrapper-paragraph"><table> <thead> <tr> <th>Context</th> <th>Wording</th> </tr> </thead> <tbody> <tr> <td>signin</td> <td>"Sign in with Google"</td> </tr> <tr> <td>signup</td> <td>"Sign up with Google"</td> </tr> <tr> <td>use</td> <td>"Use with Google"</td> </tr> </tbody> </table></div> <p>To specify the context, just add the HTML attribute <code>data-context</code> with a value included in the table above. 
For example:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">"g_id_onload"</span> <span class="na">data-client_id=</span><span class="s">"YOUR_GOOGLE_CLIENT_ID"</span> <span class="na">data-login_uri=</span><span class="s">"https://your.domain/your_login_endpoint"</span> <span class="na">data-context=</span><span class="s">"use"</span><span class="nt">&gt;</span> <span class="nt">&lt;/div&gt;</span> </code></pre> </div> <h4> Position of the prompt </h4> <p>By default, the prompt is shown in the top-right corner of desktop web browser windows. To display it inside a container, we can set the <code>data-prompt_parent_id</code> HTML attribute to the id of the HTML element that will act as its container. We can also apply styles to this element. For example (here <code>one_tap_container</code> is the id of our own container element):<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">"g_id_onload"</span> <span class="na">data-client_id=</span><span class="s">"YOUR_GOOGLE_CLIENT_ID"</span> <span class="na">data-login_uri=</span><span class="s">"https://your.domain/your_login_endpoint"</span> <span class="na">data-prompt_parent_id=</span><span class="s">"one_tap_container"</span><span class="nt">&gt;</span> <span class="nt">&lt;/div&gt;</span> </code></pre> </div> <h4> Close One Tap when tapping outside </h4> <p>By default, the prompt will close when tapping outside it. If you want to change this behavior, use the <code>data-cancel_on_tap_outside</code> HTML attribute and set it to <code>false</code>. 
For example:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">"g_id_onload"</span> <span class="na">data-client_id=</span><span class="s">"YOUR_GOOGLE_CLIENT_ID"</span> <span class="na">data-login_uri=</span><span class="s">"https://your.domain/your_login_endpoint"</span> <span class="na">data-cancel_on_tap_outside=</span><span class="s">"false"</span><span class="nt">&gt;</span> <span class="nt">&lt;/div&gt;</span> </code></pre> </div> <h4> Managing the state with cookies </h4> <p>If you are using cookies, you can toggle the One Tap display status based on whether a cookie is present. To do so, use the <code>data-skip_prompt_cookie</code> HTML attribute and set it to the name of the cookie you want to use to toggle the display.</p> <p>It is simple:</p> <ul> <li>If the cookie is not set or its value is empty, Google One Tap will behave normally.</li> <li>If the cookie is set and its value is not empty, Google One Tap will not display.</li> </ul> <p>Example code snippet:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight html"><code><span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">"g_id_onload"</span> <span class="na">data-client_id=</span><span class="s">"YOUR_GOOGLE_CLIENT_ID"</span> <span class="na">data-login_uri=</span><span class="s">"https://your.domain/your_login_endpoint"</span> <span class="na">data-skip_prompt_cookie=</span><span class="s">"sid"</span><span class="nt">&gt;</span> <span class="nt">&lt;/div&gt;</span> </code></pre> </div> <h3> A simple demo </h3> <p>For this blog post, I have developed a simple <a href="proxy.php?url=https://www.frameworklessmovement.org/" rel="noopener noreferrer">frameworkless</a> static site with a <a href="proxy.php?url=https://googleonetap.developer.li/" rel="noopener noreferrer"><strong>live demo</strong></a> you can try by yourself. Only HTML, CSS and JS are used... 
all is client side.</p> <p><a href="proxy.php?url=https://github.com/piraces/GoogleOneTapSample" rel="noopener noreferrer"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgh-card.dev%2Frepos%2Fpiraces%2FGoogleOneTapSample.svg" alt="piraces/GoogleOneTapSample - GitHub"></a></p> <p>In the demo, a popup for Google One Tap will be shown in the top right corner and, once you give consent to the application, you will be automatically authenticated on subsequent visits to the demo. There is also a "Sign in with Google" button in case you dismiss the modal (to trigger the authentication); this button behaves similarly to the modal, pre-selecting a Google account you are already authenticated with (this can be customized).</p> <p>Here is a sample view of what happens when you authenticate:</p> <p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpiraces.dev%2Fimg%2Fposts%2Fhow-to-use-google-one-tap%2Fsample-flow-finished.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpiraces.dev%2Fimg%2Fposts%2Fhow-to-use-google-one-tap%2Fsample-flow-finished.png" alt="Live demo preview"></a></p> <p><strong>Note:</strong> The picture and possibly sensitive data have been removed from the image, but they will be shown to you if you try it.</p> <p><strong>The following scopes are requested when you use the demo webapp:</strong></p> <div class="table-wrapper-paragraph"><table> <thead> <tr> <th>API</th> <th>Scope</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>-</td> <td>.../auth/userinfo.email</td> <td>See your primary Google Account email address</td> </tr> <tr> <td>-</td> <td>.../auth/userinfo.profile</td> <td>See your personal info, including any personal info you've made 
publicly available</td> </tr> <tr> <td>-</td> <td>openid</td> <td>Associate you with your personal info on Google</td> </tr> </tbody> </table></div> <p><strong>Don't worry about the data or consent you are giving to this sample app: the data never leaves the browser.</strong> Trust me, I am not interested in collecting data in any way or doing anything with it. To do so, I would have to deploy some kind of service somewhere, and I have neither the time nor the motivation to do so. This is not an <a href="proxy.php?url=https://en.wikipedia.org/wiki/Excusatio_non_petita,_accusatio_manifesta" rel="noopener noreferrer"><em>"excusatio non petita, accusatio manifesta"</em></a>; I only want to be clear about the treatment of your data. I invite you to check <a href="proxy.php?url=https://github.com/piraces/GoogleOneTapSample" rel="noopener noreferrer">the source code for the demo</a>.<br><br> Otherwise, just don't try the demo app 😢.</p> <h3> Going further into details </h3> <p>There are other "advanced topics" that this post does not cover and you may be interested in learning. 
So here is a set of different topics and links to go further:</p> <ul> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/display-browsers-native-credential-manager" rel="noopener noreferrer">Display the browser's native credential manager</a></li> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/handle-credential-responses-js-functions" rel="noopener noreferrer">Handle credential responses with JavaScript functions</a></li> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/verify-google-id-token" rel="noopener noreferrer">Verify the Google ID token on your server side</a></li> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/receive-notifications-prompt-ui-status" rel="noopener noreferrer">Receive notifications on the prompt UI status</a></li> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/subdomains" rel="noopener noreferrer">Display One Tap across Subdomains</a></li> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/use-one-tap-js-api" rel="noopener noreferrer">Use the One Tap JavaScript API</a></li> <li><a href="proxy.php?url=https://developers.google.com/identity/gsi/web/guides/revoke" rel="noopener noreferrer">Revoking ID tokens</a></li> </ul> <p>Hope they are useful to you!</p> <h2> Conclusion </h2> <p>As shown and explained in this post, I consider Google One Tap a great, quick and easy "flow" to authenticate your users on a webpage or app.</p> <p>It can be incorporated as a requirement to comment on your blog, vote in a poll, pre-populate account information when a user signs up for your app, and lots of other things your imagination can come up with.</p> <p>What do you think? Would you use it?</p> <p><strong>Happy coding!</strong> 🎉🎉</p> google programming oidc authentication Improving your code for style, quality, maintainability, design... 
with Roslyn Analyzers Raul Piraces Alastuey Mon, 31 May 2021 06:18:01 +0000 https://dev.to/piraces/improving-your-code-for-style-quality-maintainability-design-with-roslyn-analyzers-3bh6 https://dev.to/piraces/improving-your-code-for-style-quality-maintainability-design-with-roslyn-analyzers-3bh6 <p><a href="proxy.php?url=https://github.com/dotnet/roslyn-analyzers">Roslyn Analyzers</a> analyze your code for style, quality, maintainability, design and other issues.</p> <p>I stumbled upon Roslyn Analyzers while contributing to an issue in the <a href="proxy.php?url=https://github.com/Azure/bicep">Microsoft Bicep repository</a>, where I found a <a href="proxy.php?url=https://github.com/Azure/bicep/blob/main/src/BannedSymbols.txt"><code>BannedSymbols.txt</code> file</a> in which <code>System.Console.Write</code> and <code>System.Console.WriteLine</code> were being targeted, pointing developers away from using them for logging purposes.</p> <p>That triggered my interest, so I tried adding a simple <code>Console.WriteLine</code> statement, and an alert similar to the one in the image above appeared in Visual Studio.</p> <p>I thought that this kind of "custom rules", combined with the <code>csproj</code> <code>TreatWarningsAsErrors</code> option (<code>&lt;TreatWarningsAsErrors&gt;true&lt;/TreatWarningsAsErrors&gt;</code>), could be a great solution to maintain dotnet projects' code quality, maintainability, design and style in a nice way. 
In my opinion, this is even more useful and necessary in OSS projects or projects with lots of people working on them.</p> <h1> Using BannedApiAnalyzers in a dotnet project </h1> <p>Using BannedApiAnalyzers in a dotnet project is easy:</p> <ul> <li><p>First of all, install the <a href="proxy.php?url=https://www.nuget.org/packages/Microsoft.CodeAnalysis.BannedApiAnalyzers"><code>Microsoft.CodeAnalysis.BannedApiAnalyzers</code></a> NuGet package in the project where you want to use this feature.</p></li> <li><p>Place a <code>BannedSymbols.txt</code> file in the project and mark it to be included in the project. For example, by modifying the <code>csproj</code>:<br> </p></li> </ul> <div class="highlight js-code-highlight"> <pre class="highlight xml"><code><span class="nt">&lt;ItemGroup&gt;</span> <span class="nt">&lt;AdditionalFiles</span> <span class="na">Include=</span><span class="s">"BannedSymbols.txt"</span> <span class="nt">/&gt;</span> <span class="nt">&lt;/ItemGroup&gt;</span> </code></pre> </div> <p>Or with Visual Studio, specifying the file properties:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--A6CSWHls--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_properties.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--A6CSWHls--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_properties.png" alt="BannedSymbols.txt properties in Visual Studio"></a></p> <ul> <li>Include your own custom rules to ban a symbol with the following format (the description text is optional and will be displayed as the description in diagnostics): </li> </ul> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>{Documentation Comment ID string for the symbol}[;Description Text] </code></pre> </div> <ul> <li> <strong>That's 
all!</strong> The entries in the <code>BannedSymbols.txt</code> file will be processed, and uses of the specified banned symbols will be marked as warnings. These warnings are of type <code>RS0030</code>, <code>RS0031</code> or <code>RS0035</code>. More info can be found in <a href="proxy.php?url=https://github.com/dotnet/roslyn-analyzers/blob/main/src/Microsoft.CodeAnalysis.BannedApiAnalyzers/Microsoft.CodeAnalysis.BannedApiAnalyzers.md">the roslyn analyzers repo</a>.</li> </ul> <p>Take into account that we could use one <code>BannedSymbols.txt</code> file per project or a Solution-wide one, including the same <code>BannedSymbols.txt</code> file in all projects.</p> <h1> How to specify the rules </h1> <p>As explained above, the entries in <code>BannedSymbols.txt</code> must have the following format:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>{Documentation Comment ID string for the symbol}[;Description Text] </code></pre> </div> <p>For details on the ID string format, they recommend taking a look at the <a href="proxy.php?url=https://github.com/dotnet/csharplang/blob/main/spec/documentation-comments.md#id-string-format">"Documentation Comments" docs</a>.</p> <p>Nevertheless, we have an awesome example in the <a href="proxy.php?url=https://github.com/dotnet/roslyn-analyzers/blob/main/src/Microsoft.CodeAnalysis.BannedApiAnalyzers/BannedApiAnalyzers.Help.md">"How to use Microsoft.CodeAnalysis.BannedApiAnalyzers" docs</a>.</p> <p>Taking this example, considering the following code:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight csharp"><code><span class="k">namespace</span> <span class="nn">N</span> <span class="p">{</span> <span class="k">class</span> <span class="nc">BannedType</span> <span class="p">{</span> <span class="k">public</span> <span class="nf">BannedType</span><span class="p">()</span> <span class="p">{}</span> <span class="k">public</span> <span class="kt">int</span> <span 
class="nf">BannedMethod</span><span class="p">()</span> <span class="p">{</span> <span class="k">return</span> <span class="m">0</span><span class="p">;</span> <span class="p">}</span> <span class="k">public</span> <span class="k">void</span> <span class="nf">BannedMethod</span><span class="p">(</span><span class="kt">int</span> <span class="n">i</span><span class="p">)</span> <span class="p">{}</span> <span class="k">public</span> <span class="k">void</span> <span class="n">BannedMethod</span><span class="p">&lt;</span><span class="n">T</span><span class="p">&gt;(</span><span class="n">T</span> <span class="n">t</span><span class="p">)</span> <span class="p">{}</span> <span class="k">public</span> <span class="k">void</span> <span class="n">BannedMethod</span><span class="p">&lt;</span><span class="n">T</span><span class="p">&gt;(</span><span class="n">Func</span><span class="p">&lt;</span><span class="n">T</span><span class="p">&gt;</span> <span class="n">f</span><span class="p">)</span> <span class="p">{}</span> <span class="k">public</span> <span class="kt">string</span> <span class="n">BannedField</span><span class="p">;</span> <span class="k">public</span> <span class="kt">string</span> <span class="n">BannedProperty</span> <span class="p">{</span> <span class="k">get</span><span class="p">;</span> <span class="p">}</span> <span class="k">public</span> <span class="k">event</span> <span class="n">EventHandler</span> <span class="n">BannedEvent</span><span class="p">;</span> <span class="p">}</span> <span class="k">class</span> <span class="nc">BannedType</span><span class="p">&lt;</span><span class="n">T</span><span class="p">&gt;</span> <span class="p">{</span> <span class="p">}</span> <span class="p">}</span> </code></pre> </div> <p>We can ban the different symbols from the code above, as shown in the following table:</p> <div class="table-wrapper-paragraph"><table> <thead> <tr> <th>Symbol in Source</th> <th>Sample Entry in BannedSymbols.txt</th> </tr> </thead> <tbody> <tr> <td>class BannedType</td> <td>T:N.BannedType;Don't use BannedType</td> </tr> 
<tr> <td>class BannedType&lt;T&gt;</td> <td>T:N.BannedType`1;Don't use BannedType</td> </tr> <tr> <td>BannedType()</td> <td>M:N.BannedType.#ctor</td> </tr> <tr> <td>int BannedMethod()</td> <td>M:N.BannedType.BannedMethod</td> </tr> <tr> <td>void BannedMethod(int i)</td> <td>M:N.BannedType.BannedMethod(System.Int32);Don't use BannedMethod</td> </tr> <tr> <td>void BannedMethod(T t)</td> <td>M:N.BannedType.BannedMethod`1(`0)</td> </tr> <tr> <td>void BannedMethod(Func&lt;T&gt; f)</td> <td>M:N.BannedType.BannedMethod`1(System.Func{`0})</td> </tr> <tr> <td>string BannedField</td> <td>F:N.BannedType.BannedField</td> </tr> <tr> <td>string BannedProperty { get; }</td> <td>P:N.BannedType.BannedProperty</td> </tr> <tr> <td>event EventHandler BannedEvent;</td> <td>E:N.BannedType.BannedEvent</td> </tr> </tbody> </table></div> <p>One of the main caveats I found when using this feature and banning symbols was the lack of wildcard support... For example, if you want to ban all <code>System.Console.Write</code> methods in a project, you must specify every variant of the method, as you can see in <a href="proxy.php?url=https://github.com/Azure/bicep/blob/main/src/BannedSymbols.txt">the Project Bicep example</a>.</p> <p>I made a demo project where I played around with these banning tools, so you can see how to use them and try things out yourself:</p> <div class="ltag-github-readme-tag"> <div class="readme-overview"> <h2> <img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"> <a href="proxy.php?url=https://github.com/piraces"> piraces </a> / <a href="proxy.php?url=https://github.com/piraces/BannedApiAnalyzersDemo"> BannedApiAnalyzersDemo </a> </h2> <h3> A basic demo on how to use BannedApiAnalyzers in a .NET Core project </h3> </div> </div> <h1> How this works when 
working with IDEs or without them </h1> <p>In Visual Studio, this analyzer works out-of-the-box, as you can see in the following image:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--FnqwIIcx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_alert.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--FnqwIIcx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_alert.png" alt="Warning in Visual Studio"></a></p> <p>And if we enable the <code>TreatWarningsAsErrors</code> option:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--cLoRiqOD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_as_errors.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--cLoRiqOD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_as_errors.png" alt="Warnings as errors in Visual Studio"></a></p> <p>Regarding other IDEs, <a href="proxy.php?url=https://www.jetbrains.com/rider/">JetBrains Rider</a> also works out-of-the-box with this analyzer:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--y569fX1D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_overview_rider.png" class="article-body-image-wrapper"><img 
src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--y569fX1D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/banned_symbols_overview_rider.png" alt="Rider support"></a></p> <p>Regardless of the IDE, the dotnet CLI will show the warnings (or errors) when building or running the project, which is also awesome:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--E2aA1AC6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/console_error.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--E2aA1AC6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/img/posts/improve-your-code-using-roslyn-analyzers/console_error.png" alt="dotnet CLI warnings/errors"></a></p> <h1> More on Roslyn Analyzers </h1> <p>Microsoft created a set of analyzers called <a href="proxy.php?url=https://www.nuget.org/packages/Microsoft.CodeAnalysis.FxCopAnalyzers"><code>Microsoft.CodeAnalysis.FxCopAnalyzers</code></a> (which is now deprecated) that contains the most important <a href="proxy.php?url=https://en.wikipedia.org/wiki/FxCop">"FxCop"</a> rules from static code analysis, converted to Roslyn analyzers. These analyzers check your code for security, performance, and design issues, among others. Check out <a href="proxy.php?url=https://docs.microsoft.com/en-us/visualstudio/code-quality/install-net-analyzers?view=vs-2019">how to use them</a>.</p> <p>These analyzers have been consolidated into different packages.</p> <p><code>BannedApiAnalyzers</code> is one of them, but there are others just as useful:</p> <ul> <li><p><code>Microsoft.CodeAnalysis.NetAnalyzers</code>: Included by default for .NET 5+. 
For earlier targets <a href="proxy.php?url=https://github.com/dotnet/roslyn-analyzers#microsoftcodeanalysisnetanalyzers">see this</a>.</p></li> <li><p><code>Microsoft.CodeAnalysis.PublicApiAnalyzers</code>: Helps library authors monitor changes to their public APIs (<a href="proxy.php?url=https://github.com/dotnet/roslyn-analyzers#microsoftcodeanalysispublicapianalyzers">more info</a>).</p></li> </ul> <p>Check out <a href="proxy.php?url=https://github.com/dotnet/roslyn-analyzers">the Roslyn Analyzers repository</a> for more information.</p> <h1> Conclusion </h1> <p>I found this analyzer very useful, and I will personally make use of it in my projects to improve code quality and maintainability.</p> <p>I think Roslyn analyzers are very powerful and can provide very useful features in our dotnet projects. They are worth trying out, in my opinion.</p> <p>What do you think?</p> <p><strong>Happy coding!</strong> šŸŽ‰šŸŽ‰</p> dotnet programming csharp coding Awesome console apps in C# Raul Piraces Alastuey Wed, 09 Dec 2020 00:00:00 +0000 https://dev.to/piraces/awesome-console-apps-in-c-40c6 https://dev.to/piraces/awesome-console-apps-in-c-40c6 <p><em>This post is part of <a href="proxy.php?url=https://www.csadvent.christmas/">C# Advent Calendar 2020</a>.</em></p> <p>Console apps have been around for a while and are still a thing (maybe used more by developers than anyone else), and they are used every day.<br> For example, as a developer, I use the <code>npm</code> or <code>dotnet</code> command line programs to do necessary tasks in my job or in my free time experimenting.<br> These types of tools have existed since the beginning of times, but they keep evolving, since terminal support for different colors, rich text, and other features has improved.</p> <p>We now see programs displaying progress bars, spinners, and other rich elements.<br> But how can we build programs like these?</p> <h1> How to make the console more "attractive" 
</h1> <p>I remember executing console apps that show a <a href="proxy.php?url=http://www.figlet.org/">'FIGlet'</a> as a banner introducing the main execution of the program, and then print some output in an ordered way that can be easily understood, though maybe unfamiliar and/or 'difficult to understand' for some users.</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--77MY1Afx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ur9xj3u7w5osiay475vt.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--77MY1Afx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ur9xj3u7w5osiay475vt.png" alt="A sample FIGlet of the program dnschef"></a></p> <em>A sample FIGlet from the program dnschef</em> <p>I find these kinds of tools intuitive and fancy in some way, and I like to use them too. But I think we need to understand that the terminal is also evolving, and the more intuitive and easier we can make these console apps, the better for our possible audience (our target users).</p> <p>There is support in almost every language, and libraries (like <a href="proxy.php?url=https://github.com/willmcgugan/rich">Rich</a> for Python) that we can use to improve the final user experience.</p> <p>In the case of <strong>dotnet</strong>, two of the best-known libraries/toolkits (at least for me) that will make our console apps great are:</p> <ul> <li><p><a href="proxy.php?url=https://github.com/spectresystems/spectre.console">Spectre.Console</a>, a .NET 5/.NET Standard 2.0 library, heavily based on the <a href="proxy.php?url=https://github.com/willmcgugan/rich">Rich library for Python</a>.</p></li> <li><p><a href="proxy.php?url=https://github.com/migueldeicaza/gui.cs">gui.cs</a>, a simple toolkit for building console GUI apps for .NET, .NET Core, and Mono that works on Windows, Mac, and 
Linux/Unix.</p></li> </ul> <p>There is also support for doing great things in the console within the dotnet framework itself, but first let's see what these libraries can do for us.</p> <p>Let's get to them!</p> <h2> Spectre.Console </h2> <p>As said above, <a href="proxy.php?url=https://github.com/spectresystems/spectre.console">Spectre.Console</a> is a .NET 5/.NET Standard 2.0 library that makes it easier to create beautiful, cross platform, console applications (heavily inspired by the excellent <a href="proxy.php?url=https://github.com/willmcgugan/rich">Rich library for Python</a>).</p> <p>Spectre.Console has support for tables, grids, panels and a markup language.<br> It also supports the most common SGR parameters for styling, and up to 24-bit colors in the terminal (the library will downgrade the colors depending on the current terminal).</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--lOzjhoOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9851w08vw7v7v4x4qpzg.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--lOzjhoOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9851w08vw7v7v4x4qpzg.png" alt="A picture showing the main features of Spectre.Console"></a></p> <em>A picture showing the main features of Spectre.Console</em> <p>The <a href="proxy.php?url=https://spectresystems.github.io/spectre.console/">documentation</a> is very well written and full of samples to get started with every feature of the library.</p> <p>With this library, we can improve data output to the user, show better progress indicators, and prompt the user for decisions more clearly.</p> <p>While doing some research on this library, I developed a console app as a demo of its use:</p> <div class="ltag-github-readme-tag"> <div class="readme-overview"> <h2> <img 
src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"> <a href="proxy.php?url=https://github.com/piraces"> piraces </a> / <a href="proxy.php?url=https://github.com/piraces/ConsoleCoin"> ConsoleCoin </a> </h2> <h3> Simple console app to track CryptoCurrencies prices </h3> </div> </div> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--EPCNoZ4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/96pojpmx31q6xk0yx8xb.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--EPCNoZ4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/96pojpmx31q6xk0yx8xb.png" alt="Example console app with Spectre.Console"></a></p> <p><em>Example console app with Spectre.Console</em></p> <p>I found the library easy to understand and use; with its great support and documentation, it is easy to improve some of our console apps using it. Give it a try!</p> <h2> Gui.cs </h2> <p><a href="proxy.php?url=https://github.com/migueldeicaza/gui.cs">gui.cs</a> is a simple (but complete and full of features) toolkit for building console GUI apps for .NET, .NET Core, and Mono that works on Windows, Mac, and Linux/Unix. It was made by <a href="proxy.php?url=https://twitter.com/migueldeicaza">Miguel de Icaza</a>, currently working at Microsoft.</p> <p>This toolkit feels oriented toward more complex console apps that expose a full UI to their users.</p> <p>It contains out-of-the-box features and support for various controls for building text UIs: buttons, checkboxes, dialogs, labels, menus, progress bars...</p> <p>But that is not all! 
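</p>
<p>To give an idea of the programming model, here is a minimal "hello world" sketch. Treat the exact API as an assumption on my part (the toolkit evolves quickly), and note that gui.cs is published as the <code>Terminal.Gui</code> NuGet package:</p>

```csharp
using Terminal.Gui; // gui.cs ships as the Terminal.Gui NuGet package

class Program
{
    static void Main()
    {
        Application.Init();

        // A window that fills the terminal below the first row
        var win = new Window("Hello gui.cs")
        {
            X = 0,
            Y = 1,
            Width = Dim.Fill(),
            Height = Dim.Fill()
        };
        win.Add(new Label(1, 1, "Hello from a text-based UI!"));
        Application.Top.Add(win);

        // Blocks until Application.RequestStop() is called
        Application.Run();
    }
}
```

<p>Everything is laid out with the <code>Dim</code>/<code>Pos</code> computed-layout helpers, so the UI adapts when the terminal is resized.</p>
<p>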
It is also cross platform and supports keyboard and mouse input, flexible layouts, clipboard access, and more...</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--6bSOuQWb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5lxviah0iqqfwzuqn6xx.gif" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--6bSOuQWb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5lxviah0iqqfwzuqn6xx.gif" alt="Features of gui.cs"></a></p> <p><em>Features of gui.cs</em></p> <p>I have tried this library, though not all of its features, and as far as I can tell it is an awesome library with which you could build almost any kind of app I can think of (replacing the graphical user interface with a text-based one).</p> <h2> Other ways... </h2> <p>For sure, the <a href="proxy.php?url=https://docs.microsoft.com/en-us/dotnet/api/system.console?view=net-5.0">Console class</a> included in the .NET API exposes lots of properties, methods and events to improve the console experience.</p> <p>The libraries shown above are useful for most apps, in order not to reinvent the wheel. But in case you feel you do not need them, or you want to implement things better on your own, there is always the possibility of doing awesome things with the <strong>dotnet</strong> framework itself alone.</p> <h1> Conclusion </h1> <p>Console apps exist and will be around for a long time; they keep improving, and so do the terminals they run in. Making them more user friendly and more "understandable" is for sure a very good bet to get our users comfortable using our tools/apps.</p> <p>In my case, I love executing almost everything I can in the terminal. 
I love console tools and apps, and I also love how they make the most out of the terminal they run in to give you the best experience.</p> <p>I hope this post will be useful for your console tools/apps!</p> <p>This post is part of the <a href="proxy.php?url=https://www.csadvent.christmas/">C# Advent Calendar 2020</a>, thanks for letting me participate with my humble post!</p> <p>Happy Coding! Happy Advent! Happy Christmas!</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--ZyqbnYSn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ch4aa2ge4r9ff0ae0w5m.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--ZyqbnYSn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ch4aa2ge4r9ff0ae0w5m.jpg" alt="Happy Christmas"></a></p> <p><em>Happy Advent and happy Christmas!!</em></p> console dotnet csharp opensource Performing static code analysis of your Kubernetes object definitions with a Github Action Raul Piraces Alastuey Mon, 14 Sep 2020 18:28:36 +0000 https://dev.to/piraces/performing-static-code-analysis-of-your-kubernetes-object-definitions-with-a-github-action-4d27 https://dev.to/piraces/performing-static-code-analysis-of-your-kubernetes-object-definitions-with-a-github-action-4d27 <p>Nowadays, <a href="proxy.php?url=https://kubernetes.io/">Kubernetes</a> is one of the <a href="proxy.php?url=https://insights.stackoverflow.com/survey/2020#technology-most-loved-dreaded-and-wanted-platforms">most popular and loved platforms</a> used by the community to run and orchestrate container workloads.</p> <p>It is present in several open-source and private projects as the base of their infrastructure. 
They trust Kubernetes as a platform.</p> <h2> The problem </h2> <p>The world of Kubernetes can sometimes be very complex, and if we make use of it, we need to ensure we are doing it right.</p> <blockquote class="ltag__twitter-tweet"> <div class="ltag__twitter-tweet__main"> <div class="ltag__twitter-tweet__header"> <img class="ltag__twitter-tweet__profile-image" src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--qKfw3uMR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1189491113750605830/ChFgjH49_normal.jpg" alt="SwiftOnSecurity profile image"> <div class="ltag__twitter-tweet__full-name"> SwiftOnSecurity </div> <div class="ltag__twitter-tweet__username"> @swiftonsecurity </div> <div class="ltag__twitter-tweet__twitter-logo"> <img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"> </div> </div> <div class="ltag__twitter-tweet__body"> One time I tried to explain Kubernetes to someone.<br>Then we both didn't understand it. 
</div> <div class="ltag__twitter-tweet__date"> 15:40 - 06 Aug 2019 </div> </div> </blockquote> <p>There are lots of tools that can ease that work for us and simplify the process of getting Kubernetes working correctly as our platform.</p> <p>As you may know, Kubernetes workloads are most commonly defined as YAML formatted documents. Sometimes, it is rather hard to express constraints or relationships between manifest files.</p> <h3> What can be done to help ease the process of using Kubernetes correctly? 
</h3> <p>There are existing tools that integrate static checking, allowing us to catch errors and policy violations earlier in the development lifecycle.</p> <p>These tools improve the guarantees around the validity and safety of resource definitions, so you can trust that production workloads follow best practices (which <a href="proxy.php?url=https://www.sentinelone.com/blog/kubernetes-security-challenges-risks-and-attack-vectors/">is a must nowadays</a>).</p> <p>As developers, we need to guarantee the validity and safety of our Kubernetes manifests to ensure the security and integrity of our production environments.</p> <p>This is where the tool <a href="proxy.php?url=https://kube-score.com/">kube-score</a> can help us.</p> <p>Kube-score is an <a href="proxy.php?url=https://github.com/zegl/kube-score">open-source</a> tool that performs Kubernetes object analysis and gives recommendations for improved reliability and security.</p> <p>How can we introduce kube-score in our CI/CD processes?</p> <p><strong><a href="proxy.php?url=https://github.com/features/actions">GitHub Actions</a> to the rescue!</strong></p> <h3> My Workflow </h3> <p>The GitHub action I developed (<a href="proxy.php?url=https://github.com/marketplace/actions/kube-score-check">kube-score check</a>) allows GitHub users to execute <code>kube-score</code> in their workflows alongside other actions and guarantee the validity and safety of their Kubernetes manifests. 
</p> <p>It is very simple to use: you only have to perform a checkout of the project repository and execute the action, which takes as input an array of manifests (or directories with manifests) that you want to validate and test (wildcards supported):</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--vXKkLKfu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c3dk8cvyettegcw5a19x.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--vXKkLKfu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c3dk8cvyettegcw5a19x.png" alt="How to use the action"></a></p> <p>Read <a href="proxy.php?url=https://github.com/piraces/kube-score-ga/blob/master/README.md">the full Readme of the project</a> to see all the options.</p> <h3> Submission Category: </h3> <p><strong>Maintainer Must-Haves</strong> / <strong>DIY Deployments</strong></p> <h3> Yaml File or Link to Code </h3> <p>Please take a look at the action repository, and give any feedback if you want! 
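</p>
<p>For reference, a minimal workflow using the action could look like the following sketch (the inputs are the ones documented in the project's Readme; the <code>@master</code> ref is used here for illustration only, pin a released version in real use):</p>

```yaml
# .github/workflows/kube-score.yml
name: kube-score
on: [push, pull_request]

jobs:
  kube-score:
    runs-on: ubuntu-latest
    steps:
      # The manifests live in the repository, so check it out first
      - uses: actions/checkout@v2
      # Analyze all YAML manifests under ./manifests with kube-score
      - uses: piraces/kube-score-ga@master
        with:
          manifests-folders: './manifests/*.yml'
```

<p>Locally, the equivalent check is roughly <code>kube-score score ./manifests/*.yml</code> against the same files.</p>
<p>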
</p> <p>Contributions are welcomed too!</p> <div class="ltag-github-readme-tag"> <div class="readme-overview"> <h2> <img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"> <a href="proxy.php?url=https://github.com/piraces"> piraces </a> / <a href="proxy.php?url=https://github.com/piraces/kube-score-ga"> kube-score-ga </a> </h2> <h3> Github action to execute kube-score with selected manifests (YAML, Helm or Kustomize) </h3> </div> <div class="ltag-github-body"> <div id="readme" class="md"> <h1> kube-score Github Action</h1> <p><a rel="noopener noreferrer" href="proxy.php?url=https://github.com/piraces/kube-score-ga/workflows/Node.js%20CI/badge.svg"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--UOjC_pre--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/piraces/kube-score-ga/workflows/Node.js%2520CI/badge.svg" alt="Node.js CI (build, test, lint)"></a> <a rel="noopener noreferrer" href="proxy.php?url=https://github.com/piraces/kube-score-ga/workflows/Action%20CI/badge.svg"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--N9PuHiIa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/piraces/kube-score-ga/workflows/Action%2520CI/badge.svg" alt="Action CI"></a></p> <p>This action executes kube-score with selected manifests (with support for YAML, Helm or Kustomize manifests).</p> <h2> Features</h2> <p>šŸ’» Compatible with Windows, Linux and Darwin Operating Systems.</p> <p>šŸ— Supported architectures: ARMv6, ARM64, x64.</p> <p>šŸ“‚ Multiple folders and files supported within one run of the action (with wildcards support).</p> <p>šŸ”¢ All versions of kube-score can be selected and used.</p> <p>⚔ Support for caching kube-score 
tool to improve speed in subsequent runs.</p> <h2> Inputs</h2> <h3> <code>kube-score-version</code> </h3> <p><em>(Optional)</em>: The version of kube-score to use. Defaults to the latest available.</p> <h3> <code>manifests-folders</code> </h3> <p><strong>Required</strong>: An array of relative paths containing manifests to analyze with kube-score (separated with commas). It is mandatory to establish a wildcard for the files or the concrete filename.</p> <p>Example: <code>./manifests/*.yml,./other/manifests/*.yml</code></p> <h3> <code>ignore-exit-code</code> </h3> <p><em>(Optional)</em>: Will ignore the exit code provided by <code>kube-score</code> and will always pass the check. This could be useful when using the action in an informational way.</p>…</div> </div> <div class="gh-btn-container"><a class="gh-btn" href="proxy.php?url=https://github.com/piraces/kube-score-ga">View on GitHub</a></div> </div> <p>See the YAML file for the action: <a href="proxy.php?url=https://github.com/piraces/kube-score-ga/blob/master/action.yml">action.yml</a></p> <h3> Additional Resources / Info </h3> <p>In the links below, you can see the GitHub action running with different forks of popular projects, such as the <strong><a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview">Application Gateway Ingress Controller</a> (AGIC) of Azure</strong>, or the examples provided in the <strong>official repo of Kubernetes</strong>.</p> <p>Take a look and see how the AGIC <a href="proxy.php?url=https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions">Custom Resource Definitions</a> (CRDs) <a href="proxy.php?url=https://github.com/piraces/application-gateway-kubernetes-ingress/runs/1113739099?check_suite_focus=true">are passing all validations and recommendations</a> (as it is a production-ready controller).</p> <ul> <li> <p>See the GitHub Action in use: </p> <ul> <li> <a 
href="proxy.php?url=https://github.com/kubernetes/examples">Kubernetes/examples</a> repository (fork): <a href="proxy.php?url=https://github.com/piraces/examples/actions">https://github.com/piraces/examples/actions</a> </li> <li> <a href="proxy.php?url=https://github.com/Azure/application-gateway-kubernetes-ingress">Azure/application-gateway-kubernetes-ingress</a> repository (fork): <a href="proxy.php?url=https://github.com/piraces/application-gateway-kubernetes-ingress/actions">https://github.com/piraces/application-gateway-kubernetes-ingress/actions</a> </li> </ul> </li> <li><p>GitHub Repository: <a href="proxy.php?url=https://github.com/piraces/kube-score-ga">https://github.com/piraces/kube-score-ga</a></p></li> <li><p>GitHub Marketplace: <a href="proxy.php?url=https://github.com/marketplace/actions/kube-score-check">https://github.com/marketplace/actions/kube-score-check</a></p></li> </ul> actionshackathon github opensource showdev Getting notified on the latest releases of your day-to-day tech stack Raul Piraces Alastuey Sat, 04 Jul 2020 15:30:00 +0000 https://dev.to/piraces/getting-notified-on-the-latest-releases-of-your-day-to-day-tech-stack-339p https://dev.to/piraces/getting-notified-on-the-latest-releases-of-your-day-to-day-tech-stack-339p <p>Nowadays there are <strong>lots</strong> of technologies and dependencies involved in our day-to-day projects, making it more difficult to know them all and to track their changes.</p> <p>New releases of tools, software, and software development related technologies happen almost every day. This can sometimes be kind of frustrating. You can be working on a brand new project using the latest available versions, and in the process of developing the project, versions bump and new features and fixes are available. 
This doesn't have to be bad… These new versions can improve our product/project and solve some problems for us, and since we know we will have to update them someday, I personally think it's better to bump version by version than to jump from one version to another with several versions in between (which may result in a complicated process).</p> <p>So… how do we keep up to date with all of this? Can we get notified every time a technology we use is updated? Sure we can!</p> <h1> Approaches </h1> <p>There are different approaches to accomplish this. We are going to review some of the most used ones and the ones that I prefer.</p> <h2> Using GitHub notifications </h2> <p>Nowadays, with the <a href="proxy.php?url=https://opensource.org/">Open Source initiative</a>, the majority of the projects we use to develop every day are open sourced and hosted on platforms like GitHub, GitLab and others (for example: <a href="proxy.php?url=https://github.com/nodejs/node">Node.js</a>, <a href="proxy.php?url=https://github.com/denoland/deno">Deno</a>, <a href="proxy.php?url=https://github.com/vuejs/vue">Vue.js</a>, <a href="proxy.php?url=https://github.com/microsoft">Microsoft projects</a>…).</p> <p>These projects usually manage their releases using GitHub releases, so we can use the built-in GitHub notifications to get notified on every release of the projects we want to track in a very simple way (just tap on Watch and select Releases):</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--MjbQgK8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/fb69a44281d7dd531ebec181b1edbdc2/d9199/github_releases_notifications.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--MjbQgK8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/fb69a44281d7dd531ebec181b1edbdc2/d9199/github_releases_notifications.png" alt="Watch releases 
in a GitHub project"></a></p> <p>This way, new releases will appear in our GitHub notifications feed. Depending on our notification settings, we can receive these notifications by email too.</p> <p>Simple.</p> <p>Nevertheless, I sometimes found this process a little bit <em>confusing</em> and missed some of the new releases of the projects I use. In my case, this was because the GitHub notifications feed was full of notifications about mentions, PRs, reviews and others. So I tried to organize myself better, moving these notifications to an RSS feed.</p> <h3> GitHub release notifications in an RSS feed </h3> <p>GitHub provides a releases RSS feed for every project, with the following URL structure:</p> <p><code>https://github.com/{USER}/{PROJECT}/releases.atom</code></p> <p>For example:</p> <p><code>https://github.com/nodejs/node/releases.atom</code></p> <p><a href="proxy.php?url=https://github.com/nodejs/node/releases.atom">View the raw feed</a></p> <p>Knowing this, we can use our favorite RSS reader/client to group the release feeds of different projects and use it to read, and be notified about, the latest releases. In my case, I use <a href="proxy.php?url=https://feedly.com/">Feedly</a> to make a feed grouping the different sources, naming and organizing them:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--hkKwhbSh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/437834659eaf9a2ffc6400e7c7416a8d/d9199/feedly_github_releases_notifications.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--hkKwhbSh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/437834659eaf9a2ffc6400e7c7416a8d/d9199/feedly_github_releases_notifications.png" alt="Feedly feed of different GitHub releases"></a></p> <p>This way, I personally check the feed every day to see if new releases 
have been published or if some versions have moved from preview to stable. Also note that <strong>it’s always important to check the changelog</strong> usually provided with each release before using the latest version (there could be breaking changes or deprecated functionality you’ll need to manage).</p> <h2> Not on GitHub? No problem… </h2> <p>Some projects are not open source, or use another version control platform like <a href="proxy.php?url=https://gitlab.com/">GitLab</a>. In the case of GitLab, we can subscribe to release notifications just like on GitHub, and have them delivered to our email as easily as explained above:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--TIeK-kNt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/717696d801ce23658608fd9832541d24/d9199/gitlab_releases_notifications.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--TIeK-kNt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/717696d801ce23658608fd9832541d24/d9199/gitlab_releases_notifications.png" alt="Watch releases in a GitLab project"></a></p> <p><strong>What if a project is not open source or not on any of the mentioned platforms?</strong></p> <p>Well… There are also plenty of options to keep track of them. Widely used projects usually have a Twitter account, a blog, or some other resource we can watch to keep up to date.</p> <p>In this case, to make all notifications arrive in the same place, we can use some automation mechanism to watch blog updates or Twitter accounts for us and notify us of new releases.</p> <p>Tools like <a href="proxy.php?url=https://ifttt.com/">IFTTT</a> provide simple yet powerful workflows to automate this kind of thing. 
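</p> <p>Before reaching for a hosted service, the same idea can be sketched by hand. As an illustrative example (the feed contents below are made up; a real feed would first be downloaded, e.g. with <code>curl -s https://github.com/nodejs/node/releases.atom -o releases.atom</code>), a couple of shell commands are enough to pull the release titles out of a GitHub releases Atom feed:</p>

```shell
# Create a small sample feed file (in practice, download a real one with curl
# as noted above; the version numbers here are only placeholders)
cat > releases.atom <<'EOF'
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>v19.3.0</title></entry>
  <entry><title>v19.2.0</title></entry>
</feed>
EOF

# Extract each entry's <title> element, then strip the surrounding tags
grep -o '<title>[^<]*</title>' releases.atom | sed 's/<[^>]*>//g'
# v19.3.0
# v19.2.0
```

<p>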
For example, we can set up a workflow to send new release information from a blog (via RSS) to our mail:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--oRO3YF9---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/fb75afe43a9d68f88ea339e5b4004f3a/d9199/ifttt_automation.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--oRO3YF9---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/fb75afe43a9d68f88ea339e5b4004f3a/d9199/ifttt_automation.png" alt="IFTTT Automation"></a></p> <p>Other options could be watching a Twitter account and sending its updates to our mail, setting up an RSS feed of a Twitter account’s updates that contain certain words (like “release”), and any other workflow you can imagine.</p> <p>Personally, I have not found a project whose releases I couldn’t track.</p> <h1> Conclusion </h1> <p>There are plenty of options to keep up to date with your tech stack or favorite tooling. Simply use the one you are most comfortable with, or the one you find most productive.</p> <p>I’m sure this process will help you stay up to date and become a more informed and organized developer.</p> <p>Do you use another way to keep up to date with new versions and releases?</p> <p><strong>Happy coding!</strong> šŸŽ‰šŸŽ‰</p> productivity github tutorial automation Automating Docker container base image updates with Watchtower Raul Piraces Alastuey Sun, 10 May 2020 20:00:00 +0000 https://dev.to/piraces/automating-docker-container-base-image-updates-with-watchtower-1io3 https://dev.to/piraces/automating-docker-container-base-image-updates-with-watchtower-1io3 <p>If you are using Docker for your developments, projects or apps, you normally start from base images. These base images are (normally) frequently updated, and in some cases we will want to keep up to date with a given tag of a base image. 
In these cases, <a href="proxy.php?url=https://github.com/containrrr/watchtower">Watchtower</a> is a great DevOps tool that will handle them for us with very little hassle.</p> <p>For example, let’s suppose we have a very simple Python project containerized with the official <a href="proxy.php?url=https://hub.docker.com/_/python">Python 3 base image</a>. Our Dockerfile references the base image with the <code>3</code> tag:<br> </p> <div class="highlight"><pre class="highlight plaintext"><code>FROM python:3
ADD my_awesome_script.py /
RUN pip install numpy
CMD ["python", "./my_awesome_script.py"]
</code></pre></div> <p>If we build the image and deploy the container, it will keep whatever Python 3 base image was the latest at build time, but we want this base image to stay up to date so we benefit from all the changes and improvements made across the different versions of Python 3.</p> <p>This is where Watchtower helps us!</p> <h1> Overview </h1> <p>With <a href="proxy.php?url=https://containrrr.github.io/watchtower/">Watchtower</a> we can update the running version of our containerized apps automatically. In the case of our own base images, this happens by simply pushing a new image to the <a href="proxy.php?url=https://hub.docker.com/">Docker Hub</a> or to our own image registry.</p> <p>Watchtower will pull down the new image, gracefully shut down the existing container and restart it with the same options that were used when it was initially deployed.</p> <p>This can be achieved as simply as running the following command on the same machine our container is deployed on:<br> </p> <div class="highlight"><pre class="highlight plaintext"><code>docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
</code></pre></div> <p>Watchtower will then start monitoring our running Docker containers and watch for changes to the images those containers were originally started from. 
If a change is detected, Watchtower will automatically restart the container using the new image.</p> <p>In the previous Python example, Watchtower will pull the latest Python base image every few minutes and compare it to the one used to run the container. If a change is detected, it will stop/remove the container and restart it with the same options.</p> <p><strong>Note</strong> : <em>Since Watchtower needs to interact with the Docker API in order to monitor the running containers, we need to mount <code>/var/run/docker.sock</code> with the <code>-v</code> flag when running it.</em></p> <h2> Features </h2> <p>Watchtower can be used as simply as in the example above, but it has several features and options that make it a great tool in very different scenarios.</p> <p>Let’s see them.</p> <h3> Configuring the Watchtower execution </h3> <p>There are <a href="proxy.php?url=https://containrrr.github.io/watchtower/arguments/">many arguments</a> we can use to <em>customize</em> the behavior of Watchtower. Some of the most interesting ones are:</p> <ul> <li> <code>--cleanup</code>: remove old images after updating.</li> <li> <code>--host</code>: the Docker daemon socket to connect to.</li> <li> <code>--include-stopped</code>: also update created and exited containers.</li> <li> <code>--revive-stopped</code>: combined with the <code>--include-stopped</code> argument, start the containers after updating them.</li> <li> <code>--interval</code>: poll interval in seconds.</li> <li> <code>--label-enable</code>: only update containers with the label <code>com.centurylinklabs.watchtower.enable</code> set to true.</li> <li> <code>--monitor-only</code>: only monitor, do not update.</li> <li> <code>--no-restart</code>: useful if an external system manages the containers.</li> <li> <code>--run-once</code>: run one update attempt and then exit.</li> <li> <code>--schedule</code>: allows us to specify a <a href="proxy.php?url=https://crontab.guru/">Cron 
expression</a> which sets when to check for updates.</li> </ul> <h3> Container selection </h3> <p>As we have seen, Watchtower will watch all containers and try to update them. However, in most cases we will need to specify which containers should be updated.</p> <p>We can control this, for example, by specifying the names of the containers to watch when running the Watchtower container:<br> </p> <div class="highlight"><pre class="highlight plaintext"><code>docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower mycontainer myothercontainer
</code></pre></div> <p>But the recommended way is to set the label <code>com.centurylinklabs.watchtower.enable</code> to true or false on our containers, depending on whether we want Watchtower to update them or not. This can be achieved by setting the label in the Dockerfile or when running the container with the <code>--label</code> flag.</p> <p><strong>Note</strong> : <em>the mentioned label with a value of true is only needed if we start Watchtower with the <code>--label-enable</code> flag, which makes it update only the containers with that label set to true. Otherwise the label is not needed, since Watchtower watches all containers by default (except those with the label set to false). This is useful to reverse the default behavior of Watchtower.</em></p> <h3> Notifications </h3> <p>Watchtower is able to send notifications when containers are updated. 
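</p> <p>As an illustrative sketch, a few of the arguments above can be combined with the notification settings explained next (the values <code>email</code>, <code>info</code> and <code>300</code> are only example choices; service-specific settings such as the SMTP server are covered in the official guide):</p>

```shell
# Update only labeled containers every 5 minutes, removing old images,
# and send email notifications about updates at the "info" log level
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=email \
  -e WATCHTOWER_NOTIFICATIONS_LEVEL=info \
  containrrr/watchtower \
  --cleanup \
  --interval 300 \
  --label-enable
```

<p>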
The types of notifications to send are set by passing a comma-separated list of values to the <code>--notifications</code> option or the <code>WATCHTOWER_NOTIFICATIONS</code> environment variable, which accepts the following values: <code>email</code>, <code>slack</code>, <code>msteams</code> and <code>gotify</code>.</p> <p>We can also specify the log level for notifications using the <code>--notifications-level</code> option or the <code>WATCHTOWER_NOTIFICATIONS_LEVEL</code> environment variable, with the following values: <code>panic</code>, <code>fatal</code>, <code>error</code>, <code>warn</code>, <code>info</code>, <code>debug</code>.</p> <p>In order to configure notifications for the different services, take a look at the <a href="proxy.php?url=https://containrrr.github.io/watchtower/notifications/">official guide</a>, where everything is covered in detail.</p> <h3> Using private Docker registries </h3> <p>In the case of private Docker registries, we need to supply Watchtower with the authentication credentials for the registry via the environment variables <code>REPO_USER</code> and <code>REPO_PASS</code>. Another way to do this is to mount the host’s Docker config file into the container (at the root of the filesystem, <code>/</code>).</p> <p>Example:<br> </p> <div class="highlight"><pre class="highlight plaintext"><code>docker run -d \
  --name watchtower \
  -e REPO_USER=username \
  -e REPO_PASS=password \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower container_to_watch --debug
</code></pre></div> <h3> More features! 
</h3> <p>There are other features (such as linked containers, remote hosts, secure connections…) not covered in this blog post that can be useful in some cases too; please take a look at the <a href="proxy.php?url=https://containrrr.github.io/watchtower">Watchtower project page</a> to see them all in detail.</p> <h1> Conclusion </h1> <p>We have seen how to automate the update process of Docker images on our machines using a simple yet powerful tool. This tool can be used in our development or production environments to simplify the process of keeping container images up to date.</p> <p>I have also seen this tool used in personal environments, such as <a href="proxy.php?url=https://www.raspberrypi.org/">Raspberry Pi</a> setups running Docker containers for entertainment purposes (like <a href="proxy.php?url=https://www.plex.tv/">Plex</a>), where it’s very useful too!</p> <p>Personally, I am using this tool and I am very happy with the results. What about you? Give it a try!</p> <p><strong>Happy deployments!</strong> šŸŽ‰šŸŽ‰</p> docker devops tutorial automation Replacing self-hosted agents with ephemeral pipelines agents in Azure DevOps Raul Piraces Alastuey Sun, 15 Mar 2020 17:00:00 +0000 https://dev.to/piraces/replacing-self-hosted-agents-with-ephemeral-pipelines-agents-in-azure-devops-i8h https://dev.to/piraces/replacing-self-hosted-agents-with-ephemeral-pipelines-agents-in-azure-devops-i8h <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--BqUBTaiM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/2a8a8400575881c6955625977dcc41fa/d587d/ephemeral_agents_schema.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--BqUBTaiM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/2a8a8400575881c6955625977dcc41fa/d587d/ephemeral_agents_schema.png" alt="Cover image"></a></p> Schema of sample 
architecture using ephemeral agents <p>If you have Azure resources that aren’t exposed on the internet but are only accessible via a private network, you can’t use <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops">Microsoft-hosted agents</a>, because they can’t connect to the private network. Therefore, we need to maintain one or more pools of <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&amp;tabs=browser#install">self-hosted agents</a>, with the associated costs and effort of maintaining those pools.</p> <p>This is where <strong>ephemeral pipeline agents come into action</strong>.</p> <h1> Overview </h1> <p><a href="proxy.php?url=https://github.com/microsoft/azure-pipelines-ephemeral-agents">Ephemeral pipeline agents</a> eliminate the need to use and maintain pools of self-hosted agents for deployment purposes. This type of agent is capable of deploying to private Azure resources. The process is not very complex: ephemeral pipeline agents run in an <a href="proxy.php?url=https://azure.microsoft.com/en-us/services/container-instances/">Azure Container Instance (ACI)</a> with access to the private network where the other Azure resources are. The agents are created to run a pipeline job and then deleted to avoid extra costs and resource consumption.</p> <p>This way, we can deploy to private Azure resources without having to expose them on the internet or maintain self-hosted agents on (or with access to) the same virtual network, with their associated costs and drawbacks.</p> <p>The agent that runs in the ACI is only a basic Docker image with the dependencies necessary to run the agent and deploy to our private resources. 
For example, a base Ubuntu image with the necessary dependencies and the <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&amp;tabs=browser">deploy agent</a> installed.</p> <p><strong>TL;DR:</strong> The purpose of <a href="proxy.php?url=https://marketplace.visualstudio.com/items?itemName=tiago-pascoal.EphemeralPipelinesAgents">this task</a> is to create a short-lived Azure Pipelines agent that runs a deployment in a private virtual network, so we can deploy to Azure resources that are not internet accessible.</p> <p>āš ļø <strong>Important:</strong> This approach (and task) is currently in preview and has <a href="proxy.php?url=https://github.com/microsoft/azure-pipelines-ephemeral-agents#known-issueslimitations">known issues and limitations</a>. Please take this into account before using this process.</p> <h1> How does the process work? </h1> <p>Only three steps/requirements:</p> <ol> <li>One Docker image that can run a deploy agent in a container (needed to provision agents).</li> <li>Provision one ephemeral pipeline agent to run the deployment job. This can be done using <a href="proxy.php?url=https://marketplace.visualstudio.com/items?itemName=tiago-pascoal.EphemeralPipelinesAgents">this task</a>, which provisions, configures and registers an agent on an ACI using the Docker image mentioned in the first step.</li> <li>The container runs one pipeline job, then unregisters the agent and deletes the container (it self-destructs). 
</li> </ol> <p>So, assuming we have set up a Docker image that can run a deploy agent in a container, our pipeline will look like this:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--_H_J9dSE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/2b02e402758e0e9c7385d763649ccfbb/b5cea/release_sample.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--_H_J9dSE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/2b02e402758e0e9c7385d763649ccfbb/b5cea/release_sample.png" alt="Example pipeline with ephemeral agents"></a></p> Example pipeline with ephemeral agents <p>As you can see, two jobs are needed:</p> <ul> <li>The first one provisions the agent in a pool to run the deployment job.</li> <li>The second one (which depends on the first finishing correctly) runs the deployment job, on the agent that the first job provisioned.</li> </ul> <h1> Tutorial </h1> <p>In this post I will guide you through a simple tutorial on how to deploy assets to a container in an Azure Storage account inside a private virtual network, using ephemeral pipeline agents.</p> <p>So let’s get started!</p> <h2> Requirements: </h2> <ul> <li>A virtual network with a security group.</li> <li>A dedicated subnet in the virtual network to run the ephemeral agents.</li> <li>The agent must run <strong>in the same Azure location</strong> as the virtual network.</li> <li>All the created subnets must share the same security group.</li> </ul> <h2> Overview of the resources </h2> <p>For this tutorial I have created two resource groups:</p> <ul> <li>One for the virtual network and the security group.</li> <li>Another one for the Azure Container Registry (ACR) and the storage we will deploy to.</li> </ul> <p><a 
href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s---XyC0VYW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/e70722777f22370d2bc198aacb9a8b51/29114/azure_sample_vnet.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s---XyC0VYW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/e70722777f22370d2bc198aacb9a8b51/29114/azure_sample_vnet.png" alt="Resource group for vnet and security group"></a></p> Resource group for the VNet and security group <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--KcF280IK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/4d5a0b0f669ce04c1947ccc2a5dd0381/29114/azure_resource_group_sample.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--KcF280IK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/4d5a0b0f669ce04c1947ccc2a5dd0381/29114/azure_resource_group_sample.png" alt="Resource group for ACR and Storage account"></a></p> Resource group for the ACR and Storage account <p>Also, the Azure Storage account should be in the private network, with a configuration similar to this one:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--isxkUSP0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/5043b3fcbd0b06c6e70f1f9321590976/29114/vnet_storage_account.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--isxkUSP0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/5043b3fcbd0b06c6e70f1f9321590976/29114/vnet_storage_account.png" alt="Azure Storage Account VNet configuration"></a></p> Azure Storage Account VNet configuration <p>If you need it, the main repo of ephemeral agents has <a 
href="proxy.php?url=https://github.com/microsoft/azure-pipelines-ephemeral-agents/tree/master/Samples">sample scripts</a> on how to deploy these resources.</p> <h2> Pushing the base image for ephemeral agents </h2> <p>Once we have finished configuring our resources in Azure, we need the base image to use for the ephemeral agents. In this case I have used the <a href="proxy.php?url=https://github.com/microsoft/azure-pipelines-ephemeral-agents/tree/master/AgentImages">Agent Images available in the GitHub repo</a>, specifically the Ubuntu one. We can use the pipeline in the GitHub repo to deploy our Agent Image to the ACR (using an <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&amp;tabs=yaml#sep-docreg">ACR service connection</a>).</p> <p>After that, we should have one repository with one image in our ACR:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--FYrku_1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/0dbbb8a4515bcfff00490c9bfc2eb7b7/29114/azure_acr_agent_image.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--FYrku_1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/0dbbb8a4515bcfff00490c9bfc2eb7b7/29114/azure_acr_agent_image.png" alt="ACR with Agent Image"></a></p> ACR with sample Agent Image based on Ubuntu <h2> Setting up the pipeline in Azure DevOps </h2> <p>After creating the needed resources and pushing the Agent Image to the ACR, we are ready to create our pipeline and use ephemeral agents to deploy.</p> <p>First of all, we need to create an additional agent pool in our project and grant access to all pipelines.</p> <h3> Permissions: </h3> <p>For this process we need to grant specific permissions to allow the pipeline to register agents in the new pool, run jobs on these agents and unregister 
them.</p> <p>To register the agent, we need a token with sufficient permissions on the Azure DevOps organization where the agent is going to be registered.</p> <p>You can use two different types of tokens: a <strong>Personal Access Token</strong> or an <strong>OAuth token (recommended)</strong>.</p> <ul> <li>If using a Personal Access Token (PAT), it requires the <em>Agent Pools Read &amp; Manage scope</em>, and the user who owns the PAT must have administration privileges on the agent pool where you intend to register the agent.</li> <li>If using an OAuth token, the best approach is to use <strong><a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&amp;tabs=yaml#systemaccesstoken">System.AccessToken</a></strong>, which is short-lived, dynamic and automatically managed by the system.</li> </ul> <p>If using the OAuth token, you need to meet the following conditions:</p> <ul> <li>The timeout for the job that creates the agent only expires after the deploy job on the agent is finished (because the token is used to register and unregister the agent in the pool).</li> <li>The account <code>Project Collection Build Service (YOUR_ORG)</code> needs administration permissions on the pool (granted at the organization level, not at the team project level).</li> </ul> <p>In this example, permissions on the pool (for the OAuth token approach) will look like the following:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--4a7PhHOs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/f4ee270cd5f5526ad0732206f549c80e/29114/pool_permissions.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--4a7PhHOs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/f4ee270cd5f5526ad0732206f549c80e/29114/pool_permissions.png" alt="Pool permissions for OAuth token"></a></p> Pool 
permissions for OAuth token <h4> Additional considerations: </h4> <p>You must register an Azure resource provider for the namespace <code>Microsoft.ContainerInstance</code> in order to create container instances for the agents. This can be done easily by opening a PowerShell instance in the Azure portal and executing the following command:<br> </p> <div class="highlight"><pre class="highlight plaintext"><code>Register-AzureRmResourceProvider -ProviderNamespace 'Microsoft.ContainerInstance'
</code></pre></div> <p>This will register the needed Azure resource provider.</p> <h3> Service connections: </h3> <p>We will need a service connection using a service principal, which has to have access to the VNet resource group and to the ACR and Storage account resource group. We can set it up in the project settings in Azure DevOps, granting permissions to one resource group and then going to the Azure portal to grant permissions to the other one. Make sure to grant permissions to all pipelines.</p> <p>Also, as mentioned above, we will need an ACR service connection to our ACR in order to run the ephemeral agents in the ACI from the Agent Image in the ACR (again granting permissions to all pipelines).</p> <h3> Pipeline definition: </h3> <p>In this part of the process, two pipelines are created for my sample project: one to create the base image for the agents in the ACR (which we have previously done), and another one for the main deploy process of our project.</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--oXwUsxS9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/f8e4d06f54566aa36a22eb55d66441a0/29114/two_pipelines.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--oXwUsxS9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/f8e4d06f54566aa36a22eb55d66441a0/29114/two_pipelines.png" alt="Two main pipelines defined"></a></p> 
Pipelines defined in our project <p>The main deploy pipeline used in this tutorial is the same as the one defined in the <a href="proxy.php?url=https://github.com/microsoft/azure-pipelines-ephemeral-agents/blob/master/Samples/storage-pipeline/Azure-pipelines.yml">GitHub repo sample</a>. As we have seen in the previous stages, we have to define two jobs: one to provision the agent and another one to perform the deploy job.</p> <p>Configure the variables in the sample pipeline to reference your resources correctly and then try to run it:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--cGv-b3P_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/14de6ef3b45893151e81670437ce31f5/29114/pipeline_running_prepare_agent.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--cGv-b3P_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/14de6ef3b45893151e81670437ce31f5/29114/pipeline_running_prepare_agent.png" alt="Pipeline preparing the agent in the first job"></a></p> Pipeline preparing the agent in the first job <p>If we have configured everything correctly, the pipeline will succeed:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--a42Nk6yl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/d7e2f6b86afb845e8b6b58797181282b/29114/final_pipeline_result.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--a42Nk6yl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/d7e2f6b86afb845e8b6b58797181282b/29114/final_pipeline_result.png" alt="Final pipeline result"></a></p> Final pipeline result <p>If we inspect our created agent pool, we will see the executed job but no agents registered in the pool (which is the main purpose of this process). 
This is because the agent unregistered itself when the deployment job finished.</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--uEk0PZSc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/a0dc1e6946e8b466d84776329838d9b2/29114/pool_results.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--uEk0PZSc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/a0dc1e6946e8b466d84776329838d9b2/29114/pool_results.png" alt="Pool executed job"></a></p> Pool executed job <h3> Final result: </h3> <p>After the deploy pipeline execution, we can check that we have successfully deployed our assets to the Azure Storage container connected to the private virtual network:</p> <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--D8MFwB8c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/67e51f008539ac42eaccb8f520e47039/29114/azure_final_result_container.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--D8MFwB8c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/67e51f008539ac42eaccb8f520e47039/29114/azure_final_result_container.png" alt="Deployed assets on Azure Storage container inside the VNet"></a></p> Deployed assets on Azure Storage container inside the VNet <h1> Conclusion </h1> <p>We have seen how ephemeral pipeline agents work, how to replace self-hosted agents with ephemeral ones and, most importantly, how to reduce the maintenance costs of having our own self-hosted agent pool(s).</p> <p>It is important to emphasize that this process is currently in preview, so some things may not work out of the box, but I personally think it’s an excellent approach to avoid having self-hosted agents in the situations mentioned.</p> <p><strong>Happy deploy!</strong> 
šŸŽ‰šŸŽ‰</p> <p>Sources:</p> <ol> <li><a href="proxy.php?url=https://github.com/microsoft/azure-pipelines-ephemeral-agents">GitHub of Microsoft Azure Pipelines ephemeral agents. (2020, March 15)</a></li> <li><a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&amp;tabs=browser">Microsoft docs: Azure Pipelines agents. (2020, March 15)</a></li> <li><a href="proxy.php?url=https://marketplace.visualstudio.com/items?itemName=tiago-pascoal.EphemeralPipelinesAgents">Ephemeral Pipelines Agents extension. (2020, March 15)</a></li> </ol> azure devops tutorial pipelines Understanding and managing peer dependencies in your project Raul Piraces Alastuey Sun, 19 Jan 2020 12:00:00 +0000 https://dev.to/piraces/understanding-and-managing-peer-dependencies-in-your-project-3fni https://dev.to/piraces/understanding-and-managing-peer-dependencies-in-your-project-3fni <p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--1apk1hJ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/3821c1aa38cc21cae6de2495bf846b4a/3f442/peerDependencies.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--1apk1hJ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/3821c1aa38cc21cae6de2495bf846b4a/3f442/peerDependencies.png" alt="Cover image"></a></p> <p><strong>Have you ever seen any warning similar to the ones in the image above?</strong></p> <p>That is completely normal. We usually install several libraries alongside our frameworks of choice in our projects. Each library and framework has its own internal dependencies, and others defined as <em>peerDependencies</em>.</p> <h1> What are peer dependencies? 
</h1> <p>Well… a good definition can be found <a href="proxy.php?url=https://stackoverflow.com/questions/26737819/why-use-peer-dependencies-in-npm-for-plugins/34645112#34645112">in this Stack Overflow answer</a>:</p> <blockquote> <p><code>peerDependencies</code> are for dependencies that are exposed to (and expected to be used by) the consuming code, as opposed to “private” dependencies that are not exposed, and are only an implementation detail.</p> </blockquote> <p>Libraries and modules declare in their own <code>package.json</code> their internal <code>dependencies</code> and their <code>peerDependencies</code>. These peer dependencies are dependencies that are exposed to the developer using the module and consuming its code. The module is telling us that this is a dependency we should take care of and install ourselves. The module may expose an interface where the peer dependency is used, so we should use a compatible version of the peer dependency in order to ensure correct behavior with no errors.</p> <p>For example, some module, let's say <code>module-a</code>, has a peer dependency on <code>module-b</code>. If we install <code>module-a</code> in our project, yarn or npm will warn us about a peer dependency on <code>module-b</code> that we should install.</p> <p>If we do not use <code>module-b</code> in our project, the solution is pretty simple:</p> <ul> <li>Install the module as a dependency in the project.
Depending on whether the module is used for developing, testing or building our project, we should install it as a dev dependency or a regular dependency accordingly.</li> </ul> <p>But what if we are already using that <code>module-b</code> in our project?</p> <h1> Conflicts… </h1> <p>Following the example above, if we have compatible versions of <code>module-b</code> then there is no problem. For example, the module establishes a peer dependency on <code>module-b</code> version 2.0 and we are using the same version in our <code>package.json</code> (or it declares a range of versions that our version falls within).</p> <p>If we use a version different from the one the peer dependency requires, or one outside the declared range, then npm or yarn will warn us about a peer dependency conflict during the installation process.</p> <p>Then the approach should be to adapt our dependency on <code>module-b</code> to a version compatible with the one required by <code>module-a</code>:</p> <ul> <li>In some scenarios this can involve changing our code, because the new module version has breaking changes.</li> <li>In other scenarios, this can be solved without changing our code, only changing the dependency version.</li> </ul> <h1> How do I solve this? </h1> <p>There are <a href="proxy.php?url=https://www.npmjs.com/package/install-peerdeps">some public npm packages</a> that solve this for us automatically, but I would personally not recommend them. Using them could cause us more problems than we previously had.
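</p>

<p>To make the compatibility rule concrete, here is a toy sketch (my own simplification, not npm's actual resolver, which implements the full semver spec) of how a caret range like <code>^2.0.0</code> — the most common form a <code>peerDependencies</code> entry takes — is matched against an installed version:</p>

```javascript
// Toy illustration (NOT npm's real resolver): check whether an installed
// version satisfies a simple caret range such as "^2.1.0".
function parse(version) {
  return version.split('.').map(Number); // "2.1.0" -> [2, 1, 0]
}

function satisfiesCaret(installed, range) {
  if (!range.startsWith('^')) throw new Error('this sketch only handles caret ranges');
  const [maj, min, pat] = parse(range.slice(1));
  const [iMaj, iMin, iPat] = parse(installed);
  if (iMaj !== maj) return false;      // caret: major version must match
  if (iMin !== min) return iMin > min; // a newer minor version is fine
  return iPat >= pat;                  // same minor: patch must not be older
}

console.log(satisfiesCaret('2.3.1', '^2.0.0')); // true: compatible
console.log(satisfiesCaret('3.0.0', '^2.0.0')); // false: breaking major bump
```

<p>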
Also, doing it manually as explained above gives us more knowledge about the dependencies in our project and how they work.</p> <p>The manual process involves tinkering with the package manager, installing/removing packages, adjusting versions and in some cases changing our code (as explained above).</p> <p>Between these two approaches, choose the one that you think fits you best.</p> <h1> Conclusion </h1> <p>It is common to ignore these warnings, but warnings are something we should care about. Knowing how peer dependencies work, we can get rid of these warnings, ensuring our project will work fine with all its external dependencies.</p> <p>It can be a little tricky to solve all warnings in certain scenarios, where it may imply changing our code and how we use certain modules, but it is definitely worth it. This makes our project more stable and resilient, and ensures no unexpected errors when using some modules.</p> <p>Happy coding!</p> javascript web npm learning My way on automated dependency updates Raul Piraces Alastuey Sun, 15 Dec 2019 18:00:00 +0000 https://dev.to/piraces/my-way-on-automated-dependency-updates-4k9e https://dev.to/piraces/my-way-on-automated-dependency-updates-4k9e <p>In order to keep my dependencies updated in all my personal projects automatically, I use <a href="proxy.php?url=https://dependabot.com/">Dependabot</a> in combination with <a href="proxy.php?url=https://github.com/features/actions">GitHub Actions</a>, <a href="proxy.php?url=https://travis-ci.org/">Travis CI</a> and <a href="proxy.php?url=https://www.netlify.com/">Netlify</a> deploy previews.</p> <p>Let’s break down each step/tool of my workflow:</p> <h1> Dependabot </h1> <p>Dependabot is responsible for checking for package updates, opening new PRs for the updates, and merging them automatically.</p> <p>It has many options to adapt it to any project or desired behavior.
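</p>

<p>As a reference, a similar setup can nowadays also be declared in a <code>.github/dependabot.yml</code> file; the following is a sketch of the current GitHub-native config format (the screenshots below show the Dependabot dashboard I used at the time):</p>

```yaml
# Sketch of a Dependabot v2 config: daily npm updates with a 'dependencies' label
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"        # location of package.json
    schedule:
      interval: "daily"
    labels:
      - "dependencies"
```

<p>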
In my case, I have set it to <em>Auto</em> bump all package versions specified in the <em>package.json</em>, scheduled daily and without any filters. I have also set it to add a ‘dependencies’ label to each opened PR (in order to distinguish them from other PRs).</p> <p><a href="proxy.php?url=///static/3d4639c7df0dd19c93786f32c918da05/dfb45/Dependabot_Settings.png"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--r6FhjptT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/3d4639c7df0dd19c93786f32c918da05/b7c40/Dependabot_Settings.png" alt="Dependabot main settings" title="Dependabot main settings"></a></p> <p>There are other options regarding scheduling, pull requests, merging and rate limiting. These are configured as follows for my personal repositories:<a href="proxy.php?url=///static/fa78a7e077771c3ff7a581867a862627/d81c9/Dependabot_Merge_Options.png"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--3kHIMEEj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/fa78a7e077771c3ff7a581867a862627/b7c40/Dependabot_Merge_Options.png" alt="Dependabot Merge Options" title="Dependabot Merge Options"></a></p> <p>In my case, I have scheduled it to run every day in the morning (except for weekends).</p> <p>Regarding the PR options, I have specified to automatically rebase PRs if they have conflicts, which is very useful when several PRs are open and merging one generates conflicts with the others.</p> <p>I have also checked the options to use directory branch names and to include security advisory details because I think they are useful and informative.</p> <p>To configure Dependabot automatic merging, we have several options and filters.</p> <p><a href="proxy.php?url=///static/31d84df05ddc3ac2e31dbc7187f510c2/ac3a0/Automatic_Merge_Settings.png"><img
src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--mwdZAda3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/31d84df05ddc3ac2e31dbc7187f510c2/b7c40/Automatic_Merge_Settings.png" alt="Automatic Merge Settings" title="Automatic Merge Settings"></a></p> <p>Regarding automatic merging, I have set Dependabot to treat PR approval as a request to merge (which I automate in the next section with a GitHub Action).</p> <p>I have also set it to create a merge commit when a PR is merged by Dependabot, to keep track of it.</p> <p>There are other options for automatic PR merging, like enabling auto-merging per project. Then, Dependabot will auto-merge PRs if there are no conflicts and all PR checks pass (in projects with this option enabled).</p> <p><strong>The only reason I have included the additional step of the GitHub Action is to give more flexibility to my workflow in the future</strong>, so I can add custom logic for each project and customize when the PRs should be merged (since by configuration, Dependabot PRs depend on my approval, which the GitHub Action gives). But the action may not be necessary at all.</p> <h1> GitHub Actions </h1> <p>In my approach, I use GitHub Actions to give the necessary approval to Dependabot PRs, which Dependabot needs to auto-merge them (Dependabot treats the approval as a request to merge the PR).</p> <p>Currently I approve every Dependabot PR without any additional logic.
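</p>

<p>A minimal workflow of this kind might look like the following (a sketch; the exact trigger and token permissions depend on your repository settings):</p>

```yaml
# .github/workflows/auto-approve.yml (sketch)
name: Auto approve Dependabot PRs
on: pull_request

jobs:
  auto-approve:
    runs-on: ubuntu-latest
    # Only approve PRs opened by Dependabot
    if: github.actor == 'dependabot[bot]'
    steps:
      - uses: hmarr/auto-approve-action@v2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

<p>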
Nevertheless, I think this step could bring me more flexibility to add custom logic on when a specific Dependabot PR should be merged.</p> <p>For example, if I have a specific library that I want to freeze (not upgrade), or any other related logic, I can add it to the GitHub Action.</p> <p>The base action I use for this right now is <a href="proxy.php?url=https://github.com/hmarr/auto-approve-action">hmarr/auto-approve-action</a> (v2).</p> <h1> Travis CI </h1> <p>I have set up Travis CI on my projects to check commits and provide additional PR checks for the Dependabot PRs.</p> <p>In Travis CI, unit tests, the linter and other checks are run to verify everything works as expected. Nevertheless, I consider this not enough testing for the automated PRs… so I am considering adding e2e tests in the future (most probably using <a href="proxy.php?url=https://www.cypress.io/">Cypress</a>).</p> <p>If these checks pass, Dependabot can continue with the automatic merging… but there is another additional check: Netlify.</p> <h1> Netlify </h1> <p>The projects with automated dependency updates are deployed on Netlify, so I enabled the Netlify option to build a <em>preview</em> of each deploy and check that it does not fail and that it looks and works as expected.</p> <p>This is just an additional check to make sure the automation is sufficiently covered.</p> <h1> Workflow </h1> <p>After explaining each check and tool, let’s get into the main workflow and what happens when Dependabot triggers.</p> <p>My current automated workflow has the following steps:</p> <ul> <li> <p><strong>Dependabot</strong> checks daily for available updates for the packages listed in each <em>package.json</em> of my projects (currently I am only using it in JS projects, but it supports other languages).</p> <ul> <li>Whenever an update is available, <strong>it opens a new pull request</strong> to the project including the corresponding changes, estimated compatibility and the package changelog.</li> <li>This
pull request triggers all pull request checks, which include one <strong>GitHub Action</strong> that approves the PR in my name (<a href="proxy.php?url=https://github.com/hmarr/auto-approve-action">hmarr/auto-approve-action</a>), <strong>Travis CI checks</strong> to verify the update does not break anything and a <strong>Netlify deploy action</strong> to preview the deploy and test that there is no problem when deploying.</li> </ul> </li> <li><p><strong>Once all checks have successfully passed</strong>, Dependabot automatically merges the PR to master, which triggers the deploy to production (rebasing the branch if necessary).</p></li> <li><p><strong>If one or more checks fail</strong>, Dependabot comments on the PR explaining that it won't merge it since some CI checks failed. Then we should analyze the problems and manually fix them to upgrade the package. Nevertheless, if we think it can be something related to the package itself, or it is not worth updating, we can just close the PR or tell Dependabot (using the options available in the Dependabot PR description and PR comments) not to upgrade that package or to wait for another version.</p></li> </ul> <p>This is the final result:</p> <p><a href="proxy.php?url=https://piraces.dev/static/b68e8c8f2e946a84dbf48ac871ff4b94/ec071/Flow_CI_Approval.png"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--MbOWlYtE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/b68e8c8f2e946a84dbf48ac871ff4b94/ec071/Flow_CI_Approval.png" alt="CI Workflow" title="Final result"></a></p> <p><a href="proxy.php?url=///static/cda97fe4a058e7e4562c49096963b1a5/fe026/Merge_Notifications.png"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--ItNCyA5X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://piraces.dev/static/cda97fe4a058e7e4562c49096963b1a5/b7c40/Merge_Notifications.png" alt="Final result" title="Final result"></a></p> <p>Daily automated
dependency updates for my main personal projects, simple and without fear of breaking them.</p> <p>How do you manage your dependencies and their potential security vulnerabilities?</p> <p>Happy coding!</p> dependencies automation showdev learning The NTP Pool project: How to use and contribute Raul Piraces Alastuey Sat, 14 Sep 2019 14:27:03 +0000 https://dev.to/piraces/the-ntp-pool-project-how-to-use-and-contribute-58gl https://dev.to/piraces/the-ntp-pool-project-how-to-use-and-contribute-58gl <h1> Introduction </h1> <p>The <a href="proxy.php?url=https://www.ntppool.org" rel="noopener noreferrer">NTP Pool Project</a> is a large virtual farm of servers that offers an <a href="proxy.php?url=https://en.wikipedia.org/wiki/Network_Time_Protocol" rel="noopener noreferrer">NTP service</a> for anyone. The project consists of a DNS system that balances the load of millions of time synchronization queries from devices all across the world (tablets, smartphones, computers, routers...). Vendors like Ubuntu also <a href="proxy.php?url=https://help.ubuntu.com/lts/serverguide/NTP.html" rel="noopener noreferrer">use this service</a> for all their clients (it's also commonly used in many other Linux distros).
The goal is to provide real and accurate time synchronization to these devices, thanks to the great number of servers, dividing the load across the pool.</p> <p>Currently, the pool has around <a href="proxy.php?url=https://www.ntppool.org/zone" rel="noopener noreferrer">4000 servers</a> in different <em>stratums</em>, syncing time across them and offering time synchronization services.</p> <p>These servers are in different <em>stratums</em> following the NTP hierarchy:</p> <p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2Fthumb%2Fc%2Fc9%2FNetwork_Time_Protocol_servers_and_clients.svg%2F1920px-Network_Time_Protocol_servers_and_clients.svg.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2Fthumb%2Fc%2Fc9%2FNetwork_Time_Protocol_servers_and_clients.svg%2F1920px-Network_Time_Protocol_servers_and_clients.svg.png" alt="NTP Hierarchy"></a></p> Hierarchy of an NTP system <p>To know more about the different stratums and the characteristics of the servers at each stratum, you can take a look <a href="proxy.php?url=https://en.wikipedia.org/wiki/Network_Time_Protocol#Clock_strata" rel="noopener noreferrer">here</a>.</p> <h1> Using the pool </h1> <p>If you are interested in using this awesome service, please follow the instructions on the official page: <a href="proxy.php?url=https://www.ntppool.org/en/use.html" rel="noopener noreferrer">How do I use pool.ntp.org?</a></p> <p>Which basically means changing your NTP servers to point to the servers in the pool:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>server 0.pool.ntp.org server 1.pool.ntp.org server 2.pool.ntp.org server 3.pool.ntp.org </code></pre> </div> The 0, 1, 2 and
3.pool.ntp.org names point to a random set of servers that will change every hour <h1> Contributing </h1> <blockquote> <p>"Individually, we are one drop. Together, we are an ocean." <br> -- Ryunosuke Satoro</p> </blockquote> <p>Contributing is very easy, so if you want to contribute, simply follow the steps below.</p> <h2> Can I contribute? </h2> <p>To contribute you only need a static IP and permanent public network access. If you meet these requirements, follow the tutorial to contribute to the project!</p> <p><em>Note: please read the official page on joining the pool carefully before you proceed (<a href="proxy.php?url=https://www.pool.ntp.org/en/join.html" rel="noopener noreferrer">https://www.pool.ntp.org/en/join.html</a>)</em></p> <h2> Setting up the server </h2> <p>The following instructions are for Linux distributions, specifically for Ubuntu/Debian, but the package and the configuration file are almost the same across all distros. Also, we are going to configure a stratum 3 server, but the instructions are almost the same for a stratum 4 or 2 server.</p> <h3> 1. Install the NTP daemon </h3> <p>Simply execute the install command:<br> <code>sudo apt-get install ntp</code></p> <h3> 2.
Choosing static NTP Servers </h3> <p>First we must search for <em>stratum 2 servers</em> to sync with.<br> <strong>It is important to choose at least 4</strong> <em>stratum 2 servers</em> that are geographically close to our server (at least in the same country).</p> <p>You can search for servers in this official list: <a href="proxy.php?url=http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers" rel="noopener noreferrer">http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers</a></p> <p>If you want to configure a stratum 2 server, take some servers from the Stratum 1 list:<br> <a href="proxy.php?url=http://support.ntp.org/bin/view/Servers/StratumOneTimeServers" rel="noopener noreferrer">http://support.ntp.org/bin/view/Servers/StratumOneTimeServers</a></p> <p><em>Note: choose only 'OpenAccess' servers unless you’ve received approval to choose another type of server.</em></p> <p>Once we have the servers, we have to know their IPv4 and IPv6 addresses. Sometimes the IP address is given in the list, but if it is not, or you want to confirm it, we can extract this info by executing a basic <em>dig</em> command. For example:</p> <p><code>dig 6.ntp.snails.email ANY</code></p> <p>Which produces the following output:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>;; ANSWER SECTION: 6.ntp.snails.email. 5999 IN A 139.162.170.219 6.ntp.snails.email. 5999 IN AAAA 2a01:7e01::f03c:91ff:fe8b:e9e0 </code></pre> </div> <h3> 3. Configure the NTP daemon </h3> <p>Once we have all the IP addresses of the servers we chose, we are ready to configure the NTP daemon.</p> <p>Modify the configuration file for NTP, usually located in <code>/etc/ntp.conf</code>.
Delete all the lines starting with the word <em>pool</em> that are there by default, and then paste, line by line, the servers in the format:</p> <p><code>server ntp_server_1 iburst</code> </p> <p>Where <em>'ntp_server_1'</em> is the IP address of the server (one address per line).</p> <p>Also, make sure that your configuration file has a <code>driftfile</code> entry and that the <code>noquery</code> option is present in the restrict lines of the config file. </p> <p>The <code>driftfile</code> option helps to achieve stable and accurate time by storing the frequency offset required to remain in synchronization with the correct time.</p> <p>The <code>noquery</code> option disallows management queries, which helps prevent attacks and queries that could modify the state of the server.</p> <p>The final config file will look something like this:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>driftfile /var/lib/ntp/ntp.drift server ntp_server_1 iburst server ntp_server_2 iburst server ntp_server_3 iburst server ntp_server_4 iburst server ntp_server_5 iburst # By default, exchange time with everybody, but don't allow configuration. restrict -4 default kod notrap nomodify nopeer noquery limited restrict -6 default kod notrap nomodify nopeer noquery limited # Local users may interrogate the ntp server more closely. restrict 127.0.0.1 restrict ::1 </code></pre> </div> <p>The <code>iburst</code> option following each server is there per the NTP Pool recommendations. With this option, if the server is unreachable, a burst of eight packets is sent instead of one (only on the initial connection).</p> <h3> 4.
Restarting the NTP daemon and testing the config </h3> <p>Once configured, we simply have to restart the service to load the new configuration:</p> <p><code>sudo systemctl restart ntp</code></p> <p>After restarting the service, wait around 5 minutes until it stabilizes its time sources, make sure that port 123 (UDP) is open, and then <a href="proxy.php?url=https://servertest.online/ntp" rel="noopener noreferrer">test it</a>. You can also test the service using the command <code>ntpq -p</code> or from another server using <code>ntpdate -q SERVER_IP</code>.</p> <h3> 5. Add the server to the NTP Pool </h3> <p>This is the final step. Our server is running and configured correctly, so let's add it to the pool.</p> <ul> <li>Go to <a href="proxy.php?url=https://www.ntppool.org" rel="noopener noreferrer">ntppool.org</a>, and click on <a href="proxy.php?url=https://manage.ntppool.org/manage" rel="noopener noreferrer">'Manage Servers'</a>.</li> <li>Sign up (or log in if you already have an account).</li> <li>Write the hostname of your NTP server or one of its static IPv4 / IPv6 addresses.</li> <li>Submit it!</li> </ul> <p>If you also have an IPv6 address, submit it too when adding the servers.</p> <p>Once done, your server should appear in the <a href="proxy.php?url=https://manage.ntppool.org/manage/servers" rel="noopener noreferrer">'Your servers'</a> list, where you can adjust the net speed of your server to match the requirements.</p> <p>Finally, the score of your server will improve over time. Initially it is possible that your server has a negative score or fewer than 10 points, which means it is excluded from the pool. Let the service sync time from its sources and it will be added to the pool.</p> <h3> 6. Congrats! </h3> <p>Your server is part of the pool and is helping sync time for many devices.</p> <p>Thank you for contributing!!
🎉</p> <p><a href="proxy.php?url=https://i.giphy.com/media/d7c56SbLa9PRC/giphy.gif" class="article-body-image-wrapper"><img src="proxy.php?url=https://i.giphy.com/media/d7c56SbLa9PRC/giphy.gif" alt="Feels good!"></a></p> <p><em>Originally posted on my personal blog: <a href="proxy.php?url=https://piraces.dev/posts/the-ntp-pool-project-how-to-use-and-contribute/" rel="noopener noreferrer">https://piraces.dev/posts/the-ntp-pool-project-how-to-use-and-contribute/</a></em></p> ntp tutorial contributing community